url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.12B) | node_id (stringlengths 18-32) | number (int64 1-3.66k) | title (stringlengths 1-276) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64 1,587B-1,644B) | updated_at (int64 1,587B-1,644B) | closed_at (int64 1,587B-1,644B ⌀) | author_association (stringclasses 3 values) | active_lock_reason (null) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | draft (bool 2 classes) | pull_request (dict) | is_pull_request (bool 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/522/comments | https://api.github.com/repos/huggingface/datasets/issues/522/events | https://github.com/huggingface/datasets/issues/522 | 682,478,833 | MDU6SXNzdWU2ODI0Nzg4MzM= | 522 | dictionnary typo in docs | {
"login": "yonigottesman",
"id": 4004127,
"node_id": "MDQ6VXNlcjQwMDQxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigottesman",
"html_url": "https://github.com/yonigottesman",
"followers_url": "https://api.github.com/users/yonigottesman/followers",
"following_url": "https://api.github.com/users/yonigottesman/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigottesman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigottesman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigottesman/subscriptions",
"organizations_url": "https://api.github.com/users/yonigottesman/orgs",
"repos_url": "https://api.github.com/users/yonigottesman/repos",
"events_url": "https://api.github.com/users/yonigottesman/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigottesman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks!"
] | 1,597,907,465,000 | 1,597,909,934,000 | 1,597,909,933,000 | CONTRIBUTOR | null | Many places dictionary is spelled dictionnary, not sure if its on purpose or not.
Fixed in this pr:
https://github.com/huggingface/nlp/pull/521 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/522/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/521/comments | https://api.github.com/repos/huggingface/datasets/issues/521/events | https://github.com/huggingface/datasets/pull/521 | 682,477,648 | MDExOlB1bGxSZXF1ZXN0NDcwNzEyNzgz | 521 | Fix dictionnary (dictionary) typo | {
"login": "yonigottesman",
"id": 4004127,
"node_id": "MDQ6VXNlcjQwMDQxMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigottesman",
"html_url": "https://github.com/yonigottesman",
"followers_url": "https://api.github.com/users/yonigottesman/followers",
"following_url": "https://api.github.com/users/yonigottesman/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigottesman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigottesman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigottesman/subscriptions",
"organizations_url": "https://api.github.com/users/yonigottesman/orgs",
"repos_url": "https://api.github.com/users/yonigottesman/repos",
"events_url": "https://api.github.com/users/yonigottesman/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigottesman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hahah thanks Yonatan. It was not on purpose, we are just not very good at spelling :)"
] | 1,597,907,342,000 | 1,597,909,924,000 | 1,597,909,924,000 | CONTRIBUTOR | null | This error happens many times I'm thinking maybe its spelled like this on purpose? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/521/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/521",
"html_url": "https://github.com/huggingface/datasets/pull/521",
"diff_url": "https://github.com/huggingface/datasets/pull/521.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/521.patch",
"merged_at": 1597909924000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/520/comments | https://api.github.com/repos/huggingface/datasets/issues/520/events | https://github.com/huggingface/datasets/pull/520 | 682,264,839 | MDExOlB1bGxSZXF1ZXN0NDcwNTI4MDE0 | 520 | Transform references for sacrebleu | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"repos_url": "https://api.github.com/users/jbragg/repos",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I think I agree @lhoestq so I pushed a change.\r\nThanks for your work on the library!"
] | 1,597,883,215,000 | 1,597,915,854,000 | 1,597,915,853,000 | CONTRIBUTOR | null | Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and references. If one uses a more standard format where predictions and references are lists of the same length, sacrebleu throws an error.
This PR transforms reference data in a more standard format into the [unusual format](https://github.com/mjpost/sacreBLEU#using-sacrebleu-from-python) expected by sacrebleu. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/520/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/520",
"html_url": "https://github.com/huggingface/datasets/pull/520",
"diff_url": "https://github.com/huggingface/datasets/pull/520.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/520.patch",
"merged_at": 1597915853000
} | true |
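
A minimal sketch of the reference transformation described in #520: per-prediction reference lists are transposed into the per-reference-stream layout sacrebleu expects. Illustrative only, not the PR's actual implementation; it assumes the `sacrebleu` package is installed and the example sentences are made up.

```python
import sacrebleu

# "Standard" layout: one list of references per prediction (same length as predictions).
predictions = ["the cat sat on the mat", "hello there"]
references = [
    ["the cat sat on the mat", "a cat sat on the mat"],
    ["hello there", "hi there"],
]

# sacrebleu instead expects one stream per reference position: ref_streams[i][j] is the
# i-th reference for the j-th prediction, so the transformation is effectively a transpose.
ref_streams = [list(refs) for refs in zip(*references)]

score = sacrebleu.corpus_bleu(predictions, ref_streams)
print(score.score)
```
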
https://api.github.com/repos/huggingface/datasets/issues/519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/519/comments | https://api.github.com/repos/huggingface/datasets/issues/519/events | https://github.com/huggingface/datasets/issues/519 | 682,193,882 | MDU6SXNzdWU2ODIxOTM4ODI= | 519 | [BUG] Metrics throwing new error on master since 0.4.0 | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"repos_url": "https://api.github.com/users/jbragg/repos",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric",
"Closing - seems to be just forgetting to tokenize. And found the helpful discussion in #137 "
] | 1,597,872,555,000 | 1,597,874,680,000 | 1,597,874,680,000 | CONTRIBUTOR | null | The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.
Wasn't happening on 0.4.0 but happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add_batch(predictions=predictions, references=references)
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 242, in add_batch
batch = self.info.features.encode_batch(batch)
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in encode_batch
encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in <listcomp>
encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]
File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 456, in encode_nested_example
raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/519/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
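
For the resolution of #519 (the `bleu` metric expects tokenized inputs rather than raw strings), a hedged sketch of the expected shapes, using the library's name at the time (`nlp`); the sentences are made up:

```python
import nlp

bleu = nlp.load_metric("bleu")

# predictions: a list of token lists; references: a list of lists of token lists
# (several references are allowed per prediction).
predictions = [["the", "cat", "sat", "on", "the", "mat"]]
references = [[["the", "cat", "sat", "on", "the", "mat"],
               ["a", "cat", "sat", "there"]]]

print(bleu.compute(predictions=predictions, references=references))
```
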
https://api.github.com/repos/huggingface/datasets/issues/518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/518/comments | https://api.github.com/repos/huggingface/datasets/issues/518/events | https://github.com/huggingface/datasets/pull/518 | 682,131,165 | MDExOlB1bGxSZXF1ZXN0NDcwNDE0ODE1 | 518 | [METRICS, breaking] Refactor caching behavior, pickle/cloudpickle metrics and dataset, add tests on metrics | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"(test failure is unrelated)",
"As discussed with @thomwolf merging since the hyperparameter-search has been merged in transformers."
] | 1,597,866,188,000 | 1,598,284,900,000 | 1,598,284,899,000 | MEMBER | null | Move the acquisition of the filelock at a later stage during metrics processing so it can be pickled/cloudpickled after instantiation.
Also add some tests on pickling, concurrent but separate metric instances and concurrent and distributed metric instances.
Changes significantly the caching behavior for the metrics:
- if the metric is used in a non-distributed setup (most common case) we try to find a free cache file using UUID instead of asking for an `experiment_id` if we can't lock the cache file this allows to use several instances of the same metrics in parallel.
- if the metrics is used in a distributed setup we ask for an `experiment_id` if we can't lock the cache file (because all the nodes need to have related cache file names for the final sync.
- after the computation, we free the locks and delete all the cache files.
Breaking: Some arguments for Metrics initialization have been removed for simplicity (`version`...) and some have been renamed for consistency with the rest of the library (`in_memory` => `keep_in_memory`).
Also remove the `_has_transformers` detection in utils to avoid importing transformers everytime during loading. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/518/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/518",
"html_url": "https://github.com/huggingface/datasets/pull/518",
"diff_url": "https://github.com/huggingface/datasets/pull/518.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/518.patch",
"merged_at": 1598284899000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/517/comments | https://api.github.com/repos/huggingface/datasets/issues/517/events | https://github.com/huggingface/datasets/issues/517 | 681,896,944 | MDU6SXNzdWU2ODE4OTY5NDQ= | 517 | add MLDoc dataset | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Any updates on this?",
"This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies."
] | 1,597,848,119,000 | 1,627,970,373,000 | null | CONTRIBUTOR | null | Hi,
I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the Github: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf
Looks like the dataset contains news stories in multiple languages that can be classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). There are 13 languages: Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/517/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/517/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/516/comments | https://api.github.com/repos/huggingface/datasets/issues/516/events | https://github.com/huggingface/datasets/pull/516 | 681,846,032 | MDExOlB1bGxSZXF1ZXN0NDcwMTY5NTA0 | 516 | [Breaking] Rename formated to formatted | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,597,844,123,000 | 1,597,912,877,000 | 1,597,912,876,000 | MEMBER | null | `formated` is not correct but `formatted` is | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/516/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/516",
"html_url": "https://github.com/huggingface/datasets/pull/516",
"diff_url": "https://github.com/huggingface/datasets/pull/516.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/516.patch",
"merged_at": 1597912876000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/515/comments | https://api.github.com/repos/huggingface/datasets/issues/515/events | https://github.com/huggingface/datasets/pull/515 | 681,845,619 | MDExOlB1bGxSZXF1ZXN0NDcwMTY5MTQ0 | 515 | Fix batched map for formatted dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,597,844,090,000 | 1,597,955,443,000 | 1,597,955,442,000 | MEMBER | null | If you had a dataset formatted as numpy for example, and tried to do a batched map, then it would crash because one of the elements from the inputs was missing for unchanged columns (ex: batch of length 999 instead of 1000).
The happened during the creation of the `pa.Table`, since columns had different lengths. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/515/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/515",
"html_url": "https://github.com/huggingface/datasets/pull/515",
"diff_url": "https://github.com/huggingface/datasets/pull/515.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/515.patch",
"merged_at": 1597955442000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/514/comments | https://api.github.com/repos/huggingface/datasets/issues/514/events | https://github.com/huggingface/datasets/issues/514 | 681,256,348 | MDU6SXNzdWU2ODEyNTYzNDg= | 514 | dataset.shuffle(keep_in_memory=True) is never allowed | {
"login": "vegarab",
"id": 24683907,
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegarab",
"html_url": "https://github.com/vegarab",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"repos_url": "https://api.github.com/users/vegarab/repos",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf ",
"Maybe I'm a bit tired but I fail to see the issue here.\r\n\r\nSince `cache_file_name` is `None` by default, if you set `keep_in_memory` to `True`, the assert should pass, no?",
"I failed to realise that this only applies to `shuffle()`. Whenever `keep_in_memory` is set to True, this is passed on to the `select()` function. However, if `cache_file_name` is None, it will be defined in the `shuffle()` function before it is passed on to `select()`. \r\n\r\nThus, `select()` is called with `keep_in_memory=True` and a not None value for `cache_file_name`. \r\nThis is essentially fixed in #513 \r\n\r\nEasily reproducible:\r\n```python\r\n>>> import nlp\r\n>>> data = nlp.load_dataset(\"cosmos_qa\", split=\"train\")\r\nUsing custom data configuration default\r\n>>> data.shuffle(keep_in_memory=True)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 1398, in shuffle\r\n verbose=verbose,\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 1178, in select\r\n ), \"Please use either `keep_in_memory` or `cache_file_name` but not both.\"\r\nAssertionError: Please use either `keep_in_memory` or `cache_file_name` but not both.\r\n>>>data.select([0], keep_in_memory=True)\r\n# No error\r\n```",
"Oh yes ok got it thanks. Should be fixed if we are happy with #513 indeed.",
"My bad. This is actually not fixed in #513. Sorry about that...\r\nThe new `indices_cache_file_name` is set to a non-None value in the new `shuffle()` as well. \r\n\r\nThe buffer and caching mechanisms used in the `select()` function are too intricate for me to understand why the check is there at all. I've removed it in my local build and it seems to be working fine for my project, without really considering other implications of the change. \r\n\r\n",
"Ok I'll investigate and add a series of tests on the `keep_in_memory=True` settings which is under-tested atm",
"Hey, still seeing this issue with the latest version."
] | 1,597,776,460,000 | 1,627,063,631,000 | null | CONTRIBUTOR | null | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either `keep_in_memory` or `cache_file_name` but not both."
```
This affects both `shuffle()` as `select()` is a sub-routine, and `map()` that has the same check.
I'd love to fix this myself, but unsure what the intention of the assert is given the rest of the logic in the function concerning `ccache_file_name` and `keep_in_memory`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/514/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/513/comments | https://api.github.com/repos/huggingface/datasets/issues/513/events | https://github.com/huggingface/datasets/pull/513 | 681,215,612 | MDExOlB1bGxSZXF1ZXN0NDY5NjQxMjg1 | 513 | [speedup] Use indices mappings instead of deepcopy for all the samples reordering methods | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Ok I fixed `concatenate_datasets` and added tests\r\nFeel free to merge if it's good for you @thomwolf ",
"Ok, adding some benchmarks for map/filters and then I'll merge",
"Warning from pytorch that we should maybe consider at some point @lhoestq:\r\n```\r\n/__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,\r\nand PyTorch does not support non-writeable tensors. This means you can write to the underlying\r\n(supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to\r\nprotect its data or make it writeable before converting it to a tensor. This type of warning will be\r\nsuppressed for the rest of this program.\r\n(Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)\r\n532\r\n return torch.tensor(x, **format_kwargs)\r\n```",
"> Warning from pytorch that we should maybe consider at some point @lhoestq:\r\n> \r\n> ```\r\n> /__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,\r\n> and PyTorch does not support non-writeable tensors. This means you can write to the underlying\r\n> (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to\r\n> protect its data or make it writeable before converting it to a tensor. This type of warning will be\r\n> suppressed for the rest of this program.\r\n> (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)\r\n> 532\r\n> return torch.tensor(x, **format_kwargs)\r\n> ```\r\n\r\nNot sure why we have that, it's probably linked to zero copy from arrow to numpy"
] | 1,597,772,162,000 | 1,598,604,111,000 | 1,598,604,110,000 | MEMBER | null | Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).
Added a `flatten_indices` method which copy the dataset to a new table to remove the indices mapping with tests.
All the samples re-ordering/selection methods should be a lot faster. The downside is that iterating on very large batch of the dataset might be a little slower when we have changed the order of the samples since with in these case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch but the speed of iterating over the dataset is rarely the bottleneck.
*Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name` on purpose to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/513/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/513/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/513",
"html_url": "https://github.com/huggingface/datasets/pull/513",
"diff_url": "https://github.com/huggingface/datasets/pull/513.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/513.patch",
"merged_at": 1598604110000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/512/comments | https://api.github.com/repos/huggingface/datasets/issues/512/events | https://github.com/huggingface/datasets/pull/512 | 681,137,164 | MDExOlB1bGxSZXF1ZXN0NDY5NTc2NzE3 | 512 | Delete CONTRIBUTING.md | {
"login": "ChenZehong13",
"id": 56394989,
"node_id": "MDQ6VXNlcjU2Mzk0OTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/56394989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChenZehong13",
"html_url": "https://github.com/ChenZehong13",
"followers_url": "https://api.github.com/users/ChenZehong13/followers",
"following_url": "https://api.github.com/users/ChenZehong13/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenZehong13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChenZehong13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenZehong13/subscriptions",
"organizations_url": "https://api.github.com/users/ChenZehong13/orgs",
"repos_url": "https://api.github.com/users/ChenZehong13/repos",
"events_url": "https://api.github.com/users/ChenZehong13/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChenZehong13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"😱",
"Yeah, this is spammy behavior. I've reported the user handle."
] | 1,597,764,805,000 | 1,597,765,701,000 | 1,597,765,147,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/512/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/512",
"html_url": "https://github.com/huggingface/datasets/pull/512",
"diff_url": "https://github.com/huggingface/datasets/pull/512.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/512.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/511/comments | https://api.github.com/repos/huggingface/datasets/issues/511/events | https://github.com/huggingface/datasets/issues/511 | 681,055,553 | MDU6SXNzdWU2ODEwNTU1NTM= | 511 | dataset.shuffle() and select() resets format. Intended? | {
"login": "vegarab",
"id": 24683907,
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegarab",
"html_url": "https://github.com/vegarab",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"repos_url": "https://api.github.com/users/vegarab/repos",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi @vegarab yes feel free to open a discussion here.\r\n\r\nThis design choice was not very much thought about.\r\n\r\nSince `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table and infos).\r\n\r\nThinking about it I don't see a strong reason against transmitting the format from the parent dataset to its newly created child. It's probably what's expected by the user in most cases. What do you think @lhoestq?\r\n\r\nBy the way, I've been working today on a refactoring of all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). The idea is to speed them up by a lot (like, really a lot) by working as much as possible with an indices mapping table instead of doing a deep copy of the full dataset as we've been doing currently. You can give it a look and try it here: https://github.com/huggingface/nlp/pull/513\r\nFeedbacks are very much welcome",
"I think it's ok to keep the format.\r\nIf we want to have this behavior for `.map` too we just have to make sure it doesn't keep a column that's been removed.",
"Shall we have this in the coming release by the way @lhoestq ?",
"Yes sure !",
"Since datasets 1.0.0 the format is not reset anymore.\r\nClosing this one, but feel free to re-open if you have other questions"
] | 1,597,758,361,000 | 1,600,073,138,000 | 1,600,073,138,000 | CONTRIBUTOR | null | Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later loading the dataset object using `torch.load("dataset.pt")`, which conserves the defined format before saving.
I do shuffling and selecting (for controlling dataset size) after loading the data from .pt-file, as it's convenient whenever you train multiple models with varying sizes of the same dataset.
The obvious workaround for this is to set the format again after using `dataset.select()` or `dataset.shuffle()`.
_I guess this is more of a discussion on the design philosophy of the functions. Please let me know if this is not the right channel for these kinds of discussions or if they are not wanted at all!_
#### How to reproduce:
```python
import nlp
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-base")
def create_features(batch):
context_encoding = tokenizer.batch_encode_plus(batch["context"])
return {"input_ids": context_encoding["input_ids"]}
dataset = nlp.load_dataset("cosmos_qa", split="train")
dataset = dataset.map(create_features, batched=True)
dataset.set_format(type="torch", columns=["input_ids"])
dataset[0]
# {'input_ids': tensor([ 1804, 3525, 1602, ... 0, 0])}
dataset = dataset.shuffle()
dataset[0]
# {'id': '3Q9(...)20', 'context': "Good Old War an (...) play ?', 'answer0': 'None of the above choices .', 'answer1': 'This person likes music and likes to see the show , they will see other bands play .', (...) 'input_ids': [1804, 3525, 1602, ... , 0, 0]}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/511/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/510/comments | https://api.github.com/repos/huggingface/datasets/issues/510/events | https://github.com/huggingface/datasets/issues/510 | 680,823,644 | MDU6SXNzdWU2ODA4MjM2NDQ= | 510 | Version of numpy to use the library | {
"login": "isspek",
"id": 6966175,
"node_id": "MDQ6VXNlcjY5NjYxNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6966175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isspek",
"html_url": "https://github.com/isspek",
"followers_url": "https://api.github.com/users/isspek/followers",
"following_url": "https://api.github.com/users/isspek/following{/other_user}",
"gists_url": "https://api.github.com/users/isspek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isspek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isspek/subscriptions",
"organizations_url": "https://api.github.com/users/isspek/orgs",
"repos_url": "https://api.github.com/users/isspek/repos",
"events_url": "https://api.github.com/users/isspek/events{/privacy}",
"received_events_url": "https://api.github.com/users/isspek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Seems like this method was added in 1.17. I'll add a requirement on this.",
"Thank you so much. After upgrading the numpy library, it worked."
] | 1,597,741,153,000 | 1,597,862,156,000 | 1,597,862,156,000 | NONE | null | Thank you so much for your excellent work! I would like to use nlp library in my project. While importing nlp, I am receiving the following error `AttributeError: module 'numpy.random' has no attribute 'Generator'` Numpy version in my project is 1.16.0. May I learn which numpy version is used for the nlp library.
Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/510/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/509/comments | https://api.github.com/repos/huggingface/datasets/issues/509/events | https://github.com/huggingface/datasets/issues/509 | 679,711,585 | MDU6SXNzdWU2Nzk3MTE1ODU= | 509 | Converting TensorFlow dataset example | {
"login": "saareliad",
"id": 22762845,
"node_id": "MDQ6VXNlcjIyNzYyODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/22762845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saareliad",
"html_url": "https://github.com/saareliad",
"followers_url": "https://api.github.com/users/saareliad/followers",
"following_url": "https://api.github.com/users/saareliad/following{/other_user}",
"gists_url": "https://api.github.com/users/saareliad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saareliad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saareliad/subscriptions",
"organizations_url": "https://api.github.com/users/saareliad/orgs",
"repos_url": "https://api.github.com/users/saareliad/repos",
"events_url": "https://api.github.com/users/saareliad/events{/privacy}",
"received_events_url": "https://api.github.com/users/saareliad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Do you want to convert a dataset script to the tfds format ?\r\nIf so, we currently have a comversion script nlp/commands/convert.py but it is a conversion script that goes from tfds to nlp.\r\nI think it shouldn't be too hard to do the changes in reverse (at some manual adjustments).\r\nIf you manage to make it work in reverse, feel free to open a PR to share it with the community :)",
"In our docs: [Using a Dataset with PyTorch/Tensorflow](https://huggingface.co/docs/datasets/torch_tensorflow.html)."
] | 1,597,565,120,000 | 1,627,970,478,000 | 1,627,970,477,000 | NONE | null | Hi,
I want to use TensorFlow datasets with this repo, I noticed you made some conversion script,
can you give a simple example of using it?
Thanks
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/509/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/508/comments | https://api.github.com/repos/huggingface/datasets/issues/508/events | https://github.com/huggingface/datasets/issues/508 | 679,705,734 | MDU6SXNzdWU2Nzk3MDU3MzQ= | 508 | TypeError: Receiver() takes no arguments | {
"login": "sebastiantomac",
"id": 1225851,
"node_id": "MDQ6VXNlcjEyMjU4NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1225851?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebastiantomac",
"html_url": "https://github.com/sebastiantomac",
"followers_url": "https://api.github.com/users/sebastiantomac/followers",
"following_url": "https://api.github.com/users/sebastiantomac/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastiantomac/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sebastiantomac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastiantomac/subscriptions",
"organizations_url": "https://api.github.com/users/sebastiantomac/orgs",
"repos_url": "https://api.github.com/users/sebastiantomac/repos",
"events_url": "https://api.github.com/users/sebastiantomac/events{/privacy}",
"received_events_url": "https://api.github.com/users/sebastiantomac/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Which version of Apache Beam do you have (can you copy your full environment info here)?",
"apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ",
"Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a dummy pipeline with [this code](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/wordcount_minimal.py)\r\n\r\nIf you get the same error, it means that the issue comes from apache beam.\r\nOtherwise we'll investigate what went wrong here",
"Still, same error, so I guess it is on apache beam then. \r\nThanks for the investigation.",
"Thanks for trying\r\nLet us know if you find clues of what caused this issue, or if you find a fix"
] | 1,597,562,296,000 | 1,598,972,013,000 | 1,598,971,743,000 | NONE | null | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
This fails in the apache beam runner.
```
Traceback (most recent call last):
File "D:/ML/wikiembedding/gpt2_sv.py", line 36, in <module>
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=my_cache_dir, beam_runner='DirectRunner')
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 969, in _download_and_prepare
pipeline_results = pipeline.run()
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\pipeline.py", line 534, in run
return self.runner.run_pipeline(self, self._options)
....
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 218, in process_encoded
self.output(decoded_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\operations.py", line 332, in output
cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\Cython\Shadow.py", line 167, in cast
return type(*args)
TypeError: Receiver() takes no arguments
```
This is run on a Windows 10 machine with python 3.8. I get the same error loading the swedish wikipedia dump. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/508/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/507/comments | https://api.github.com/repos/huggingface/datasets/issues/507/events | https://github.com/huggingface/datasets/issues/507 | 679,400,683 | MDU6SXNzdWU2Nzk0MDA2ODM= | 507 | Errors when I use | {
"login": "mchari",
"id": 30506151,
"node_id": "MDQ6VXNlcjMwNTA2MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mchari",
"html_url": "https://github.com/mchari",
"followers_url": "https://api.github.com/users/mchari/followers",
"following_url": "https://api.github.com/users/mchari/following{/other_user}",
"gists_url": "https://api.github.com/users/mchari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mchari/subscriptions",
"organizations_url": "https://api.github.com/users/mchari/orgs",
"repos_url": "https://api.github.com/users/mchari/repos",
"events_url": "https://api.github.com/users/mchari/events{/privacy}",
"received_events_url": "https://api.github.com/users/mchari/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Looks like an issue with 3.0.2 transformers version. Works fine when I use \"master\" version of transformers."
] | 1,597,439,037,000 | 1,597,441,150,000 | 1,597,441,150,000 | NONE | null | I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors
I am using **transformers 3.0.2**.

```python
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization_auto import AutoTokenizer

model_name = "deepset/roberta-base-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
```

The errors are:

```
res = nlp(QA_input)
  File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in __call__
    for s, e, score in zip(starts, ends, scores)
  File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in <listcomp>
    for s, e, score in zip(starts, ends, scores)
KeyError: 0
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/507/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/506/comments | https://api.github.com/repos/huggingface/datasets/issues/506/events | https://github.com/huggingface/datasets/pull/506 | 679,164,788 | MDExOlB1bGxSZXF1ZXN0NDY3OTkwNjc2 | 506 | fix dataset.map for function without outputs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,597,412,422,000 | 1,597,663,479,000 | 1,597,663,478,000 | MEMBER | null | As noticed in #505 , giving a function that doesn't return anything in `.map` raises an error because of an unreferenced variable.
I fixed that and added tests.
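For clarity, a minimal sketch of the kind of call that should now work (the dataset and the side-effect-only function below are illustrative assumptions, not taken from the tests):

```python
import nlp

ds = nlp.load_dataset("squad", split="validation[:10]")

# a function used only for its side effects: it returns None,
# which previously hit the unreferenced `tmp_file` variable in `.map`
ds.map(lambda example: print(len(example["context"])))
```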
Thanks @avloss for reporting | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/506/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/506",
"html_url": "https://github.com/huggingface/datasets/pull/506",
"diff_url": "https://github.com/huggingface/datasets/pull/506.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/506.patch",
"merged_at": 1597663478000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/505/comments | https://api.github.com/repos/huggingface/datasets/issues/505/events | https://github.com/huggingface/datasets/pull/505 | 678,791,400 | MDExOlB1bGxSZXF1ZXN0NDY3NjgxMjY4 | 505 | tmp_file referenced before assignment | {
"login": "avloss",
"id": 17853685,
"node_id": "MDQ6VXNlcjE3ODUzNjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/17853685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avloss",
"html_url": "https://github.com/avloss",
"followers_url": "https://api.github.com/users/avloss/followers",
"following_url": "https://api.github.com/users/avloss/following{/other_user}",
"gists_url": "https://api.github.com/users/avloss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avloss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avloss/subscriptions",
"organizations_url": "https://api.github.com/users/avloss/orgs",
"repos_url": "https://api.github.com/users/avloss/repos",
"events_url": "https://api.github.com/users/avloss/events{/privacy}",
"received_events_url": "https://api.github.com/users/avloss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks for reporting the issue ! I'm creating a new PR to fix it and add tests.\r\n(I'm doing a new PR because I know there's some other place where it needs to be fixed)",
"I'm closing this one as I created the other PR."
] | 1,597,361,253,000 | 1,597,412,566,000 | 1,597,412,566,000 | NONE | null | Just learning about this library - so might've not set up all the flags correctly, but was getting this error about "tmp_file". | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/505/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/505/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/505",
"html_url": "https://github.com/huggingface/datasets/pull/505",
"diff_url": "https://github.com/huggingface/datasets/pull/505.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/505.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/504/comments | https://api.github.com/repos/huggingface/datasets/issues/504/events | https://github.com/huggingface/datasets/pull/504 | 678,756,211 | MDExOlB1bGxSZXF1ZXN0NDY3NjUxOTA5 | 504 | Added downloading to Hyperpartisan news detection | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thank you @ghomasHudson for making our dataset available! This is great!",
"The test passes since #527 :)"
] | 1,597,355,626,000 | 1,598,516,321,000 | 1,598,516,321,000 | CONTRIBUTOR | null | Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel !
Currently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `default` in this test. Might be related to #474 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/504/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/504",
"html_url": "https://github.com/huggingface/datasets/pull/504",
"diff_url": "https://github.com/huggingface/datasets/pull/504.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/504.patch",
"merged_at": 1598516321000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/503/comments | https://api.github.com/repos/huggingface/datasets/issues/503/events | https://github.com/huggingface/datasets/pull/503 | 678,726,538 | MDExOlB1bGxSZXF1ZXN0NDY3NjI3MTEw | 503 | CompGuessWhat?! 0.2.0 | {
"login": "aleSuglia",
"id": 1479733,
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aleSuglia",
"html_url": "https://github.com/aleSuglia",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I don't see any significant change in the dataset script (except the version value update), can you check that again please ?",
"Hi @aleSuglia , can you check that all the changes you wanted to do are in the dataset script ?",
"Hey sorry but I'm in the middle of a conference deadline. I'll let you know asap!",
"Ok np :)\r\nGood luck with your work for the conference",
"I finally managed to find some time to complete this. The only weird thing about this release is that I had to run the tests with the ignore checksum flag. Could it be because the Dropbox link doesn't change but the file does? Sorry didn't have the time to check the code to see what's happening behind the scenes.\r\n",
"Yes if the file changed, then the checksum verification won't pass as it expects to see the checksum of the old file.\r\nThe checksum is computed by hashing the complete file.\r\nYou can update the checksum by doing \r\n\r\n```\r\nnlp-cli test ./datasets/compguesswhat --save_infos --all_configs\r\n```",
"Any updates on this?",
"Hi :)\r\n\r\nI think what's left to do is\r\n1- rebase from master, since we changed the name of the library\r\n2- update the metadata file of the dataset using the command \r\n```\r\ndatasets-cli test ./datasets/compguesswhat --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\nThis command should update the checksum of the dropbox file",
"That's perfect. I'll have a look at it later today!",
"Nice thanks !",
"@lhoestq not sure why the quality check doesn't pass. Unfortunately CircleCI doesn't show the actual error. If I run `black` on my machine it works just fine. Ideas?",
"@lhoestq any updates? :) ",
"Your version of `black` might be outdated, or you run using `black` instead of `make style` since it reformatted 100+ files.\r\nCould you try to update black, then `make style` ?",
"Yes I think my versions of isort and black were outdated. Thanks @lhoestq :)\r\n",
"It still doesn't look right in terms of line-length.\r\nAre you running `black` or `make style` ?",
"I'm running `make style`. This is the output of the command:\r\n\r\n```\r\nblack --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! ✨ 🍰 ✨\r\n250 files left unchanged.\r\nisort tests src benchmarks datasets metrics\r\n```",
"Weird I have the same output without file changes with black `20.8b1` and isort `5.6.4` using `make style` too",
"I think that's because black doesn't revert the changes you first did with the old version.\r\nCould you open a new PR with only the ComGuessWhat files updated ? Hopefully now that black is up to date it should work directly (and to avoid 100+ files changes)",
"I will have a look at it tomorrow. Thanks for your help!",
"I'm closing this one and I'll open a new one."
] | 1,597,351,886,000 | 1,603,263,269,000 | 1,603,263,269,000 | CONTRIBUTOR | null | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/503/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/503",
"html_url": "https://github.com/huggingface/datasets/pull/503",
"diff_url": "https://github.com/huggingface/datasets/pull/503.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/503.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/502/comments | https://api.github.com/repos/huggingface/datasets/issues/502/events | https://github.com/huggingface/datasets/pull/502 | 678,546,070 | MDExOlB1bGxSZXF1ZXN0NDY3NDc1MDg0 | 502 | Fix tokenizers caching | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"This should fix #501 and also the issue you sent me on slack @sgugger ."
] | 1,597,334,017,000 | 1,597,844,239,000 | 1,597,844,238,000 | MEMBER | null | I've found some cases where the caching didn't work properly for tokenizers:
1. if a tokenizer has a regex pattern, then the caching would be inconsistent across sessions
2. if a tokenizer has a cache attribute that changes after some calls, then the caching would not work after cache updates
3. if a tokenizer is used inside a function, the caching of this function would result in the same cache file for different tokenizers
4. if the `unique_no_split_tokens` attribute is not the same across sessions (after loading a tokenizer) then the caching could be inconsistent
To fix that, this is what I did:
1. register a specific `save_regex` function for pickle that makes regex dumps deterministic
2. ignore cache attribute of some tokenizers before dumping
3. enable recursive dump by default for all dumps
4. make `unique_no_split_tokens` deterministic in https://github.com/huggingface/transformers/pull/6461
I also added tests to make sure that tokenizers hashing works as expected.
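For illustration, a rough sketch of what a deterministic regex dump could look like (an assumption-laden sketch, not the actual patch; the registration hook below is hypothetical):

```python
import re

def _save_regex(pickler, obj):
    # reduce a compiled pattern to (pattern, flags) so that pickling it
    # yields the same bytes across sessions, making the hash stable
    pickler.save_reduce(re.compile, (obj.pattern, obj.flags), obj=obj)

# hypothetical registration with whatever pickler the hashing uses,
# e.g. a dill-based one:
# HashPickler.dispatch[type(re.compile(""))] = _save_regex
```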
In the future we should find a way to test if hashing also works across sessions (maybe using two CI jobs, or by hardcoding a tokenizer's hash?) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/502/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/502",
"html_url": "https://github.com/huggingface/datasets/pull/502",
"diff_url": "https://github.com/huggingface/datasets/pull/502.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/502.patch",
"merged_at": 1597844237000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/501/comments | https://api.github.com/repos/huggingface/datasets/issues/501/events | https://github.com/huggingface/datasets/issues/501 | 677,952,893 | MDU6SXNzdWU2Nzc5NTI4OTM= | 501 | Caching doesn't work for map (non-deterministic) | {
"login": "wulu473",
"id": 8149933,
"node_id": "MDQ6VXNlcjgxNDk5MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8149933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wulu473",
"html_url": "https://github.com/wulu473",
"followers_url": "https://api.github.com/users/wulu473/followers",
"following_url": "https://api.github.com/users/wulu473/following{/other_user}",
"gists_url": "https://api.github.com/users/wulu473/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wulu473/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wulu473/subscriptions",
"organizations_url": "https://api.github.com/users/wulu473/orgs",
"repos_url": "https://api.github.com/users/wulu473/repos",
"events_url": "https://api.github.com/users/wulu473/events{/privacy}",
"received_events_url": "https://api.github.com/users/wulu473/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting !\r\n\r\nTo store the cache file, we compute a hash of the function given in `.map`, using our own hashing function.\r\nThe hash doesn't seem to stay the same over sessions for the tokenizer.\r\nApparently this is because of the regex at `tokenizer.pat` is not well supported by our hashing function.\r\n\r\nI'm working on a fix",
"Thanks everyone. Works great now."
] | 1,597,263,607,000 | 1,598,286,900,000 | 1,598,286,875,000 | NONE | null | The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it.
```python
import nlp
import transformers

def main():
    ds = nlp.load_dataset("reddit", split="train[:500]")
    tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2")

    def convert_to_features(example_batch):
        input_str = example_batch["body"]
        encodings = tokenizer(input_str, add_special_tokens=True, truncation=True)
        return encodings

    ds = ds.map(convert_to_features, batched=True)

if __name__ == "__main__":
    main()
```
Roughly 3/10 times, this example recomputes the tokenization.
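One possible workaround (untested here, and `cache_file_name`'s path below is made up) is to pin the cache file explicitly so `.map` always reuses it instead of relying on the hash-based name:

```python
# hedged sketch: cache_file_name bypasses the automatic hash-based file naming
ds = ds.map(
    convert_to_features,
    batched=True,
    cache_file_name="/tmp/reddit_tokenized.arrow",  # hypothetical path
)
```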
Is this expected behaviour? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/501/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/500/comments | https://api.github.com/repos/huggingface/datasets/issues/500/events | https://github.com/huggingface/datasets/pull/500 | 677,841,708 | MDExOlB1bGxSZXF1ZXN0NDY2ODk0NTk0 | 500 | Use hnsw in wiki_dpr | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,597,251,487,000 | 1,597,910,359,000 | 1,597,910,358,000 | MEMBER | null | The HNSW faiss index is much faster than the regular Flat index. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/500/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/500",
"html_url": "https://github.com/huggingface/datasets/pull/500",
"diff_url": "https://github.com/huggingface/datasets/pull/500.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/500.patch",
"merged_at": 1597910358000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/499/comments | https://api.github.com/repos/huggingface/datasets/issues/499/events | https://github.com/huggingface/datasets/pull/499 | 677,709,938 | MDExOlB1bGxSZXF1ZXN0NDY2Nzg1MjAy | 499 | Narrativeqa (with full text) | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I took a look at the dummy data creation for this dataset.\r\n\r\nMaybe it didn't work on your side might be because `master.zip` and `narrativeqa_full_text.zip` are supposed to be directories and not acutal zip files in the dummy data folder.\r\n\r\nI managed to make it work with this `dummy_data.zip` file:\r\nhttps://drive.google.com/file/d/1G9ZHAjelazNApbFI0ep2dnSAWklXgGMd/view?usp=sharing",
"@lhoestq Hmmm wasn't that. Must have been something else I missed.\r\n\r\nHave committed your working version though now.",
"Ok thanks.\r\nCould you rebase from master to fix the CI please ?",
"Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ?",
"> Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ?\r\n\r\nHave added the test set code but getting an OverflowError when trying to regen the dataset_infos.json:\r\n\r\n---\r\nOverflowError: There was an overflow in the <class 'pyarrow.lib.StructArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB\r\n\r\n---\r\n",
"Thanks for reporting @ghomasHudson , I'll look into it",
"It looks like it's an issue with Pyarrow.\r\nBy changing the `DEFAULT_MAX_BATCH_SIZE` to 1000 instead of 10 000 in `arrow_writer.py` I was able to run the command.\r\n\r\nBasically it seems that is an Arrow StructArray has more than 1-2GB of data, then it shuffles some of its content.\r\nI can't find any issue on Apache Arrow's JIRA about this problem. It will require more investigation.\r\n\r\nMaybe we can simply automatically decrease the writer's batch size when this happens. We can just check if the arrow array is more than a certain amount of bytes. ",
"@lhoestq I've finally got round to regenerating the `dataset_infos.json` for this and adding all 3 splits. I've done this and updated for the new version of datasets.\r\n\r\nThe CI tests still aren't passing though (they pass on my machine). `test_load_dataset_narrativeqa` seems to fail but I have no idea how. Would appreciate if you have any ideas - would be great to finally finish this one!",
"The dummy data test fails, apparently it's because no examples are yielded for the dummy data.\r\n\r\nAlso it looks like the PR now show changes in many other files than the ones for NarrativeQA, could you create another branch and another PR please ?\r\n\r\nFeel free to ping me on the new PR so we can fi the dummy data together"
] | 1,597,240,183,000 | 1,607,512,862,000 | 1,607,512,862,000 | CONTRIBUTOR | null | Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.
Few notes:
- Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine.
- Can't get the dummy data to work. Currently putting stuff at:
```
dummy
|---- 0.0.0
|- dummy_data.zip
|-master.zip
| |- narrativeqa-master
| |- documents.csv
| |- qaps.csv
| |- third_party ......
|
| - narrativeqa_full_text.zip
| | - 001.content
| | - ....
```
Not sure what I'm messing up here (probably something obvious). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/499/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/499",
"html_url": "https://github.com/huggingface/datasets/pull/499",
"diff_url": "https://github.com/huggingface/datasets/pull/499.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/499.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/498/comments | https://api.github.com/repos/huggingface/datasets/issues/498/events | https://github.com/huggingface/datasets/pull/498 | 677,597,479 | MDExOlB1bGxSZXF1ZXN0NDY2Njg5NTcy | 498 | dont use beam fs to save info for local cache dir | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,597,230,000,000 | 1,597,411,041,000 | 1,597,411,040,000 | MEMBER | null | If the cache dir is local, then we shouldn't use beam's filesystem to save the dataset info
Fix #490
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/498/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/498/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/498",
"html_url": "https://github.com/huggingface/datasets/pull/498",
"diff_url": "https://github.com/huggingface/datasets/pull/498.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/498.patch",
"merged_at": 1597411040000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/497/comments | https://api.github.com/repos/huggingface/datasets/issues/497/events | https://github.com/huggingface/datasets/pull/497 | 677,057,116 | MDExOlB1bGxSZXF1ZXN0NDY2MjQ2NDQ3 | 497 | skip header in PAWS-X | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,597,166,785,000 | 1,597,830,602,000 | 1,597,830,601,000 | MEMBER | null | This should fix #485
I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one).
Note that there are new fields in `dataset_infos.json` introduced in the latest release 0.4.0 corresponding to post processing info. I removed them in this case when I ran `nlp-cli ./datasets/xtreme --save_infos` to keep backward compatibility (version 0.3.0 can't load these fields).
I think I'll change the logic so that `nlp-cli test` doesn't create these fields for datasets with no post processing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/497/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/497/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/497",
"html_url": "https://github.com/huggingface/datasets/pull/497",
"diff_url": "https://github.com/huggingface/datasets/pull/497.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/497.patch",
"merged_at": 1597830601000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/496/comments | https://api.github.com/repos/huggingface/datasets/issues/496/events | https://github.com/huggingface/datasets/pull/496 | 677,016,998 | MDExOlB1bGxSZXF1ZXN0NDY2MjE1Mjg1 | 496 | fix bad type in overflow check | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,597,163,098,000 | 1,597,411,775,000 | 1,597,411,774,000 | MEMBER | null | When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field.
This is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example).
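To illustrate the kind of mismatch (a generic pyarrow example, not the actual check in the writer):

```python
import pyarrow as pa

# inferring the type from a first example that is None gives `null`,
# while a later batch with real values gives `string`
print(pa.array([None]).type)         # null
print(pa.array(["some text"]).type)  # string
# chunks with these two inferred types don't line up, which is why the
# check should rely on the inferred features instead of per-batch types
```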
This should fix #482 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/496/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/496",
"html_url": "https://github.com/huggingface/datasets/pull/496",
"diff_url": "https://github.com/huggingface/datasets/pull/496.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/496.patch",
"merged_at": 1597411774000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/495/comments | https://api.github.com/repos/huggingface/datasets/issues/495/events | https://github.com/huggingface/datasets/pull/495 | 676,959,289 | MDExOlB1bGxSZXF1ZXN0NDY2MTY5MTA3 | 495 | stack vectors in pytorch and tensorflow | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,597,158,773,000 | 1,597,224,649,000 | 1,597,224,648,000 | MEMBER | null | When the format of a dataset is set to pytorch or tensorflow, and if the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`.
I added support for stacked tensors for both pytorch and tensorflow.
For ragged tensors, they are stacked only for tensorflow as pytorch doesn't support ragged tensors.
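A minimal usage sketch of the intended behaviour (the toy vector column below is made up for illustration):

```python
import nlp

ds = nlp.load_dataset("glue", "mrpc", split="train")
ds = ds.map(lambda ex: {"vec": [0.0, 1.0, 2.0]})  # toy fixed-length vector column
ds.set_format("torch", columns=["vec"])

batch = ds[:4]["vec"]
# with stacking, this should be a single 4 x 3 tensor
# rather than a list of four 1-D tensors
print(batch.shape)
```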
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/495/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/495",
"html_url": "https://github.com/huggingface/datasets/pull/495",
"diff_url": "https://github.com/huggingface/datasets/pull/495.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/495.patch",
"merged_at": 1597224648000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/494/comments | https://api.github.com/repos/huggingface/datasets/issues/494/events | https://github.com/huggingface/datasets/pull/494 | 676,886,955 | MDExOlB1bGxSZXF1ZXN0NDY2MTExOTQz | 494 | Fix numpy stacking | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"This PR also fixed a bug where numpy arrays were returned instead of pytorch tensors when getting with a clumn as a key."
] | 1,597,153,230,000 | 1,597,157,810,000 | 1,597,153,792,000 | MEMBER | null | When getting items using a column name as a key, numpy arrays were not stacked.
I fixed that and added some tests.
There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help to fix this issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/494/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/494",
"html_url": "https://github.com/huggingface/datasets/pull/494",
"diff_url": "https://github.com/huggingface/datasets/pull/494.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/494.patch",
"merged_at": 1597153792000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/493/comments | https://api.github.com/repos/huggingface/datasets/issues/493/events | https://github.com/huggingface/datasets/pull/493 | 676,527,351 | MDExOlB1bGxSZXF1ZXN0NDY1ODIxOTA0 | 493 | Fix wmt zh-en url | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"this doesn't work. I can decompress the file after download locally."
] | 1,597,112,092,000 | 1,597,112,548,000 | 1,597,112,532,000 | CONTRIBUTOR | null | I verified that
```
wget https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00
```
runs in 2 minutes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/493/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/493",
"html_url": "https://github.com/huggingface/datasets/pull/493",
"diff_url": "https://github.com/huggingface/datasets/pull/493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/493.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/492/comments | https://api.github.com/repos/huggingface/datasets/issues/492/events | https://github.com/huggingface/datasets/issues/492 | 676,495,064 | MDU6SXNzdWU2NzY0OTUwNjQ= | 492 | nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"In 0.4.0, the assertion in `concatenate_datasets ` is on the features, and not the schema.\r\nCould you try to update `nlp` ?\r\n\r\nAlso, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema cast hack.",
"Or maybe the assertion comes from elsewhere ?",
"I'm using the master branch. The assertion failure comes from the underlying `pa.concat_tables()`, which is in the pyarrow package. That method does check schemas.\r\n\r\nSince `features.type` does not contain information about nullable vs non-nullable features, the `cast_()` method won't resolve the schema mismatch. There is information in a schema which is not stored in features.",
"I'm doing a refactor of type inference in #363 . Both text fields should match after that",
"By default nullable will be set to True",
"It should be good now. I was able to run\r\n\r\n```python\r\n>>> from nlp import concatenate_datasets, load_dataset\r\n>>>\r\n>>> bookcorpus = load_dataset(\"bookcorpus\", split=\"train\")\r\n>>> wiki = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\")\r\n>>> wiki.remove_columns_(\"title\") # only keep the text\r\n>>>\r\n>>> assert bookcorpus.features.type == wiki.features.type\r\n>>> bert_dataset = concatenate_datasets([bookcorpus, wiki])\r\n```",
"Thanks!"
] | 1,597,105,666,000 | 1,598,458,639,000 | 1,598,458,639,000 | CONTRIBUTOR | null | Here's the code I'm trying to run:
```python
dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir)
dset_wikipedia.drop(columns=["title"])
dset_wikipedia.features.pop("title")
dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir)
dset = nlp.concatenate_datasets([dset_wikipedia, dset_books])
```
This fails because they have different schemas, despite having identical features.
```python
assert dset_wikipedia.features == dset_books.features # True
assert dset_wikipedia._data.schema == dset_books._data.schema # False
```
The Wikipedia dataset has 'text: string', while the BookCorpus dataset has 'text: string not null'. Currently I hack together a working schema match with the following line, but it would be better if this was handled in Features themselves.
```python
dset_wikipedia._data = dset_wikipedia.data.cast(dset_books._data.schema)
```
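For context, the nullability difference lives in the pyarrow schema rather than in `nlp.Features` — a generic pyarrow illustration (not taken from the datasets above):

```python
import pyarrow as pa

nullable = pa.schema([pa.field("text", pa.string())])                      # text: string
non_nullable = pa.schema([pa.field("text", pa.string(), nullable=False)])  # text: string not null

# the value type is identical, but the schemas still compare as different
print(nullable.equals(non_nullable))  # False
```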
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/492/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/491/comments | https://api.github.com/repos/huggingface/datasets/issues/491/events | https://github.com/huggingface/datasets/issues/491 | 676,486,275 | MDU6SXNzdWU2NzY0ODYyNzU= | 491 | No 0.4.0 release on GitHub | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I did the release on github, and updated the doc :)\r\nSorry for the delay",
"Thanks!"
] | 1,597,103,997,000 | 1,597,164,607,000 | 1,597,164,607,000 | CONTRIBUTOR | null | 0.4.0 was released on PyPI, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) still shows version 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/491/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/491/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/490/comments | https://api.github.com/repos/huggingface/datasets/issues/490/events | https://github.com/huggingface/datasets/issues/490 | 676,482,242 | MDU6SXNzdWU2NzY0ODIyNDI= | 490 | Loading preprocessed Wikipedia dataset requires apache_beam | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,597,103,210,000 | 1,597,411,040,000 | 1,597,411,040,000 | CONTRIBUTOR | null | Running
`nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")`
gives an error if apache_beam is not installed, stemming from
https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988
This succeeded without the dependency in version 0.3.0. This seems like an unnecessary dependency to process some dataset info if you're using the already-preprocessed version. Could it be removed? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/490/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/489/comments | https://api.github.com/repos/huggingface/datasets/issues/489/events | https://github.com/huggingface/datasets/issues/489 | 676,456,257 | MDU6SXNzdWU2NzY0NTYyNTc= | 489 | ug | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"whoops",
"please delete this"
] | 1,597,098,783,000 | 1,597,100,114,000 | 1,597,098,820,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/489/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/488/comments | https://api.github.com/repos/huggingface/datasets/issues/488/events | https://github.com/huggingface/datasets/issues/488 | 676,299,993 | MDU6SXNzdWU2NzYyOTk5OTM= | 488 | issues with downloading datasets for wmt16 and wmt19 | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I found `UNv1.0.en-ru.tar.gz` here: https://conferences.unite.un.org/uncorpus/en/downloadoverview, so it can be reconstructed with:\r\n```\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.01\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.02\r\ncat UNv1.0.en-ru.tar.gz.0* > UNv1.0.en-ru.tar.gz\r\n```\r\nit has other languages as well, in case https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/ is gone",
"Further, `nlp.load_dataset('wmt19', 'ru-en')` has only the `train` and `val` datasets. `test` is missing.\r\n\r\nFixed locally for summarization needs, by running:\r\n```\r\npip install sacrebleu\r\nsacrebleu -t wmt19 -l ru-en --echo src > test.source\r\nsacrebleu -t wmt19 -l ru-en --echo ref > test.target\r\n```\r\nh/t @sshleifer "
] | 1,597,080,771,000 | 1,597,122,454,000 | null | CONTRIBUTOR | null | I have encountered multiple issues while trying to:
```
import nlp
dataset = nlp.load_dataset('wmt16', 'ru-en')
metric = nlp.load_metric('wmt16')
```
1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and now it worked. So it must have been some outdated dependencies that `pip install -e ".[dev]" ` fixed.
2. it was downloading at 60 kb/s - almost 5 hours to get the dataset. It was downloading all pairs and not just the one I asked for.
I tried the same code with `wmt19` in parallel and it took a few secs to download and it only fetched data for the requested pair. (but it failed too, see below)
3. my machine crashed, and when I retried I got:
```
Traceback (most recent call last):
File "./download.py", line 9, in <module>
dataset = nlp.load_dataset('wmt16', 'ru-en')
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 549, in load_dataset
download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 449, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 422, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/stas/anaconda3/envs/main/lib/python3.7/os.py", line 221, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/stas/.cache/huggingface/datasets/wmt16/ru-en/1.0.0/4d8269cdd971ed26984a9c0e4a158e0c7afc8135fac8fb8ee43ceecf38fd422d.incomplete'
```
it can't handle resumes, but it doesn't allow a fresh start either. I had to delete the incomplete directory manually.
4. and finally when it downloaded the dataset, it then failed to fetch the metrics:
```
Traceback (most recent call last):
File "./download.py", line 15, in <module>
metric = nlp.load_metric('wmt16')
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 442, in load_metric
module_path, hash = prepare_module(path, download_config=download_config, dataset=False)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 258, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 198, in cached_path
local_files_only=download_config.local_files_only,
File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 356, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/metrics/wmt16/wmt16.py
```
5. If I run the same code with `wmt19`, it fails too:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/488/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/487/comments | https://api.github.com/repos/huggingface/datasets/issues/487/events | https://github.com/huggingface/datasets/pull/487 | 676,143,029 | MDExOlB1bGxSZXF1ZXN0NDY1NTA1NjQy | 487 | Fix elasticsearch result ids returning as strings | {
"login": "sai-prasanna",
"id": 3595526,
"node_id": "MDQ6VXNlcjM1OTU1MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sai-prasanna",
"html_url": "https://github.com/sai-prasanna",
"followers_url": "https://api.github.com/users/sai-prasanna/followers",
"following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}",
"gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions",
"organizations_url": "https://api.github.com/users/sai-prasanna/orgs",
"repos_url": "https://api.github.com/users/sai-prasanna/repos",
"events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}",
"received_events_url": "https://api.github.com/users/sai-prasanna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"It looks like you need to rebase from master to fix the CI. Could you do that please ?"
] | 1,597,066,631,000 | 1,598,870,566,000 | 1,598,870,566,000 | CONTRIBUTOR | null | I am using the latest elasticsearch binary and master of nlp. For me elasticsearch searches failed because the resultant "id_" returned for searches are strings, but our library assumes them to be integers. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/487/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/487",
"html_url": "https://github.com/huggingface/datasets/pull/487",
"diff_url": "https://github.com/huggingface/datasets/pull/487.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/487.patch",
"merged_at": 1598870566000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/486/comments | https://api.github.com/repos/huggingface/datasets/issues/486/events | https://github.com/huggingface/datasets/issues/486 | 675,649,034 | MDU6SXNzdWU2NzU2NDkwMzQ= | 486 | Bookcorpus data contains pretokenized text | {
"login": "orsharir",
"id": 99543,
"node_id": "MDQ6VXNlcjk5NTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orsharir",
"html_url": "https://github.com/orsharir",
"followers_url": "https://api.github.com/users/orsharir/followers",
"following_url": "https://api.github.com/users/orsharir/following{/other_user}",
"gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orsharir/subscriptions",
"organizations_url": "https://api.github.com/users/orsharir/orgs",
"repos_url": "https://api.github.com/users/orsharir/repos",
"events_url": "https://api.github.com/users/orsharir/events{/privacy}",
"received_events_url": "https://api.github.com/users/orsharir/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`).\r\nDo you know if there exist some copies without this issue ?\r\nHow would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do it. Could you provide more details ?",
"I'm afraid that I don't know how to obtain the original BookCorpus data. I believe this version came from an anonymous Google Drive link posted in another issue.\r\n\r\nGoing through the raw text in this version, it's apparent that NLTK's TreebankWordTokenizer was applied on it (I gave some examples in my original post), followed by:\r\n`' '.join(tokens)`\r\nYou can retrieve the tokenization by splitting on whitespace. You can then \"detokenize\" it with TreebankWordDetokenizer class of NLTK (though, as I suggested, use the fixed version in my repo). This will bring the text closer to its original form, but some steps of TreebankWordTokenizer are destructive, so it wouldn't be one-to-one. Something along the lines of the following should work:\r\n```\r\ntreebank_detokenizer = nltk.tokenize.treebank.TreebankWordDetokenizer()\r\ndb = nlp.load_dataset('bookcorpus', split=nlp.Split.TRAIN)\r\ndb = db.map(lambda x: treebank_detokenizer.detokenize(x['text'].split()))\r\n```\r\n\r\nRegarding other issues beyond the above, I'm afraid that I can't help with that.",
"Ok I get it, that would be very cool indeed\r\n\r\nWhat kinds of patterns the detokenizer can't retrieve ?",
"The TreebankTokenizer makes some assumptions about whitespace, parentheses, quotation marks, etc. For instance, while tokenizing the following text:\r\n```\r\nDwayne \"The Rock\" Johnson\r\n```\r\nwill result in:\r\n```\r\nDwayne `` The Rock '' Johnson\r\n```\r\nwhere the left and right quotation marks are turned into distinct symbols. Upon reconstruction, we can attach the left part to its token on the right, and respectively for the right part. However, the following texts would be tokenized exactly the same:\r\n```\r\nDwayne \" The Rock \" Johnson\r\nDwayne \" The Rock\" Johnson\r\nDwayne \" The Rock\" Johnson\r\n...\r\n```\r\nIn the above examples, the detokenizer would correct these inputs into the canonical text\r\n```\r\nDwayne \"The Rock\" Johnson\r\n```\r\nHowever, there are cases where there the solution cannot easily be inferred (at least without a true LM - this tokenizer is just a bunch of regexes). For instance, in cases where you have a fragment that contains the end of quote, but not its beginning, plus an accidental space:\r\n```\r\n... and it sounds fantastic, \" he said.\r\n```\r\nIn the above case, the tokenizer would assume that the quotes refer to the next token, and so upon detokenization it will result in the following mistake:\r\n```\r\n... and it sounds fantastic, \"he said.\r\n```\r\n\r\nWhile these are all odd edge cases (the basic assumptions do make sense), in noisy data they can occur, which is why I mentioned that the detokenizer cannot restore the original perfectly.\r\n",
"To confirm, since this is preprocessed, this was not the exact version of the Book Corpus used to actually train the models described here (particularly Distilbert)? https://huggingface.co/datasets/bookcorpus\r\n\r\nOr does this preprocessing exactly match that of the papers?",
"I believe these are just artifacts of this particular source. It might be better to crawl it again, or use another preprocessed source, as found here: https://github.com/soskek/bookcorpus ",
"Yes actually the BookCorpus on hugginface is based on [this](https://github.com/soskek/bookcorpus/issues/24#issuecomment-643933352). And I kind of regret naming it as \"BookCorpus\" instead of something like \"BookCorpusLike\".\r\n\r\nBut there is a good news ! @shawwn has replicated BookCorpus in his way, and also provided a link to download the plain text files. see [here](https://github.com/soskek/bookcorpus/issues/27). There is chance we can have a \"OpenBookCorpus\" !"
] | 1,596,956,004,000 | 1,601,467,264,000 | null | CONTRIBUTOR | null | It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways that are incompatible with how, for instance, BERT's WordPiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end quotes, respectively.
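As a rough illustration of what reversing this pretokenization looks like (the approach itself is described in the next paragraph), here is a minimal sketch using NLTK's `TreebankWordDetokenizer`; the example string is invented, not an actual BookCorpus line:

```python
# Rough sketch only: the input string below is made up for illustration.
from nltk.tokenize.treebank import TreebankWordDetokenizer

detok = TreebankWordDetokenizer()
pretokenized = "they did n't know what to do , '' she said ."
restored = detok.detokenize(pretokenized.split())
# roughly: they didn't know what to do," she said.
```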
On my own projects, I just run the data through NLTK's TreebankWordDetokenizer to reverse the tokenization (as best as possible). I think it would be beneficial to apply this transformation directly on your remote cached copy of the dataset. If you choose to do so, I would also suggest using my fork of NLTK, which fixes several bugs in their detokenizer (I've opened a pull request, but they've yet to respond): https://github.com/nltk/nltk/pull/2575 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/486/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/485/comments | https://api.github.com/repos/huggingface/datasets/issues/485/events | https://github.com/huggingface/datasets/issues/485 | 675,595,393 | MDU6SXNzdWU2NzU1OTUzOTM= | 485 | PAWS dataset first item is header | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,924,325,000 | 1,597,830,601,000 | 1,597,830,601,000 | CONTRIBUTOR | null | ```
import nlp
dataset = nlp.load_dataset('xtreme', 'PAWS-X.en')
dataset['test'][0]
```
prints the following
```
{'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'}
```
dataset['test'][0] should probably be the first item in the dataset, not just a dictionary mapping the column names to themselves. Probably just need to ignore the first row in the dataset by default or something like that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/485/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/484/comments | https://api.github.com/repos/huggingface/datasets/issues/484/events | https://github.com/huggingface/datasets/pull/484 | 675,088,983 | MDExOlB1bGxSZXF1ZXN0NDY0NjY1NTU4 | 484 | update mirror for RT dataset | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks for adding this mirror link :)\r\n\r\nCould you run the following command to update the json file `dataset_infos.json` used to verify the integrity of the downloaded file ?\r\n\r\n```\r\nnlp-cli test ./datasets/rotten_tomatoes --save_infos --ignore_verifications\r\n```",
"done! @lhoestq ",
"the build_doc CI fail comes from master and has been fixed on master",
"done @thomwolf @lhoestq "
] | 1,596,813,945,000 | 1,598,276,017,000 | 1,598,276,017,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/484/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/484",
"html_url": "https://github.com/huggingface/datasets/pull/484",
"diff_url": "https://github.com/huggingface/datasets/pull/484.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/484.patch",
"merged_at": 1598276017000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/483/comments | https://api.github.com/repos/huggingface/datasets/issues/483/events | https://github.com/huggingface/datasets/issues/483 | 675,080,694 | MDU6SXNzdWU2NzUwODA2OTQ= | 483 | rotten tomatoes movie review dataset taken down | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"found a mirror: https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz",
"fixed in #484 ",
"Closing this one. Thanks again @jxmorris12 for taking care of this :)"
] | 1,596,813,121,000 | 1,599,557,794,000 | 1,599,557,793,000 | CONTRIBUTOR | null | In an interesting twist of events, the individual who created the movie review dataset seems to have left Cornell, and their webpage has been removed, along with the dataset itself (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's not downloadable anymore. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/483/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/482/comments | https://api.github.com/repos/huggingface/datasets/issues/482/events | https://github.com/huggingface/datasets/issues/482 | 674,851,147 | MDU6SXNzdWU2NzQ4NTExNDc= | 482 | Bugs : dataset.map() is frozen on ELI5 | {
"login": "ratthachat",
"id": 56621342,
"node_id": "MDQ6VXNlcjU2NjIxMzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratthachat",
"html_url": "https://github.com/ratthachat",
"followers_url": "https://api.github.com/users/ratthachat/followers",
"following_url": "https://api.github.com/users/ratthachat/following{/other_user}",
"gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions",
"organizations_url": "https://api.github.com/users/ratthachat/orgs",
"repos_url": "https://api.github.com/users/ratthachat/repos",
"events_url": "https://api.github.com/users/ratthachat/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratthachat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This comes from an overflow in pyarrow's array.\r\nIt is stuck inside the loop that reduces the batch size to avoid the overflow.\r\nI'll take a look",
"I created a PR to fix the issue.\r\nIt was due to an overflow check that handled badly an empty list.\r\n\r\nYou can try the changes by using \r\n```\r\n!pip install git+https://github.com/huggingface/nlp.git@fix-bad-type-in-overflow-check\r\n```\r\n\r\nAlso I noticed that the first 1000 examples have an empty list in the `title_urls` field. The feature type inference in `.map` will consider it `null` because of that, and it will crash when it encounter the next example with a `title_urls` that is not empty.\r\n\r\nTherefore to fix that, what you can do for now is increase the writer batch size so that the feature inference will take into account at least one example with a non-empty `title_urls`:\r\n\r\n```python\r\n# default batch size is 1_000 and it's not enough for feature type inference because of empty lists\r\nvalid_dataset = valid_dataset.map(make_input_target, writer_batch_size=3_000) \r\n```\r\n\r\nI was able to run the frozen cell with these changes.",
"@lhoestq Perfect and thank you very much!!\r\nClose the issue.",
"@lhoestq mapping the function `make_input_target` was passed by your fixing.\r\n\r\nHowever, there is another error in the final step of `valid_dataset.map(convert_to_features, batched=True)`\r\n\r\n`ArrowInvalid: Could not convert Thepiratebay.vg with type str: converting to null type`\r\n(The [same colab notebook above with new error message](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing#scrollTo=5sRrJ3_C8rLt))\r\n\r\nDo you have some ideas? (I am really sorry I could not debug it by myself since I never used `pyarrow` before) \r\nNote that `train_dataset.map(convert_to_features, batched=True)` can be run successfully even though train_dataset is 27x bigger than `valid_dataset` so I believe the problem lies in some field of `valid_dataset` again .",
"I got this issue too and fixed it by specifying `writer_batch_size=3_000` in `.map`.\r\nThis is because Arrow didn't expect `Thepiratebay.vg` in `title_urls `, as all previous examples have empty lists in `title_urls `",
"I am clear now . Thank so much again Quentin!"
] | 1,596,788,615,000 | 1,597,241,626,000 | 1,597,190,115,000 | NONE | null | Hi Huggingface Team!
Thank you guys once again for this amazing repo.
I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)
However, when I run `dataset.map()` on ELI5 to prepare `input_text, target_text`, `dataset.map` **freezes** within the first few hundred examples. By contrast, this works totally fine on SQuAD (80,000 examples). Both `nlp` versions 0.3.0 and 0.4.0 freeze, and trying various `pyarrow` versions (0.16.0 / 0.17.0 / 1.0.0) gives the same frozen process.
Reproducible code can be found on [this colab notebook ](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing), where I also show that the same mapping function works fine on SQUAD, so the problem is likely due to ELI5 somehow.
----------------------------------------
**More info:** instead of `map`, if I run a `for` loop and apply the function myself, there's no error and it finishes within 10 seconds. However, an `nlp` dataset is immutable (I couldn't manually assign a new key-value pair to the `dataset` object); a sketch of that workaround is shown below.
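For reference, here is a rough sketch of the manual loop described above — the output field names of `make_input_target` are assumptions taken from the notebook, not guaranteed names:

```python
# Hypothetical sketch of the manual workaround (field names are assumptions).
inputs, targets = [], []
for example in valid_dataset:          # plain Python iteration, no Arrow writing involved
    out = make_input_target(example)
    inputs.append(out["input_text"])
    targets.append(out["target_text"])
# this finishes in a few seconds, while valid_dataset.map(make_input_target) hangs
```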
I also notice that SQuAD texts are quite clean while ELI5 texts contain many special characters; I'm not sure if this is the cause. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/482/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/481/comments | https://api.github.com/repos/huggingface/datasets/issues/481/events | https://github.com/huggingface/datasets/pull/481 | 674,567,389 | MDExOlB1bGxSZXF1ZXN0NDY0MjM2MTA1 | 481 | Apply utf-8 encoding to all datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Not sure why the AWS test is failing - perhaps I made too many concurrent CI builds 😢. Can someone please rerun the CI to check the error is not on my end?",
"I pushed an improved docstring and the unit tests now pass, which suggests the previous failure on AWS was simply a timeout error. \r\n\r\nFor some reason the docs are now failing to build, but does not seem related to my changes:\r\n```\r\nWarning, treated as error:\r\n/home/circleci/nlp/src/nlp/dataset_dict.py:docstring of nlp.DatasetDict.filter:27:Inline interpreted text or phrase reference start-string without end-string.\r\nmake: *** [Makefile:20: html] Error 2\r\n```\r\n\r\nAny ideas what's going wrong?",
"The build_doc fail has been fixed on master.\r\nIt was due to the latest update of sphinx that has some issues, so I pinned the previous version for now.",
"I noticed that you also changed the Apache Beam `open` to also use utf-8. However it doesn't have an `encoding` parameter.\r\nTherefore you should ignore lines like\r\n\r\n```python\r\nbeam.io.filesystems.FileSystems.open(filepath)\r\n```\r\n\r\nI guess you could add a rule to your regex to only include the `open` call that have a space right before it.",
"Good catch @lhoestq! Your suggestion to match on `open(...)` with a whitespace was a great idea - it allowed me to simplify the regexp considerably 😄.\r\n\r\nI fixed the Apache Beam false positives and also caught a few problems in `json.load()`, e.g.\r\n```python\r\nrelation_name_map = json.load(open(rel_info), encoding='utf-8')\r\n```\r\n\r\nI've tested that the new regexp doesn't reintroduce these false positives, so I think the PR is ready for another review.",
"Ok to merge this @lhoestq ?"
] | 1,596,744,129,000 | 1,597,911,368,000 | 1,597,911,368,000 | MEMBER | null | ## Description
This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function
```python
def apply_encoding_on_file_open(filepath: str):
"""Apply UTF-8 encoding for all instances where a non-binary file is opened."""
with open(filepath, 'r', encoding='utf-8') as input_file:
regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)")
input_text = input_file.read()
match = regexp.search(input_text)
if match:
output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text)
with open(filepath, 'w', encoding='utf-8') as output_file:
output_file.write(output)
```
to perform the replacement.
Note:
1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly
2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.
3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/`
4. I have implemented a unit test that should catch missing encodings in future CI runs
Closes #468 and possibly #347 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/481/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/481",
"html_url": "https://github.com/huggingface/datasets/pull/481",
"diff_url": "https://github.com/huggingface/datasets/pull/481.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/481.patch",
"merged_at": 1597911368000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/480/comments | https://api.github.com/repos/huggingface/datasets/issues/480/events | https://github.com/huggingface/datasets/pull/480 | 674,245,959 | MDExOlB1bGxSZXF1ZXN0NDYzOTcwNjQ2 | 480 | Column indexing hotfix | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Looks good to me as well but we'll want to add a test indeed.\r\nYou can add one if you have time @TevenLeScao.\r\nOtherwise, we'll do it when we are back with Quentin. ",
"I fixed it in #494 "
] | 1,596,713,825,000 | 1,597,221,370,000 | 1,597,221,370,000 | MEMBER | null | As observed for example in #469 , currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates functional 0.3.0. code. In the future it'd probably be nice to have a test there. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/480/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/480",
"html_url": "https://github.com/huggingface/datasets/pull/480",
"diff_url": "https://github.com/huggingface/datasets/pull/480.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/480.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/479/comments | https://api.github.com/repos/huggingface/datasets/issues/479/events | https://github.com/huggingface/datasets/pull/479 | 673,905,407 | MDExOlB1bGxSZXF1ZXN0NDYzNjkxMjA0 | 479 | add METEOR metric | {
"login": "vegarab",
"id": 24683907,
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegarab",
"html_url": "https://github.com/vegarab",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"repos_url": "https://api.github.com/users/vegarab/repos",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Really nice !\r\nThanks for adding this one.\r\n\r\nI noticed that there are some '-' that are left in the description in the middle of some workds. It migh come from copy-pasting the pdf paper. ex: `im-provement`. Could you fix that please ?",
"@lhoestq \r\nLinebreaks have been removed! Note that there are still a few compound words that are hyphenated intentionally. ",
"I think you just need to rebase from master to fix the CI :)",
"Yes I made the mistake of simply merging master into this branch. A rebase seems to be neater :) Although all the commits ended up being added twice. I assume you just squash them into a single one on merge anyways?",
"Yes indeed they'll be squashed"
] | 1,596,669,180,000 | 1,597,844,349,000 | 1,597,844,349,000 | CONTRIBUTOR | null | Added the METEOR metric. Can be used like this:
```python
import nlp
meteor = nlp.load_metric('metrics/meteor')
meteor.compute(["some string", "some string"], ["some string", "some similar string"])
# {'meteor': 0.6411637931034483}
meteor.add("some string", "some string")
meteor.add('some string", "some similar string")
meteor.compute()
# {'meteor': 0.6411637931034483}
```
Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/479/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/479",
"html_url": "https://github.com/huggingface/datasets/pull/479",
"diff_url": "https://github.com/huggingface/datasets/pull/479.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/479.patch",
"merged_at": 1597844349000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/478/comments | https://api.github.com/repos/huggingface/datasets/issues/478/events | https://github.com/huggingface/datasets/issues/478 | 673,178,317 | MDU6SXNzdWU2NzMxNzgzMTc= | 478 | Export TFRecord to GCP bucket | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Nevermind, I restarted my python session and it worked fine...\r\n\r\n---\r\n\r\nI had an authentification error, and I authenticated from another terminal. After that, no more error but it was not working. Restarting the sessions makes it work :)"
] | 1,596,589,712,000 | 1,596,590,497,000 | 1,596,590,496,000 | NONE | null | Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')`
Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket.
`dataset.export('local.tfrecord')` works fine,
but `dataset.export('gs://my_bucket/x.tfrecord')` does not work.
There is no error message, I just can't find the file on my bucket...
---
Looking at the code, `nlp` is using `tf.data.experimental.TFRecordWriter`, while I was using `tf.io.TFRecordWriter`.
**What's the difference between those two? How can I write TFRecord files directly to a GCP bucket?**
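For reference, a minimal sketch of the manual `tf.io.TFRecordWriter` approach mentioned at the top of this issue, which can write straight to a `gs://` path when TF's GCS filesystem support is available (the example serialization below is an assumption, not what `export()` produces):

```python
# Hypothetical sketch: the feature layout is an assumption for illustration only.
import tensorflow as tf

def serialize(ids):
    feature = {"input_ids": tf.train.Feature(int64_list=tf.train.Int64List(value=ids))}
    return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()

# tf.io.TFRecordWriter accepts gs:// paths directly
with tf.io.TFRecordWriter("gs://my_bucket/x.tfrecord") as writer:
    for ids in [[1, 2, 3], [4, 5, 6]]:
        writer.write(serialize(ids))
```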
@jarednielsen @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/478/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/477/comments | https://api.github.com/repos/huggingface/datasets/issues/477/events | https://github.com/huggingface/datasets/issues/477 | 673,142,143 | MDU6SXNzdWU2NzMxNDIxNDM= | 477 | Overview.ipynb throws exceptions with nlp 0.4.0 | {
"login": "mandy-li",
"id": 23109219,
"node_id": "MDQ6VXNlcjIzMTA5MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23109219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mandy-li",
"html_url": "https://github.com/mandy-li",
"followers_url": "https://api.github.com/users/mandy-li/followers",
"following_url": "https://api.github.com/users/mandy-li/following{/other_user}",
"gists_url": "https://api.github.com/users/mandy-li/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mandy-li/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mandy-li/subscriptions",
"organizations_url": "https://api.github.com/users/mandy-li/orgs",
"repos_url": "https://api.github.com/users/mandy-li/repos",
"events_url": "https://api.github.com/users/mandy-li/events{/privacy}",
"received_events_url": "https://api.github.com/users/mandy-li/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks for reporting this issue\r\n\r\nThere was a bug where numpy arrays would get returned instead of tensorflow tensors.\r\nThis is fixed on master.\r\n\r\nI tried to re-run the colab and encountered this error instead:\r\n\r\n```\r\nAttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_tensor'\r\n```\r\n\r\nThis is because the dataset returns a Tensor and not a RaggedTensor.\r\nBut I think we should always return a RaggedTensor unless the length of the sequence is fixed (it that case they can be stack into a Tensor).",
"Hi, I got another error (on Colab):\r\n\r\n```python\r\n# You can read a few attributes of the datasets before loading them (they are python dataclasses)\r\nfrom dataclasses import asdict\r\n\r\nfor key, value in asdict(datasets[6]).items():\r\n print('👉 ' + key + ': ' + str(value))\r\n\r\n---------------------------------------------------------------------------\r\n\r\nTypeError Traceback (most recent call last)\r\n\r\n<ipython-input-6-b8ace6c227a2> in <module>()\r\n 2 from dataclasses import asdict\r\n 3 \r\n----> 4 for key, value in asdict(datasets[6]).items():\r\n 5 print('👉 ' + key + ': ' + str(value))\r\n\r\n/usr/local/lib/python3.6/dist-packages/dataclasses.py in asdict(obj, dict_factory)\r\n 1008 \"\"\"\r\n 1009 if not _is_dataclass_instance(obj):\r\n-> 1010 raise TypeError(\"asdict() should be called on dataclass instances\")\r\n 1011 return _asdict_inner(obj, dict_factory)\r\n 1012 \r\n\r\nTypeError: asdict() should be called on dataclass instances\r\n```",
"Indeed we'll update the cola with the new release coming up this week."
] | 1,596,583,095,000 | 1,627,970,535,000 | 1,627,970,535,000 | NONE | null | with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-48907f2ad433> in <module>
----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
<ipython-input-5-48907f2ad433> in <dictcomp>(.0)
----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
AttributeError: 'numpy.ndarray' object has no attribute 'to_tensor' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/477/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/476/comments | https://api.github.com/repos/huggingface/datasets/issues/476/events | https://github.com/huggingface/datasets/pull/476 | 672,991,854 | MDExOlB1bGxSZXF1ZXN0NDYyOTMyMTgx | 476 | CheckList | {
"login": "marcotcr",
"id": 698010,
"node_id": "MDQ6VXNlcjY5ODAxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/698010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcotcr",
"html_url": "https://github.com/marcotcr",
"followers_url": "https://api.github.com/users/marcotcr/followers",
"following_url": "https://api.github.com/users/marcotcr/following{/other_user}",
"gists_url": "https://api.github.com/users/marcotcr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcotcr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcotcr/subscriptions",
"organizations_url": "https://api.github.com/users/marcotcr/orgs",
"repos_url": "https://api.github.com/users/marcotcr/repos",
"events_url": "https://api.github.com/users/marcotcr/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcotcr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"> Also, a little out of my depth there, but would there be a way to have the default pip install checklist command not require mysql and mariadb to be installed? Feels like that might be a source of confusion for users.\r\n\r\nI removed the pattern dependency, mysql is not a requirement anymore. I'm not sure where mariadb is coming from. "
] | 1,596,565,925,000 | 1,599,161,648,000 | null | NONE | null | Sorry for the large pull request.
- Added checklists as datasets. I can't run `test_load_real_dataset` (see #474), but I can load the datasets successfully as shown in the example notebook
- Added a checklist wrapper | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/476/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/476/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/476",
"html_url": "https://github.com/huggingface/datasets/pull/476",
"diff_url": "https://github.com/huggingface/datasets/pull/476.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/476.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/475/comments | https://api.github.com/repos/huggingface/datasets/issues/475/events | https://github.com/huggingface/datasets/pull/475 | 672,884,595 | MDExOlB1bGxSZXF1ZXN0NDYyODQzMzQz | 475 | misc. bugs and quality of life | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Cool thanks, I made those changes. LMK if you think it's ready for merge.",
"Ok to merge for me"
] | 1,596,555,149,000 | 1,597,698,848,000 | 1,597,698,847,000 | CONTRIBUTOR | null | A few misc. bugs and QOL improvements that I've come across in using the library. Let me know if you don't like any of them and I can adjust/remove them.
1. Printing datasets without a description field throws an error when formatting the `single_line_description`. This fixes that, and also adds some formatting to the repr to make it slightly more readable.
```
>>> print(list_datasets()[0])
nlp.ObjectInfo(
id='aeslc',
description='A collection of email messages of employees in the Enron Corporation.There are two features: - email_body: email body text. - subject_line: email subject text.',
files=[nlp.S3Object('aeslc.py'), nlp.S3Object('dataset_infos.json'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/dev/allen-p_inbox_29.subject'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/test/allen-p_inbox_24.subject'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/train/allen-p_inbox_20.subject'), nlp.S3Object('dummy/1.0.0/dummy_data.zip'), nlp.S3Object('urls_checksums/checksums.txt')]
)
```
2. Add id-only option to `list_datasets` and `list_metrics` to allow the user to easily print out just the names of the datasets & metrics. I often found myself annoyed that this took so many strokes to do.
```python
[dataset.id for dataset in list_datasets()] # before
list_datasets(id_only=True) # after
```
3. Fix null-seed randomization caching. When using `train_test_split` and `shuffle`, the computation was being cached even without a seed or generator being passed. The result was that calling `.shuffle` more than once on the same dataset didn't do anything without passing a distinct seed or generator. Likewise with `train_test_split`.
4. Indexing by iterables of bool. I added support for passing an iterable of type bool to `_getitem` as a numpy/pandas-like indexing method. Let me know if you think it's redundant with `filter` (I know it's not optimal memory-wise), but I think it's nice to have as a lightweight alternative to do simple things without having to create a copy of the entire dataset, e.g.
```python
dataset[dataset['label'] == 0] # numpy-like bool indexing to look at instances with labels of 0
```
5. Add an `input_column` argument to `map` and `filter`, which allows you to filter/map on a particular column rather than passing the whole dict to the function. Also adds `fn_kwargs` to be passed to the function. I think these together make mapping much cleaner in many cases such as mono-column tokenization:
```python
# before
dataset = dataset.map(lambda batch: tokenizer(batch["text"]))
# after
dataset = dataset.map(tokenizer, input_column="text")
dataset = dataset.map(tokenizer, input_column="text", fn_kwargs={"truncation": True, "padding": True})
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/475/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/475/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/475",
"html_url": "https://github.com/huggingface/datasets/pull/475",
"diff_url": "https://github.com/huggingface/datasets/pull/475.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/475.patch",
"merged_at": 1597698847000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/474/comments | https://api.github.com/repos/huggingface/datasets/issues/474/events | https://github.com/huggingface/datasets/issues/474 | 672,407,330 | MDU6SXNzdWU2NzI0MDczMzA= | 474 | test_load_real_dataset when config has BUILDER_CONFIGS that matter | {
"login": "marcotcr",
"id": 698010,
"node_id": "MDQ6VXNlcjY5ODAxMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/698010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcotcr",
"html_url": "https://github.com/marcotcr",
"followers_url": "https://api.github.com/users/marcotcr/followers",
"following_url": "https://api.github.com/users/marcotcr/following{/other_user}",
"gists_url": "https://api.github.com/users/marcotcr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcotcr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcotcr/subscriptions",
"organizations_url": "https://api.github.com/users/marcotcr/orgs",
"repos_url": "https://api.github.com/users/marcotcr/repos",
"events_url": "https://api.github.com/users/marcotcr/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcotcr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"The `data_dir` parameter has been removed. Now the error is `ValueError: Config name is missing`\r\n\r\nAs mentioned in #470 I think we can have one test with the first config of BUILDER_CONFIGS, and another test that runs all of the configs in BUILDER_CONFIGS",
"This was fixed in #527 \r\n\r\nClosing this one, but feel free to re-open if you have other questions"
] | 1,596,498,396,000 | 1,599,490,393,000 | 1,599,490,393,000 | NONE | null | If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error.
I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingface/nlp/blob/master/tests/test_dataset_common.py#L200)). This causes [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/builder.py#L201) to always be false because `config_kwargs` is not `None`. [This line](https://github.com/huggingface/nlp/blob/master/src/nlp/builder.py#L222) will be run instead, which doesn't use `BUILDER_CONFIGS`.
For an example, you can try running the test for lince:
` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lince`
which yields
> E TypeError: __init__() missing 3 required positional arguments: 'colnames', 'classes', and 'label_column' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/474/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/474/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/473/comments | https://api.github.com/repos/huggingface/datasets/issues/473/events | https://github.com/huggingface/datasets/pull/473 | 672,007,247 | MDExOlB1bGxSZXF1ZXN0NDYyMTIwNzU4 | 473 | add DoQA dataset (ACL 2020) | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,454,012,000 | 1,599,758,351,000 | 1,599,133,455,000 | CONTRIBUTOR | null | add DoQA dataset (ACL 2020) http://ixa.eus/node/12931 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/473/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/473/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/473",
"html_url": "https://github.com/huggingface/datasets/pull/473",
"diff_url": "https://github.com/huggingface/datasets/pull/473.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/473.patch",
"merged_at": 1599133454000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/472/comments | https://api.github.com/repos/huggingface/datasets/issues/472/events | https://github.com/huggingface/datasets/pull/472 | 672,000,745 | MDExOlB1bGxSZXF1ZXN0NDYyMTE1MjA4 | 472 | add crd3 dataset | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"This PR was already approved by @lhoestq in #456 . This one just make style to remove some typos"
] | 1,596,453,302,000 | 1,596,453,730,000 | 1,596,453,729,000 | CONTRIBUTOR | null | opening new PR for CRD3 dataset (ACL2020) to fix the circle CI problems | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/472/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/472/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/472",
"html_url": "https://github.com/huggingface/datasets/pull/472",
"diff_url": "https://github.com/huggingface/datasets/pull/472.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/472.patch",
"merged_at": 1596453729000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/471/comments | https://api.github.com/repos/huggingface/datasets/issues/471/events | https://github.com/huggingface/datasets/pull/471 | 671,996,423 | MDExOlB1bGxSZXF1ZXN0NDYyMTExNTU1 | 471 | add reuters21578 dataset | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,452,834,000 | 1,599,127,683,000 | 1,599,127,130,000 | CONTRIBUTOR | null | new PR to add the reuters21578 dataset and fix the circle CI problems. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/471/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/471",
"html_url": "https://github.com/huggingface/datasets/pull/471",
"diff_url": "https://github.com/huggingface/datasets/pull/471.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/471.patch",
"merged_at": 1599127130000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/470/comments | https://api.github.com/repos/huggingface/datasets/issues/470/events | https://github.com/huggingface/datasets/pull/470 | 671,952,276 | MDExOlB1bGxSZXF1ZXN0NDYyMDc0MzQ0 | 470 | Adding IWSLT 2017 dataset. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Ok I tried to add the dummy dataset (I actually modified the dummy_data command to generate them for me because it was too painful to do that manually).\r\n\r\nThe dummy_data test seems to work:\r\n```bash\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_iwslt2017\r\n```\r\n\r\nHowever the test on the full data fails, because the `**config_kwargs` don't include `pair, multilingual`.\r\nI could add a default parameter for the Config (but that feels broken, how can one config be the \"default\" ?). If I do I still have errors, saying that something within the downloader is a directory so I'm not sure where that comes from.\r\n\r\nI can share my auto_zip dummy data code if you want (I tried to keep it clean). [Edit: it's [here](https://github.com/Narsil/nlp/tree/auto_zip)]. \r\nThe way it works is that it just keeps X line from the beginning of the original files, and Y lines at the end. It's good enough for my usage, but I guess it could work for most data files out there (as long as they're real text and not binary format)",
"The slow test doesn't support dataset that require config parameters that don't have default values.\r\n\r\nTo improve that we can replace it by two tests:\r\n- one test that loads the default config (it can simply be the first config of the config lists for example)\r\n- one tests that iterate over all configs and load them all one by one\r\n\r\nBy using the configs inside the builder config lists, there is no need to instantiate new configs, so the missing parameter error doesn't happen.\r\n\r\nDoes that sound good to you ?",
"Seems fair.\r\nHowever I'm unsure what I should do ?\r\n\r\nShould I wait for #527 to pass and rebase and the command will be the same ?\r\nShould I update something ?",
"I think everything is fine on your side. Thanks for adding this dataset :)\r\n\r\nI think it's better to wait for the slow test to be updated if you don't mind.\r\n",
"Sure ! :)",
"Thanks for fixing the isort/black changes :)\r\nFeel free to merge if it's good for you @Narsil "
] | 1,596,448,359,000 | 1,599,482,010,000 | 1,599,482,010,000 | CONTRIBUTOR | null | Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*.
```
Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English
Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair)
```
I'm unsure how to handle bilingual vs multilingual. Given the `nlp` architecture, a Config option seems to be the way to go; however, it might be a bit confusing to have different language pairs with different options. Using just language pairs is not viable as English to German exists in both.
Any opinion on how that should be done?
EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.
EDIT : Could be interesting for #438 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/470/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/470/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/470",
"html_url": "https://github.com/huggingface/datasets/pull/470",
"diff_url": "https://github.com/huggingface/datasets/pull/470.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/470.patch",
"merged_at": 1599482010000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/469/comments | https://api.github.com/repos/huggingface/datasets/issues/469/events | https://github.com/huggingface/datasets/issues/469 | 671,876,963 | MDU6SXNzdWU2NzE4NzY5NjM= | 469 | invalid data type 'str' at _convert_outputs in arrow_dataset.py | {
"login": "Murgates",
"id": 30617486,
"node_id": "MDQ6VXNlcjMwNjE3NDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/30617486?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Murgates",
"html_url": "https://github.com/Murgates",
"followers_url": "https://api.github.com/users/Murgates/followers",
"following_url": "https://api.github.com/users/Murgates/following{/other_user}",
"gists_url": "https://api.github.com/users/Murgates/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Murgates/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Murgates/subscriptions",
"organizations_url": "https://api.github.com/users/Murgates/orgs",
"repos_url": "https://api.github.com/users/Murgates/repos",
"events_url": "https://api.github.com/users/Murgates/events{/privacy}",
"received_events_url": "https://api.github.com/users/Murgates/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Did you try to set the output format to pytorch ? (or tensorflow if you're using tensorflow)\r\nIt can be done with `dataset.set_format(\"torch\", columns=columns)` (or \"tensorflow\").\r\n\r\nNote that for pytorch, string columns can't be converted to `torch.Tensor`, so you have to specify in `columns=` the list of columns you want to keep (`input_ids` for example)",
"Hello . Yes, I did set the output format as below for the two columns \r\n\r\n `train_dataset.set_format('torch',columns=['Text','Label'])`\r\n ",
"I think you're having this issue because you try to format strings as pytorch tensors, which is not possible.\r\nIndeed by having \"Text\" in `columns=['Text','Label']`, you try to convert the text values to pytorch tensors.\r\n\r\nInstead I recommend you to first tokenize your dataset using a tokenizer from transformers. For example\r\n\r\n```python\r\nfrom transformers import BertTokenizer\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\ntrain_dataset.map(lambda x: tokenizer(x[\"Text\"]), batched=True)\r\ntrain_dataset.set_format(\"torch\", column=[\"input_ids\"])\r\n```\r\n\r\nAnother way to fix your issue would be to not set the format to pytorch, and leave the dataset as it is by default. In that case, the strings are returned normally when you get examples from your dataloader. It means that you would have to tokenize the examples in the training loop (or using a data collator) though.\r\n\r\nLet me know if you have other questions",
"Hi, actually the thing is I am getting the same error and even after tokenizing them I am passing them through batch_encode_plus.\r\nI dont know what seems to be the problem is. I even converted it into 'pt' while passing them through batch_encode_plus but when I am evaluating my model , i am getting this error\r\n\r\n\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-145-ca218223c9fc> in <module>()\r\n----> 1 val_loss, predictions, true_val = evaluate(dataloader_validation)\r\n 2 val_f1 = f1_score_func(predictions, true_val)\r\n 3 tqdm.write(f'Validation loss: {val_loss}')\r\n 4 tqdm.write(f'F1 Score (Weighted): {val_f1}')\r\n\r\n6 frames\r\n/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py in <genexpr>(.0)\r\n 160 \r\n 161 def __getitem__(self, index):\r\n--> 162 return tuple(tensor[index] for tensor in self.tensors)\r\n 163 \r\n 164 def __len__(self):\r\n\r\nTypeError: new(): invalid data type 'str' ",
"> Hi, actually the thing is I am getting the same error and even after tokenizing them I am passing them through batch_encode_plus.\r\n> I dont know what seems to be the problem is. I even converted it into 'pt' while passing them through batch_encode_plus but when I am evaluating my model , i am getting this error\r\n> \r\n> TypeError Traceback (most recent call last)\r\n> in ()\r\n> ----> 1 val_loss, predictions, true_val = evaluate(dataloader_validation)\r\n> 2 val_f1 = f1_score_func(predictions, true_val)\r\n> 3 tqdm.write(f'Validation loss: {val_loss}')\r\n> 4 tqdm.write(f'F1 Score (Weighted): {val_f1}')\r\n> \r\n> 6 frames\r\n> /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py in (.0)\r\n> 160\r\n> 161 def **getitem**(self, index):\r\n> --> 162 return tuple(tensor[index] for tensor in self.tensors)\r\n> 163\r\n> 164 def **len**(self):\r\n> \r\n> TypeError: new(): invalid data type 'str'\r\n\r\nI got the same error and fix it .\r\nyou can check your input where there may be string contained.\r\nsuch as\r\n```\r\na = [1,2,3,4,'<unk>']\r\ntorch.tensor(a)\r\n```",
"I didn't know tokenizers could return strings in the token ids. Which tokenizer are you using to get this @Doragd ?",
"> I didn't know tokenizers could return strings in the token ids. Which tokenizer are you using to get this @Doragd ?\r\n\r\ni'm sorry that i met this issue in another place (not in huggingface repo). ",
"@akhilkapil do you have strings in your dataset ? When you set the dataset format to \"pytorch\" you should exclude columns with strings as pytorch can't make tensors out of strings"
] | 1,596,440,909,000 | 1,603,357,466,000 | null | NONE | null | I trying to build multi label text classifier model using Transformers lib.
I'm using Transformers NLP to load the dataset; when calling the trainer.train() method, it throws the following error
File "C:\***\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type 'str'
I'm using pyarrow 1.0.0. And I have a simple custom dataset with Text and Integer Label.
Ex: Data
Text , Label #Column Header
I'm facing an Network issue, 1
I forgot my password, 2
Error StackTrace:
File "C:\**\transformers\trainer.py", line 492, in train
for step, inputs in enumerate(epoch_iterator):
File "C:\**\tqdm\std.py", line 1104, in __iter__
for obj in iterable:
File "C:\**\torch\utils\data\dataloader.py", line 345, in __next__
data = self._next_data()
File "C:\**\torch\utils\data\dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "C:\**\nlp\arrow_dataset.py", line 414, in __getitem__
output_all_columns=self._output_all_columns,
File "C:\**\nlp\arrow_dataset.py", line 403, in _getitem
outputs, format_type=format_type, format_columns=format_columns, output_all_columns=output_all_columns
File "C:\**\nlp\arrow_dataset.py", line 343, in _convert_outputs
v = command(v)
TypeError: new(): invalid data type 'str'
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/469/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/469/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/468/comments | https://api.github.com/repos/huggingface/datasets/issues/468/events | https://github.com/huggingface/datasets/issues/468 | 671,622,441 | MDU6SXNzdWU2NzE2MjI0NDE= | 468 | UnicodeDecodeError while loading PAN-X task of XTREME dataset | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Indeed. Solution 1 is the simplest.\r\n\r\nThis is actually a recurring problem.\r\nI think we should scan all the datasets with regexpr to fix the use of `open()` without encodings.\r\nAnd probably add a test in the CI to forbid using this in the future.",
"I'm happy to tackle the broader problem - will open a PR when it's ready!",
"That would be awesome!",
"I've created a simple function that seems to do the trick:\r\n\r\n```python\r\ndef apply_encoding_on_file_open(filepath: str):\r\n \"\"\"Apply UTF-8 encoding for all instances where a non-binary file is opened.\"\"\"\r\n \r\n with open(filepath, 'r', encoding='utf-8') as input_file:\r\n regexp = re.compile(r\"\"\"\r\n (?!.*\\b(?:encoding|rb|wb|wb+|ab|ab+)\\b)\r\n (open)\r\n \\((.*)\\)\r\n \"\"\")\r\n input_text = input_file.read()\r\n match = regexp.search(input_text)\r\n \r\n if match:\r\n print('Found match!', match.group())\r\n # append utf-8 encoding to matching groups in-place\r\n output = regexp.sub(lambda m: m.group()[:-1]+', encoding=\"utf-8\")', input_text)\r\n with open(filepath, 'w', encoding='utf-8') as output_file:\r\n output_file.write(output)\r\n else:\r\n print(\"No match found!\")\r\n```\r\n\r\nThe regexp does a negative lookahead to avoid matching on cases where the encoding is already specified or when binary files are involved.\r\n\r\nFrom an implementation perspective:\r\n\r\n* Would it make sense to include this function in `nlp-cli` so that we can run something like\r\n```\r\nnlp-cli fix_encoding path/to/folder\r\n```\r\nand the command recursively fixes all files in the target?\r\n* What is the desired behaviour in the CI test? Here we could either have a simple script that we run as a `job` in the CI and raises an error if a missing encoding is detected. Alternatively we could incorporate this behaviour into the CLI and run that in the CI.\r\n\r\nPlease let me know what you prefer among the alternatives.\r\n",
"I realised I was overthinking the problem, so decided to just run the regexp over the codebase and make the PR. In other words, we can ignore my comments about using the CLI 😸 "
] | 1,596,377,110,000 | 1,597,911,368,000 | 1,597,911,368,000 | MEMBER | null | Hi 🤗 team!
## Description of the problem
I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-5-1d61f439b843> in <module>
----> 1 dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
528 ignore_verifications = ignore_verifications or save_infos
529 # Download/copy dataset processing script
--> 530 module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
531
532 # Get dataset builder class from the processing script
/usr/local/lib/python3.6/dist-packages/nlp/load.py in prepare_module(path, download_config, dataset, force_local_path, **download_kwargs)
265
266 # Download external imports if needed
--> 267 imports = get_imports(local_path)
268 local_imports = []
269 library_imports = []
/usr/local/lib/python3.6/dist-packages/nlp/load.py in get_imports(file_path)
156 lines = []
157 with open(file_path, mode="r") as f:
--> 158 lines.extend(f.readlines())
159
160 logger.info("Checking %s for additional imports.", file_path)
/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 111: ordinal not in range(128)
```
## Steps to reproduce
Install from nlp's master branch
```python
pip install git+https://github.com/huggingface/nlp.git
```
then run
```python
from nlp import load_dataset
# AmazonPhotos.zip is located in data/
dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
```
## OS / platform details
- `nlp` version: latest from master
- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
## Proposed solution
Either change [line 762](https://github.com/huggingface/nlp/blob/7ada00b1d62f94eee22a7df38c6b01e3f27194b7/datasets/xtreme/xtreme.py#L762) in `xtreme.py` to include UTF-8 encoding:
```
# old
with open(filepath) as f
# new
with open(filepath, encoding='utf-8') as f
```
or raise a warning that suggests setting the locale explicitly, e.g.
```python
import locale
locale.setlocale(locale.LC_ALL, 'C.UTF-8')
```
I have a preference for the first solution. Let me know if you agree and I'll be happy to implement the simple fix! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/468/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/467/comments | https://api.github.com/repos/huggingface/datasets/issues/467/events | https://github.com/huggingface/datasets/pull/467 | 671,580,010 | MDExOlB1bGxSZXF1ZXN0NDYxNzgwMzUy | 467 | DOCS: Fix typo | {
"login": "Bharat123rox",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bharat123rox",
"html_url": "https://github.com/Bharat123rox",
"followers_url": "https://api.github.com/users/Bharat123rox/followers",
"following_url": "https://api.github.com/users/Bharat123rox/following{/other_user}",
"gists_url": "https://api.github.com/users/Bharat123rox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bharat123rox/subscriptions",
"organizations_url": "https://api.github.com/users/Bharat123rox/orgs",
"repos_url": "https://api.github.com/users/Bharat123rox/repos",
"events_url": "https://api.github.com/users/Bharat123rox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bharat123rox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks!"
] | 1,596,358,777,000 | 1,596,376,347,000 | 1,596,359,934,000 | CONTRIBUTOR | null | Fix typo from dictionnary -> dictionary | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/467/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/467",
"html_url": "https://github.com/huggingface/datasets/pull/467",
"diff_url": "https://github.com/huggingface/datasets/pull/467.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/467.patch",
"merged_at": 1596359934000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/466/comments | https://api.github.com/repos/huggingface/datasets/issues/466/events | https://github.com/huggingface/datasets/pull/466 | 670,766,891 | MDExOlB1bGxSZXF1ZXN0NDYxMDEzOTM0 | 466 | [METRICS] Various improvements on metrics | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"The cast function is now called inside `features.encode_example`.\r\nI also added `encode_batch` that was missing.\r\n\r\nMoreover I used the cast function in `Dataset.map` to support torch/tensorflow tensors or numpy arrays inputs.\r\n\r\nThere are tests for tensors inputs in metrics and in .map",
"I think we can merge"
] | 1,596,279,825,000 | 1,597,677,300,000 | 1,597,677,299,000 | MEMBER | null | - Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes
- Allow to directly feed numpy/pytorch/tensorflow/pandas objects in metrics | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/466/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/466/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/466",
"html_url": "https://github.com/huggingface/datasets/pull/466",
"diff_url": "https://github.com/huggingface/datasets/pull/466.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/466.patch",
"merged_at": 1597677299000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/465/comments | https://api.github.com/repos/huggingface/datasets/issues/465/events | https://github.com/huggingface/datasets/pull/465 | 669,889,779 | MDExOlB1bGxSZXF1ZXN0NDYwMjEwODYw | 465 | Keep features after transform | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"One note on features inference:\r\n\r\nif an arrow type is `struct of items` where each item is a `list`, then we return a `dict` in which each item is a `Sequence`.\r\nIt means that we don't use the Sequence <-> dict swap when we infer features.\r\n\r\nIt's fine because the swap is generally used in dataset scripts, in which features are defined (inferred features are discarded)",
"If it's fine for you @thomwolf we can merge this one :) ",
"Yes this is fine I think!"
] | 1,596,206,601,000 | 1,596,220,053,000 | 1,596,220,052,000 | MEMBER | null | When applying a transform like `map`, some features were lost (and inferred features were used).
It was the case for ClassLabel, Translation, etc.
To fix that, I did some modifications in the `ArrowWriter`:
- added the `update_features` parameter. When it's `True`, the features specified by the user (if any) can be updated with inferred features if their types don't match. The `map` transform sets `update_features=True` when writing to the cache file or buffer. Features won't change by default in `map`.
- added the `with_metadata` parameter. If `True`, the `features` (after update) will be written inside the metadata of the schema in this format:
```
{
"huggingface": {"features" : <serialized Features exactly like dataset_info.json>}
}
```
Then, once a dataset is instantiated without info/features, these metadata are used to set the features of the dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/465/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/465",
"html_url": "https://github.com/huggingface/datasets/pull/465",
"diff_url": "https://github.com/huggingface/datasets/pull/465.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/465.patch",
"merged_at": 1596220052000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/464/comments | https://api.github.com/repos/huggingface/datasets/issues/464/events | https://github.com/huggingface/datasets/pull/464 | 669,767,381 | MDExOlB1bGxSZXF1ZXN0NDYwMTAxNDYz | 464 | Add rename, remove and cast in-place operations | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,198,621,000 | 1,596,210,602,000 | 1,596,210,600,000 | MEMBER | null | Add a bunch of in-place operation leveraging the Arrow back-end to rename and remove columns and cast to new features without using the more expensive `map` method.
These methods are added to `Dataset` as well as `DatasetDict`.
Added tests for these new methods and added the methods to the doc.
Naming follows the new pattern with a trailing underscore indicating in-place methods. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/464/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/464",
"html_url": "https://github.com/huggingface/datasets/pull/464",
"diff_url": "https://github.com/huggingface/datasets/pull/464.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/464.patch",
"merged_at": 1596210600000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/463/comments | https://api.github.com/repos/huggingface/datasets/issues/463/events | https://github.com/huggingface/datasets/pull/463 | 669,735,455 | MDExOlB1bGxSZXF1ZXN0NDYwMDcyNjQ1 | 463 | Add dataset/mlsum | {
"login": "RachelKer",
"id": 36986299,
"node_id": "MDQ6VXNlcjM2OTg2Mjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RachelKer",
"html_url": "https://github.com/RachelKer",
"followers_url": "https://api.github.com/users/RachelKer/followers",
"following_url": "https://api.github.com/users/RachelKer/following{/other_user}",
"gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions",
"organizations_url": "https://api.github.com/users/RachelKer/orgs",
"repos_url": "https://api.github.com/users/RachelKer/repos",
"events_url": "https://api.github.com/users/RachelKer/events{/privacy}",
"received_events_url": "https://api.github.com/users/RachelKer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I think the problem is related to `wiki_dpr` dataset which is making the circle CI failed as you can see:\r\n```\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/dummy_psgs_w100_no_embeddings\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/dummy_psgs_w100_with_nq_embeddings\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/psgs_w100_no_embeddings\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/psgs_w100_with_nq_embeddings\r\n\r\n```\r\nI'm facing the same issues with my last commits, I tried to rebase from master but it still not working. Maybe @lhoestq can help with.",
"Hello, I am confused about the next steps I need to do. Did the forced merge solve the issue ?",
"Hello :)\r\nI think you can just rebase from master and it should solve the CI error"
] | 1,596,196,252,000 | 1,598,280,882,000 | 1,598,280,882,000 | CONTRIBUTOR | null | New pull request that should correct the previous errors.
The load_real_data test still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/463/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/463",
"html_url": "https://github.com/huggingface/datasets/pull/463",
"diff_url": "https://github.com/huggingface/datasets/pull/463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/463.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/462/comments | https://api.github.com/repos/huggingface/datasets/issues/462/events | https://github.com/huggingface/datasets/pull/462 | 669,715,547 | MDExOlB1bGxSZXF1ZXN0NDYwMDU0NDgz | 462 | add DoQA (ACL 2020) dataset | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,194,756,000 | 1,596,454,107,000 | 1,596,454,107,000 | CONTRIBUTOR | null | adds DoQA (ACL 2020) dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/462/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/462",
"html_url": "https://github.com/huggingface/datasets/pull/462",
"diff_url": "https://github.com/huggingface/datasets/pull/462.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/462.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/461/comments | https://api.github.com/repos/huggingface/datasets/issues/461/events | https://github.com/huggingface/datasets/pull/461 | 669,703,508 | MDExOlB1bGxSZXF1ZXN0NDYwMDQzNDY5 | 461 | Doqa | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,193,872,000 | 1,596,193,995,000 | 1,596,193,995,000 | CONTRIBUTOR | null | add DoQA (ACL 2020) dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/461/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/461",
"html_url": "https://github.com/huggingface/datasets/pull/461",
"diff_url": "https://github.com/huggingface/datasets/pull/461.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/461.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/460/comments | https://api.github.com/repos/huggingface/datasets/issues/460/events | https://github.com/huggingface/datasets/pull/460 | 669,585,256 | MDExOlB1bGxSZXF1ZXN0NDU5OTM2OTU2 | 460 | Fix KeyboardInterrupt in map and bad indices in select | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks @TevenLeScao for finding this issue",
"Thanks @lhoestq for catching this ❤️"
] | 1,596,185,835,000 | 1,596,195,139,000 | 1,596,195,138,000 | MEMBER | null | If you interrupted a map function while it was writing, the cached file was not discarded.
Therefore the next time you called map, it was loading an incomplete arrow file.
We had the same issue with select if there was a bad indice at one point.
To fix that I used temporary files that are renamed once everything is finished. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/460/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/460",
"html_url": "https://github.com/huggingface/datasets/pull/460",
"diff_url": "https://github.com/huggingface/datasets/pull/460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/460.patch",
"merged_at": 1596195138000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/459/comments | https://api.github.com/repos/huggingface/datasets/issues/459/events | https://github.com/huggingface/datasets/pull/459 | 669,545,437 | MDExOlB1bGxSZXF1ZXN0NDU5OTAxMjEy | 459 | [Breaking] Update Dataset and DatasetDict API | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,183,093,000 | 1,598,430,516,000 | 1,598,430,515,000 | MEMBER | null | This PR contains a few breaking changes so it's probably good to keep it for the next (major) release:
- rename the `flatten`, `drop` and `dictionary_encode_column` methods in `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects as discussed in #166. From now on we should keep the convention of having a trailing underscore for methods which have an in-place effet. I also adopt the conversion of not returning the (self) dataset for these methods. This is different than what PyTorch does for instance (`model.to()` is in-place but return the self model) but I feel like it's a safer approach in terms of UX.
- remove the `dataset.columns` property which returns a low-level Apache Arrow object and should not be used by users. Similarly, remove `dataset. nbytes` which we don't really want to expose in this bare-bone format.
- add a few more properties and methods to `DatasetDict` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/459/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/459",
"html_url": "https://github.com/huggingface/datasets/pull/459",
"diff_url": "https://github.com/huggingface/datasets/pull/459.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/459.patch",
"merged_at": 1598430515000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/458/comments | https://api.github.com/repos/huggingface/datasets/issues/458/events | https://github.com/huggingface/datasets/pull/458 | 668,972,666 | MDExOlB1bGxSZXF1ZXN0NDU5Mzk5ODg2 | 458 | Install CoVal metric from github | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,128,365,000 | 1,596,203,793,000 | 1,596,203,793,000 | MEMBER | null | Changed the import statements in `coval.py` to direct the user to install the original package from github if it's not already installed (the warning will only display properly after merging [PR455](https://github.com/huggingface/nlp/pull/455))
Also changed the function call to use named rather than positional arguments. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/458/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/458/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/458",
"html_url": "https://github.com/huggingface/datasets/pull/458",
"diff_url": "https://github.com/huggingface/datasets/pull/458.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/458.patch",
"merged_at": 1596203793000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/457/comments | https://api.github.com/repos/huggingface/datasets/issues/457/events | https://github.com/huggingface/datasets/pull/457 | 668,898,386 | MDExOlB1bGxSZXF1ZXN0NDU5MzMyOTM1 | 457 | add set_format to DatasetDict + tests | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,124,400,000 | 1,596,130,476,000 | 1,596,130,474,000 | MEMBER | null | Add the `set_format` and `formated_as` and `reset_format` to `DatasetDict`.
Add tests to these for `Dataset` and `DatasetDict`.
Fix some bugs uncovered by the tests for `pandas` formating. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/457/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/457",
"html_url": "https://github.com/huggingface/datasets/pull/457",
"diff_url": "https://github.com/huggingface/datasets/pull/457.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/457.patch",
"merged_at": 1596130474000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/456/comments | https://api.github.com/repos/huggingface/datasets/issues/456/events | https://github.com/huggingface/datasets/pull/456 | 668,723,785 | MDExOlB1bGxSZXF1ZXN0NDU5MTc1MTY0 | 456 | add crd3(ACL 2020) dataset | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,115,715,000 | 1,596,454,132,000 | 1,596,454,132,000 | CONTRIBUTOR | null | This PR adds the **Critical Role Dungeons and Dragons Dataset** published at ACL 2020 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/456/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/456",
"html_url": "https://github.com/huggingface/datasets/pull/456",
"diff_url": "https://github.com/huggingface/datasets/pull/456.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/456.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/455/comments | https://api.github.com/repos/huggingface/datasets/issues/455/events | https://github.com/huggingface/datasets/pull/455 | 668,037,965 | MDExOlB1bGxSZXF1ZXN0NDU4NTk4NTUw | 455 | Add bleurt | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Sorry one nit: Could we use named arguments for the call to BLEURT?\r\n\r\ni.e. \r\n scores = self.scorer.score(references=references, candidates=predictions)\r\n\r\n(i.e. so it is less bug prone)\r\n",
"Following up on Ankur's comment---we are going to drop support for\npositional (not named) arguments in the future releases because it seems to\ncause bugs and confusion. I hope it doesn't create too much of a mess.\n\nLe jeu. 30 juil. 2020 à 10:44, ankparikh <notifications@github.com> a\nécrit :\n\n> Sorry one nit: Could we use named arguments for the call to BLEURT?\n>\n> i.e.\n> scores = self.scorer.score(references=references, candidates=predictions)\n>\n> (i.e. so it is less bug prone)\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/nlp/pull/455#issuecomment-666414514>, or\n> unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABTMRNGAN2PMECS5K4DIHJDR6GBMLANCNFSM4PL323FA>\n> .\n>\n",
"> Following up on Ankur's comment---we are going to drop support for positional (not named) arguments in the future releases because it seems to cause bugs and confusion. I hope it doesn't create too much of a mess. Le jeu. 30 juil. 2020 à 10:44, ankparikh <notifications@github.com> a écrit :\r\n> […](#)\r\n> Sorry one nit: Could we use named arguments for the call to BLEURT? i.e. scores = self.scorer.score(references=references, candidates=predictions) (i.e. so it is less bug prone) — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub <[#455 (comment)](https://github.com/huggingface/nlp/pull/455#issuecomment-666414514)>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/ABTMRNGAN2PMECS5K4DIHJDR6GBMLANCNFSM4PL323FA> .\r\n\r\nChanged @ankparikh @tsellam, thanks for taking a look!",
"We should avoid positional arguments in metrics on our side as well. It's a dangerous source of errors indeed."
] | 1,596,046,112,000 | 1,596,203,774,000 | 1,596,203,774,000 | MEMBER | null | This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues/discussions if it comes up.
In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPL
cc @ankparikh @tsellam | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/455/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/455",
"html_url": "https://github.com/huggingface/datasets/pull/455",
"diff_url": "https://github.com/huggingface/datasets/pull/455.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/455.patch",
"merged_at": 1596203774000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/454/comments | https://api.github.com/repos/huggingface/datasets/issues/454/events | https://github.com/huggingface/datasets/pull/454 | 668,011,577 | MDExOlB1bGxSZXF1ZXN0NDU4NTc3MzA3 | 454 | Create SECURITY.md | {
"login": "ChenZehong13",
"id": 56394989,
"node_id": "MDQ6VXNlcjU2Mzk0OTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/56394989?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChenZehong13",
"html_url": "https://github.com/ChenZehong13",
"followers_url": "https://api.github.com/users/ChenZehong13/followers",
"following_url": "https://api.github.com/users/ChenZehong13/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenZehong13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChenZehong13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenZehong13/subscriptions",
"organizations_url": "https://api.github.com/users/ChenZehong13/orgs",
"repos_url": "https://api.github.com/users/ChenZehong13/repos",
"events_url": "https://api.github.com/users/ChenZehong13/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChenZehong13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,043,414,000 | 1,596,059,152,000 | 1,596,059,152,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/454/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/454",
"html_url": "https://github.com/huggingface/datasets/pull/454",
"diff_url": "https://github.com/huggingface/datasets/pull/454.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/454.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/453/comments | https://api.github.com/repos/huggingface/datasets/issues/453/events | https://github.com/huggingface/datasets/pull/453 | 667,728,247 | MDExOlB1bGxSZXF1ZXN0NDU4MzQwNzky | 453 | add builder tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,596,018,127,000 | 1,596,021,246,000 | 1,596,021,245,000 | MEMBER | null | I added `as_dataset` and `download_and_prepare` to the tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/453/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/453",
"html_url": "https://github.com/huggingface/datasets/pull/453",
"diff_url": "https://github.com/huggingface/datasets/pull/453.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/453.patch",
"merged_at": 1596021245000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/452/comments | https://api.github.com/repos/huggingface/datasets/issues/452/events | https://github.com/huggingface/datasets/pull/452 | 667,498,295 | MDExOlB1bGxSZXF1ZXN0NDU4MTUzNjQy | 452 | Guardian authorship dataset | {
"login": "malikaltakrori",
"id": 25109412,
"node_id": "MDQ6VXNlcjI1MTA5NDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/25109412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/malikaltakrori",
"html_url": "https://github.com/malikaltakrori",
"followers_url": "https://api.github.com/users/malikaltakrori/followers",
"following_url": "https://api.github.com/users/malikaltakrori/following{/other_user}",
"gists_url": "https://api.github.com/users/malikaltakrori/gists{/gist_id}",
"starred_url": "https://api.github.com/users/malikaltakrori/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/malikaltakrori/subscriptions",
"organizations_url": "https://api.github.com/users/malikaltakrori/orgs",
"repos_url": "https://api.github.com/users/malikaltakrori/repos",
"events_url": "https://api.github.com/users/malikaltakrori/events{/privacy}",
"received_events_url": "https://api.github.com/users/malikaltakrori/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Glad you managed to fix the version issue.\r\n\r\nThe command `\r\npython nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs` is supposed to generate a json file `dataset_infos.json` next to your dataset script, but I can't see it in the PR.\r\nCan you make sure you have the json file on your side and that you have pushed it ?",
"Done!",
"Is there anything else that I should do? and would the new dataset be available via the NLP package now? ",
"Sorry I forgot to merge this one ! Doing it now",
"Thanks for the heads up ;)",
"No worries, this is my first contribution to an online package, and I feel very proud it's part of this library :) Thank you very much!"
] | 1,595,989,437,000 | 1,597,936,197,000 | 1,597,936,076,000 | CONTRIBUTOR | null | A new dataset: Guardian news articles for authorship attribution
**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship
**Tests failed:**
Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship
output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...'
Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with another 2 datasets and they failed:
* _glue - OSError: Cannot find data file.
*_newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist
Thank you for letting us contribute to such a huge and important library!
EDIT:
I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence, they were V1.0.0 and V.12.0.0. It seems that the testing code generates testes for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/452/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/452",
"html_url": "https://github.com/huggingface/datasets/pull/452",
"diff_url": "https://github.com/huggingface/datasets/pull/452.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/452.patch",
"merged_at": 1597936075000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/451/comments | https://api.github.com/repos/huggingface/datasets/issues/451/events | https://github.com/huggingface/datasets/pull/451 | 667,210,468 | MDExOlB1bGxSZXF1ZXN0NDU3OTIxNDMx | 451 | Fix csv/json/txt cache dir | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I think this is the way to go but I’m afraid this might be a little slow. I was thinking that we could use a high quality very fast non crypto hash like xxhash for these stuff (hashing data files)",
"Yep good idea, I'll take a look",
"I tested the hashing speed [here](https://colab.research.google.com/drive/1hlhP84kLIHmOzMRQN1h8x10hKWpXXyud?usp=sharing).\r\nI was able to get 8x speed with `xxhashlib` (42ms vs 345ms for 100MiB of data).\r\nWhat do you think @thomwolf ?",
"I added xxhash and some tests"
] | 1,595,953,851,000 | 1,596,031,043,000 | 1,596,031,042,000 | MEMBER | null | The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user.
To fix that, I added a line that use the hash of the data files provided by the user to define the cache dir.
This should fix #444 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/451/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/451",
"html_url": "https://github.com/huggingface/datasets/pull/451",
"diff_url": "https://github.com/huggingface/datasets/pull/451.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/451.patch",
"merged_at": 1596031042000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/450/comments | https://api.github.com/repos/huggingface/datasets/issues/450/events | https://github.com/huggingface/datasets/pull/450 | 667,074,120 | MDExOlB1bGxSZXF1ZXN0NDU3ODA5ODA2 | 450 | add sogou_news | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,595,942,950,000 | 1,596,029,418,000 | 1,596,029,417,000 | CONTRIBUTOR | null | This PR adds the sogou news dataset
#353 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/450/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/450",
"html_url": "https://github.com/huggingface/datasets/pull/450",
"diff_url": "https://github.com/huggingface/datasets/pull/450.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/450.patch",
"merged_at": 1596029417000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/449/comments | https://api.github.com/repos/huggingface/datasets/issues/449/events | https://github.com/huggingface/datasets/pull/449 | 666,898,923 | MDExOlB1bGxSZXF1ZXN0NDU3NjY0NjYx | 449 | add reuters21578 dataset | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"> Awesome !\r\n> Good job on parsing these files :O\r\n> \r\n> Do you think it would be hard to get the two other split configurations ?\r\n\r\nIt shouldn't be that hard, I think I can consider different config names for each split ",
"> > Awesome !\r\n> > Good job on parsing these files :O\r\n> > Do you think it would be hard to get the two other split configurations ?\r\n> \r\n> It shouldn't be that hard, I think I can consider different config names for each split\r\n\r\nYes that would be perfect",
"closing this PR and opening a new one to fix the circle CI problems"
] | 1,595,926,692,000 | 1,596,453,031,000 | 1,596,453,031,000 | CONTRIBUTOR | null | This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
#353
The datasets is a lit of `.sgm` files which are a bit different from xml file indeed `xml.etree` couldn't be used to read files. I consider them as text file (to avoid using external library) and read line by line (maybe there is a better way to do, happy to get your opinion on it)
In the Readme file 3 ways to split the dataset are given.:
- The Modified Lewis ("ModLewis") Split: train, test and unused-set
- The Modified Apte ("ModApte") Split : train, test and unused-set
- The Modified Hayes ("ModHayes") Split: train and test
Here I consider the last one as the readme file highlight that this split provides the ability to compare results with those of the 2 first splits.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/449/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/449",
"html_url": "https://github.com/huggingface/datasets/pull/449",
"diff_url": "https://github.com/huggingface/datasets/pull/449.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/449.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/448/comments | https://api.github.com/repos/huggingface/datasets/issues/448/events | https://github.com/huggingface/datasets/pull/448 | 666,893,443 | MDExOlB1bGxSZXF1ZXN0NDU3NjYwMDU2 | 448 | add aws load metric test | {
"login": "idoh",
"id": 5303103,
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idoh",
"html_url": "https://github.com/idoh",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"repos_url": "https://api.github.com/users/idoh/repos",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Could you run `make style` to fix the code_quality fail ?\r\nYou'll need `black` and `isort` that you can install by doing `pip install -e .[quality]`",
"Thanks @lhoestq\r\nI fixed the styling",
"Thank you :)"
] | 1,595,926,222,000 | 1,595,948,547,000 | 1,595,948,547,000 | CONTRIBUTOR | null | Following issue #445
Added a test to recognize import errors of all metrics | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/448/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/448",
"html_url": "https://github.com/huggingface/datasets/pull/448",
"diff_url": "https://github.com/huggingface/datasets/pull/448.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/448.patch",
"merged_at": 1595948546000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/447/comments | https://api.github.com/repos/huggingface/datasets/issues/447/events | https://github.com/huggingface/datasets/pull/447 | 666,842,115 | MDExOlB1bGxSZXF1ZXN0NDU3NjE2NDA0 | 447 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | {
"login": "idoh",
"id": 5303103,
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idoh",
"html_url": "https://github.com/idoh",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"repos_url": "https://api.github.com/users/idoh/repos",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,595,922,070,000 | 1,595,941,081,000 | 1,595,940,725,000 | CONTRIBUTOR | null | Fixed the path to `DEFAULT_TOKENIZER`
#445 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/447/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/447",
"html_url": "https://github.com/huggingface/datasets/pull/447",
"diff_url": "https://github.com/huggingface/datasets/pull/447.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/447.patch",
"merged_at": 1595940725000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/446/comments | https://api.github.com/repos/huggingface/datasets/issues/446/events | https://github.com/huggingface/datasets/pull/446 | 666,837,351 | MDExOlB1bGxSZXF1ZXN0NDU3NjEyNTg5 | 446 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | {
"login": "idoh",
"id": 5303103,
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idoh",
"html_url": "https://github.com/idoh",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"repos_url": "https://api.github.com/users/idoh/repos",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,595,921,567,000 | 1,595,921,686,000 | 1,595,921,639,000 | CONTRIBUTOR | null | Fixed the path to `DEFAULT_TOKENIZER`
#445 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/446/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/446",
"html_url": "https://github.com/huggingface/datasets/pull/446",
"diff_url": "https://github.com/huggingface/datasets/pull/446.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/446.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/445/comments | https://api.github.com/repos/huggingface/datasets/issues/445/events | https://github.com/huggingface/datasets/issues/445 | 666,836,658 | MDU6SXNzdWU2NjY4MzY2NTg= | 445 | DEFAULT_TOKENIZER import error in sacrebleu | {
"login": "idoh",
"id": 5303103,
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idoh",
"html_url": "https://github.com/idoh",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"repos_url": "https://api.github.com/users/idoh/repos",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"This issue was resolved by #447 "
] | 1,595,921,490,000 | 1,595,941,136,000 | 1,595,941,136,000 | CONTRIBUTOR | null | Latest Version 0.3.0
When loading the metric "sacrebleu" there is an import error due to the wrong path
![image](https://user-images.githubusercontent.com/5303103/88633063-2c5e5f00-d0bd-11ea-8ca8-4704dc975433.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/445/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/444 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/444/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/444/comments | https://api.github.com/repos/huggingface/datasets/issues/444/events | https://github.com/huggingface/datasets/issues/444 | 666,280,842 | MDU6SXNzdWU2NjYyODA4NDI= | 444 | Keep loading old file even I specify a new file in load_dataset | {
"login": "joshhu",
"id": 10594453,
"node_id": "MDQ6VXNlcjEwNTk0NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/10594453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshhu",
"html_url": "https://github.com/joshhu",
"followers_url": "https://api.github.com/users/joshhu/followers",
"following_url": "https://api.github.com/users/joshhu/following{/other_user}",
"gists_url": "https://api.github.com/users/joshhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshhu/subscriptions",
"organizations_url": "https://api.github.com/users/joshhu/orgs",
"repos_url": "https://api.github.com/users/joshhu/repos",
"events_url": "https://api.github.com/users/joshhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshhu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Same here !",
"This is the only fix I could come up with without touching the repo's code.\r\n```python\r\nfrom nlp.builder import FORCE_REDOWNLOAD\r\ndataset = load_dataset('csv', data_file='./a.csv', download_mode=FORCE_REDOWNLOAD, version='0.0.1')\r\n```\r\nYou'll have to change the version each time you want to load a different csv file.\r\nIf you're willing to add a ```print```, you can go to ```nlp.load``` and add ```print(builder_instance.cache_dir)``` right before the ```return ds``` in the ```load_dataset``` method. It'll print the cache folder, and you'll just have to erase it (and then you won't need the change here above)."
] | 1,595,855,286,000 | 1,596,031,042,000 | 1,596,031,042,000 | NONE | null | I used load a file called 'a.csv' by
```
dataset = load_dataset('csv', data_file='./a.csv')
```
And after a while, I tried to load another csv called 'b.csv'
```
dataset = load_dataset('csv', data_file='./b.csv')
```
However, the new dataset still seems to contain the old 'a.csv' data instead of loading the new csv file.
Even worse, after I load a.csv, the load_dataset function keeps loading 'a.csv' afterward.
Is this a cache problem?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/444/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/444/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/443/comments | https://api.github.com/repos/huggingface/datasets/issues/443/events | https://github.com/huggingface/datasets/issues/443 | 666,246,716 | MDU6SXNzdWU2NjYyNDY3MTY= | 443 | Cannot unpickle saved .pt dataset with torch.save()/load() | {
"login": "vegarab",
"id": 24683907,
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegarab",
"html_url": "https://github.com/vegarab",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"repos_url": "https://api.github.com/users/vegarab/repos",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"This seems to be fixed in a non-released version. \r\n\r\nInstalling nlp from source\r\n```\r\ngit clone https://github.com/huggingface/nlp\r\ncd nlp\r\npip install .\r\n```\r\nsolves the issue. "
] | 1,595,852,017,000 | 1,595,855,111,000 | 1,595,855,111,000 | CONTRIBUTOR | null | Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling:
```python
>>> import torch
>>> import nlp
>>> squad = nlp.load_dataset("squad.py", split="train")
>>> squad
Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype='string', id=None)}, num_rows: 87599)
>>> squad = squad.map(create_features, batched=True)
>>> squad.set_format(type="torch", columns=["source_ids", "target_ids", "attention_mask"])
>>> torch.save(squad, "squad.pt")
>>> squad_pt = torch.load("squad.pt")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 773, in _legacy_load
result = unpickler.load()
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/splits.py", line 493, in __setitem__
raise ValueError("Cannot add elem. Use .add() instead.")
ValueError: Cannot add elem. Use .add() instead.
```
where `create_features` is a function that tokenizes the data using `batch_encode_plus` and returns a Dict with `input_ids`, `target_ids` and `attention_mask`.
```python
def create_features(batch):
source_text_encoding = tokenizer.batch_encode_plus(
batch["source_text"],
max_length=max_source_length,
pad_to_max_length=True,
truncation=True)
target_text_encoding = tokenizer.batch_encode_plus(
batch["target_text"],
max_length=max_target_length,
pad_to_max_length=True,
truncation=True)
features = {
"source_ids": source_text_encoding["input_ids"],
"target_ids": target_text_encoding["input_ids"],
"attention_mask": source_text_encoding["attention_mask"]
}
return features
```
I found a similar issue in [issue 5267 in the huggingface/transformers repo](https://github.com/huggingface/transformers/issues/5267) which was solved by downgrading to `nlp==0.2.0`. That did not solve this problem, however. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/443/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/442/comments | https://api.github.com/repos/huggingface/datasets/issues/442/events | https://github.com/huggingface/datasets/issues/442 | 666,201,810 | MDU6SXNzdWU2NjYyMDE4MTA= | 442 | [Suggestion] Glue Diagnostic Data with Labels | {
"login": "ggbetz",
"id": 3662782,
"node_id": "MDQ6VXNlcjM2NjI3ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3662782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggbetz",
"html_url": "https://github.com/ggbetz",
"followers_url": "https://api.github.com/users/ggbetz/followers",
"following_url": "https://api.github.com/users/ggbetz/following{/other_user}",
"gists_url": "https://api.github.com/users/ggbetz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggbetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggbetz/subscriptions",
"organizations_url": "https://api.github.com/users/ggbetz/orgs",
"repos_url": "https://api.github.com/users/ggbetz/repos",
"events_url": "https://api.github.com/users/ggbetz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggbetz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,595,847,598,000 | 1,598,282,000,000 | null | NONE | null | Hello! First of all, thanks for setting up this useful project!
I've just realised you provide the [Glue Diagnostics Data](https://huggingface.co/nlp/viewer/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you have only a test set.
Yet, the data with labels is available, too (see also [here](https://gluebenchmark.com/diagnostics#introduction)):
https://www.dropbox.com/s/ju7d95ifb072q9f/diagnostic-full.tsv?dl=1
Have you considered incorporating it? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/442/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/442/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/441/comments | https://api.github.com/repos/huggingface/datasets/issues/441/events | https://github.com/huggingface/datasets/pull/441 | 666,148,413 | MDExOlB1bGxSZXF1ZXN0NDU3MDQyMjY3 | 441 | Add features parameter in load dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"This one is ready for review now",
"I changed to using features only, instead of info.\r\nLet mw know if it sounds good to you now @thomwolf "
] | 1,595,843,401,000 | 1,596,113,477,000 | 1,596,113,476,000 | MEMBER | null | Added `features` argument in `nlp.load_dataset`.
If they don't match the data type, it raises a `ValueError`.
It's a draft PR because #440 needs to be merged first. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/441/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/441",
"html_url": "https://github.com/huggingface/datasets/pull/441",
"diff_url": "https://github.com/huggingface/datasets/pull/441.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/441.patch",
"merged_at": 1596113476000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/440/comments | https://api.github.com/repos/huggingface/datasets/issues/440/events | https://github.com/huggingface/datasets/pull/440 | 666,116,823 | MDExOlB1bGxSZXF1ZXN0NDU3MDE2MjQy | 440 | Fix user specified features in map | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,595,840,666,000 | 1,595,928,323,000 | 1,595,928,322,000 | MEMBER | null | `.map` didn't keep the user specified features because of an issue in the writer.
The writer used to overwrite the user specified features with inferred features.
I also added tests to make sure it doesn't happen again. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/440/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/440",
"html_url": "https://github.com/huggingface/datasets/pull/440",
"diff_url": "https://github.com/huggingface/datasets/pull/440.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/440.patch",
"merged_at": 1595928322000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/439/comments | https://api.github.com/repos/huggingface/datasets/issues/439/events | https://github.com/huggingface/datasets/issues/439 | 665,964,673 | MDU6SXNzdWU2NjU5NjQ2NzM= | 439 | Issues: Adding a FAISS or Elastic Search index to a Dataset | {
"login": "nsankar",
"id": 431890,
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsankar",
"html_url": "https://github.com/nsankar",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"repos_url": "https://api.github.com/users/nsankar/repos",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"`DPRContextEncoder` and `DPRContextEncoderTokenizer` will be available in the next release of `transformers`.\r\n\r\nRight now you can experiment with it by installing `transformers` from the master branch.\r\nYou can also check the docs of DPR [here](https://huggingface.co/transformers/master/model_doc/dpr.html).\r\n\r\nMoreover all the indexing features will also be available in the next release of `nlp`.",
"@lhoestq Thanks for the info ",
"@lhoestq I tried installing transformer from the master branch. Python imports for DPR again didnt' work. Anyways, Looking forward to trying it in the next release of nlp ",
"@nsankar have you tried with the latest version of the library?",
"@yjernite it worked. Thanks"
] | 1,595,823,917,000 | 1,603,849,584,000 | 1,603,849,584,000 | NONE | null | It seems the DPRContextEncoder, DPRContextEncoderTokenizer cited[ in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) is not implemented ? It didnot work with the standard nlp installation . Also, I couldn't find or use it with the latest nlp install from github in Colab. Is there any dependency on the latest PyArrow 1.0.0 ? Is it yet to be made generally available ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/439/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/438/comments | https://api.github.com/repos/huggingface/datasets/issues/438/events | https://github.com/huggingface/datasets/issues/438 | 665,865,490 | MDU6SXNzdWU2NjU4NjU0OTA= | 438 | New Datasets: IWSLT15+, ITTB | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks Sam, we now have a very detailed tutorial and template on how to add a new dataset to the library. It typically take 1-2 hours to add one. Do you want to give it a try ?\r\nThe tutorial on writing a new dataset loading script is here: https://huggingface.co/nlp/add_dataset.html\r\nAnd the part on how to share a new dataset is here: https://huggingface.co/nlp/share_dataset.html",
"Hi @sshleifer, I'm trying to add IWSLT using the link you provided but the download urls are not working. Only `[en, de]` pair is working. For others language pairs it throws a `404` error.\r\n\r\n"
] | 1,595,799,784,000 | 1,598,281,935,000 | null | CONTRIBUTOR | null | **Links:**
[iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html)
Don't know if that link is up to date.
[ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/)
**Motivation**: replicate mbart finetuning results (table below)
![image](https://user-images.githubusercontent.com/6045025/88490093-0c1c8c00-cf67-11ea-960d-8dcaad2aa8eb.png)
For future readers, we already have the following language pairs in the wmt namespaces:
```
wmt14: ['cs-en', 'de-en', 'fr-en', 'hi-en', 'ru-en']
wmt15: ['cs-en', 'de-en', 'fi-en', 'fr-en', 'ru-en']
wmt16: ['cs-en', 'de-en', 'fi-en', 'ro-en', 'ru-en', 'tr-en']
wmt17: ['cs-en', 'de-en', 'fi-en', 'lv-en', 'ru-en', 'tr-en', 'zh-en']
wmt18: ['cs-en', 'de-en', 'et-en', 'fi-en', 'kk-en', 'ru-en', 'tr-en', 'zh-en']
wmt19: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/438/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/437/comments | https://api.github.com/repos/huggingface/datasets/issues/437/events | https://github.com/huggingface/datasets/pull/437 | 665,597,176 | MDExOlB1bGxSZXF1ZXN0NDU2NjIzNjc3 | 437 | Fix XTREME PAN-X loading | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"There is an interesting design question here (cc @lhoestq).\r\n\r\nI guess the labels form a closed set so we could also use a [nlp.ClassLabel](https://huggingface.co/nlp/package_reference/main_classes.html#nlp.ClassLabel) instead of a string. The differences will be mainly that:\r\n- the labels are stored as integers and thus ready for training a model\r\n- the string to int conversion methods are handled by the `nlp.ClassLabel` feature (see the [doc](https://huggingface.co/nlp/package_reference/main_classes.html#nlp.ClassLabel) and [here](https://huggingface.co/nlp/features.html) and [here](https://huggingface.co/nlp/quicktour.html#fine-tuning-a-deep-learning-model)).\r\n\r\nIn my opinion, storing the labels as integers instead of strings makes it:\r\n- slightly less readable when accessing a dataset example (e.g. with `dataset[0]`)\r\n- force you with a specific mapping from string to integers\r\n- more clear that there is a fixed and predefined list of labels\r\n- easier to list all the labels (directly visible in the features).\r\n\r\n=> overall I'm pretty neutral about using one or the other option (`nlp.string` or `nlp.ClassLabel`).\r\n\r\nNote that we can now rather easily convert from one to the other with the map function and something like:\r\n```python\r\ndataset = dataset.map(lambda x: x, features=nlp.Features({'labels': nlp.ClassLabel(MY_LABELS_NAMES)}))\r\ndataset = dataset.map(lambda x: {'labels': dataset.features['labels'].int2str(x['labels'])}, features=nlp.Features({'labels': nlp.Value('string')}))\r\n```\r\n^^ this could probably be made even simpler (in particular for the second case)",
"I see. This is an interesting question.\r\nMaybe as the dataset doesn't provide the mapping we shouldn't force an arbitrary one, and keep them as strings ?\r\nMoreover for NER the labels are often different from a dataset to the other so it's probably good to keep strings (there is no conventional mapping).\r\nAlso as the column is called \"ner_tags\" (or \"langs\"), you can already assume that there is a fixed and predefined list of labels.",
"Yes sounds good to me.\r\nThis make me wonder if we don’t want to have a default identity function in `map` so this method could also be used to easily cast features. What do you think?",
"Yes sounds good. I also noticed that people use map with identity to write a dataset into a specified cache file."
] | 1,595,688,297,000 | 1,596,097,695,000 | 1,596,097,695,000 | CONTRIBUTOR | null | Hi 🤗
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/437/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/437",
"html_url": "https://github.com/huggingface/datasets/pull/437",
"diff_url": "https://github.com/huggingface/datasets/pull/437.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/437.patch",
"merged_at": 1596097695000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/436/comments | https://api.github.com/repos/huggingface/datasets/issues/436/events | https://github.com/huggingface/datasets/issues/436 | 665,582,167 | MDU6SXNzdWU2NjU1ODIxNjc= | 436 | Google Colab - load_dataset - PyArrow exception | {
"login": "nsankar",
"id": 431890,
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsankar",
"html_url": "https://github.com/nsankar",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"repos_url": "https://api.github.com/users/nsankar/repos",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Indeed, we’ll make a new PyPi release next week to solve this. Cc @lhoestq ",
"+1! this is the reason our tests are failing at [TextAttack](https://github.com/QData/TextAttack) \r\n\r\n(Though it's worth noting if we fixed the version number of pyarrow to 0.16.0 that would fix our problem too. But in this case we'll just wait for you all to update)",
"Came to raise this issue, great to see other already have and it's being fixed so soon!\r\n\r\nAs an aside, since no one wrote this already, it seems like the version check only looks at the second part of the version number making sure it is >16, but pyarrow newest version is 1.0.0 so the second past is 0!",
"> Indeed, we’ll make a new PyPi release next week to solve this. Cc @lhoestq\r\n\r\nYes definitely",
"please fix this on pypi! @lhoestq ",
"Is this issue fixed ?",
"We’ll release the new version later today. Apologies for the delay.",
"I just pushed the new version on pypi :)",
"Thanks for the update."
] | 1,595,682,320,000 | 1,597,910,898,000 | 1,597,910,898,000 | NONE | null | With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`.
The error goes away only when I install version 0.16.0
i.e. !pip install pyarrow==0.16.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/436/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/436/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/435/comments | https://api.github.com/repos/huggingface/datasets/issues/435/events | https://github.com/huggingface/datasets/issues/435 | 665,507,141 | MDU6SXNzdWU2NjU1MDcxNDE= | 435 | ImportWarning for pyarrow 1.0.0 | {
"login": "HanGuo97",
"id": 18187806,
"node_id": "MDQ6VXNlcjE4MTg3ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanGuo97",
"html_url": "https://github.com/HanGuo97",
"followers_url": "https://api.github.com/users/HanGuo97/followers",
"following_url": "https://api.github.com/users/HanGuo97/following{/other_user}",
"gists_url": "https://api.github.com/users/HanGuo97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanGuo97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanGuo97/subscriptions",
"organizations_url": "https://api.github.com/users/HanGuo97/orgs",
"repos_url": "https://api.github.com/users/HanGuo97/repos",
"events_url": "https://api.github.com/users/HanGuo97/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanGuo97/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"This was fixed in #434 \r\nWe'll do a release later this week to include this fix.\r\nThanks for reporting",
"I dont know if the fix was made but the problem is still present : \r\nInstaled with pip : NLP 0.3.0 // pyarrow 1.0.0 \r\nOS : archlinux with kernel zen 5.8.5",
"Yes it was fixed in `nlp>=0.4.0`\r\nYou can update with pip",
"Sorry, I didn't got the updated version, all is now working perfectly thanks"
] | 1,595,648,679,000 | 1,599,587,835,000 | 1,596,472,652,000 | NONE | null | The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/435/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/435/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/434/comments | https://api.github.com/repos/huggingface/datasets/issues/434/events | https://github.com/huggingface/datasets/pull/434 | 665,477,638 | MDExOlB1bGxSZXF1ZXN0NDU2NTM3Njgz | 434 | Fixed check for pyarrow | {
"login": "nadahlberg",
"id": 58701810,
"node_id": "MDQ6VXNlcjU4NzAxODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/58701810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nadahlberg",
"html_url": "https://github.com/nadahlberg",
"followers_url": "https://api.github.com/users/nadahlberg/followers",
"following_url": "https://api.github.com/users/nadahlberg/following{/other_user}",
"gists_url": "https://api.github.com/users/nadahlberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nadahlberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nadahlberg/subscriptions",
"organizations_url": "https://api.github.com/users/nadahlberg/orgs",
"repos_url": "https://api.github.com/users/nadahlberg/repos",
"events_url": "https://api.github.com/users/nadahlberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/nadahlberg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Great, thanks!"
] | 1,595,636,213,000 | 1,595,658,994,000 | 1,595,658,994,000 | CONTRIBUTOR | null | Fix check for pyarrow in __init__.py. Previously would raise an error for pyarrow >= 1.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/434/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/434",
"html_url": "https://github.com/huggingface/datasets/pull/434",
"diff_url": "https://github.com/huggingface/datasets/pull/434.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/434.patch",
"merged_at": 1595658994000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/433/comments | https://api.github.com/repos/huggingface/datasets/issues/433/events | https://github.com/huggingface/datasets/issues/433 | 665,311,025 | MDU6SXNzdWU2NjUzMTEwMjU= | 433 | How to reuse functionality of a (generic) dataset? | {
"login": "ArneBinder",
"id": 3375489,
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArneBinder",
"html_url": "https://github.com/ArneBinder",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi @ArneBinder, we have a few \"generic\" datasets which are intended to load data files with a predefined format:\r\n- csv: https://github.com/huggingface/nlp/tree/master/datasets/csv\r\n- json: https://github.com/huggingface/nlp/tree/master/datasets/json\r\n- text: https://github.com/huggingface/nlp/tree/master/datasets/text\r\n\r\nYou can find more details about this way to load datasets here in the documentation: https://huggingface.co/nlp/loading_datasets.html#from-local-files\r\n\r\nMaybe your brat loading script could be shared in a similar fashion?",
"> Maybe your brat loading script could be shared in a similar fashion?\r\n\r\n@thomwolf that was also my first idea and I think I will tackle that in the next days. I separated the code and created a real abstract class `AbstractBrat` to allow to inherit from that (I've just seen that the dataset_loader loads the first non abstract class), now `Brat` is very similar in its functionality to https://github.com/huggingface/nlp/tree/master/datasets/text but inherits from `AbstractBrat`.\r\n\r\nHowever, it is still not clear to me how to add a specific dataset (as explained in https://huggingface.co/nlp/add_dataset.html) to your repo that uses this format/abstract class, i.e. re-using the `features` entry of the `DatasetInfo` object and `_generate_examples()`. Again, by doing so, the only remaining entries/functions to define would be `_DESCRIPTION`, `_CITATION`, `homepage` and `_URL` (which is all copy-paste stuff) and `_split_generators()`.\r\n \r\nIn a lack of better ideas, I tried sth like below, but of course it does not work outside `nlp` (`AbstractBrat` is currently defined in [datasets/brat.py](https://github.com/ArneBinder/nlp/blob/5e81fb8710546ee7be3353a7f02a3045e9a8351e/datasets/brat/brat.py)):\r\n```python\r\nfrom __future__ import absolute_import, division, print_function\r\n\r\nimport os\r\n\r\nimport nlp\r\n\r\nfrom datasets.brat.brat import AbstractBrat\r\n\r\n_CITATION = \"\"\"\r\n@inproceedings{lauscher2018b,\r\n title = {An argument-annotated corpus of scientific publications},\r\n booktitle = {Proceedings of the 5th Workshop on Mining Argumentation},\r\n publisher = {Association for Computational Linguistics},\r\n author = {Lauscher, Anne and Glava\\v{s}, Goran and Ponzetto, Simone Paolo},\r\n address = {Brussels, Belgium},\r\n year = {2018},\r\n pages = {40–46}\r\n}\r\n\"\"\"\r\n\r\n_DESCRIPTION = \"\"\"\\\r\nThis dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing \r\nfine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific \r\npublications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of \r\nscientific writing.\r\n\"\"\"\r\n\r\n_URL = \"http://data.dws.informatik.uni-mannheim.de/sci-arg/compiled_corpus.zip\"\r\n\r\n\r\nclass Sciarg(AbstractBrat):\r\n\r\n VERSION = nlp.Version(\"1.0.0\")\r\n\r\n def _info(self):\r\n\r\n brat_features = super()._info().features\r\n return nlp.DatasetInfo(\r\n # This is the description that will appear on the datasets page.\r\n description=_DESCRIPTION,\r\n # nlp.features.FeatureConnectors\r\n features=brat_features,\r\n # If there's a common (input, target) tuple from the features,\r\n # specify them here. 
They'll be used if as_supervised=True in\r\n # builder.as_dataset.\r\n #supervised_keys=None,\r\n # Homepage of the dataset for documentation\r\n homepage=\"https://github.com/anlausch/ArguminSci\",\r\n citation=_CITATION,\r\n )\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n # TODO: Downloads the data and defines the splits\r\n # dl_manager is a nlp.download.DownloadManager that can be used to\r\n # download and extract URLs\r\n dl_dir = dl_manager.download_and_extract(_URL)\r\n data_dir = os.path.join(dl_dir, \"compiled_corpus\")\r\n print(f'data_dir: {data_dir}')\r\n return [\r\n nlp.SplitGenerator(\r\n name=nlp.Split.TRAIN,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"directory\": data_dir,\r\n },\r\n ),\r\n ]\r\n``` \r\n\r\nNevertheless, many thanks for tackling the dataset accessibility problem with this great library!",
"As temporary fix I've created [ArneBinder/nlp-formats](https://github.com/ArneBinder/nlp-formats) (contributions welcome)."
] | 1,595,611,657,000 | 1,596,190,997,000 | null | NONE | null | I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to reuse formats and loading functionality for datasets with a common format?
In my case, it took a bit of time to create the Brat dataset, and I think others would appreciate not having to think about that again. Also, I assume there are other formats (e.g. conll) that are widely used, so having this would really ease dataset onboarding and adoption of the library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/433/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/432/comments | https://api.github.com/repos/huggingface/datasets/issues/432/events | https://github.com/huggingface/datasets/pull/432 | 665,234,340 | MDExOlB1bGxSZXF1ZXN0NDU2MzQxNDk3 | 432 | Fix handling of config files while loading datasets from multiple processes | {
"login": "orsharir",
"id": 99543,
"node_id": "MDQ6VXNlcjk5NTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orsharir",
"html_url": "https://github.com/orsharir",
"followers_url": "https://api.github.com/users/orsharir/followers",
"following_url": "https://api.github.com/users/orsharir/following{/other_user}",
"gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orsharir/subscriptions",
"organizations_url": "https://api.github.com/users/orsharir/orgs",
"repos_url": "https://api.github.com/users/orsharir/repos",
"events_url": "https://api.github.com/users/orsharir/events{/privacy}",
"received_events_url": "https://api.github.com/users/orsharir/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Ok for this but I think we may want to use the general `filelock` method we are using at other places in the library instead of filecmp (in particular `filelock` take care of being an atomic operation which is safer for concurrent processes)",
"Ok I see.\r\nWhy not use filelock in this case then ?",
"I think we should 🙂",
"Thanks for approving my patch.\n\nI agree that if copying is needed then some locking mechanism should be put in place. But, I don't think a file should be needlessly copied without a check. So I guess the flow should be, lock => copy if needed => unlock, and add locks wherever else that file is being accessed.\n\nI'll also add that my personal experience with filelock on a different project hasn't been that great, and on some occasions a process somehow got through the lock -- I've never gotten to the bottom of that but it tainted my view of that module. Perhaps it's been fixed (or I just miss used it), but thought you should know to take steps to test it."
] | 1,595,603,457,000 | 1,596,301,902,000 | 1,596,097,528,000 | CONTRIBUTOR | null | When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written.
This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over to the cached destination. There's still a race condition, but now it's less likely to occur if some basic precautions are taken by the library user, e.g., downloading all datasets to cache before spawning multiple processes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/432/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/432",
"html_url": "https://github.com/huggingface/datasets/pull/432",
"diff_url": "https://github.com/huggingface/datasets/pull/432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/432.patch",
"merged_at": 1596097528000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/431/comments | https://api.github.com/repos/huggingface/datasets/issues/431/events | https://github.com/huggingface/datasets/pull/431 | 665,044,416 | MDExOlB1bGxSZXF1ZXN0NDU2MTgyNDE2 | 431 | Specify split post processing + Add post processing resources downloading | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I was using a hack in `wiki_dpr` to download the index from GCS even for the configurations without the embeddings.\r\nHowever as GCS is something internal, I changed the logic to add a download step for indexes directly in the dataset script, using the `DownloadManager`.\r\n\r\nThis change was directly linked to the changes I did to take into account the split name in the post processing, so I included this change in this PR too.\r\n\r\nTo summarize:\r\n\r\nDataset builders can now implement\r\n- `_post_processing_resources(split)`: return a dict `resource_name -> resource_file_name`. It defines the additional resources such as indexes or arrow files that you need in post processing\r\n- `_download_post_processing_resources(split, resource_name, dl_manager))`: if some resources can be downloaded, you can use the download_manager to download them\r\n- `_post_process(dataset, resources_path)`: (main function for post processing) given a dataset, you can apply dataset transforms or add indexes. For resources that have been downloaded, you can load them. For the others, you can generate and save them. The paths to load/save resources are in `resources_path` which is a dictionary `resource_name -> resource_path`\r\n\r\nAbout the CI:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr\r\n```\r\nIt fails because I changed the input of post processing functions (to include the split name)",
"I started to add metadata in the DatasetInfo.\r\nNote that because there are new fields, **ALL the dataset_info[s].json generated after these changes won't be loadable from older versions of the lib**\r\n\r\nRight now it looks like this:\r\n```json\r\n \"post_processing_resources_checksums\": {\r\n \"train\": {\r\n \"embeddings_index\": {\r\n \"num_bytes\": 30720045,\r\n \"checksum\": \"b04fb4f4f3ab83b9d1b9f6f9eb236f1c04a9fd61bef7cee16b12df8ac911766a\"\r\n }\r\n }\r\n },\r\n \"post_processing_size\": 30720045,\r\n```",
"Good point. Should we anticipate already that we may add other fields in the future and change the code to support the addition of new fields without breaking backward compatibility in the future?",
"I added:\r\n- post processing features (inside a PostProcessedInfo object)\r\n- backward compatibility for dataset info\r\n- post processing tests (as_dataset and download_and_prepare) for map (change features), select (change number of elements) and add_faiss_index (add indexes)\r\nAnd I fixed a bug in `map` that I found thanks to the new tests\r\n\r\nNow I just have to move `post_processing_resources_checksums` to PostProcessedInfo as well and everything should be good :)\r\nEdit: done"
] | 1,595,582,959,000 | 1,596,186,304,000 | 1,596,186,303,000 | MEMBER | null | Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
I'm going to add tests on post processing too.
Note that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error telling that it is not synced (it'll be synced once it's merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. It was there just because of the hash logic for files on GCS | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/431/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/431",
"html_url": "https://github.com/huggingface/datasets/pull/431",
"diff_url": "https://github.com/huggingface/datasets/pull/431.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/431.patch",
"merged_at": 1596186303000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/430/comments | https://api.github.com/repos/huggingface/datasets/issues/430/events | https://github.com/huggingface/datasets/pull/430 | 664,583,837 | MDExOlB1bGxSZXF1ZXN0NDU1ODAxOTI2 | 430 | add DatasetDict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I did the changes in the docstrings and I added a type check in each `DatasetDict` method to make sure all values are of type `Dataset`",
"Awesome, do you mind adding these in the doc as well?",
"I added it to the docs (processing + main classes)",
"I'm trying to follow along with the following about datasets from the docs:\r\n\r\nhttps://huggingface.co/nlp/loading_datasets.html\r\nhttps://huggingface.co/nlp/processing.html\r\n\r\nHowever the train_test_split method no longer works as it is expecting a dataset, rather than a datsetdict. How would I got about splitting a CSV into a train and test set? \r\n\r\nI'm trying to utilize the Trainer() class, but am having trouble converting my data from a csv into dataset objects to pass in."
] | 1,595,519,029,000 | 1,596,502,913,000 | 1,596,013,582,000 | MEMBER | null | ## Add DatasetDict
### Overview
When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example).
If you wanted to apply dataset transforms you had to iterate over each split and apply the transform.
Instead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split.
Before:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = {
    split_name: dataset.map(my_func) for split_name, dataset in squad.items()
}
print(squad.keys())
# dict_keys(['train', 'validation'])
```
Now:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = squad.map(my_func)
print(squad.keys())
# dict_keys(['train', 'validation'])
```
### Dataset transforms
`nlp.DatasetDict` implements the following dataset transforms:
- map
- filter
- sort
- shuffle
### Arguments
The arguments of the methods are the same except for split-specific arguments like `cache_file_name`.
For such arguments, the expected input is a dictionary `{split_name: argument_value}`
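For example, a sketch with hypothetical cache file names, reusing the `squad` object and the `my_func` placeholder from the snippets above:
```python
# per-split arguments are passed as a dict keyed by split name (file names are made up)
squad = squad.map(
    my_func,
    cache_file_name={
        "train": "cache/squad_train_processed.arrow",
        "validation": "cache/squad_validation_processed.arrow",
    },
)
```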
It concerns:
- `cache_file_name` in map, filter, sort, shuffle
- `seed` and `generator` in shuffle | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/430/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/430",
"html_url": "https://github.com/huggingface/datasets/pull/430",
"diff_url": "https://github.com/huggingface/datasets/pull/430.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/430.patch",
"merged_at": 1596013582000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/429/comments | https://api.github.com/repos/huggingface/datasets/issues/429/events | https://github.com/huggingface/datasets/pull/429 | 664,412,137 | MDExOlB1bGxSZXF1ZXN0NDU1NjU2MDk5 | 429 | mlsum | {
"login": "RachelKer",
"id": 36986299,
"node_id": "MDQ6VXNlcjM2OTg2Mjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RachelKer",
"html_url": "https://github.com/RachelKer",
"followers_url": "https://api.github.com/users/RachelKer/followers",
"following_url": "https://api.github.com/users/RachelKer/following{/other_user}",
"gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions",
"organizations_url": "https://api.github.com/users/RachelKer/orgs",
"repos_url": "https://api.github.com/users/RachelKer/repos",
"events_url": "https://api.github.com/users/RachelKer/events{/privacy}",
"received_events_url": "https://api.github.com/users/RachelKer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks @RachelKer for this PR.\r\n\r\nI think the dummy_data structure does not also match. In the `_split_generator` you have something like `os.path.join(downloaded_files[\"validation\"], lang+'_val.jsonl')` but in you dummy_data you have `os.path.join(downloaded_files[\"validation\"], lang+\"_val.zip\", lang+'_val.jsonl')`. I think ` jsonl` files should be directly in the `dummy_data` folder without the sub-folder \r\n\r\n@lhoestq ",
"Hi @RachelKer :)\r\nThanks for adding MLSUM !\r\n\r\nTo fix the CI I think you just have to rebase from master",
"Great, I think it is working now. Thanks :)",
"It looks like your PR does tons of changes in other datasets. \r\nMaybe this is because of the merge from master ?",
"Hmm, I see, sorry I messed up somewhere. Maybe it's easier if we close the pull request and I do another one ?",
"Yea if it's easier for you feel free to re-open a PR"
] | 1,595,505,159,000 | 1,596,195,980,000 | 1,596,195,980,000 | CONTRIBUTOR | null | Hello,
The tests for the load_real_data fail: as there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/429/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/429",
"html_url": "https://github.com/huggingface/datasets/pull/429",
"diff_url": "https://github.com/huggingface/datasets/pull/429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/429.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/428/comments | https://api.github.com/repos/huggingface/datasets/issues/428/events | https://github.com/huggingface/datasets/pull/428 | 664,367,086 | MDExOlB1bGxSZXF1ZXN0NDU1NjE3Nzcy | 428 | fix concatenate_datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,595,500,259,000 | 1,595,500,500,000 | 1,595,500,498,000 | MEMBER | null | `concatenate_datasets` used to test that the different `nlp.Dataset.schema` match, but this attribute was removed in #423 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/428/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/428",
"html_url": "https://github.com/huggingface/datasets/pull/428",
"diff_url": "https://github.com/huggingface/datasets/pull/428.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/428.patch",
"merged_at": 1595500498000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/427/comments | https://api.github.com/repos/huggingface/datasets/issues/427/events | https://github.com/huggingface/datasets/pull/427 | 664,341,623 | MDExOlB1bGxSZXF1ZXN0NDU1NTk1Nzc3 | 427 | Allow sequence features for beam + add processed Natural Questions | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,595,497,961,000 | 1,595,509,770,000 | 1,595,509,769,000 | MEMBER | null | ## Allow Sequence features for Beam Datasets + add Natural Questions
### The issue
The steps of beam dataset processing are the following:
- download the source files and send them in a remote storage (gcs)
- process the files using a beam runner (dataflow)
- save output in remote storage (gcs)
- convert output to arrow in remote storage (gcs)
However, it wasn't possible to process `natural_questions` because Apache Beam's processing outputs parquet files, and it's not yet possible to read parquet files with list features.
### The proposed solution
To allow sequence features for beam I added a workaround that serializes the values using `json.dumps`, so that we end up with strings instead of the original features. Then when the arrow file is created, the serialized objects are transformed back to normal with `json.loads`. Not sure if there's a better way to do it.
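To make the workaround concrete, here is an illustrative round trip with plain `json` (this is not the actual beam writer code, just the idea behind it):
```python
import json

# a sequence feature value is written out as a plain string...
value = ["the", "answer", "is", "42"]
serialized = json.dumps(value)   # '["the", "answer", "is", "42"]'

# ...and decoded back to the original list when the arrow file is created
restored = json.loads(serialized)
assert restored == value
```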
### Natural Questions
I was able to process NQ with it, and so I added the json infos file in this PR too.
The processed arrow files are also stored in gcs.
It allows you to load NQ with
```python
from nlp import load_dataset
nq = load_dataset("natural_questions") # download the 90GB arrow files from gcs and return the dataset
```
### Tests
I added a test case to make sure it works as expected.
Note that the CI will fail because I am updating `natural_questions.py`: it's not synced with the script on S3. It will be synced as soon as this PR is merged.
```
=========================== short test summary info ============================
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_natural_questions/default
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/427/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/427/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/427",
"html_url": "https://github.com/huggingface/datasets/pull/427",
"diff_url": "https://github.com/huggingface/datasets/pull/427.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/427.patch",
"merged_at": 1595509769000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/426/comments | https://api.github.com/repos/huggingface/datasets/issues/426/events | https://github.com/huggingface/datasets/issues/426 | 664,203,897 | MDU6SXNzdWU2NjQyMDM4OTc= | 426 | [FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Yes that's definitely something we plan to add ^^",
"Yes, that would be nice. We could take a look at what tensorflow `tf.data` does under the hood for instance.",
"So `tf.data.Dataset.map()` returns a `ParallelMapDataset` if `num_parallel_calls is not None` [link](https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/data/ops/dataset_ops.py#L1623).\r\n\r\nThere, `num_parallel_calls` is turned into a tensor and and fed to `gen_dataset_ops.parallel_map_dataset` where it looks like tensorflow takes over.\r\n\r\nWe could start with something simple like a thread or process pool that `imap`s over some shards.\r\n ",
"Multiprocessing was added in #552 . You can set the number of processes with `.map(..., num_proc=...)`. It also works for `filter`\r\n\r\nClosing this one, but feel free to reo-open if you have other questions",
"@lhoestq Great feature implemented! Do you have plans to add it to official tutorials [Processing data in a Dataset](https://huggingface.co/docs/datasets/processing.html?highlight=save#augmenting-the-dataset)? It took me sometime to find this parallel processing api.",
"Thanks for the heads up !\r\n\r\nI just added a paragraph about multiprocessing:\r\nhttps://huggingface.co/docs/datasets/master/processing.html#multiprocessing"
] | 1,595,480,441,000 | 1,615,541,652,000 | 1,599,490,084,000 | NONE | null | It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_datasets()` function to join them all together? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/426/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/426/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/425/comments | https://api.github.com/repos/huggingface/datasets/issues/425/events | https://github.com/huggingface/datasets/issues/425 | 664,029,848 | MDU6SXNzdWU2NjQwMjk4NDg= | 425 | Correct data structure for PAN-X task in XTREME dataset? | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks for noticing ! This looks more reasonable indeed.\r\nFeel free to open a PR",
"Hi @lhoestq \r\nI made the proposed changes to the `xtreme.py` script. I noticed that I also need to change the schema in the `dataset_infos.json` file. More specifically the `\"features\"` part of the PAN-X.LANG dataset:\r\n\r\n```json\r\n\"features\":{\r\n \"word\":{\r\n \"dtype\":\"string\",\r\n \"id\":null,\r\n \"_type\":\"Value\"\r\n },\r\n \"ner_tag\":{\r\n \"dtype\":\"string\",\r\n \"id\":null,\r\n \"_type\":\"Value\"\r\n },\r\n \"lang\":{\r\n \"dtype\":\"string\",\r\n \"id\":null,\r\n \"_type\":\"Value\"\r\n }\r\n}\r\n```\r\nTo fit the code above the fields `\"word\"`, `\"ner_tag\"`, and `\"lang\"` would become `\"words\"`, `ner_tags\"` and `\"langs\"`. In addition the `dtype` should be changed from `\"string\"` to `\"list\"`.\r\n\r\n I made this changes but when trying to test this locally with `dataset = load_dataset(\"xtreme\", \"PAN-X.en\", data_dir='./data')` I face the issue that the `dataset_info.json` file is always overwritten by a downloaded version with the old settings, which then throws an error because the schema does not match. This makes it hard to test the changes locally. Do you have any suggestions on how to deal with that?\r\n",
"Hi !\r\n\r\nYou have to point to your local script.\r\nFirst clone the repo and then:\r\n\r\n```python\r\ndataset = load_dataset(\"./datasets/xtreme\", \"PAN-X.en\")\r\n```\r\nThe \"xtreme\" directory contains \"xtreme.py\".\r\n\r\nYou also have to change the features definition in the `_info` method. You could use:\r\n\r\n```python\r\nfeatures = nlp.Features({\r\n \"words\": [nlp.Value(\"string\")],\r\n \"ner_tags\": [nlp.Value(\"string\")],\r\n \"langs\": [nlp.Value(\"string\")],\r\n})\r\n```\r\n\r\nHope this helps !\r\nLet me know if you have other questions.",
"Thanks, I am making progress. I got a new error `NonMatchingSplitsSizesError ` (see traceback below), which I suspect is due to the fact that number of rows in the dataset changed (one row per word --> one row per sentence) as well as the number of bytes due to the slightly updated data structure. \r\n\r\n```python\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=1756492, num_examples=80536, dataset_name='xtreme'), 'recorded': SplitInfo(name='validation', num_bytes=1837109, num_examples=10000, dataset_name='xtreme')}, {'expected': SplitInfo(name='test', num_bytes=1752572, num_examples=80326, dataset_name='xtreme'), 'recorded': SplitInfo(name='test', num_bytes=1833214, num_examples=10000, dataset_name='xtreme')}, {'expected': SplitInfo(name='train', num_bytes=3496832, num_examples=160394, dataset_name='xtreme'), 'recorded': SplitInfo(name='train', num_bytes=3658428, num_examples=20000, dataset_name='xtreme')}]\r\n```\r\nI can fix the error by replacing the values in the `datasets_infos.json` file, which I tested for English. However, to update this for all 40 datasets manually is slightly painful. Is there a better way to update the expected values for all datasets?",
"You can update the json file by calling\r\n```\r\nnlp-cli test ./datasets/xtreme --save_infos --all_configs\r\n```",
"One more thing about features. I mentioned\r\n\r\n```python\r\nfeatures = nlp.Features({\r\n \"words\": [nlp.Value(\"string\")],\r\n \"ner_tags\": [nlp.Value(\"string\")],\r\n \"langs\": [nlp.Value(\"string\")],\r\n})\r\n```\r\n\r\nbut it's actually not consistent with the way we write datasets. Something like this is simpler to read and more consistent with the way we define datasets:\r\n\r\n```python\r\nfeatures = nlp.Features({\r\n \"words\": nlp.Sequence(nlp.Value(\"string\")),\r\n \"ner_tags\": nlp.Sequence(nlp.Value(\"string\")),\r\n \"langs\": nlp.Sequence(nlp.Value(\"string\")),\r\n})\r\n```\r\n\r\nSorry about that",
"Closing this since PR #437 fixed the problem and has been merged to `master`. "
] | 1,595,449,760,000 | 1,596,375,034,000 | 1,596,375,034,000 | MEMBER | null | Hi 🤗 team!
## Description of the problem
Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows:
```python
from nlp import load_dataset
# AmazonPhotos.zip is located in data/
dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
dataset_train = dataset['train']
```
However, I am not sure that `load_dataset()` is returning the correct data structure for NER.
Currently, every row in `dataset_train` is of the form
```python
{'word': str, 'ner_tag': str, 'lang': str}
```
but I think we actually want something like
```python
{'words': List[str], 'ner_tags': List[str], 'langs': List[str]}
```
so that each row corresponds to a _sequence_ of words associated with each example. With the current data structure I do not think it is possible to transform `dataset_train` into a form suitable for training because we do not know the boundaries between examples.
Indeed, [this line](https://github.com/google-research/xtreme/blob/522434d1aece34131d997a97ce7e9242a51a688a/third_party/utils_tag.py#L58) in the XTREME repo, processes the texts as lists of sentences, tags, and languages.
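For reference, a features definition matching this structure might look like the following sketch (sequences of strings; the exact definition would live in the `_info()` method of `xtreme.py`):
```python
import nlp

# sketch of a per-sentence features definition for PAN-X
features = nlp.Features(
    {
        "words": nlp.Sequence(nlp.Value("string")),
        "ner_tags": nlp.Sequence(nlp.Value("string")),
        "langs": nlp.Sequence(nlp.Value("string")),
    }
)
```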
## Proposed solution
Replace
```python
with open(filepath) as f:
    data = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for id_, row in enumerate(data):
        if row:
            lang, word = row[0].split(":")[0], row[0].split(":")[1]
            tag = row[1]
            yield id_, {"word": word, "ner_tag": tag, "lang": lang}
```
from [these lines](https://github.com/huggingface/nlp/blob/ce7d3a1d630b78fe27188d1706f3ea980e8eec43/datasets/xtreme/xtreme.py#L881-L887) of the `_generate_examples()` function with something like
```python
guid_index = 1
with open(filepath, encoding="utf-8") as f:
    words = []
    ner_tags = []
    langs = []
    for line in f:
        if line.startswith("-DOCSTART-") or line == "" or line == "\n":
            if words:
                yield guid_index, {"words": words, "ner_tags": ner_tags, "langs": langs}
                guid_index += 1
                words = []
                ner_tags = []
                langs = []
        else:
            # pan-x data is tab separated
            splits = line.split("\t")
            # strip out en: prefix
            langs.append(splits[0][:2])
            words.append(splits[0][3:])
            if len(splits) > 1:
                # ner tag is in the last column
                ner_tags.append(splits[-1].replace("\n", ""))
            else:
                # examples have no label in test set
                ner_tags.append("O")
```
If you agree, @lvwerra or I would be happy to implement this and create a PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/425/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/425/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/424/comments | https://api.github.com/repos/huggingface/datasets/issues/424/events | https://github.com/huggingface/datasets/pull/424 | 663,858,552 | MDExOlB1bGxSZXF1ZXN0NDU1MTk4MTY0 | 424 | Web of science | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,595,432,311,000 | 1,595,514,478,000 | 1,595,514,476,000 | CONTRIBUTOR | null | This PR adds the Web of Science dataset
#353 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/424/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/424",
"html_url": "https://github.com/huggingface/datasets/pull/424",
"diff_url": "https://github.com/huggingface/datasets/pull/424.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/424.patch",
"merged_at": 1595514476000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/423/comments | https://api.github.com/repos/huggingface/datasets/issues/423/events | https://github.com/huggingface/datasets/pull/423 | 663,079,359 | MDExOlB1bGxSZXF1ZXN0NDU0NTU4OTA0 | 423 | Change features vs schema logic | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I had to make `SplitDict` serializable to be able to copy `DatasetInfo` objects properly.\r\nSerialization was also asked in #389 ",
"One thing I forgot to say here, is that we also want to use the features arguments of `load_dataset` (which goes in the builder’s config) to override the default features of a dataset script."
] | 1,595,343,167,000 | 1,595,668,114,000 | 1,595,499,317,000 | MEMBER | null | ## New logic for `nlp.Features` in datasets
Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`.
However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files.
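In other words, `features` is what users read and what each transform keeps up to date. A minimal sketch (assuming the standard squad config; the added column name is made up):
```python
from nlp import load_dataset

squad = load_dataset("squad", split="train")
print(squad.features)  # front-facing description of the columns

# after a transform such as map, `features` reflects the new columns
squad = squad.map(lambda example: {"question_len": len(example["question"])})
print(squad.features)  # now also describes the "question_len" column
```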
Changes:
- Remove `schema` field in `nlp.Dataset`
- Make `features` the source of truth to read/write examples
- `features` can no longer be `None` in `nlp.Dataset`
- Update `features` after each dataset transform such as `nlp.Dataset.map`
Todo: change the tests to take these changes into account | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/423/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/423/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/423",
"html_url": "https://github.com/huggingface/datasets/pull/423",
"diff_url": "https://github.com/huggingface/datasets/pull/423.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/423.patch",
"merged_at": 1595499316000
} | true |