url stringlengths 58–61 | repository_url stringclasses 1 value | labels_url stringlengths 72–75 | comments_url stringlengths 67–70 | events_url stringlengths 65–68 | html_url stringlengths 46–51 | id int64 599M–1.11B | node_id stringlengths 18–32 | number int64 1–3.59k | title stringlengths 1–276 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments sequence | created_at int64 1,587B–1,642B | updated_at int64 1,587B–1,642B | closed_at int64 1,587B–1,642B ⌀ | author_association stringclasses 3 values | active_lock_reason null | body stringlengths 0–228k ⌀ | reactions dict | timeline_url stringlengths 67–70 | performed_via_github_app null | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/750/comments | https://api.github.com/repos/huggingface/datasets/issues/750/events | https://github.com/huggingface/datasets/issues/750 | 726,589,446 | MDU6SXNzdWU3MjY1ODk0NDY= | 750 | load_dataset doesn't include `features` in its hash | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,293,401,000 | 1,603,964,161,000 | 1,603,964,161,000 | MEMBER | null | It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored.
Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of:
```
dataset = load_dataset("glue", "mnli")
features = dataset["train"].features
features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order
dataset = load_dataset("glue", "mnli", features=features)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/750/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/749/comments | https://api.github.com/repos/huggingface/datasets/issues/749/events | https://github.com/huggingface/datasets/issues/749 | 726,366,062 | MDU6SXNzdWU3MjYzNjYwNjI= | 749 | [XGLUE] Adding new dataset | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Amazing! ",
"Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .\r\n\r\nAs stated in the XGLUE paper: https://arxiv.org/pdf/2004.01401.pdf , for each of the 11 down-stream tasks training data is only available in English, whereas development and test data is available in multiple different language *cf.* here: \r\n\r\n![Screenshot from 2020-11-04 15-02-17](https://user-images.githubusercontent.com/23423619/98120893-d7499a80-1eae-11eb-9d0b-57dfe5d4ee68.png)\r\n\r\nSo, I'd suggest to have exactly 11 \"language-independent\" configs: \"ner\", \"pos\", ... and give the sample in each dataset in the config a \"language\" label being one of \"ar\", \"bg\", .... => To me this makes more sense than making languaga specific config, *e.g.* \"ner-de\", ...especially because training data is only available in English. Do you guys agree? ",
"In this case we should have named splits, so config `ner` has splits `train`, `validation`, `test-en`, `test-ar`, `test-bg`, etc...\r\n\r\nThis is more in the spirit of the task afaiu, and will avoid making users do the filtering step themselves when testing different models or different configurations of the same model.",
"I see your point! \r\n\r\nI think this would be quite feasible to do and makes sense to me as well! In the paper results are reported per language, so it seems more natural to do it this way. \r\n\r\nGood for me @yjernite ! What do the others think? @lhoestq \r\n",
"I agree with Yacine on this!",
"Okey actually not that easy to add things like `test-de` to `datasets` => this would be the first dataset to have this.\r\nSee: https://github.com/huggingface/datasets/pull/802",
"IMO we should have one config per language. That's what we're doing for xnli, xtreme etc.\r\nHaving split names that depend on the language seems wrong. We should try to avoid split names that are not train/val/test.\r\nSorry for late response on this one",
"@lhoestq agreed on having one config per language, but we also need to be able to have different split names and people are going to want to use hyphens, so we should at the very least warn them why it's failing :) E.g. for ANLI with different stages of data (currently using underscores) or https://www.tau-nlp.org/commonsenseqa with their train-sanity or dev-sanity splits",
"Yes sure ! Could you open a separate issue for that ?",
"Really cool dataset π btw. does Transformers support all 11 tasks π€ would be awesome to have a xglue script (like the \"normal\" glue one)",
"Just to make sure this is what we want here. If we add one config per language, \r\n\r\nthis means that this dataset ends up with well over 100 different configs most of which will have the same `train` split. The train split is always in English. Also, I'm not sure whether it's better for the user to be honest. \r\n\r\nI think it could be quite confusing for the user to have\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner-de\", split=\"train\")\r\n```\r\n\r\nin English even though it's `ner-de`.\r\n\r\nTo be honest, I'd prefer:\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test-de\")\r\ntest_dataset_fr = load_dataset(\"xglue\", \"ner\", split=\"test-fr\")\r\n```\r\n\r\nhere",
"Oh yes right I didn't notice the train set was always in english sorry.\r\nMoreover it seems that the way this dataset is used is to pick a pretrained multilingual model, fine-tune it on the english train set and then evaluate on each test set (one per language).\r\nSo to better fit the usual usage of this dataset, I agree that it's better to have one test split per language. \r\n\r\nSomething like your latest example patrick is fine imo :\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test.de\")\r\n```\r\n\r\nI just replace test-de with test.de since `-` is not allowed for split names (it has to follow the `\\w+` regex), and usually we specify the language after a point. ",
"Closing since XGLUE has been added in #802 , thanks patrick :) "
] | 1,603,277,496,000 | 1,609,927,376,000 | 1,609,927,375,000 | MEMBER | null | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/749/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/748/comments | https://api.github.com/repos/huggingface/datasets/issues/748/events | https://github.com/huggingface/datasets/pull/748 | 726,196,589 | MDExOlB1bGxSZXF1ZXN0NTA3MzAyNjE3 | 748 | New version of CompGuessWhat?! with refined annotations | {
"login": "aleSuglia",
"id": 1479733,
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aleSuglia",
"html_url": "https://github.com/aleSuglia",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"No worries. Always happy to help and thanks for your support in fixing the issue :)"
] | 1,603,263,341,000 | 1,603,270,362,000 | 1,603,269,979,000 | CONTRIBUTOR | null | This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! original split. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/748/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/748",
"html_url": "https://github.com/huggingface/datasets/pull/748",
"diff_url": "https://github.com/huggingface/datasets/pull/748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/748.patch",
"merged_at": 1603269979000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/747/comments | https://api.github.com/repos/huggingface/datasets/issues/747/events | https://github.com/huggingface/datasets/pull/747 | 725,884,704 | MDExOlB1bGxSZXF1ZXN0NTA3MDQ3MDE4 | 747 | Add Quail question answering dataset | {
"login": "sai-prasanna",
"id": 3595526,
"node_id": "MDQ6VXNlcjM1OTU1MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sai-prasanna",
"html_url": "https://github.com/sai-prasanna",
"followers_url": "https://api.github.com/users/sai-prasanna/followers",
"following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}",
"gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions",
"organizations_url": "https://api.github.com/users/sai-prasanna/orgs",
"repos_url": "https://api.github.com/users/sai-prasanna/repos",
"events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}",
"received_events_url": "https://api.github.com/users/sai-prasanna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,222,394,000 | 1,603,269,315,000 | 1,603,269,315,000 | CONTRIBUTOR | null | QuAIL is a multi-domain RC dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversity mitigates the issue of possible overlap between training and test data of large pre-trained models, which the current SOTA systems are based on. For instance, BERT is trained on Wikipedia + BookCorpus, and was tested on Wikipedia-based SQuAD (Devlin, Chang, Lee, & Toutanova, 2019).
https://text-machine-lab.github.io/blog/2020/quail/ @annargrs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/747/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/747",
"html_url": "https://github.com/huggingface/datasets/pull/747",
"diff_url": "https://github.com/huggingface/datasets/pull/747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/747.patch",
"merged_at": 1603269315000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/746/comments | https://api.github.com/repos/huggingface/datasets/issues/746/events | https://github.com/huggingface/datasets/pull/746 | 725,627,235 | MDExOlB1bGxSZXF1ZXN0NTA2ODMzNDMw | 746 | dataset(ngt): add ngt dataset initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,603,202,698,000 | 1,616,480,378,000 | 1,616,480,378,000 | CONTRIBUTOR | null | Currently only making the paths to the annotation ELAN (eaf) file and videos available.
This is the first accessible way to download this dataset, which is not manual file-by-file.
Only downloading the necessary files, the annotation files are very small, 20MB for all of them, but the video files are large, 100GB in total, saved in `mpg` format.
I do not intend to actually store these as an uncompressed array of frames, because it will be huge.
Future updates may add pose estimation files for all videos, making it easier to work with this data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/746/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/746",
"html_url": "https://github.com/huggingface/datasets/pull/746",
"diff_url": "https://github.com/huggingface/datasets/pull/746.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/746.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/745/comments | https://api.github.com/repos/huggingface/datasets/issues/745/events | https://github.com/huggingface/datasets/pull/745 | 725,589,352 | MDExOlB1bGxSZXF1ZXN0NTA2ODAxMTI0 | 745 | Fix emotion description | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello, I probably have a silly question but the labels of the emotion dataset are in the form of numbers and not string, so I can not use the function classification_report because it mixes numbers and string (prediction). How can I access the label in the form of a string and not a number? \r\nThank you in advance."
] | 1,603,200,519,000 | 1,619,102,851,000 | 1,603,269,507,000 | MEMBER | null | Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper.
I also took the liberty to make use of `ClassLabel` for the emotion labels. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/745/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/745",
"html_url": "https://github.com/huggingface/datasets/pull/745",
"diff_url": "https://github.com/huggingface/datasets/pull/745.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/745.patch",
"merged_at": 1603269507000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/744/comments | https://api.github.com/repos/huggingface/datasets/issues/744/events | https://github.com/huggingface/datasets/issues/744 | 724,918,448 | MDU6SXNzdWU3MjQ5MTg0NDg= | 744 | Dataset Explorer Doesn't Work for squad_es and squad_it | {
"login": "gaotongxiao",
"id": 22607038,
"node_id": "MDQ6VXNlcjIyNjA3MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/22607038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaotongxiao",
"html_url": "https://github.com/gaotongxiao",
"followers_url": "https://api.github.com/users/gaotongxiao/followers",
"following_url": "https://api.github.com/users/gaotongxiao/following{/other_user}",
"gists_url": "https://api.github.com/users/gaotongxiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaotongxiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaotongxiao/subscriptions",
"organizations_url": "https://api.github.com/users/gaotongxiao/orgs",
"repos_url": "https://api.github.com/users/gaotongxiao/repos",
"events_url": "https://api.github.com/users/gaotongxiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaotongxiao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Oups wrong click.\r\nThis one is for you @srush"
] | 1,603,136,052,000 | 1,603,730,177,000 | 1,603,730,177,000 | NONE | null | https://huggingface.co/nlp/viewer/?dataset=squad_es
https://huggingface.co/nlp/viewer/?dataset=squad_it
Both pages show "OSError: [Errno 28] No space left on device". | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/744/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/743/comments | https://api.github.com/repos/huggingface/datasets/issues/743/events | https://github.com/huggingface/datasets/issues/743 | 724,703,980 | MDU6SXNzdWU3MjQ3MDM5ODA= | 743 | load_dataset for CSV files not working | {
"login": "iliemihai",
"id": 2815308,
"node_id": "MDQ6VXNlcjI4MTUzMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2815308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliemihai",
"html_url": "https://github.com/iliemihai",
"followers_url": "https://api.github.com/users/iliemihai/followers",
"following_url": "https://api.github.com/users/iliemihai/following{/other_user}",
"gists_url": "https://api.github.com/users/iliemihai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliemihai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliemihai/subscriptions",
"organizations_url": "https://api.github.com/users/iliemihai/orgs",
"repos_url": "https://api.github.com/users/iliemihai/repos",
"events_url": "https://api.github.com/users/iliemihai/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliemihai/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thank you !\r\nCould you provide a csv file that reproduces the error ?\r\nIt doesn't have to be one of your dataset. As long as it reproduces the error\r\nThat would help a lot !",
"I think another good example is the following:\r\n`\r\nfrom datasets import load_dataset\r\n`\r\n`\r\ndataset = load_dataset(\"csv\", data_files=[\"./sts-dev.csv\"], delimiter=\"\\t\", column_names=[\"one\", \"two\", \"three\", \"four\", \"score\", \"sentence1\", \"sentence2\"], script_version=\"master\")`\r\n`\r\n\r\nDisplayed error `CSV parse error: Expected 7 columns, got 6` even tough I put 7 columns. First four columns from the csv don't have a name, so I've named them by default. The csv file is the .dev file from STSb benchmark dataset.\r\n\r\n",
"Hi, seems I also can't read csv file. I was trying with a dummy csv with only three rows.\r\n\r\n```\r\ntext,label\r\nI hate google,negative\r\nI love Microsoft,positive\r\nI don't like you,negative\r\n```\r\nI was using the HuggingFace image in Paperspace Gradient (datasets==1.1.3). The following code doesn't work:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\")\r\n```\r\nIt outputs the following:\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset csv/default-3b6254ff4dd403e5 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/csv/default-3b6254ff4dd403e5/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...\r\nDataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-3b6254ff4dd403e5/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2. Subsequent calls will reuse this data.\r\n```\r\nBut `len(dataset)` gives `1` and I can't access rows with indexing `dataset[0]` (it gives `KeyError: 0`).\r\n\r\nHowever, loading from pandas dataframe is working.\r\n```\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\ndf = pd.read_csv('test_data.csv')\r\ndataset = Dataset.from_pandas(df)\r\n```\r\n\r\n",
"This is because load_dataset without `split=` returns a dictionary of split names (train/validation/test) to dataset.\r\nYou can do\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\")\r\nprint(dataset[\"train\"][0])\r\n```\r\n\r\nOr if you want to directly get the train split:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\", split=\"train\")\r\nprint(dataset[0])\r\n```\r\n",
"Good point\r\n\r\nDesign question for us, though: should `load_dataset` when no split is specified and only one split is present in the dataset (common use case with CSV/text/JSON datasets) return a `Dataset` instead of a `DatsetDict`? I feel like it's often what the user is expecting. I break a bit the paradigm of a unique return type but since this library is designed for widespread DS people more than CS people usage I would tend to think that UX should take precedence over CS reasons. What do you think?",
"In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.\r\nI'm ok with returning the dataset object if no split specifications are given for text/json/csv/pandas.\r\n\r\nFor the other datasets ton the other hand the user doesn't know in advance the splits so I would keep the dictionary by default. What do you think ?",
"Thanks for your quick response! I'm fine with specifying the split as @lhoestq suggested. My only concern is when I'm loading from python dict or pandas, the library returns a dataset instead of a dictionary of datasets when no split is specified. I know that they use a different function `Dataset.from_dict` or `Dataset.from_pandas` but the text/csv files use `load_dataset()`. However, to the user, they do the same task and we probably expect them to have the same behavior.",
"```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='./amazon_data/Video_Games_5.csv', delimiter=\",\", split=['train', 'test'])\r\n```\r\nI was running the above line, but got this error.\r\n\r\n```ValueError: Unknown split \"test\". Should be one of ['train'].```\r\n\r\nThe data is amazon product data. I load the Video_Games_5.json.gz data into pandas and save it as csv file. and then load the csv file using the above code. I thought, ```split=['train', 'test']``` would split the data into train and test. did I misunderstood?\r\n\r\nThank you!\r\n\r\n",
"Hi ! the `split` argument in `load_dataset` is used to select the splits you want among the available splits.\r\nHowever when loading a csv with a single file as you did, only a `train` split is available by default.\r\n\r\nIndeed since `data_files='./amazon_data/Video_Games_5.csv'` is equivalent to `data_files={\"train\": './amazon_data/Video_Games_5.csv'}`, you can get a dataset with \r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='./amazon_data/Video_Games_5.csv', delimiter=\",\", split=\"train\")\r\n```\r\n\r\nAnd then to get both a train and test split you can do\r\n```python\r\ndataset = dataset.train_test_split()\r\nprint(dataset.keys())\r\n# ['train', 'test']\r\n```\r\n\r\n\r\nAlso note that a csv dataset may have several available splits if it is defined this way:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files={\r\n \"train\": './amazon_data/Video_Games_5_train.csv',\r\n \"test\": './amazon_data/Video_Games_5_test.csv'\r\n})\r\nprint(dataset.keys())\r\n# ['train', 'test']\r\n```\r\n",
"> In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.\r\n> I'm ok with returning the dataset object if no split specifications are given for text/json/csv/pandas.\r\n> \r\n> For the other datasets ton the other hand the user doesn't know in advance the splits so I would keep the dictionary by default. What do you think ?\r\n\r\nYes maybe this would be good. I think having to select 'train' from the resulting object why the user gave no split information is a confusing and not intuitive behavior.",
"> Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.\r\n> \r\n> `from datasets import load_dataset`\r\n> `dataset = load_dataset(\"csv\", data_files=[\"./sample_data.csv\"], delimiter=\"\\t\", column_names=[\"title\", \"text\"], script_version=\"master\")`\r\n> \r\n> Displayed error:\r\n> `... ArrowInvalid: CSV parse error: Expected 2 columns, got 1`\r\n\r\nI'm also facing the same issue when trying to load from a csv file locally:\r\n\r\n```python\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('csv', data_files='sample_data.csv')\r\n```\r\n\r\nError when executed from Google Colab:\r\n```python\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-34-79a8d4f65ed6> in <module>()\r\n 1 from nlp import load_dataset\r\n----> 2 dataset = load_dataset('csv', data_files='sample_data.csv')\r\n\r\n9 frames\r\n/usr/local/lib/python3.7/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 535 try:\r\n 536 # Prepare split will record examples associated to the split\r\n--> 537 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 538 except OSError:\r\n 539 raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)\r\n 863 \r\n 864 generator = self._generate_tables(**split_generator.gen_kwargs)\r\n--> 865 for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n 866 writer.write_table(table)\r\n 867 num_examples, num_bytes = writer.finalize()\r\n\r\n/usr/local/lib/python3.7/dist-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)\r\n 213 def __iter__(self, *args, **kwargs):\r\n 214 try:\r\n--> 215 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 216 # return super(tqdm...) 
will not catch exception\r\n 217 yield obj\r\n\r\n/usr/local/lib/python3.7/dist-packages/tqdm/std.py in __iter__(self)\r\n 1102 fp_write=getattr(self.fp, 'write', sys.stderr.write))\r\n 1103 \r\n-> 1104 for obj in iterable:\r\n 1105 yield obj\r\n 1106 # Update and possibly print the progressbar.\r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/datasets/csv/ede98314803c971fef04bcee45d660c62f3332e8a74491e0b876106f3d99bd9b/csv.py in _generate_tables(self, files)\r\n 78 read_options=self.config.pa_read_options,\r\n 79 parse_options=self.config.pa_parse_options,\r\n---> 80 convert_options=self.config.convert_options,\r\n 81 )\r\n 82 yield i, pa_table\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: CSV parse error: Expected 1 columns, got 8\r\n```\r\n\r\nVersion:\r\n```\r\nnlp==0.4.0\r\n```",
"Hi @kauvinlucas\r\n\r\nYou can use the latest versions of `datasets` to do this.\r\nTo do so, just `pip install datasets` instead of `nlp` (the library was renamed) and then\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='sample_data.csv')",
"Hi \r\nI'm having a different problem with loading local csv. \r\n```Python\r\nfrom datasets import load_dataset \r\ndataset = load_dataset('csv', data_files='sample.csv') \r\n``` \r\n\r\ngives `ValueError: Specified named and prefix; you can only specify one.` error \r\n\r\nversions: \r\n- datasets: 1.1.3 \r\n- python: 3.9.6 \r\n- pyarrow: 2.0.0 ",
"Oh.. I figured it out. According to issue #[42387](https://github.com/pandas-dev/pandas/issues/42387) from pandas, this new version does not accept None for both parameters (which was being done by the repo I'm testing). Dowgrading Pandas==1.0.4 and Python==3.8 worked",
"Hi, \r\nI got an `OSError: Cannot find data file. ` when I tried to use load_dataset with tsv files. I have checked the paths, and they are correct. \r\n\r\nversions\r\n- python: 3.7.9\r\n- datasets: 1.1.3\r\n- pyarrow: 2.0.0\r\n- transformers: 4.2.2\r\n\r\n~~~\r\ndata_files = {\"train\": \"train.tsv\", \"test\",: \"test.tsv\"}\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n~~~\r\n\r\nThe entire Error message is on below:\r\n\r\n```08/14/2021 16:55:44 - INFO - __main__ - load a local file for train: /project/media-framing/transformer4/data/0/val/label1.tsv\r\n08/14/2021 16:55:44 - INFO - __main__ - load a local file for test: /project/media-framing/transformer4/data/unlabel/test.tsv\r\nUsing custom data configuration default\r\nDownloading and preparing dataset csv/default-00a4200ae8507533 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /usr4/cs542sp/hey1/.cache/huggingface/datasets/csv/default-00a4200ae8507533/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...\r\nTraceback (most recent call last):\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 592, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 944, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 307, in finalize\r\n self.stream.close()\r\n File \"pyarrow/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"run_glue.py\", line 484, in <module>\r\n main()\r\n File \"run_glue.py\", line 243, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/load.py\", line 610, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 515, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 594, in _download_and_prepare\r\n raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\nOSError: Cannot find data file. ```",
"Hi ! It looks like the error stacktrace doesn't match with your code snippet.\r\n\r\nWhat error do you get when running this ?\r\n```\r\ndata_files = {\"train\": \"train.tsv\", \"test\",: \"test.tsv\"}\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n```\r\ncan you check that both tsv files are in the same folder as the current working directory of your shell ?",
"Hi @lhoestq, Below is the entire error message after I move both tsv files to the same directory. It's the same with I got before.\r\n```\r\n/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)\r\n return torch._C._cuda_getDeviceCount() > 0\r\n08/29/2021 22:56:43 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False\r\n08/29/2021 22:56:43 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=/projectnb/media-framing/pred_result/label1/, overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=True, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=8.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs/Aug29_22-56-43_scc1, logging_first_step=False, logging_steps=500, save_steps=3000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=/projectnb/media-framing/pred_result/label1/, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, _n_gpu=0)\r\n08/29/2021 22:56:43 - INFO - __main__ - load a local file for train: /project/media-framing/transformer4/temp_train.tsv\r\n08/29/2021 22:56:43 - INFO - __main__ - load a local file for test: /project/media-framing/transformer4/temp_test.tsv\r\n08/29/2021 22:56:43 - WARNING - datasets.builder - Using custom data configuration default-df627c23ac0e98ec\r\nDownloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /usr4/cs542sp/hey1/.cache/huggingface/datasets/csv/default-df627c23ac0e98ec/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff...\r\nTraceback (most recent call last):\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 693, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 1166, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 428, in finalize\r\n self.stream.close()\r\n File \"pyarrow/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"run_glue.py\", line 487, in <module>\r\n main()\r\n File 
\"run_glue.py\", line 244, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/load.py\", line 852, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 616, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 699, in _download_and_prepare\r\n + str(e)\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nerror closing file\r\n```",
"Hi !\r\nCan you try running this into a python shell directly ?\r\n\r\n```python\r\nimport os\r\nfrom datasets import load_dataset\r\n\r\ndata_files = {\"train\": \"train.tsv\", \"test\": \"test.tsv\"}\r\nassert all(os.path.isfile(data_file) for data_file in data_files.values()), \"Couln't find files\"\r\n\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\nprint(\"success !\")\r\n```\r\n\r\nThis way all the code from `run_glue.py` doesn't interfere with our tests :)",
"Hi @lhoestq, \r\n\r\nBelow is what I got from terminal after I copied and run your code. I think the files themselves are good since there is no assertion error. \r\n\r\n```\r\nUsing custom data configuration default-df627c23ac0e98ec\r\nDownloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /usr4/cs542sp/hey1/.cache/huggingface/datasets/csv/default-df627c23ac0e98ec/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff...\r\nTraceback (most recent call last):\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 693, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 1166, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 428, in finalize\r\n self.stream.close()\r\n File \"pyarrow/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"test.py\", line 7, in <module>\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/load.py\", line 852, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 616, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 699, in _download_and_prepare\r\n + str(e)\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nerror closing file\r\n```",
"Hi, could this be a permission error ? I think it fails to close the arrow file that contains the data from your CSVs in the cache.\r\n\r\nBy default datasets are cached in `~/.cache/huggingface/datasets`, could you check that you have the right permissions ?\r\nYou can also try to change the cache directory by passing `cache_dir=\"path/to/my/cache/dir\"` to `load_dataset`.",
"Thank you!! @lhoestq\r\n\r\nFor some reason, I don't have the default path for datasets to cache, maybe because I work from a remote system. The issue solved after I pass the `cache_dir` argument to the function. Thank you very much!!"
] | 1,603,119,231,000 | 1,631,212,006,000 | null | CONTRIBUTOR | null | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInvalid: CSV parse error: Expected 2 columns, got 1
`
I should mention that when I've tried to read data from `https://github.com/lhoestq/transformers/tree/custom-dataset-in-rag-retriever/examples/rag/test_data/my_knowledge_dataset.csv` it worked without a problem. I've read that there might be some problems with /r character, so I've removed them from the custom dataset, but the problem still remains.
I've added a colab reproducing the bug, but unfortunately I cannot provide the dataset.
https://colab.research.google.com/drive/1Qzu7sC-frZVeniiWOwzoCe_UHZsrlxu8?usp=sharing
Are there any work around for it ?
Thank you | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/743/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/742/comments | https://api.github.com/repos/huggingface/datasets/issues/742/events | https://github.com/huggingface/datasets/pull/742 | 724,509,974 | MDExOlB1bGxSZXF1ZXN0NTA1ODgzNjI3 | 742 | Add OCNLI, a new CLUE dataset | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks :) merging it"
] | 1,603,105,593,000 | 1,603,383,589,000 | 1,603,383,588,000 | MEMBER | null | OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for
Chinese Natural Language Inference, collected following closely the procedures of MNLI,
but with enhanced strategies aiming for more challenging inference pairs. We want to
emphasize we did not use human/machine translation in creating the dataset, and thus
our Chinese texts are original and not translated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/742/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/742",
"html_url": "https://github.com/huggingface/datasets/pull/742",
"diff_url": "https://github.com/huggingface/datasets/pull/742.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/742.patch",
"merged_at": 1603383587000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/741/comments | https://api.github.com/repos/huggingface/datasets/issues/741/events | https://github.com/huggingface/datasets/issues/741 | 723,924,275 | MDU6SXNzdWU3MjM5MjQyNzU= | 741 | Creating dataset consumes too much memory | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thanks for reporting.\r\nIn theory since the dataset script is just made to yield examples to write them into an arrow file, it's not supposed to create memory issues.\r\n\r\nCould you please try to run this exact same loop in a separate script to see if it's not an issue with `PIL` ?\r\nYou can just copy paste what's inside `_generate_examples` and remove all the code for `datasets` (remove yield).\r\n\r\nIf the RAM usage stays low after 600 examples it means that it comes from some sort of memory leak in the library, or with pyarrow.",
"Here's an equivalent loading code:\r\n```python\r\nimages_path = \"PHOENIX-2014-T-release-v3/PHOENIX-2014-T/features/fullFrame-210x260px/train\"\r\n\r\nfor dir_path in tqdm(os.listdir(images_path)):\r\n frames_path = os.path.join(images_path, dir_path)\r\n np_frames = []\r\n for frame_name in os.listdir(frames_path):\r\n frame_path = os.path.join(frames_path, frame_name)\r\n im = Image.open(frame_path)\r\n np_frames.append(np.asarray(im))\r\n im.close()\r\n```\r\n\r\nThe process takes 0.3% of memory, even after 1000 examples on the small machine with 120GB RAM.\r\n\r\nI guess something in the datasets library doesn't release the reference to the objects I'm yielding, but no idea how to test for this",
"I've had similar issues with Arrow once. I'll investigate...\r\n\r\nFor now maybe we can simply use the images paths in the dataset you want to add. I don't expect to fix this memory issue until 1-2 weeks unfortunately. Then we can just update the dataset with the images. What do you think ?",
"If it's just 1-2 weeks, I think it's best if we wait. I don't think it is very urgent to add it, and it will be much more useful with the images loaded rather than not (the images are low resolution, and thus papers using this dataset actually fit the entire video into memory anyway)\r\n\r\nI'll keep working on other datasets in the meanwhile :) ",
"Ok found the issue. This is because the batch size used by the writer is set to 10 000 elements by default so it would load your full dataset in memory (the writer has a buffer that flushes only after each batch). Moreover to write in Apache Arrow we have to use python objects so what's stored inside the ArrowWriter's buffer is actually python integers (32 bits).\r\n\r\nLowering the batch size to 10 should do the job.\r\n\r\nI will add a flag to the DatasetBuilder class of dataset scripts, so that we can customize the batch size.",
"Thanks, that's awesome you managed to find the problem.\r\n\r\nAbout the 32 bits - really? there isn't a way to serialize the numpy array somehow? 32 bits would take 4 times the memory / disk space needed to store these videos.\r\n\r\nPlease let me know when the batch size is customizable and I'll try again!",
"The 32 bit integrers are only used in the writer's buffer because Arrow doesn't take numpy arrays correctly as input. On disk it's stored as uint8 in arrow format ;)",
"> I don't expect to fix this memory issue until 1-2 weeks unfortunately.\r\n\r\nHi @lhoestq \r\nnot to rush of course, but I was wondering if you have a new timeline so I know how to plan my work around this :) ",
"Hi ! Next week for sure :) ",
"Alright it should be good now.\r\nYou just have to specify `_writer_batch_size = 10` for example as a class attribute of the dataset builder class.",
"I added it, but still it consumes as much memory\r\n\r\nhttps://github.com/huggingface/datasets/pull/722/files#diff-2e0d865dd4a60dedd1861d6f8c5ed281ded71508467908e1e0b1dbe7d2d420b1R66\r\n\r\nDid I not do it correctly?",
"Yes you did it right.\r\nDid you rebase to include the changes of #828 ?\r\n\r\nEDIT: looks like you merged from master in the PR. Not sure why you still have an issue then, I will investigate",
"Hi @lhoestq, any update on this?\r\nPerhaps even a direction I could try myself?",
"Sorry for the delay, I was busy with the dataset sprint and the incredible amount of contributions to the library ^^'\r\n\r\nWhat you can try to do to find what's wrong is check at which frequency the arrow writer writes all the examples from its in-memory buffer on disk. This happens [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L257-L258) in the code.\r\n\r\nThe idea is that `write_on_file` writes the examples every `writer_batch_size` examples and clear the buffer `self. current_rows`. As soon as `writer_batch_size` is small enough you shouldn't have memory issues in theory.\r\n\r\nLet me know if you have questions or if I can help.\r\n\r\nSince the dataset sprint is over and I will also be done with all the PRs soon I will be able to go back at it and take a look.",
"Thanks. I gave it a try and no success. I'm not sure what's happening there",
"I had the same issue. It works for me by setting `DEFAULT_WRITER_BATCH_SIZE = 10` of my dataset builder class. (And not `_writer_batch_size` as previously mentioned). I guess this is because `_writer_batch_size` is overwritten in `__init__` (see [here](https://github.com/huggingface/datasets/blob/0e2563e5d5c2fc193ea27d7c24607bb35607f2d5/src/datasets/builder.py#L934))",
"Yes the class attribute you can change is `DEFAULT_WRITER_BATCH_SIZE`.\r\nOtherwise in `load_dataset` you can specify `writer_batch_size=`",
"Ok thanks for the tips. Maybe the documentation should be updated accordingly https://huggingface.co/docs/datasets/add_dataset.html.",
"Thanks for reporting this mistake in the docs.\r\nI just fixed it at https://github.com/huggingface/datasets/commit/85cf7ff920c90ca2e12bedca12b36d2a043c3da2"
] | 1,603,001,226,000 | 1,617,097,628,000 | null | CONTRIBUTOR | null | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examples. """
filepath = os.path.join(base_path, "annotations", "manual", "PHOENIX-2014-T." + split + ".corpus.csv")
images_path = os.path.join(base_path, "features", "fullFrame-210x260px", split)
with open(filepath, "r", encoding="utf-8") as f:
data = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE)
for row in data:
frames_path = os.path.join(images_path, row["video"])[:-7]
np_frames = []
for frame_name in os.listdir(frames_path):
frame_path = os.path.join(frames_path, frame_name)
im = Image.open(frame_path)
np_frames.append(np.asarray(im))
im.close()
yield row["name"], {"video": np_frames}
```
The dataset creation process goes out of memory on a machine with 500GB RAM.
I was under the impression that the "generator" here is exactly for that, to avoid memory constraints.
However, even if you want the entire dataset in memory, it would be in the worst case
`260x210x3 x 400 max length x 7000 samples` in bytes (uint8) = 458.64 gigabytes
So I'm not sure why it's taking more than 500GB.
And the dataset creation fails after 170 examples on a machine with 120GB RAM, and after 672 examples on a machine with 500GB RAM.
---
## Info that might help:
Iterating over examples is extremely slow.
![image](https://user-images.githubusercontent.com/5757359/96359590-3c666780-111d-11eb-9347-1f833ad982a9.png)
If I perform this iteration in my own custom loop (without saving to file), it runs at 8-9 examples/sec.
And you can see at this state it is using 94% of the memory:
![image](https://user-images.githubusercontent.com/5757359/96359606-7afc2200-111d-11eb-8c11-0afbdba1a6a3.png)
And it is only using one CPU core, which is probably why it's so slow:
![image](https://user-images.githubusercontent.com/5757359/96359630-a3841c00-111d-11eb-9ba0-7fd3cdf51d26.png)
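For reference, a minimal sketch of the mitigation that came out of the discussion in the comments above: lowering the writer batch size so the `ArrowWriter` flushes its buffer to disk every few examples instead of accumulating ~10 000 large video examples in memory. The builder class name below is illustrative; `DEFAULT_WRITER_BATCH_SIZE` (or `writer_batch_size=` passed to `load_dataset`) is the actual knob mentioned in those comments.
```python
import datasets


class SignLanguageVideos(datasets.GeneratorBasedBuilder):
    # Flush the Arrow writer every 10 examples instead of the default 10 000,
    # so large uint8 video arrays never pile up in the in-memory buffer.
    DEFAULT_WRITER_BATCH_SIZE = 10

    def _info(self):
        ...  # features definition elided in this sketch

    def _split_generators(self, dl_manager):
        ...  # download/extract logic elided

    def _generate_examples(self, base_path, split):
        ...  # same frame-loading loop as above
```
Alternatively, per the comments above, the batch size can be overridden at load time with `load_dataset(..., writer_batch_size=10)`.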
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/741/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/740/comments | https://api.github.com/repos/huggingface/datasets/issues/740/events | https://github.com/huggingface/datasets/pull/740 | 723,047,958 | MDExOlB1bGxSZXF1ZXN0NTA0NzAyNTc0 | 740 | Fix TREC urls | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,839,488,000 | 1,603,097,677,000 | 1,603,097,676,000 | MEMBER | null | The old TREC urls are now redirections.
I updated the urls to the new ones, since we don't support redirections for downloads.
Fix #737 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/740/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/740",
"html_url": "https://github.com/huggingface/datasets/pull/740",
"diff_url": "https://github.com/huggingface/datasets/pull/740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/740.patch",
"merged_at": 1603097675000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/739/comments | https://api.github.com/repos/huggingface/datasets/issues/739/events | https://github.com/huggingface/datasets/pull/739 | 723,044,066 | MDExOlB1bGxSZXF1ZXN0NTA0Njk5NTY3 | 739 | Add wiki dpr multiset embeddings | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I still have to compute the dataset_infos, and build + host the indexes",
"update: I'm computing the metadata, will update the PR soon",
"Finally all green and ready to merge :)"
] | 1,602,839,149,000 | 1,606,399,370,000 | 1,606,399,369,000 | MEMBER | null | There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset.
Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset.
In the configuration you can now specify `embeddings_name="nq"` or `embeddings_name="multiset"` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/739/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/739",
"html_url": "https://github.com/huggingface/datasets/pull/739",
"diff_url": "https://github.com/huggingface/datasets/pull/739.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/739.patch",
"merged_at": 1606399369000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/738/comments | https://api.github.com/repos/huggingface/datasets/issues/738/events | https://github.com/huggingface/datasets/pull/738 | 723,033,923 | MDExOlB1bGxSZXF1ZXN0NTA0NjkxNjM4 | 738 | Replace seqeval code with original classification_report for simplicity | {
"login": "Hironsan",
"id": 6737785,
"node_id": "MDQ6VXNlcjY3Mzc3ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6737785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hironsan",
"html_url": "https://github.com/Hironsan",
"followers_url": "https://api.github.com/users/Hironsan/followers",
"following_url": "https://api.github.com/users/Hironsan/following{/other_user}",
"gists_url": "https://api.github.com/users/Hironsan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hironsan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hironsan/subscriptions",
"organizations_url": "https://api.github.com/users/Hironsan/orgs",
"repos_url": "https://api.github.com/users/Hironsan/repos",
"events_url": "https://api.github.com/users/Hironsan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hironsan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello,\r\n\r\nI ran https://github.com/huggingface/transformers/blob/master/examples/token-classification/run.sh\r\n\r\nAnd received this error:\r\n```\r\n100%|ββββββββββ| 407/407 [21:37<00:00, 3.44s/it]Traceback (most recent call last):\r\n File \"run_ner.py\", line 445, in <module>\r\n main()\r\n File \"run_ner.py\", line 398, in main\r\n results = trainer.evaluate()\r\n File \"/data/2021/transformers/src/transformers/trainer.py\", line 1470, in evaluate\r\n metric_key_prefix=metric_key_prefix,\r\n File \"/data/2021/transformers/src/transformers/trainer.py\", line 1622, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"run_ner.py\", line 345, in compute_metrics\r\n results = metric.compute(predictions=true_predictions, references=true_labels)\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/metric.py\", line 398, in compute\r\n output = self._compute(predictions=predictions, references=references, **kwargs)\r\n File \"/root/.cache/huggingface/modules/datasets_modules/metrics/seqeval/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff/seqeval.py\", line 97, in _compute\r\n report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True)\r\nTypeError: classification_report() got an unexpected keyword argument 'output_dict'\r\n```\r\n\r\nI'm still trying multiple things to see if I can work around this, but I thought it might be useful to mention it here.\r\n\r\n```\r\nName: transformers\r\nVersion: 4.3.0.dev0\r\n\r\nName: datasets\r\nVersion: 1.2.1\r\n```",
"Hi, can you try to update your local installation of `seqeval` ?\r\n\r\n```\r\npip install --upgrade seqeval\r\n```",
"@lhoestq thanks for the reply. Indeed it was some issue with my setup. I removed the \"transformers\" and \"datasets\" (that I had previously installed from the source code), cleared the cache and installed everything again. It works great now!"
] | 1,602,838,305,000 | 1,611,245,235,000 | 1,603,103,472,000 | CONTRIBUTOR | null | Recently, the original seqeval has enabled us to get per type scores and overall scores as a dictionary.
This PR replaces the current code with the original function (`classification_report`) to simplify it.
Also, the original code has been updated to fix #352.
- Related issue: https://github.com/chakki-works/seqeval/pull/38
```python
from datasets import load_metric
metric = load_metric("seqeval")
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
metric.compute(predictions=y_pred, references=y_true)
# Output: {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/738/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/738",
"html_url": "https://github.com/huggingface/datasets/pull/738",
"diff_url": "https://github.com/huggingface/datasets/pull/738.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/738.patch",
"merged_at": 1603103471000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/737/comments | https://api.github.com/repos/huggingface/datasets/issues/737/events | https://github.com/huggingface/datasets/issues/737 | 722,463,923 | MDU6SXNzdWU3MjI0NjM5MjM= | 737 | Trec Dataset Connection Error | {
"login": "aychang95",
"id": 10554495,
"node_id": "MDQ6VXNlcjEwNTU0NDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/10554495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aychang95",
"html_url": "https://github.com/aychang95",
"followers_url": "https://api.github.com/users/aychang95/followers",
"following_url": "https://api.github.com/users/aychang95/following{/other_user}",
"gists_url": "https://api.github.com/users/aychang95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aychang95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aychang95/subscriptions",
"organizations_url": "https://api.github.com/users/aychang95/orgs",
"repos_url": "https://api.github.com/users/aychang95/repos",
"events_url": "https://api.github.com/users/aychang95/events{/privacy}",
"received_events_url": "https://api.github.com/users/aychang95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting.\r\nThat's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.\r\n\r\nI'm opening a PR to update the url"
] | 1,602,777,473,000 | 1,603,097,676,000 | 1,603,097,676,000 | NONE | null | **Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken)
<details>
<summary>Error Logs</summary>
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-8-66bf1242096e> in <module>()
----> 1 load_dataset("trec")
10 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
</details> | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/737/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/736/comments | https://api.github.com/repos/huggingface/datasets/issues/736/events | https://github.com/huggingface/datasets/pull/736 | 722,348,191 | MDExOlB1bGxSZXF1ZXN0NTA0MTE0MjMy | 736 | Start community-provided dataset docs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"can you also reference the `--organization` flag like in https://github.com/huggingface/transformers/blob/master/docs/source/model_sharing.rst#upload-your-model-with-the-cli ?",
"done!",
"Not sure if the changes in `datasets/wmt_t2t/wmt_utils.py` are intentional.\r\nIf you want to add more configs to wmt, could you do it in a serapate PR ?",
"I don't think I changed wmt_utils (I think github is wrong or my setup is poorly configured).\r\n\r\nLocally git diff master --name-only says one file. Master is up to date.\r\nTried to make a new PR #755 and the same thing happened.",
"Trying new fork."
] | 1,602,769,299,000 | 1,603,458,928,000 | 1,603,458,928,000 | CONTRIBUTOR | null | This is one I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
+ In slack @thomwolf called it a `user-namespace` dataset, but the docs call it `community dataset`.
I think the first naming is clearer, but I didn't address that here.
+ I didn't add metadata, will try that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/736/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/736",
"html_url": "https://github.com/huggingface/datasets/pull/736",
"diff_url": "https://github.com/huggingface/datasets/pull/736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/736.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/735/comments | https://api.github.com/repos/huggingface/datasets/issues/735/events | https://github.com/huggingface/datasets/issues/735 | 722,225,270 | MDU6SXNzdWU3MjIyMjUyNzA= | 735 | Throw error when an unexpected key is used in data_files | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\nWe'll add support for other keys"
] | 1,602,759,327,000 | 1,604,064,232,000 | 1,604,064,232,000 | CONTRIBUTOR | null | I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other ones, those attached files are silently ignored - leading to unexpected behaviour for the users.
So the following, unintuitively, returns only one key (namely `train`).
```python
datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f})
print(datasets.keys())
# dict_keys(['train'])
```
whereas using `validation` instead, does return the expected result:
```python
datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f})
print(datasets.keys())
# dict_keys(['train', 'validation'])
```
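Until arbitrary keys are supported (or rejected loudly), a hedged workaround sketch: load with one of the accepted keys and rename the split locally afterwards. The file names here are placeholders, not files from this report.
```python
from datasets import load_dataset

dsets = load_dataset("text", data_files={"train": "train.txt", "validation": "valid.txt"})
dsets["valid"] = dsets.pop("validation")  # DatasetDict behaves like a plain dict
print(dsets.keys())  # dict_keys(['train', 'valid'])
```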
I would like to see more freedom in which keys one can use, but if that is not possible at least an error should be thrown when using an unexpected key. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/735/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/734/comments | https://api.github.com/repos/huggingface/datasets/issues/734/events | https://github.com/huggingface/datasets/pull/734 | 721,767,848 | MDExOlB1bGxSZXF1ZXN0NTAzNjMwMDcz | 734 | Fix GLUE metric description | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,708,254,000 | 1,602,754,063,000 | 1,602,754,062,000 | MEMBER | null | Small typo: the description says translation instead of prediction. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/734/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/734",
"html_url": "https://github.com/huggingface/datasets/pull/734",
"diff_url": "https://github.com/huggingface/datasets/pull/734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/734.patch",
"merged_at": 1602754062000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/733/comments | https://api.github.com/repos/huggingface/datasets/issues/733/events | https://github.com/huggingface/datasets/pull/733 | 721,366,744 | MDExOlB1bGxSZXF1ZXN0NTAzMjk2NDQw | 733 | Update link to dataset viewer | {
"login": "negedng",
"id": 12969168,
"node_id": "MDQ6VXNlcjEyOTY5MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/12969168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/negedng",
"html_url": "https://github.com/negedng",
"followers_url": "https://api.github.com/users/negedng/followers",
"following_url": "https://api.github.com/users/negedng/following{/other_user}",
"gists_url": "https://api.github.com/users/negedng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/negedng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/negedng/subscriptions",
"organizations_url": "https://api.github.com/users/negedng/orgs",
"repos_url": "https://api.github.com/users/negedng/repos",
"events_url": "https://api.github.com/users/negedng/events{/privacy}",
"received_events_url": "https://api.github.com/users/negedng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,674,003,000 | 1,602,684,451,000 | 1,602,684,451,000 | CONTRIBUTOR | null | Change 404 error links in quick tour to working ones | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/733/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/733",
"html_url": "https://github.com/huggingface/datasets/pull/733",
"diff_url": "https://github.com/huggingface/datasets/pull/733.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/733.patch",
"merged_at": 1602684451000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/732/comments | https://api.github.com/repos/huggingface/datasets/issues/732/events | https://github.com/huggingface/datasets/pull/732 | 721,359,448 | MDExOlB1bGxSZXF1ZXN0NTAzMjkwMjEy | 732 | dataset(wlasl): initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Followup: \r\nFrom the info in https://github.com/huggingface/datasets/pull/722, I probably should load the videos as array of frames directly into the database. \r\nThis will make the dataset generation time very long, but will make working with the dataset much easier.",
"When I run:\r\n```\r\npython datasets-cli dummy_data datasets/wlasl\r\n```\r\n\r\nI get:\r\n```\r\nChecking datasets/wlasl/wlasl.py for additional imports. \r\nFound main folder for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl \r\nFound specific version folder for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786 \r\nFound script file from datasets/wlasl/wlasl.py to /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/wlasl.py \r\nFound dataset infos file from datasets/wlasl/dataset_infos.json to /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/dataset_infos.json \r\nFound metadata file for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/wlasl.json \r\nUsing custom data configuration default \r\nLoading Dataset Infos from /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786\r\nCreating dummy folder structure for datasets/wlasl/dummy/0.3.0... \r\nDataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data. \r\nTraceback (most recent call last): \r\nFile \"datasets-cli\", line 36, in \r\nservice.run() File \"/home/nlp/amit/anaconda2/envs/meta-scholar/lib/python3.7/site-packages/datasets-1.1.2-py3.7.egg/datasets/commands/dummy_data.py\", line 73, in run \r\nfor split in generator_splits: \r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```"
] | 1,602,673,302,000 | 1,616,480,383,000 | 1,616,480,383,000 | CONTRIBUTOR | null | takes like 9-10 hours to download all of the videos for the dataset, but it does finish :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/732/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/732",
"html_url": "https://github.com/huggingface/datasets/pull/732",
"diff_url": "https://github.com/huggingface/datasets/pull/732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/732.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/731/comments | https://api.github.com/repos/huggingface/datasets/issues/731/events | https://github.com/huggingface/datasets/pull/731 | 721,142,985 | MDExOlB1bGxSZXF1ZXN0NTAzMTExNzc4 | 731 | dataset(aslg_pc12): initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks @lhoestq \r\nAre there any guidelines for the dummy data?\r\nIn this particular case for example, the dataset fetches from two hardcoded URLs. \r\nDo I just `head -n 10` both files and zip them?\r\n\r\n",
"> Thanks @lhoestq\r\n> Are there any guidelines for the dummy data?\r\n> In this particular case for example, the dataset fetches from two hardcoded URLs.\r\n> Do I just `head -n 10` both files and zip them?\r\n\r\nYes the idea is just to have a few examples to properly test the script and make sure it keeps working in the long run.\r\n\r\nAnd FYI there's a command to help you name the dummy data files correctly. More info in the documentation [here](https://huggingface.co/docs/datasets/share_dataset.html#adding-dummy-data)",
"@lhoestq passes all tests"
] | 1,602,652,477,000 | 1,603,898,826,000 | 1,603,898,826,000 | CONTRIBUTOR | null | This contains the only current public part of this corpus.
The rest of the corpus has not yet been made public, but this sample is still being used by researchers. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/731/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/731",
"html_url": "https://github.com/huggingface/datasets/pull/731",
"diff_url": "https://github.com/huggingface/datasets/pull/731.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/731.patch",
"merged_at": 1603898826000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/730/comments | https://api.github.com/repos/huggingface/datasets/issues/730/events | https://github.com/huggingface/datasets/issues/730 | 721,073,812 | MDU6SXNzdWU3MjEwNzM4MTI= | 730 | Possible caching bug | {
"login": "ArneBinder",
"id": 3375489,
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArneBinder",
"html_url": "https://github.com/ArneBinder",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting. That's a bug indeed.\r\nApparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`)",
"Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command \r\n`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`\r\n\r\nchange the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html\r\n`dataset = datasets.load_dataset('json', data_files=args.dataset)`\r\n\r\nErrors:\r\n`Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-c1e124ad488911b8/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264...\r\n`"
] | 1,602,640,954,000 | 1,638,109,737,000 | 1,603,964,161,000 | NONE | null | The following code with `test1.txt` containing just "π€π€π€":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produces this output:
```
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': 'Γ°\x9fΒ€\x97Γ°\x9fΒ€\x97Γ°\x9fΒ€\x97'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': 'Γ°\x9fΒ€\x97Γ°\x9fΒ€\x97Γ°\x9fΒ€\x97'}
```
Just changing the order (and deleting the temp files):
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
```
produces this:
```
Using custom data configuration default
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': 'π€π€π€'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': 'π€π€π€'}
```
Is it intended that the cache path does not depend on the config entries?
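For illustration only (this is not the library's actual fingerprinting code): the collision above is what you would expect if the cache id were derived from `data_files` alone; folding every builder-config kwarg into the hash would keep the two calls apart.
```python
import hashlib
import json


def cache_id(data_files, **config_kwargs):
    # Hypothetical fingerprint that also covers config kwargs such as `encoding`.
    payload = json.dumps({"data_files": data_files, **config_kwargs}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]


print(cache_id(["test1.txt"], encoding="utf-8"))    # different from ...
print(cache_id(["test1.txt"], encoding="latin_1"))  # ... this one
```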
tested with datasets==1.1.2 and python==3.8.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/730/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/729/comments | https://api.github.com/repos/huggingface/datasets/issues/729/events | https://github.com/huggingface/datasets/issues/729 | 719,558,876 | MDU6SXNzdWU3MTk1NTg4NzY= | 729 | Better error message when one forgets to call `add_batch` before `compute` | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,525,562,000 | 1,603,984,704,000 | 1,603,984,704,000 | MEMBER | null | When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer.
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
description="description",
citation="citation",
inputs_description="kwargs",
features=datasets.Features({
'predictions': datasets.Value('int64'),
'references': datasets.Value('int64'),
}),
codebase_urls=[],
reference_urls=[],
format='numpy'
)
def _compute(self, predictions, references):
return {"predictions": predictions, "labels": references}
metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
pass # User forgets to call `add_batch`
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-267729d187fa> in <module>
3 pass
4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 5 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
343 elif self.process_id == 0:
344 # Let's acquire a lock on each node files to be sure they are finished writing
--> 345 file_paths, filelocks = self._get_all_cache_files()
346
347 # Read the predictions and references
~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self)
280 filelocks = []
281 for process_id, file_path in enumerate(file_paths):
--> 282 filelock = FileLock(file_path + ".lock")
283 try:
284 filelock.acquire(timeout=self.timeout)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
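One possible shape for a friendlier failure, written here as a small user-side wrapper rather than a change to the library's internals (so it deliberately does not rely on `Metric`'s private attributes):
```python
class CheckedMetric:
    """Thin wrapper that fails with an explicit message instead of the TypeError above."""

    def __init__(self, metric):
        self.metric = metric
        self.n_added = 0

    def add_batch(self, **kwargs):
        self.n_added += 1
        self.metric.add_batch(**kwargs)

    def compute(self, **kwargs):
        if self.n_added == 0 and not kwargs:
            raise ValueError(
                "No batches were added: call add_batch() before compute(), "
                "or pass predictions/references directly to compute()."
            )
        return self.metric.compute(**kwargs)
```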
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/729/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/728/comments | https://api.github.com/repos/huggingface/datasets/issues/728/events | https://github.com/huggingface/datasets/issues/728 | 719,555,780 | MDU6SXNzdWU3MTk1NTU3ODA= | 728 | Passing `cache_dir` to a metric does not work | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,525,314,000 | 1,603,964,082,000 | 1,603,964,082,000 | MEMBER | null | When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
description="description",
citation="citation",
inputs_description="kwargs",
features=datasets.Features({
'predictions': datasets.Value('int64'),
'references': datasets.Value('int64'),
}),
codebase_urls=[],
reference_urls=[],
format='numpy'
)
def _compute(self, predictions, references):
return {"predictions": predictions, "labels": references}
metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~/git/datasets/src/datasets/metric.py in _finalize(self)
349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))
--> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
351 except FileNotFoundError:
~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions)
227 # Prepend path to filename
--> 228 pa_table = self._read_files(files)
229 files = copy.deepcopy(files)
~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files)
166 for f_dict in files:
--> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
168 pa_tables.append(pa_table)
~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take)
291 )
--> 292 mmap = pa.memory_map(filename)
293 f = pa.ipc.open_stream(mmap)
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-17-e42d43cc981f> in <module>
2 for i in range(0, 1024, batch_size):
3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 4 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
351 except FileNotFoundError:
352 raise ValueError(
--> 353 "Error in finalize: another metric instance is already using the local cache file. "
354 "Please specify an experiment_id to avoid colision between distributed metric instances."
355 )
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.
```
The code works when we remove the `cache_dir=...` from the metric. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/728/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/727/comments | https://api.github.com/repos/huggingface/datasets/issues/727/events | https://github.com/huggingface/datasets/issues/727 | 719,386,366 | MDU6SXNzdWU3MTkzODYzNjY= | 727 | Parallel downloads progress bar flickers | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,602,509,765,000 | 1,602,509,765,000 | null | MEMBER | null | When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that we could simply specify `position=i` (for i = 0 to n, the number of files to download) when instantiating each tqdm progress bar, as sketched below.
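A small sketch of that first idea with plain `tqdm` bars (not the library's actual download-manager code): giving each parallel download its own terminal row via `position=i` keeps the bars from overwriting each other. File names and sizes are placeholders.
```python
from tqdm import tqdm

files = ["file_a.bin", "file_b.bin", "file_c.bin"]  # placeholder names
bars = [
    tqdm(total=100, desc=name, position=i, leave=True)
    for i, name in enumerate(files)
]
# each download worker would then call bars[i].update(n_bytes) as chunks arrive
for bar in bars:
    bar.close()
```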
Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that shows the current downloads. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/727/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/726/comments | https://api.github.com/repos/huggingface/datasets/issues/726/events | https://github.com/huggingface/datasets/issues/726 | 719,313,754 | MDU6SXNzdWU3MTkzMTM3NTQ= | 726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | {
"login": "SparkJiao",
"id": 16469472,
"node_id": "MDQ6VXNlcjE2NDY5NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/16469472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SparkJiao",
"html_url": "https://github.com/SparkJiao",
"followers_url": "https://api.github.com/users/SparkJiao/followers",
"following_url": "https://api.github.com/users/SparkJiao/following{/other_user}",
"gists_url": "https://api.github.com/users/SparkJiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SparkJiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SparkJiao/subscriptions",
"organizations_url": "https://api.github.com/users/SparkJiao/orgs",
"repos_url": "https://api.github.com/users/SparkJiao/repos",
"events_url": "https://api.github.com/users/SparkJiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/SparkJiao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi try, to provide more information please.\r\n\r\nExample code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).",
"> Hi try, to provide more information please.\r\n> \r\n> Example code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).\r\n\r\nI have update the description, sorry for the incomplete issue by mistake.",
"Hi, I have manually downloaded the compressed dataset `openwebtext.tar.xz' and use the following command to preprocess the examples:\r\n```\r\n>>> dataset = load_dataset('/home/admin/workspace/datasets/datasets-master/datasets-master/datasets/openwebtext', data_dir='/home/admin/workspace/datasets')\r\nUsing custom data configuration default\r\nDownloading and preparing dataset openwebtext/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/admin/.cache/huggingface/datasets/openwebtext/default/0.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...\r\nDataset openwebtext downloaded and prepared to /home/admin/.cache/huggingface/datasets/openwebtext/default/0.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02. Subsequent calls will reuse this data.\r\n>>> len(dataset['train'])\r\n74571\r\n>>>\r\n```\r\nThe size of the pre-processed example file is only 354MB, however the processed bookcorpus dataset is 4.6g. Are there any problems?",
"NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n\r\ni got this issue when i try to work on my own datasets kindly tell me, from where i can get checksums of train and dev file in my github repo",
"Hi, I got the similar issue for xnli dataset while working on colab with python3.7. \r\n\r\n`nlp.load_dataset(path = 'xnli')`\r\n\r\nThe above command resulted in following issue : \r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']\r\n```\r\n\r\nAny idea how to fix this ?",
"Did anyone figure out how to fix this error?"
] | 1,602,503,110,000 | 1,633,830,741,000 | null | NONE | null | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://zenodo.org/record/3834942/files/openwebtext.tar.xz']
```
I think this problem is caused by a change in the released dataset. Or should I download the dataset manually?
Sorry for releasing the unfinished issue by mistake. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/726/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/726/timeline | null | null | null | false |
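A hedged workaround sketch for the `NonMatchingChecksumError` discussed in the record above: when the remote archive changes after the checksum was recorded, the `datasets` 1.x loader exposed an `ignore_verifications` flag (visible in the traceback's `load_dataset` signature) that skips the checksum comparison. This is an editor-added illustration, not a fix confirmed in the thread, and it trades safety for convenience.

```python
from datasets import load_dataset

# Sketch only: skip checksum/split verification when the recorded checksum is stale.
# This accepts whatever file the source currently serves, so use it knowingly.
dataset = load_dataset("openwebtext", ignore_verifications=True)
print(dataset["train"].num_rows)
```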
https://api.github.com/repos/huggingface/datasets/issues/725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/725/comments | https://api.github.com/repos/huggingface/datasets/issues/725/events | https://github.com/huggingface/datasets/pull/725 | 718,985,641 | MDExOlB1bGxSZXF1ZXN0NTAxMjUxODI1 | 725 | pretty print dataset objects | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Great, as you found it useful I improved the code a bit to automate indentation in the parent class, so that the child repr doesn't need to guess the indentation level, while repr'ing nicely on its own.\r\n\r\n- do we want indent=4 or 2?\r\n- do we want `{` ... `}` or w/o?\r\n\r\ncurrently it's indent4 and w/ curly braces, so it looks:\r\n\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 157252\r\n })\r\n validation: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5599\r\n })\r\n test: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5577\r\n })\r\n})\r\n```\r\njust child:\r\n```\r\nDataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5577\r\n})\r\n```\r\n\r\n",
"Yes! A lot better indeed!"
] | 1,602,468,226,000 | 1,603,470,275,000 | 1,603,443,646,000 | CONTRIBUTOR | null | Currently, if I do:
```
from datasets import load_dataset
load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/")
```
I get:
```
DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None),
'headline': Value(dtype='string', id=None), 'title': Value(dtype='string',
id=None)}, num_rows: 157252), 'validation': Dataset(features: {'text':
Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None)}, num_rows: 5599), 'test':
Dataset(features: {'text': Value(dtype='string', id=None), 'headline':
Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)},
num_rows: 5577)})
```
This is not very readable.
Can we either have a better `__repr__` or have a custom method to nicely pprint the dataset object?
Here is my very simple attempt. With this PR, it produces:
```
DatasetDict({
train: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 157252
})
validation: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 5599
})
test: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 5577
})
})
```
I did omit the data types on purpose to make it more readable, but it shouldn't be too difficult to integrate those too.
Note that this PR also fixes an inconsistency in the output: on master the enclosing `{}` is missing for `Dataset` but present for `DatasetDict` - or perhaps that was by design.
I'm totally not attached to this format, just wanting something more readable. One approach could be to serialize to `json.dumps` or something similar. It'd make the indentation simpler.
Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/725/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/725",
"html_url": "https://github.com/huggingface/datasets/pull/725",
"diff_url": "https://github.com/huggingface/datasets/pull/725.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/725.patch",
"merged_at": 1603443646000
} | true |
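A minimal, self-contained sketch of the nested-`__repr__` idea behind the PR in the record above: the child object renders itself and the parent indents it. The class names and attributes mirror the output shown in the record but are illustrative; this is not the actual `datasets` implementation.

```python
import textwrap

class ToyDataset:
    def __init__(self, features, num_rows):
        self.features = features
        self.num_rows = num_rows

    def __repr__(self):
        return (
            "Dataset({\n"
            f"    features: {self.features},\n"
            f"    num_rows: {self.num_rows}\n"
            "})"
        )

class ToyDatasetDict(dict):
    def __repr__(self):
        # Indent each child's multi-line repr so nesting stays readable.
        inner = "\n".join(
            textwrap.indent(f"{name}: {split!r}", "    ") for name, split in self.items()
        )
        return f"DatasetDict({{\n{inner}\n}})"

splits = ToyDatasetDict(
    train=ToyDataset(["text", "headline", "title"], 157252),
    validation=ToyDataset(["text", "headline", "title"], 5599),
    test=ToyDataset(["text", "headline", "title"], 5577),
)
print(splits)
```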
https://api.github.com/repos/huggingface/datasets/issues/724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/724/comments | https://api.github.com/repos/huggingface/datasets/issues/724/events | https://github.com/huggingface/datasets/issues/724 | 718,947,700 | MDU6SXNzdWU3MTg5NDc3MDA= | 724 | need to redirect /nlp to /datasets and remove outdated info | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Should be fixed now: \r\n\r\n![image](https://user-images.githubusercontent.com/35882/95917301-040b0600-0d78-11eb-9655-c4ac0e788089.png)\r\n\r\nNot sure I understand what you mean by the second part?\r\n",
"Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* https://huggingface.co/datasets/wikihow\r\n* https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all\r\nCan you see the difference? 2nd has formatting, 1st doesn't.\r\n",
"For context, those are two different pages (not an old vs new one), one is from the dataset viewer (you can browse data inside the datasets) while the other is just a basic reference page displayed some metadata about the dataset.\r\n\r\nFor the second one, we'll move to markdown parsing soon, so it'll be formatted better.",
"I understand. I was just flagging the lack of markup issue."
] | 1,602,457,932,000 | 1,602,694,812,000 | 1,602,694,812,000 | CONTRIBUTOR | null | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
It should probably redirect to: https://huggingface.co/datasets/wikihow
Also, for some reason the new information is slightly borked. The old page was nicely formatted and had the links marked up; the new one is just a jumble of text in one chunk with no markup for links (i.e. not clickable). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/724/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/723/comments | https://api.github.com/repos/huggingface/datasets/issues/723/events | https://github.com/huggingface/datasets/issues/723 | 718,926,723 | MDU6SXNzdWU3MTg5MjY3MjM= | 723 | Adding pseudo-labels to datasets | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Nice ! :)\r\nIt's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.\r\nCould you add details on what they could be used for ?\r\n",
"They can be used as training data for a smaller model.",
"Sounds just like a regular dataset to me then, no?",
"A new configuration for those datasets should do the job then.\r\nNote that until now datasets like xsum only had one configuration. It means that users didn't have to specify the configuration name when loading the dataset. If we add new configs, users that update the lib will have to update their code to specify the default/standard configuration name (not the one with pseudo labels).",
"Could also be a `user-namespace` dataset maybe?",
"Oh yes why not. I'm more in favor of this actually since pseudo labels are things that users (not dataset authors in general) can compute by themselves and share with the community",
"![image](https://user-images.githubusercontent.com/6045025/96045248-b528a380-0e3f-11eb-9124-bd55afa031bb.png)\r\n\r\nI assume I should (for example) rename the xsum dir, change the URL, and put the modified dir somewhere in S3?",
"You can use the `datasets-cli` to upload the folder with your version of xsum with the pseudo labels.\r\n\r\n```\r\ndatasets-cli upload_dataset path/to/xsum\r\n```"
] | 1,602,450,345,000 | 1,627,967,511,000 | 1,627,967,511,000 | CONTRIBUTOR | null | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution?
I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution.
I could, for example, make a new directory, `xsum_bart_pseudolabels` for each set of pseudolabels or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py
What do you think @lhoestq ?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/723/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/722/comments | https://api.github.com/repos/huggingface/datasets/issues/722/events | https://github.com/huggingface/datasets/pull/722 | 718,689,117 | MDExOlB1bGxSZXF1ZXN0NTAxMDI3NjAw | 722 | datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This might be interesting to @kayoyin the author of https://github.com/kayoyin/transformer-slt β pinging you just in case :)",
"Thanks Amit, this is a great idea! I'm thinking of porting the SLT models from my paper here as well, having this dataset would be perfect for that :)"
] | 1,602,359,048,000 | 1,609,830,411,000 | null | CONTRIBUTOR | null | This is the first sign language dataset in this repo as far as I know.
Following an old issue I opened https://github.com/huggingface/datasets/issues/302.
I added the dataset's official README file, but I see it's not very standard, so it can be removed.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/722/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/722",
"html_url": "https://github.com/huggingface/datasets/pull/722",
"diff_url": "https://github.com/huggingface/datasets/pull/722.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/722.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/721/comments | https://api.github.com/repos/huggingface/datasets/issues/721/events | https://github.com/huggingface/datasets/issues/721 | 718,647,147 | MDU6SXNzdWU3MTg2NDcxNDc= | 721 | feat(dl_manager): add support for ftp downloads | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"We only support http by default for downloading.\r\nIf you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the dataset.\r\n\r\nTo make the download_manager work with a custom downloader, you can call `download_manager.download_custom` instead of `download_manager.download_and_extract`. The expected arguments are the following:\r\n```\r\nurl_or_urls: url or `list`/`dict` of urls to download and extract. Each\r\n url is a `str`.\r\ncustom_download: Callable with signature (src_url: str, dst_path: str) -> Any\r\n as for example `tf.io.gfile.copy`, that lets you download from google storage\r\n```\r\n",
"Also maybe it coud be interesting to have a direct support of ftp inside the `datasets` library. Do you know any good libraries that we might consider adding as a (optional ?) dependency ?",
"Downloading an `ftp` file is as simple as:\r\n```python\r\nimport urllib \r\nurllib.urlretrieve('ftp://server/path/to/file', 'file')\r\n```\r\n\r\nI believe this should be supported by the library, as its not using any dependency and is trivial amount of code.",
"I know its unorthodox, but I added `ftp` download support to `file_utils` in the same PR https://github.com/huggingface/datasets/pull/722\r\nSo its possible to understand the interaction of the download component with the ftp download ability",
"Awesome ! I'll take a look :)",
"@AmitMY Can you now download the Phoenix2014 Dataset?",
"@hoanganhpham1006 yes.\r\nSee pull request https://github.com/huggingface/datasets/pull/722 , it has a loader for this dataset, mostly ready.\r\nThere's one issue that delays it being merged - https://github.com/huggingface/datasets/issues/741 - regarding memory consumption.",
"The problem which I have now is that this dataset seems does not allow to download? Can you share it with me pls",
"The dataset loader is not yet ready, because of that issue.\r\nIf you want to just download the dataset the old-fashioned way, just go to: https://www-i6.informatik.rwth-aachen.de/ftp/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz (the ftp link is now broken, and its available over https)",
"Got it, thank you so much!"
] | 1,602,345,020,000 | 1,603,531,473,000 | null | CONTRIBUTOR | null | I am working on a new dataset (#302) and encountered a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.download_and_extract(_URL)
```
I get an error:
> ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path
I checked, and indeed you don't consider `ftp` as a remote file.
https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188
Adding `ftp` to that list does not immediately solve the issue, so there probably needs to be some extra work.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/721/timeline | null | null | null | false |
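A hedged sketch of the `download_custom` route described in the comments of the record above, pairing the documented `(src_url, dst_path)` callable signature with `urllib`'s FTP-capable retriever. The `_URL` value comes from the record itself; the surrounding dataset-script context is assumed, not quoted from the library.

```python
import urllib.request

_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"

def ftp_download(src_url: str, dst_path: str) -> str:
    # Matches the (src_url, dst_path) callable signature expected by download_custom.
    urllib.request.urlretrieve(src_url, dst_path)
    return dst_path

# Inside a dataset script's _split_generators (sketch only, not the merged loader):
#     archive_path = dl_manager.download_custom(_URL, ftp_download)
#     data_dir = dl_manager.extract(archive_path)
```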
https://api.github.com/repos/huggingface/datasets/issues/720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/720/comments | https://api.github.com/repos/huggingface/datasets/issues/720/events | https://github.com/huggingface/datasets/issues/720 | 716,581,266 | MDU6SXNzdWU3MTY1ODEyNjY= | 720 | OSError: Cannot find data file when not using the dummy dataset in RAG | {
"login": "josemlopez",
"id": 4112135,
"node_id": "MDQ6VXNlcjQxMTIxMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4112135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josemlopez",
"html_url": "https://github.com/josemlopez",
"followers_url": "https://api.github.com/users/josemlopez/followers",
"following_url": "https://api.github.com/users/josemlopez/following{/other_user}",
"gists_url": "https://api.github.com/users/josemlopez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josemlopez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josemlopez/subscriptions",
"organizations_url": "https://api.github.com/users/josemlopez/orgs",
"repos_url": "https://api.github.com/users/josemlopez/repos",
"events_url": "https://api.github.com/users/josemlopez/events{/privacy}",
"received_events_url": "https://api.github.com/users/josemlopez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. \r\n\r\n```\r\n99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnpicklingError Traceback (most recent call last)\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 446 try:\r\n--> 447 return pickle.load(fid, **pickle_kwargs)\r\n 448 except Exception:\r\n\r\nUnpicklingError: pickle data was truncated\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 559 \r\n--> 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n\r\n~/src/datasets/src/datasets/builder.py in _prepare_split(self, split_generator)\r\n 847 writer.write(example)\r\n--> 848 finally:\r\n 849 num_examples, num_bytes = writer.finalize()\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)\r\n 227 try:\r\n--> 228 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 229 # return super(tqdm...) will not catch exception\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)\r\n 1132 try:\r\n-> 1133 for obj in iterable:\r\n 1134 yield obj\r\n\r\n/hdd/rag/cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)\r\n 131 break\r\n--> 132 vecs = np.load(open(vectors_files.pop(0), \"rb\"), allow_pickle=True)\r\n 133 vec_idx = 0\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 449 raise IOError(\r\n--> 450 \"Failed to interpret file %s as a pickle\" % repr(file))\r\n 451 \r\n\r\nOSError: Failed to interpret file <_io.BufferedReader name='/hdd/rag/downloads/99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498'> as a pickle\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-8-24351ff8ce44> in <module>\r\n 4 retriever = RagRetriever.from_pretrained(\"facebook/rag-sequence-nq\", \r\n 5 index_name=\"exact\",\r\n----> 6 use_dummy_dataset=False)\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)\r\n 321 generator_tokenizer = rag_tokenizer.generator\r\n 322 return cls(\r\n--> 323 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n 324 )\r\n 325 \r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)\r\n 310 self.config = config\r\n 311 if self._init_retrieval:\r\n--> 312 self.init_retrieval()\r\n 313 \r\n 314 @classmethod\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_retrieval(self)\r\n 338 \r\n 339 logger.info(\"initializing retrieval\")\r\n--> 340 self.index.init_index()\r\n 341 \r\n 342 def postprocess_docs(self, docs, input_strings, prefix, n_docs, 
return_tensors=None):\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_index(self)\r\n 248 split=self.dataset_split,\r\n 249 index_name=self.index_name,\r\n--> 250 dummy=self.use_dummy_dataset,\r\n 251 )\r\n 252 self.dataset.set_format(\"numpy\", columns=[\"embeddings\"], output_all_columns=True)\r\n\r\n~/src/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 615 builder_instance.download_and_prepare(\r\n 616 download_config=download_config,\r\n--> 617 download_mode=download_mode,\r\n 618 ignore_verifications=ignore_verifications,\r\n 619 )\r\n\r\n~/src/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 481 # Sync info\r\n 482 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n--> 483 self.info.download_checksums = dl_manager.get_recorded_sizes_checksums()\r\n 484 self.info.size_in_bytes = self.info.dataset_size + self.info.download_size\r\n 485 # Save info\r\n\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n--> 562 \r\n 563 # Update the info object with the splits.\r\n 564 self.info.splits = split_dict\r\n\r\nOSError: Cannot find data file.\r\n```\r\n\r\nThank you.",
"An update on my end. This seems like a transient issue. Reran the script from scratch overnight with no errors. ",
"Closing this one. Feel free to re-open if you have other questions about this issue"
] | 1,602,080,833,000 | 1,608,732,271,000 | 1,608,732,271,000 | NONE | null | ## Environment info
- transformers version: 3.3.1
- Platform: Linux-4.19
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour:
```
import os
os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache'
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
```
Please note that I'm using the whole dataset: **use_dummy_dataset=False**
After around 4 hours (downloading and some other things) this is returned:
```
Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
459 try:
--> 460 return pickle.load(fid, **pickle_kwargs)
461 except Exception:
UnpicklingError: pickle data was truncated
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
552 # Prepare split will record examples associated to the split
--> 553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
840 for key, record in utils.tqdm(
--> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
842 ):
/opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)
217 try:
--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
219 # return super(tqdm...) will not catch exception
/opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)
131 break
--> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True)
133 vec_idx = 0
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
462 raise IOError(
--> 463 "Failed to interpret file %s as a pickle" % repr(file))
464 finally:
OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-10-f28df370ac47> in <module>
1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets
----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)
307 generator_tokenizer = rag_tokenizer.generator
308 return cls(
--> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer
310 )
311
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)
298 self.config = config
299 if self._init_retrieval:
--> 300 self.init_retrieval()
301
302 @classmethod
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self)
324
325 logger.info("initializing retrieval")
--> 326 self.index.init_index()
327
328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None):
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self)
238 split=self.dataset_split,
239 index_name=self.index_name,
--> 240 dummy=self.use_dummy_dataset,
241 )
242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True)
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
474 if not downloaded_from_gcs:
475 self._download_and_prepare(
--> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
477 )
478 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
--> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
556
557 if verify_infos:
OSError: Cannot find data file.
```
Thanks
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/720/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/720/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/719/comments | https://api.github.com/repos/huggingface/datasets/issues/719/events | https://github.com/huggingface/datasets/pull/719 | 716,492,263 | MDExOlB1bGxSZXF1ZXN0NDk5MjE5Mjg2 | 719 | Fix train_test_split output format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,602,074,341,000 | 1,602,077,888,000 | 1,602,077,886,000 | MEMBER | null | There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split.
This was due to `column_names` being handled as a List[str] instead of Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split).
This should fix @timothyjlaurent 's issue in #620 and fix #676
I added tests for `transmit_format` so that it doesn't happen again | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/719/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/719",
"html_url": "https://github.com/huggingface/datasets/pull/719",
"diff_url": "https://github.com/huggingface/datasets/pull/719.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/719.patch",
"merged_at": 1602077886000
} | true |
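A small editor-added sketch of the shape problem described in the record above: after a transform such as `train_test_split`, the reported `column_names` can be either a flat list (single `Dataset`) or a per-split mapping (`DatasetDict`), and format-transmitting code has to accept both. The helper below is illustrative only and is not the actual `transmit_format` code.

```python
from typing import Dict, List, Union

def columns_per_split(
    column_names: Union[List[str], Dict[str, List[str]]],
    split_names: List[str],
) -> Dict[str, List[str]]:
    # Normalize to one list of column names per split, whichever shape was received.
    if isinstance(column_names, dict):
        return {split: list(cols) for split, cols in column_names.items()}
    return {split: list(column_names) for split in split_names}

# Flat list (Dataset) and per-split dict (DatasetDict) both normalize cleanly:
print(columns_per_split(["text", "label"], ["train", "test"]))
print(columns_per_split({"train": ["text"], "test": ["text"]}, ["train", "test"]))
```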
https://api.github.com/repos/huggingface/datasets/issues/718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/718/comments | https://api.github.com/repos/huggingface/datasets/issues/718/events | https://github.com/huggingface/datasets/pull/718 | 715,694,709 | MDExOlB1bGxSZXF1ZXN0NDk4NTU5MDcw | 718 | Don't use tqdm 4.50.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,991,953,000 | 1,601,992,164,000 | 1,601,992,162,000 | MEMBER | null | tqdm 4.50.0 introduced permission errors on windows
see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details.
For now I just added `<4.50.0` in the setup.py
Hopefully we can find what's wrong with this version soon. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/718/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/718",
"html_url": "https://github.com/huggingface/datasets/pull/718",
"diff_url": "https://github.com/huggingface/datasets/pull/718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/718.patch",
"merged_at": 1601992162000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/717/comments | https://api.github.com/repos/huggingface/datasets/issues/717/events | https://github.com/huggingface/datasets/pull/717 | 714,959,268 | MDExOlB1bGxSZXF1ZXN0NDk3OTUwOTA2 | 717 | Fixes #712 Error in the Overview.ipynb notebook | {
"login": "subhrm",
"id": 850012,
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhrm",
"html_url": "https://github.com/subhrm",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"repos_url": "https://api.github.com/users/subhrm/repos",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,913,041,000 | 1,601,965,903,000 | 1,601,915,141,000 | CONTRIBUTOR | null | Fixes #712 Error in the Overview.ipynb notebook by adding `with_details=True` parameter to `list_datasets` function in Cell 3 of **overview** notebook | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/717/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/717",
"html_url": "https://github.com/huggingface/datasets/pull/717",
"diff_url": "https://github.com/huggingface/datasets/pull/717.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/717.patch",
"merged_at": 1601915140000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/716/comments | https://api.github.com/repos/huggingface/datasets/issues/716/events | https://github.com/huggingface/datasets/pull/716 | 714,952,888 | MDExOlB1bGxSZXF1ZXN0NDk3OTQ1ODAw | 716 | Fixes #712 Attribute error in cell 3 of the overview notebook | {
"login": "subhrm",
"id": 850012,
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhrm",
"html_url": "https://github.com/subhrm",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"repos_url": "https://api.github.com/users/subhrm/repos",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Referencing the wrong issue # in the commit message. Closing this to fix it again."
] | 1,601,912,529,000 | 1,601,912,798,000 | 1,601,912,792,000 | CONTRIBUTOR | null | Fixes the Attribute error in cell 3 of the overview notebook | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/716/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/716",
"html_url": "https://github.com/huggingface/datasets/pull/716",
"diff_url": "https://github.com/huggingface/datasets/pull/716.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/716.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/715/comments | https://api.github.com/repos/huggingface/datasets/issues/715/events | https://github.com/huggingface/datasets/pull/715 | 714,690,192 | MDExOlB1bGxSZXF1ZXN0NDk3NzMwMDQ2 | 715 | Use python read for text dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"One thing though, could we try to read the files in parallel?",
"We could but I'm not sure this would help a lot since the bottleneck is the drive IO if the files are big enough.\r\nIt could make sense for very small files.",
"Looks like windows is not a big fan of this approach\r\nI'm working on a fix",
"I remember issue https://github.com/huggingface/datasets/issues/546 where this was kinda requested (but maybe IO would bottleneck). What do you think?",
"I think it's worth testing multiprocessing. It could also be something we add to our speed benchmarks",
"> I remember issue #546 where this was kinda requested (but maybe IO would bottleneck). What do you think?\r\n\r\nIt still would be interesting I think, especially in scenarios where IO is less of an issue (SSDs particularly) and where there are many smaller files. Wrapping this function in a `pool.map` is perhaps an easy thing to try. ",
"Merging this one for now for the patch release"
] | 1,601,891,275,000 | 1,601,903,598,000 | 1,601,903,597,000 | MEMBER | null | As mentioned in #622 the pandas reader used for text dataset doesn't work properly when there are \r characters in the text file.
Instead I switched to pure python using `open` and `read`.
From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/715/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/715/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/715",
"html_url": "https://github.com/huggingface/datasets/pull/715",
"diff_url": "https://github.com/huggingface/datasets/pull/715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/715.patch",
"merged_at": 1601903596000
} | true |
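An illustrative sketch (not the merged loader code) of the pure-Python reading approach the PR above describes: `open` plus `read`, splitting on `\n` only, so `\r` characters stay inside the returned lines instead of confusing a CSV parser. The function name and example file are assumptions for illustration.

```python
def iter_text_examples(path, encoding="utf-8"):
    # Illustrative only: newline="" disables newline translation, then splitting
    # on "\n" keeps any "\r" inside the line rather than treating it as a break.
    with open(path, encoding=encoding, newline="") as f:
        lines = f.read().split("\n")
    if lines and lines[-1] == "":
        lines.pop()  # drop the empty tail left by a trailing newline
    for line in lines:
        yield {"text": line}

# Tiny usage example with a throwaway file:
with open("example.txt", "w", encoding="utf-8", newline="") as f:
    f.write("first line\nsecond line keeps this \r character\n")
print(list(iter_text_examples("example.txt")))
```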
https://api.github.com/repos/huggingface/datasets/issues/714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/714/comments | https://api.github.com/repos/huggingface/datasets/issues/714/events | https://github.com/huggingface/datasets/pull/714 | 714,487,881 | MDExOlB1bGxSZXF1ZXN0NDk3NTYzNjAx | 714 | Add the official dependabot implementation | {
"login": "ALazyMeme",
"id": 12804673,
"node_id": "MDQ6VXNlcjEyODA0Njcz",
"avatar_url": "https://avatars.githubusercontent.com/u/12804673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ALazyMeme",
"html_url": "https://github.com/ALazyMeme",
"followers_url": "https://api.github.com/users/ALazyMeme/followers",
"following_url": "https://api.github.com/users/ALazyMeme/following{/other_user}",
"gists_url": "https://api.github.com/users/ALazyMeme/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ALazyMeme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ALazyMeme/subscriptions",
"organizations_url": "https://api.github.com/users/ALazyMeme/orgs",
"repos_url": "https://api.github.com/users/ALazyMeme/repos",
"events_url": "https://api.github.com/users/ALazyMeme/events{/privacy}",
"received_events_url": "https://api.github.com/users/ALazyMeme/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,869,785,000 | 1,602,503,361,000 | 1,602,503,361,000 | NONE | null | This will keep dependencies up to date. It will require a PR label `dependencies` to be created in order to function correctly. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/714/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/714",
"html_url": "https://github.com/huggingface/datasets/pull/714",
"diff_url": "https://github.com/huggingface/datasets/pull/714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/714.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/713/comments | https://api.github.com/repos/huggingface/datasets/issues/713/events | https://github.com/huggingface/datasets/pull/713 | 714,475,732 | MDExOlB1bGxSZXF1ZXN0NDk3NTUzOTUy | 713 | Fix reading text files with carriage return symbols | {
"login": "mozharovsky",
"id": 6762769,
"node_id": "MDQ6VXNlcjY3NjI3Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6762769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mozharovsky",
"html_url": "https://github.com/mozharovsky",
"followers_url": "https://api.github.com/users/mozharovsky/followers",
"following_url": "https://api.github.com/users/mozharovsky/following{/other_user}",
"gists_url": "https://api.github.com/users/mozharovsky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mozharovsky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mozharovsky/subscriptions",
"organizations_url": "https://api.github.com/users/mozharovsky/orgs",
"repos_url": "https://api.github.com/users/mozharovsky/repos",
"events_url": "https://api.github.com/users/mozharovsky/events{/privacy}",
"received_events_url": "https://api.github.com/users/mozharovsky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Discussed in #622, fixed in #715. Closing the issue. Thanks @lhoestq, it works now! π "
] | 1,601,867,223,000 | 1,602,223,105,000 | 1,601,905,769,000 | NONE | null | The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`).
It fails with the following error message:
```
...
File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 874, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas/_libs/parsers.pyx", line 918, in pandas._libs.parsers.TextReader._read_rows
File "pandas/_libs/parsers.pyx", line 905, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas/_libs/parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file.
```
___
I figured out that pandas uses those symbols as line terminators, and this eventually causes the error. Explicitly specifying the `lineterminator` fixes that issue and everything works fine.
Please consider this PR, as it seems to be a common issue to solve. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/713/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/713",
"html_url": "https://github.com/huggingface/datasets/pull/713",
"diff_url": "https://github.com/huggingface/datasets/pull/713.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/713.patch",
"merged_at": null
} | true |
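A hedged, self-contained reproduction of the fix the PR above proposes: passing `lineterminator="\n"` to `pandas.read_csv` keeps stray `\r` characters from being treated as row breaks. The file name and single-column layout are assumptions for illustration; the actual loader arguments may differ.

```python
import pandas as pd

# Build a small single-column file with a carriage return embedded in one line.
with open("cr_example.txt", "w", encoding="utf-8", newline="") as f:
    f.write("first line\nsecond line with a stray \r inside\nthird line\n")

# Without a pinned line terminator the C parser may split rows on "\r";
# fixing it to "\n" keeps the "\r" inside the field.
df = pd.read_csv(
    "cr_example.txt",
    names=["text"],
    header=None,
    lineterminator="\n",
)
print(df["text"].tolist())
```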
https://api.github.com/repos/huggingface/datasets/issues/712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/712/comments | https://api.github.com/repos/huggingface/datasets/issues/712/events | https://github.com/huggingface/datasets/issues/712 | 714,242,316 | MDU6SXNzdWU3MTQyNDIzMTY= | 712 | Error in the notebooks/Overview.ipynb notebook | {
"login": "subhrm",
"id": 850012,
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhrm",
"html_url": "https://github.com/subhrm",
"followers_url": "https://api.github.com/users/subhrm/followers",
"following_url": "https://api.github.com/users/subhrm/following{/other_user}",
"gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/subhrm/subscriptions",
"organizations_url": "https://api.github.com/users/subhrm/orgs",
"repos_url": "https://api.github.com/users/subhrm/repos",
"events_url": "https://api.github.com/users/subhrm/events{/privacy}",
"received_events_url": "https://api.github.com/users/subhrm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Do this:\r\n``` python\r\nsquad_dataset = list_datasets(with_details=True)[datasets.index('squad')]\r\npprint(squad_dataset.__dict__) # It's a simple python dataclass\r\n```",
"Thanks! This worked. I have created a PR to fix this in the notebook. "
] | 1,601,791,111,000 | 1,601,915,140,000 | 1,601,915,140,000 | CONTRIBUTOR | null | Hi,
I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in Google Colab. I used the [link](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in Colab.
```python
# You can access various attributes of the datasets before downloading them
squad_dataset = list_datasets()[datasets.index('squad')]
pprint(squad_dataset.__dict__) # It's a simple python dataclass
```
Error message
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-8dc805c4949c> in <module>()
2 squad_dataset = list_datasets()[datasets.index('squad')]
3
----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass
AttributeError: 'str' object has no attribute '__dict__'
```
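As the comments above point out, passing `with_details=True` returns dataset objects instead of plain name strings; a hedged sketch of the corrected cell:
```python
from pprint import pprint

from datasets import list_datasets

names = list_datasets()                      # plain dataset name strings
detailed = list_datasets(with_details=True)  # objects carrying the metadata
squad_dataset = detailed[names.index('squad')]
pprint(squad_dataset.__dict__)  # now a simple python dataclass
```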
The object `squad_dataset` is a `str`, not a `dataclass`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/712/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/710/comments | https://api.github.com/repos/huggingface/datasets/issues/710/events | https://github.com/huggingface/datasets/pull/710 | 714,186,999 | MDExOlB1bGxSZXF1ZXN0NDk3MzQ1NjQ0 | 710 | fix README typos/ consistency | {
"login": "discdiver",
"id": 7703961,
"node_id": "MDQ6VXNlcjc3MDM5NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7703961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/discdiver",
"html_url": "https://github.com/discdiver",
"followers_url": "https://api.github.com/users/discdiver/followers",
"following_url": "https://api.github.com/users/discdiver/following{/other_user}",
"gists_url": "https://api.github.com/users/discdiver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/discdiver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/discdiver/subscriptions",
"organizations_url": "https://api.github.com/users/discdiver/orgs",
"repos_url": "https://api.github.com/users/discdiver/repos",
"events_url": "https://api.github.com/users/discdiver/events{/privacy}",
"received_events_url": "https://api.github.com/users/discdiver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,763,656,000 | 1,602,928,365,000 | 1,602,928,365,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/710/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/710",
"html_url": "https://github.com/huggingface/datasets/pull/710",
"diff_url": "https://github.com/huggingface/datasets/pull/710.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/710.patch",
"merged_at": 1602928365000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/709/comments | https://api.github.com/repos/huggingface/datasets/issues/709/events | https://github.com/huggingface/datasets/issues/709 | 714,067,902 | MDU6SXNzdWU3MTQwNjc5MDI= | 709 | How to use similarity settings other then "BM25" in Elasticsearch index ? | {
"login": "nsankar",
"id": 431890,
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsankar",
"html_url": "https://github.com/nsankar",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"repos_url": "https://api.github.com/users/nsankar/repos",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Datasets does not use elasticsearch API to define custom similarity. If you want to use a custom similarity, the best would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration passed to datasets\r\n\r\n```\r\ncurl -X PUT \"localhost:9200/index?pretty\" -H 'Content-Type: application/json' -d'\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"similarity\": {\r\n \"my_similarity\": {\r\n \"type\": \"DFR\",\r\n \"basic_model\": \"g\",\r\n \"after_effect\": \"l\",\r\n \"normalization\": \"h2\",\r\n \"normalization.h2.c\": \"3.0\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n'\r\n\r\n```"
] | 1,601,723,929,000 | 1,626,634,975,000 | null | NONE | null | **QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**context :**
========
I used the latest Elasticsearch server version 7.9.2
When I set DFR, which is one of the other similarity algorithms supported by Elasticsearch, in the mapping, I get an error.
For example, this is the DFR mapping I tried first:
`"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "DFR"}}},`
I get the following error
RequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]')
The other option I tried was to declare a `"my_similarity"` entry under `"similarity"` within the settings and then assign `"my_similarity"` inside the mappings, as below:
`es_config = {
 "settings": {
 "number_of_shards": 1,
 "similarity": {
 "my_similarity": {
 "type": "DFR",
 "basic_model": "g",
 "after_effect": "l",
 "normalization": "h2",
 "normalization.h2.c": "3.0"
 }
 },
 "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}},
 },
 "mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}},
}`
For this, I got the following error:
RequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')
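For reference, here is a rough sketch of how I would expect a custom similarity to be wired in, following the `index`-level nesting from the Elasticsearch docs; the index name is made up and `es_index_config` is passed as I understand the current `datasets` API, so treat this as an assumption rather than a verified fix:
```python
from datasets import load_dataset
from elasticsearch import Elasticsearch  # assumes a local ES 7.x instance

es_client = Elasticsearch("http://localhost:9200")

es_index_config = {
    "settings": {
        "index": {
            "similarity": {
                "my_similarity": {
                    "type": "DFR",
                    "basic_model": "g",
                    "after_effect": "l",
                    "normalization": "h2",
                    "normalization.h2.c": "3.0",
                }
            }
        }
    },
    "mappings": {
        "properties": {
            "text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}
        }
    },
}

squad = load_dataset("squad", split="validation")
squad.add_elasticsearch_index(
    column="context",
    es_client=es_client,
    es_index_name="hf_squad_dfr",   # made-up index name
    es_index_config=es_index_config,
)
scores, examples = squad.get_nearest_examples("context", "machine learning", k=5)
```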
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/709/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/708/comments | https://api.github.com/repos/huggingface/datasets/issues/708/events | https://github.com/huggingface/datasets/issues/708 | 714,020,953 | MDU6SXNzdWU3MTQwMjA5NTM= | 708 | Datasets performance slow? - 6.4x slower than in memory dataset | {
"login": "eugeneware",
"id": 38154,
"node_id": "MDQ6VXNlcjM4MTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/38154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eugeneware",
"html_url": "https://github.com/eugeneware",
"followers_url": "https://api.github.com/users/eugeneware/followers",
"following_url": "https://api.github.com/users/eugeneware/following{/other_user}",
"gists_url": "https://api.github.com/users/eugeneware/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eugeneware/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eugeneware/subscriptions",
"organizations_url": "https://api.github.com/users/eugeneware/orgs",
"repos_url": "https://api.github.com/users/eugeneware/repos",
"events_url": "https://api.github.com/users/eugeneware/events{/privacy}",
"received_events_url": "https://api.github.com/users/eugeneware/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly.",
"And if you use in-memory-data with datasets with `load_dataset(..., keep_in_memory=True)`?",
"Thanks for the tip @thomwolf ! I did not see that flag in the docs. I'll try with that.",
"We should add it indeed and also maybe a specific section with all the tips for maximal speed. What do you think @lhoestq @SBrandeis @yjernite ?",
"By default the datasets loaded with `load_dataset` live on disk.\r\nIt's possible to load them in memory by using some transforms like `.map(..., keep_in_memory=True)`.\r\n\r\nSmall correction to @thomwolf 's comment above: currently we don't have the `keep_in_memory` parameter for `load_dataset` AFAIK but it would be nice to add it indeed :)",
"Yes indeed we should add it!",
"Great! Thanks a lot.\r\n\r\nI did a test using `map(..., keep_in_memory=True)` and also a test using in-memory only data.\r\n\r\n```python\r\nfeatures = dataset.map(tokenize, batched=True, remove_columns=dataset['train'].column_names)\r\nfeatures.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nfeatures_in_memory = dataset.map(tokenize, batched=True, keep_in_memory=True, remove_columns=dataset['train'].column_names)\r\nfeatures_in_memory.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nin_memory = [features['train'][i] for i in range(len(features['train']))]\r\n```\r\n\r\nFor using the features without any tweak, I got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nFor using the features mapped with `keep_in_memory=True`, I also got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features_in_memory['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nAnd for the case using every element in memory, converted from the original dataset, I got **12.5s**:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(in_memory, batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nTaking a closer look in my SQuAD code, using a profiler, I see a lot of calls to `posix read` api. It seems that it is really reliying on disk, which results in a very high train time.",
"I am having the same issue here. When loading from memory I can get the GPU up to 70% util but when loading after mapping I can only get 40%.\r\n\r\nIn disk:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')\r\nbook_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=2500)\r\nbook_corpus.set_format(type='torch', columns=['text', \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./mobile_bert_big\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=32,\r\n per_device_eval_batch_size=16,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n warmup_steps=100,\r\n logging_steps=50,\r\n eval_steps=100,\r\n no_cuda=False,\r\n gradient_accumulation_steps=16,\r\n fp16=True)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=book_corpus,\r\n tokenizer=tokenizer)\r\n```\r\n\r\nIn disk I can only get 0,17 it/s:\r\n`[ 13/28907 01:03 < 46:03:27, 0.17 it/s, Epoch 0.00/1] `\r\n\r\nIf I load it with torch.utils.data.Dataset()\r\n```\r\nclass BCorpusDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings):\r\n self.encodings = encodings\r\n\r\n def __getitem__(self, idx):\r\n item = [torch.tensor(val[idx]) for key, val in self.encodings.items()][0]\r\n return item\r\n\r\n def __len__(self):\r\n length = [len(val) for key, val in self.encodings.items()][0]\r\n return length\r\n\r\n**book_corpus = book_corpus.select([i for i in range(16*2000)])** # filtering to not have 20% of BC in memory...\r\nbook_corpus = book_corpus(book_corpus)\r\n```\r\nI can get:\r\n` [ 5/62 00:09 < 03:03, 0.31 it/s, Epoch 0.06/1]`\r\n\r\nBut obviously I can not get BookCorpus in memory xD\r\n\r\nEDIT: it is something weird. If i load in disk 1% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]')\r\n```\r\n\r\nI can get 0.28 it/s, (the same that in memory) but if I load 20% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')\r\n```\r\nI get again 0.17 it/s. \r\n\r\nI am missing something? I think it is something related to size, and not disk or in-memory.",
"There is a way to increase the batches read from memory? or multiprocessed it? I think that one of two or it is reading with just 1 core o it is reading very small chunks from disk and left my GPU at 0 between batches",
"My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks."
] | 1,601,707,447,000 | 1,613,139,208,000 | 1,613,139,208,000 | NONE | null | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory-mapping data using Arrow files, and you don't get anything for free. But I was surprised at how much slower it was.
For example, in the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 just to process the data and get it onto the GPU (no model involved), whereas the equivalent in-memory dataset would finish in just 0:33.
Is this expected? Given that one of the goals of this project is also to accelerate dataset processing, this seems a bit slower than I would expect. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but I thought I'd open this issue to discuss.
For reference I'm running an AMD Ryzen Threadripper 1900X 8-Core Processor CPU, with 128 GB of RAM and a Samsung 960 EVO NVMe SSD. I'm running with an RTX Titan 24GB GPU.
I can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower.
What am I doing wrong? And is there a way to force the datasets to completely load into memory instead of being memory mapped in cases where you want maximum performance?
At 3:31 for 17500 batches, that's 12ms per batch. Does this 12ms just become insignificant as a proportion of the forward and backward passes in practice, and thus not worth worrying about?
In any case, here's my code `benchmark.py`. If you run it with an argument of `memory` it will copy the data into memory before executing the same test.
``` py
import sys
from datasets import load_dataset
from transformers import DataCollatorWithPadding, BertTokenizerFast
from torch.utils.data import DataLoader
from tqdm import tqdm
if __name__ == '__main__':
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
collate_fn = DataCollatorWithPadding(tokenizer, padding=True)
ds = load_dataset('yelp_polarity')
def do_tokenize(x):
return tokenizer(x['text'], truncation=True)
ds = ds.map(do_tokenize, batched=True)
ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask'])
if len(sys.argv) == 2 and sys.argv[1] == 'memory':
# copy to memory - probably a faster way to do this - but demonstrates the point
# approximately 530 batches per second - 17500 batches in 0:33
print('using memory')
_ds = [data for data in tqdm(ds['train'])]
else:
# approximately 83 batches per second - 17500 batches in 3:31
print('using datasets')
_ds = ds['train']
dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4)
for data in tqdm(dl):
for k, v in data.items():
data[k] = v.to('cuda')
```
For reference, my conda environment is [here](https://gist.github.com/05b6101518ff70ed42a858b302a0405d)
Once again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints.
Thanks for all your great work.
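As a follow-up to the benchmark, here is a rough sketch of keeping the processed table in RAM via `map(..., keep_in_memory=True)` (discussed in the comments above); whether this actually closes the gap is exactly the open question here:
```python
from datasets import load_dataset

# Rough sketch: keep the processed Arrow table in RAM instead of memory-mapping
# a cache file on disk. The identity function is just a placeholder transform.
ds = load_dataset('yelp_polarity', split='train')
ds_in_memory = ds.map(lambda batch: batch, batched=True, keep_in_memory=True)
print(len(ds_in_memory))
```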
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/708/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/708/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/707/comments | https://api.github.com/repos/huggingface/datasets/issues/707/events | https://github.com/huggingface/datasets/issues/707 | 713,954,666 | MDU6SXNzdWU3MTM5NTQ2NjY= | 707 | Requirements should specify pyarrow<1 | {
"login": "mathcass",
"id": 918541,
"node_id": "MDQ6VXNlcjkxODU0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/918541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathcass",
"html_url": "https://github.com/mathcass",
"followers_url": "https://api.github.com/users/mathcass/followers",
"following_url": "https://api.github.com/users/mathcass/following{/other_user}",
"gists_url": "https://api.github.com/users/mathcass/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mathcass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mathcass/subscriptions",
"organizations_url": "https://api.github.com/users/mathcass/orgs",
"repos_url": "https://api.github.com/users/mathcass/repos",
"events_url": "https://api.github.com/users/mathcass/events{/privacy}",
"received_events_url": "https://api.github.com/users/mathcass/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello @mathcass I would want to work on this issue. May I do the same? ",
"@punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity.",
"Hello @mathcass \r\n1. I did fork the repository and clone the same on my local system. \r\n\r\n2. Then learnt about how we can publish our package on pypi.org. Also, found some instructions on same in setup.py documentation.\r\n\r\n3. Then I Perplexity document link that you shared above. I created a colab link from there keep both tensorflow and pytorch means a mixed option and tried to run it in colab but I encountered no errors at a point where you mentioned. Can you help me to figure out the issue. \r\n\r\n4.Here is the link of the colab file with my saved responses. \r\nhttps://colab.research.google.com/drive/1hfYz8Ira39FnREbxgwa_goZWpOojp2NH?usp=sharing",
"Also, please share some links which made you conclude that pyarrow < 1 would help. ",
"Access granted for the colab link. ",
"Thanks for looking at this @punitaojha and thanks for sharing the notebook. \r\n\r\nI just tried to reproduce this on my own (based on the environment where I had this issue) and I can't reproduce it somehow. If I run into this again, I'll include some steps to reproduce it. I'll close this as invalid. \r\n\r\nThanks again. ",
"I am sorry for hijacking this closed issue, but I believe I was able to reproduce this very issue. Strangely enough, it also turned out that running `pip install \"pyarrow<1\" --upgrade` did indeed fix the issue (PyArrow was installed in version `0.14.1` in my case).\r\n\r\nPlease see the Colab below:\r\n\r\nhttps://colab.research.google.com/drive/15QQS3xWjlKW2aK0J74eEcRFuhXUddUST\r\n\r\nThanks!"
] | 1,601,681,979,000 | 1,607,070,159,000 | 1,601,844,628,000 | NONE | null | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1, but there's no version pin in the setup file.
https://github.com/huggingface/datasets/blob/e86a2a8f869b91654e782c9133d810bb82783200/setup.py#L68
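As an illustration only (the package metadata below is made up, not the project's actual `setup.py`), the pin could look like:
```python
# Illustrative sketch -- not the real setup.py of this project.
from setuptools import find_packages, setup

setup(
    name="example_pkg",
    version="0.0.1",
    packages=find_packages(),
    install_requires=[
        "pyarrow>=0.16.0,<1.0.0",  # the "<1" cap this issue asks for; lower bound is an assumption
    ],
)
```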
Downgrading by installing `pip install "pyarrow<1"` resolved the issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/707/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/706/comments | https://api.github.com/repos/huggingface/datasets/issues/706/events | https://github.com/huggingface/datasets/pull/706 | 713,721,959 | MDExOlB1bGxSZXF1ZXN0NDk2OTkwMDA0 | 706 | Fix config creation for data files with NamedSplit | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,653,609,000 | 1,601,885,700,000 | 1,601,885,699,000 | MEMBER | null | During config creation, we need to iterate through the data files of all the splits to compute a hash.
To make sure the hash is unique given a certain combination of files/splits, we sort the split names.
However, `NamedSplit` objects can't be passed to `sorted`, and currently this raises an error: we need to sort their string names instead.
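A minimal sketch of the failure mode and the workaround (assuming `datasets.Split` values, which are `NamedSplit` instances):
```python
from datasets import Split

splits = [Split.TEST, Split.TRAIN, Split.VALIDATION]

# sorted(splits) raises: TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
ordered = sorted(splits, key=str)  # sort on the split names instead
print([str(s) for s in ordered])   # ['test', 'train', 'validation']
```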
Fix #705 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/706/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/706",
"html_url": "https://github.com/huggingface/datasets/pull/706",
"diff_url": "https://github.com/huggingface/datasets/pull/706.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/706.patch",
"merged_at": 1601885699000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/705/comments | https://api.github.com/repos/huggingface/datasets/issues/705/events | https://github.com/huggingface/datasets/issues/705 | 713,709,100 | MDU6SXNzdWU3MTM3MDkxMDA= | 705 | TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' | {
"login": "pvcastro",
"id": 12713359,
"node_id": "MDQ6VXNlcjEyNzEzMzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/12713359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pvcastro",
"html_url": "https://github.com/pvcastro",
"followers_url": "https://api.github.com/users/pvcastro/followers",
"following_url": "https://api.github.com/users/pvcastro/following{/other_user}",
"gists_url": "https://api.github.com/users/pvcastro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pvcastro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pvcastro/subscriptions",
"organizations_url": "https://api.github.com/users/pvcastro/orgs",
"repos_url": "https://api.github.com/users/pvcastro/repos",
"events_url": "https://api.github.com/users/pvcastro/events{/privacy}",
"received_events_url": "https://api.github.com/users/pvcastro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi !\r\nThanks for reporting :) \r\nIndeed this is an issue on the `datasets` side.\r\nI'm creating a PR",
"Thanks @lhoestq !"
] | 1,601,652,475,000 | 1,601,885,699,000 | 1,601,885,699,000 | NONE | null | ## Environment info
- `transformers` version: 3.3.1 (installed from master)
- `datasets` version: 1.0.2 (installed as a dependency from transformers)
- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.9
I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, is in CSV format, and contains just a text and a label column, with a comma as separator. Here's a sample:
```
text,label
"Registra-se a presenΓ§a do acadΓͺmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausΓͺncia injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessΓ£o dos benefΓcios da JustiΓ§a Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> AudiΓͺncia encerrada Γ s 8h42min . <REL_SEP> <name> <REL_SEP> JuΓza do Trabalho <REL_SEP> Ata redigida por << <name> >> , SecretΓ‘rio de AudiΓͺncia .",NO_RELATION
```
However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section.
## To reproduce
Steps to reproduce the behavior:
1. Created a new conda environment using `conda create -n transformers python=3.7`
2. Cloned transformers master, `cd`'d into it and installed it using `pip install --editable . -r examples/requirements.txt`
3. Installed tensorflow with `pip install tensorflow`
4. Ran `run_tf_text_classification.py` with the following parameters:
```
--train_file <DATASET_PATH>/train.csv \
--dev_file <DATASET_PATH>/dev.csv \
--test_file <DATASET_PATH>/test.csv \
--label_column_id 1 \
--model_name_or_path neuralmind/bert-base-portuguese-cased \
--output_dir <OUTPUT_PATH> \
--num_train_epochs 4 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--do_train \
--do_eval \
--do_predict \
--logging_steps 1000 \
--evaluate_during_training \
--save_steps 1000 \
--overwrite_output_dir \
--overwrite_cache
```
I have also copied [@Santosh-Gupta's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference.
Here is the stack trace:
```
2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
/media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
FutureWarning,
2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz
2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1
coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s
2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1
10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False
10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False)
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock
Using custom data configuration default
Traceback (most recent call last):
File "run_tf_text_classification.py", line 283, in <module>
main()
File "run_tf_text_classification.py", line 222, in main
max_seq_length=data_args.max_seq_length,
File "run_tf_text_classification.py", line 43, in get_tfds
ds = datasets.load_dataset("csv", data_files=files)
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__
**config_kwargs,
File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config
for key in sorted(data_files.keys()):
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
```
## Expected behavior
Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow)
Originally opened this issue at transformers' repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). @jplu instructed me to open here, since according to [this](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885) evidence, the problem is from datasets.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/705/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/704/comments | https://api.github.com/repos/huggingface/datasets/issues/704/events | https://github.com/huggingface/datasets/pull/704 | 713,572,556 | MDExOlB1bGxSZXF1ZXN0NDk2ODY2NTQ0 | 704 | Fix remote tests for new datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,640,484,000 | 1,601,640,722,000 | 1,601,640,721,000 | MEMBER | null | When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet)
To fix that, I reverted to using the HF API that fetches the available datasets on S3, which is synced with the master branch. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/704/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/704",
"html_url": "https://github.com/huggingface/datasets/pull/704",
"diff_url": "https://github.com/huggingface/datasets/pull/704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/704.patch",
"merged_at": 1601640721000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/703/comments | https://api.github.com/repos/huggingface/datasets/issues/703/events | https://github.com/huggingface/datasets/pull/703 | 713,559,718 | MDExOlB1bGxSZXF1ZXN0NDk2ODU1OTQ5 | 703 | Add hotpot QA | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Awesome :) \r\n\r\nDon't pay attention to the RemoteDatasetTest error, I'm fixing it right now",
"You can rebase from master to fix the CI test :)",
"If we're lucky we can even include this dataset in today's release",
"Just thinking since `type` can only be `comparison` or `bridge` and `level` can only be `easy`, `medium`, `hard` should they be `ClassLabel`?",
"> Just thinking since `type` can only be `comparison` or `bridge` and `level` can only be `easy`, `medium`, `hard` should they be `ClassLabel`?\r\n\r\nI think it's more a tag than a label. I guess a string is fine\r\n"
] | 1,601,639,068,000 | 1,601,643,281,000 | 1,601,643,281,000 | CONTRIBUTOR | null | Added the [HotpotQA](https://github.com/hotpotqa/hotpot) multi-hop question answering dataset.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/703/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/703",
"html_url": "https://github.com/huggingface/datasets/pull/703",
"diff_url": "https://github.com/huggingface/datasets/pull/703.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/703.patch",
"merged_at": 1601643280000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/702/comments | https://api.github.com/repos/huggingface/datasets/issues/702/events | https://github.com/huggingface/datasets/pull/702 | 713,499,628 | MDExOlB1bGxSZXF1ZXN0NDk2ODA3Mjg4 | 702 | Complete rouge kwargs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,632,741,000 | 1,601,633,464,000 | 1,601,633,463,000 | MEMBER | null | In #701 we noticed that some kwargs were missing for rouge | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/702/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/702",
"html_url": "https://github.com/huggingface/datasets/pull/702",
"diff_url": "https://github.com/huggingface/datasets/pull/702.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/702.patch",
"merged_at": 1601633463000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/701 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/701/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/701/comments | https://api.github.com/repos/huggingface/datasets/issues/701/events | https://github.com/huggingface/datasets/pull/701 | 713,485,757 | MDExOlB1bGxSZXF1ZXN0NDk2Nzk2MTQ1 | 701 | Add rouge 2 and rouge Lsum to rouge metric outputs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Oups too late, sorry"
] | 1,601,631,346,000 | 1,601,632,514,000 | 1,601,632,338,000 | MEMBER | null | Continuation of #700
Rouge 2 and Rouge Lsum were missing in Rouge's outputs.
Rouge Lsum is also useful to evaluate Rouge L for sentences with `\n`
Fix #617 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/701/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/701",
"html_url": "https://github.com/huggingface/datasets/pull/701",
"diff_url": "https://github.com/huggingface/datasets/pull/701.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/701.patch",
"merged_at": 1601632338000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/700/comments | https://api.github.com/repos/huggingface/datasets/issues/700/events | https://github.com/huggingface/datasets/pull/700 | 713,450,295 | MDExOlB1bGxSZXF1ZXN0NDk2NzY3MTMz | 700 | Add rouge-2 in rouge_types for metric calculation | {
"login": "Shashi456",
"id": 18056781,
"node_id": "MDQ6VXNlcjE4MDU2Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/18056781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shashi456",
"html_url": "https://github.com/Shashi456",
"followers_url": "https://api.github.com/users/Shashi456/followers",
"following_url": "https://api.github.com/users/Shashi456/following{/other_user}",
"gists_url": "https://api.github.com/users/Shashi456/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shashi456/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shashi456/subscriptions",
"organizations_url": "https://api.github.com/users/Shashi456/orgs",
"repos_url": "https://api.github.com/users/Shashi456/repos",
"events_url": "https://api.github.com/users/Shashi456/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shashi456/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Indeed there's currently a mismatch between the description and what it rouge actually returns.\r\nThanks for proposing this fix :) \r\n\r\nI think it's better to return rouge 1-2-L.\r\nWas there a reason to only include rouge 1 and rouge L @thomwolf ? ",
"rougeLsum is also missing, could you add it ?",
"Adding `RougeLSum` would fix https://github.com/huggingface/datasets/issues/617",
"I am opening a PR with both of them right now actually :)",
"Also the format of the output isn't exactly ideal, It's usually only the F-1 score that is cared about. \r\n\r\nFormatting the output to reflect how `ROUGE-1-5-5` (the perl version thats usually used and pyrouge is a wrapper over it), would be better.\r\n\r\n",
"I'll close this since you seem to have already added it in another PR. Sorry for the delay in responding to you @lhoestq.",
"What do you mean by \"Formatting the output to reflect how ROUGE-1-5-5\" @Shashi456 ?",
"I like the idea of returning all the scores for two reason:\r\n- Rouge's aggregator does sampling and therefore it returns \"low\" \"mid\" and \"high\" scores\r\n- It is interesting to have the precision and recall to see how the F1 score was computed\r\nBut I understand your point that returning only the F1 score makes sense since it's the one that's always used ",
"@thomwolf the scores now returned look like this:\r\n```\r\n{'rouge1': AggregateScore(low=Score(precision=0.16620308156871524, recall=0.18219819615984395, fmeasure=0.16226017699359463), mid=Score(precision=0.17274338501705871, recall=0.1890957812369246, fmeasure=0.16823877588620403), high=Score(precision=0.17934569582981455, recall=0.1965626706042028, fmeasure=0.17491509794856058)), \r\n'rouge2': AggregateScore(low=Score(precision=0.12478835737689957, recall=0.1362113231755514, fmeasure=0.12055941950062395), mid=Score(precision=0.1303967602691664, recall=0.1423747229852964, fmeasure=0.1258363976151122), high=Score(precision=0.13654527560789362, recall=0.1488071465116122, fmeasure=0.13184989406704056)), \r\n'rougeL': AggregateScore(low=Score(precision=0.16568068818352072, recall=0.1811919016674486, fmeasure=0.1614784523482225), mid=Score(precision=0.17156684723552357, recall=0.1879777628247058, fmeasure=0.16720699286250762), high=Score(precision=0.17788847350584547, recall=0.1948899838530898, fmeasure=0.17316501523379826))}\r\n```\r\n\r\nWhile when computed through the perl rouge script, it looks like:\r\n```\r\nROUGE-1 Average_R: 0.34775 (95%-conf.int. 0.34546 - 0.35025)\r\nROUGE-1 Average_P: 0.19381 (95%-conf.int. 0.19246 - 0.19538)\r\nROUGE-1 Average_F: 0.24070 (95%-conf.int. 0.23925 - 0.24230)\r\n---------------------------------------------\r\nROUGE-2 Average_R: 0.07160 (95%-conf.int. 0.07010 - 0.07298)\r\nROUGE-2 Average_F: 0.04845 (95%-conf.int. 0.04741 - 0.04942)\r\n---------------------------------------------\r\nROUGE-L Average_R: 0.26404 (95%-conf.int. 0.26215 - 0.26598)\r\nROUGE-L Average_P: 0.14696 (95%-conf.int. 0.14576 - 0.14815)\r\nROUGE-L Average_F: 0.18245 (95%-conf.int. 0.18120 - 0.18367)\r\n```\r\nwhile the wrapper returns the much more readable:\r\n```\r\n[2020-07-30 18:13:38,556 INFO] Rouges at step 13000 \r\n>> ROUGE-F(1/2/3/l): 43.43/20.42/39.78 \r\nROUGE-R(1/2/3/l): 53.91/25.34/49.32\r\n```\r\n\r\nThe formatting allows for easy reading, and although \"low\", \"mid\", \"high\" make sense, this is more concise and effective. \r\n\r\nOne way of changing this might be to return a dictionary that returns values like `rouge_1_precision`, `rouge_1_F1`, `rouge_1_recall`, and maybe also having the ability to get the values you are interested in and keeping `recall` and `F1` as default.",
"cc: @lhoestq ",
"Ok I see.\r\nI think it's also important to follow one of the existing output format (there are already too many different formats, let's try not to add another different one)\r\nI'd still stick with the current format and not transform the output of the python implementation of rouge since it's already widely used.\r\nWhat do you think ?",
"Maybe we could convert the dataclasses in dictionnaries, would that help @Shashi456 ?",
"@thomwolf yeah I think that would help. I initially didn't understand the high low mid categories. Dictionaries could help in this case I guess, and if we allow the user to choose what they want i.e F1 and precision or recall."
] | 1,601,627,805,000 | 1,601,636,929,000 | 1,601,632,745,000 | NONE | null | The description of the ROUGE metric says,
```
_KWARGS_DESCRIPTION = """
Calculates average rouge scores for a list of hypotheses and references
Args:
predictions: list of predictions to score. Each predictions
should be a string with tokens separated by spaces.
references: list of reference for each prediction. Each
reference should be a string with tokens separated by spaces.
Returns:
rouge1: rouge_1 f1,
rouge2: rouge_2 f1,
rougeL: rouge_l f1,
rougeLsum: rouge_l precision
"""
```
but the `rouge_types` argument defaults to `rouge_types = ["rouge1", "rougeL"]`; this PR updates the default and adds `rouge2` to the list so as to reflect the description card. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/700/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/700",
"html_url": "https://github.com/huggingface/datasets/pull/700",
"diff_url": "https://github.com/huggingface/datasets/pull/700.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/700.patch",
"merged_at": null
} | true |
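Editor's note: a minimal, hedged sketch of computing the ROUGE variants discussed in the record above with the `datasets` metric. The `rouge_types` values come from the thread; the example texts and the exact way the aggregated scores are accessed are illustrative assumptions.

```python
from datasets import load_metric

rouge = load_metric("rouge")
predictions = ["the cat sat on the mat"]
references = ["the cat was sitting on the mat"]

# rouge_types mirrors the defaults plus the additions discussed above
scores = rouge.compute(
    predictions=predictions,
    references=references,
    rouge_types=["rouge1", "rouge2", "rougeL", "rougeLsum"],
)

# Each entry is an AggregateScore(low/mid/high) of Score(precision, recall, fmeasure)
print(scores["rouge1"].mid.fmeasure)
```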
https://api.github.com/repos/huggingface/datasets/issues/699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/699/comments | https://api.github.com/repos/huggingface/datasets/issues/699/events | https://github.com/huggingface/datasets/issues/699 | 713,395,642 | MDU6SXNzdWU3MTMzOTU2NDI= | 699 | XNLI dataset is not loading | {
"login": "imadarsh1001",
"id": 14936525,
"node_id": "MDQ6VXNlcjE0OTM2NTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/14936525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imadarsh1001",
"html_url": "https://github.com/imadarsh1001",
"followers_url": "https://api.github.com/users/imadarsh1001/followers",
"following_url": "https://api.github.com/users/imadarsh1001/following{/other_user}",
"gists_url": "https://api.github.com/users/imadarsh1001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imadarsh1001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imadarsh1001/subscriptions",
"organizations_url": "https://api.github.com/users/imadarsh1001/orgs",
"repos_url": "https://api.github.com/users/imadarsh1001/repos",
"events_url": "https://api.github.com/users/imadarsh1001/events{/privacy}",
"received_events_url": "https://api.github.com/users/imadarsh1001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"also i tried below code to solve checksum error \r\n`datasets-cli test ./datasets/xnli --save_infos --all_configs`\r\n\r\nand it shows \r\n\r\n```\r\n2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 268, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/./datasets/xnli/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 279, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/xnli/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/bin/datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/commands/test.py\", line 76, in run\r\n module_path, hash = prepare_module(path)\r\n File \"/opt/conda/lib/python3.7/site-packages/datasets/load.py\", line 283, in prepare_module\r\n combined_path, github_file_path, file_path\r\nFileNotFoundError: Couldn't find file locally at ./datasets/xnli/xnli.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/./datasets/xnli/xnli.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/./datasets/xnli/xnli.py\r\n```\r\n\r\n",
"Hi !\r\nYes the download url changed.\r\nIt's updated on the master branch. I'm doing a release today to fix that :)",
"the issue is fixed with latest release \r\n\r\n"
] | 1,601,621,596,000 | 1,601,747,152,000 | 1,601,747,017,000 | NONE | null | `dataset = datasets.load_dataset(path='xnli')`
which shows the error below
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))
39 logger.info("All the checksums matched successfully" + for_verification_name)
40
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```
I think the URL has now changed to "https://cims.nyu.edu/~sbowman/xnli/XNLI-MT-1.0.zip" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/699/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/697/comments | https://api.github.com/repos/huggingface/datasets/issues/697/events | https://github.com/huggingface/datasets/pull/697 | 712,979,029 | MDExOlB1bGxSZXF1ZXN0NDk2MzczNDU5 | 697 | Update README.md | {
"login": "bishug",
"id": 71011306,
"node_id": "MDQ6VXNlcjcxMDExMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/71011306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bishug",
"html_url": "https://github.com/bishug",
"followers_url": "https://api.github.com/users/bishug/followers",
"following_url": "https://api.github.com/users/bishug/following{/other_user}",
"gists_url": "https://api.github.com/users/bishug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bishug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bishug/subscriptions",
"organizations_url": "https://api.github.com/users/bishug/orgs",
"repos_url": "https://api.github.com/users/bishug/repos",
"events_url": "https://api.github.com/users/bishug/events{/privacy}",
"received_events_url": "https://api.github.com/users/bishug/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,568,162,000 | 1,601,568,720,000 | 1,601,568,720,000 | NONE | null | Hey I was just telling my subscribers to check out your repositories
Thank you | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/697/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/697",
"html_url": "https://github.com/huggingface/datasets/pull/697",
"diff_url": "https://github.com/huggingface/datasets/pull/697.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/697.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/696/comments | https://api.github.com/repos/huggingface/datasets/issues/696/events | https://github.com/huggingface/datasets/pull/696 | 712,942,977 | MDExOlB1bGxSZXF1ZXN0NDk2MzQzMjEy | 696 | Elasticsearch index docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,565,538,000 | 1,601,624,899,000 | 1,601,624,898,000 | MEMBER | null | I added the docs for ES indexes.
I also added a `load_elasticsearch_index` method to load an index that has already been built.
I checked the tests for the ES index and we have tests that mock ElasticSearch.
I think this is good for now but at some point it would be cool to have an end-to-end test with a real ES running. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/696/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/696",
"html_url": "https://github.com/huggingface/datasets/pull/696",
"diff_url": "https://github.com/huggingface/datasets/pull/696.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/696.patch",
"merged_at": 1601624898000
} | true |
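Editor's note: an illustrative sketch of the Elasticsearch index usage documented in the PR above. It assumes a running Elasticsearch instance at localhost:9200; the dataset, column, and index names are placeholders.

```python
from datasets import load_dataset

squad = load_dataset("squad", split="validation")

# Build an ES index over the "context" column (assumes a running ES server)
squad.add_elasticsearch_index("context", host="localhost", port="9200", es_index_name="hf_squad_context")
scores, examples = squad.get_nearest_examples("context", "machine learning", k=5)

# Later, reload the already-built index instead of rebuilding it
squad.load_elasticsearch_index("context", host="localhost", port="9200", es_index_name="hf_squad_context")
```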
https://api.github.com/repos/huggingface/datasets/issues/695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/695/comments | https://api.github.com/repos/huggingface/datasets/issues/695/events | https://github.com/huggingface/datasets/pull/695 | 712,843,949 | MDExOlB1bGxSZXF1ZXN0NDk2MjU5NTM0 | 695 | Update XNLI download link | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,558,842,000 | 1,601,560,875,000 | 1,601,560,874,000 | MEMBER | null | The old link isn't working anymore. I updated it with the new official link.
Fix #690 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/695/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/695",
"html_url": "https://github.com/huggingface/datasets/pull/695",
"diff_url": "https://github.com/huggingface/datasets/pull/695.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/695.patch",
"merged_at": 1601560874000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/694/comments | https://api.github.com/repos/huggingface/datasets/issues/694/events | https://github.com/huggingface/datasets/pull/694 | 712,827,751 | MDExOlB1bGxSZXF1ZXN0NDk2MjQ1NzU0 | 694 | Use GitHub instead of aws in remote dataset tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,557,670,000 | 1,601,624,848,000 | 1,601,624,847,000 | MEMBER | null | Recently we switched from aws s3 to github to download dataset scripts.
However in the tests, the dummy data were still downloaded from s3.
So I changed that to download them from github instead, in the MockDownloadManager.
Moreover I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the entire dataset), so I replaced them with dummy data containing only a few examples. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/694/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/694",
"html_url": "https://github.com/huggingface/datasets/pull/694",
"diff_url": "https://github.com/huggingface/datasets/pull/694.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/694.patch",
"merged_at": 1601624846000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/693/comments | https://api.github.com/repos/huggingface/datasets/issues/693/events | https://github.com/huggingface/datasets/pull/693 | 712,822,200 | MDExOlB1bGxSZXF1ZXN0NDk2MjQxMjUw | 693 | Rachel ker add dataset/mlsum | {
"login": "pdhg",
"id": 32742136,
"node_id": "MDQ6VXNlcjMyNzQyMTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/32742136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdhg",
"html_url": "https://github.com/pdhg",
"followers_url": "https://api.github.com/users/pdhg/followers",
"following_url": "https://api.github.com/users/pdhg/following{/other_user}",
"gists_url": "https://api.github.com/users/pdhg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdhg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdhg/subscriptions",
"organizations_url": "https://api.github.com/users/pdhg/orgs",
"repos_url": "https://api.github.com/users/pdhg/repos",
"events_url": "https://api.github.com/users/pdhg/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdhg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It looks like an outdated PR (we've already added mlsum). Closing it"
] | 1,601,557,270,000 | 1,601,571,673,000 | 1,601,571,673,000 | NONE | null | . | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/693/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/693",
"html_url": "https://github.com/huggingface/datasets/pull/693",
"diff_url": "https://github.com/huggingface/datasets/pull/693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/693.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/692/comments | https://api.github.com/repos/huggingface/datasets/issues/692/events | https://github.com/huggingface/datasets/pull/692 | 712,818,968 | MDExOlB1bGxSZXF1ZXN0NDk2MjM4NzIw | 692 | Update README.md | {
"login": "mayank1897",
"id": 62796466,
"node_id": "MDQ6VXNlcjYyNzk2NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/62796466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mayank1897",
"html_url": "https://github.com/mayank1897",
"followers_url": "https://api.github.com/users/mayank1897/followers",
"following_url": "https://api.github.com/users/mayank1897/following{/other_user}",
"gists_url": "https://api.github.com/users/mayank1897/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mayank1897/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayank1897/subscriptions",
"organizations_url": "https://api.github.com/users/mayank1897/orgs",
"repos_url": "https://api.github.com/users/mayank1897/repos",
"events_url": "https://api.github.com/users/mayank1897/events{/privacy}",
"received_events_url": "https://api.github.com/users/mayank1897/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hacktoberfest spam",
"To enhance its readability.....not Hacktoberfest spam",
"How is adding a punctuation to the end of a sentence justified as \"To enhance its readability\". \r\nConsidering that this is not your first \"README enhancement '' please don't spam the open source community with useless PR to get a free T-Shirt it just hurts the maintainers.\r\n\r\n//Joey",
"closed as spam"
] | 1,601,557,042,000 | 1,601,636,519,000 | 1,601,636,519,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/692/reactions",
"total_count": 6,
"+1": 0,
"-1": 4,
"laugh": 0,
"hooray": 0,
"confused": 2,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/692/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/692",
"html_url": "https://github.com/huggingface/datasets/pull/692",
"diff_url": "https://github.com/huggingface/datasets/pull/692.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/692.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/691/comments | https://api.github.com/repos/huggingface/datasets/issues/691/events | https://github.com/huggingface/datasets/issues/691 | 712,389,499 | MDU6SXNzdWU3MTIzODk0OTk= | 691 | Add UI filter to filter datasets based on task | {
"login": "praateekmahajan",
"id": 7589415,
"node_id": "MDQ6VXNlcjc1ODk0MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7589415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/praateekmahajan",
"html_url": "https://github.com/praateekmahajan",
"followers_url": "https://api.github.com/users/praateekmahajan/followers",
"following_url": "https://api.github.com/users/praateekmahajan/following{/other_user}",
"gists_url": "https://api.github.com/users/praateekmahajan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/praateekmahajan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/praateekmahajan/subscriptions",
"organizations_url": "https://api.github.com/users/praateekmahajan/orgs",
"repos_url": "https://api.github.com/users/praateekmahajan/repos",
"events_url": "https://api.github.com/users/praateekmahajan/events{/privacy}",
"received_events_url": "https://api.github.com/users/praateekmahajan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,601,513,778,000 | 1,603,812,270,000 | null | NONE | null | This is great work, so huge shoutout to contributors and huggingface.
The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following tasks (non exhaustive list)
- Classification
- Multi label
- Multi class
- Q&A
- Summarization
- Translation
I believe this feature might have some value, for folks trying to find datasets for a particular task, and then testing their model capabilities.
Thank you :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/691/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/691/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/690/comments | https://api.github.com/repos/huggingface/datasets/issues/690/events | https://github.com/huggingface/datasets/issues/690 | 712,150,321 | MDU6SXNzdWU3MTIxNTAzMjE= | 690 | XNLI dataset: NonMatchingChecksumError | {
"login": "xiey1",
"id": 13307358,
"node_id": "MDQ6VXNlcjEzMzA3MzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/13307358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiey1",
"html_url": "https://github.com/xiey1",
"followers_url": "https://api.github.com/users/xiey1/followers",
"following_url": "https://api.github.com/users/xiey1/following{/other_user}",
"gists_url": "https://api.github.com/users/xiey1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiey1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiey1/subscriptions",
"organizations_url": "https://api.github.com/users/xiey1/orgs",
"repos_url": "https://api.github.com/users/xiey1/repos",
"events_url": "https://api.github.com/users/xiey1/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiey1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting.\r\nThe data file must have been updated by the host.\r\nI'll update the checksum with the new one.",
"Well actually it looks like the link isn't working anymore :(",
"The new link is https://cims.nyu.edu/~sbowman/xnli/XNLI-1.0.zip\r\nI'll update the dataset script",
"I'll do a release in the next few days to make the fix available for everyone.\r\nIn the meantime you can load `xnli` with\r\n```\r\nxnli = load_dataset('xnli', script_version=\"master\")\r\n```\r\nThis will use the latest version of the xnli script (available on master branch), instead of the old one.",
"That's awesome! Thanks a lot!"
] | 1,601,488,203,000 | 1,601,572,508,000 | 1,601,560,874,000 | NONE | null | Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']`
The same code worked well several days ago in colab but stopped working now. Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/690/timeline | null | null | null | false |
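Editor's note: the workaround quoted in the comments of the record above, spelled out as a runnable snippet. The `script_version="master"` argument is taken verbatim from the maintainer's comment; the rest is a minimal sketch.

```python
from datasets import load_dataset

# Use the updated xnli script from the master branch until the next release
xnli = load_dataset("xnli", script_version="master")
print(xnli)
```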
https://api.github.com/repos/huggingface/datasets/issues/689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/689/comments | https://api.github.com/repos/huggingface/datasets/issues/689/events | https://github.com/huggingface/datasets/pull/689 | 712,095,262 | MDExOlB1bGxSZXF1ZXN0NDk1NjMzNjMy | 689 | Switch to pandas reader for text dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"If the windows tests in the CI pass, today will be a happy day"
] | 1,601,483,292,000 | 1,601,484,332,000 | 1,601,484,331,000 | MEMBER | null | Following the discussion in #622, it appears that there's no appropriate way to use the pyarrow csv reader to read text files because of the separator.
In this PR I switched to pandas to read the file.
Moreover, pandas allows reading the file in chunks, which means that you can build the arrow dataset from a text file that is bigger than RAM (we used to have to shard text files, as mentioned in https://github.com/huggingface/datasets/issues/610#issuecomment-691672919)
From a test that I did locally on a 1GB text file, the pyarrow reader used to run in 150ms while the new one takes 650ms (multithreading off for pyarrow). This is probably due to chunking since I am having the same speed difference by calling `read()` and calling `read(chunksize)` + `readline()` to read the text file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/689/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/689",
"html_url": "https://github.com/huggingface/datasets/pull/689",
"diff_url": "https://github.com/huggingface/datasets/pull/689.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/689.patch",
"merged_at": 1601484331000
} | true |
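Editor's note: a rough, illustrative sketch of the chunked-reading idea described in the PR above (reading a large text file in batches and converting each batch to Arrow). The function name and chunk size are assumptions, not the actual implementation in the text dataset script.

```python
import pyarrow as pa

def iter_text_batches(path, chunksize=10_000):
    """Yield pyarrow tables of at most `chunksize` lines each."""
    with open(path, encoding="utf-8") as f:
        batch = []
        for line in f:
            batch.append(line.rstrip("\n"))
            if len(batch) == chunksize:
                yield pa.table({"text": batch})
                batch = []
        if batch:
            yield pa.table({"text": batch})
```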
https://api.github.com/repos/huggingface/datasets/issues/688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/688/comments | https://api.github.com/repos/huggingface/datasets/issues/688/events | https://github.com/huggingface/datasets/pull/688 | 711,804,828 | MDExOlB1bGxSZXF1ZXN0NDk1MzkwMTc1 | 688 | Disable tokenizers parallelism in multiprocessed map | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,459,614,000 | 1,601,541,946,000 | 1,601,541,945,000 | MEMBER | null | It was reported in #620 that using multiprocessing with a tokenizer shows this message:
```
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
```
This message is shown when TOKENIZERS_PARALLELISM is unset.
Moreover if it is set to `true`, then the program just hangs.
To hide the message (if TOKENIZERS_PARALLELISM is unset) and avoid hanging (if TOKENIZERS_PARALLELISM is `true`), I set TOKENIZERS_PARALLELISM to `false` when forking the process. After forking, it is set back to its original value.
Also I added a warning if TOKENIZERS_PARALLELISM was `true` and is set to `false`:
```
Setting TOKENIZERS_PARALLELISM=false for forked processes.
```
cc @n1t0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/688/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/688",
"html_url": "https://github.com/huggingface/datasets/pull/688",
"diff_url": "https://github.com/huggingface/datasets/pull/688.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/688.patch",
"merged_at": 1601541945000
} | true |
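Editor's note: a minimal sketch of the environment-variable pattern described in the PR above. The context-manager name is hypothetical; this is not the actual implementation in `datasets`.

```python
import os
from contextlib import contextmanager

@contextmanager
def tokenizers_parallelism_disabled():
    previous = os.environ.get("TOKENIZERS_PARALLELISM")
    os.environ["TOKENIZERS_PARALLELISM"] = "false"
    try:
        yield
    finally:
        # Restore the original value (or unset it) once forking is done
        if previous is None:
            os.environ.pop("TOKENIZERS_PARALLELISM", None)
        else:
            os.environ["TOKENIZERS_PARALLELISM"] = previous
```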
https://api.github.com/repos/huggingface/datasets/issues/687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/687/comments | https://api.github.com/repos/huggingface/datasets/issues/687/events | https://github.com/huggingface/datasets/issues/687 | 711,664,810 | MDU6SXNzdWU3MTE2NjQ4MTA= | 687 | `ArrowInvalid` occurs while running `Dataset.map()` function | {
"login": "peinan",
"id": 5601012,
"node_id": "MDQ6VXNlcjU2MDEwMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5601012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peinan",
"html_url": "https://github.com/peinan",
"followers_url": "https://api.github.com/users/peinan/followers",
"following_url": "https://api.github.com/users/peinan/following{/other_user}",
"gists_url": "https://api.github.com/users/peinan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/peinan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peinan/subscriptions",
"organizations_url": "https://api.github.com/users/peinan/orgs",
"repos_url": "https://api.github.com/users/peinan/repos",
"events_url": "https://api.github.com/users/peinan/events{/privacy}",
"received_events_url": "https://api.github.com/users/peinan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi !\r\n\r\nThis is because `encode` expects one single text as input (str), or one tokenized text (List[str]).\r\nI believe that you actually wanted to use `encode_batch` which expects a batch of texts.\r\nHowever this method is only available for our \"fast\" tokenizers (ex: BertTokenizerFast).\r\nBertJapanese is not one of them unfortunately and I don't think it will be added for now (see https://github.com/huggingface/transformers/pull/7141)...\r\ncc @thomwolf for confirmation.\r\n\r\nTherefore what I'd suggest for now is disable batching and process one text at a time using `encode`.\r\nNote that you can make it faster by using multiprocessing:\r\n\r\n```python\r\nnum_proc = None # Specify here the number of processes if you want to use multiprocessing. ex: num_proc = 4\r\nencoded = train_ds.map(\r\n lambda example: {'tokens': t.encode(example['title'], max_length=1000)}, num_proc=num_proc\r\n)\r\n```\r\n",
"Thank you very much for the kind and precise suggestion!\r\nI'm looking forward to seeing BertJapaneseTokenizer built into the \"fast\" tokenizers.\r\n\r\nI tried `map` with multiprocessing as follows, and it worked!\r\n\r\n```python\r\n# There was a Pickle problem if I use `lambda` for multiprocessing\r\ndef encode(examples):\r\n return {'tokens': t.encode(examples['title'], max_length=1000)}\r\n\r\nnum_proc = 8\r\nencoded = train_ds.map(encode, num_proc=num_proc)\r\n```\r\n\r\nThank you again!"
] | 1,601,446,610,000 | 1,601,459,583,000 | 1,601,459,583,000 | NONE | null | It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=None)
# }, num_rows: 99999)
# suggested in #665
class PicklableTokenizer(BertJapaneseTokenizer):
def __getstate__(self):
state = dict(self.__dict__)
state['do_lower_case'] = self.word_tokenizer.do_lower_case
state['never_split'] = self.word_tokenizer.never_split
del state['word_tokenizer']
return state
    def __setstate__(self, state):
do_lower_case = state.pop('do_lower_case')
never_split = state.pop('never_split')
self.__dict__ = state
self.word_tokenizer = MecabTokenizer(
do_lower_case=do_lower_case, never_split=never_split
)
t = PicklableTokenizer.from_pretrained('bert-base-japanese-whole-word-masking')
encoded = train_ds.map(
lambda examples: {'tokens': t.encode(examples['title'], max_length=1000)}, batched=True, batch_size=1000
)
```
Error Message:
```
99% 99/100 [00:22<00:00, 39.07ba/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<timed exec> in <module>
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1242 fn_kwargs=fn_kwargs,
1243 new_fingerprint=new_fingerprint,
-> 1244 update_data=update_data,
1245 )
1246 else:
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
151 "output_all_columns": self._output_all_columns,
152 }
--> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
154 if new_format["columns"] is not None:
155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names))
/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1496 if update_data:
1497 batch = cast_to_python_objects(batch)
-> 1498 writer.write_batch(batch)
1499 if update_data:
1500 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
/usr/local/lib/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
271 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)
272 typed_sequence_examples[col] = typed_sequence
--> 273 pa_table = pa.Table.from_pydict(typed_sequence_examples)
274 self.write_table(pa_table)
275
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
/usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate()
/usr/local/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Column 4 named tokens expected length 999 but got length 1000
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/687/timeline | null | null | null | false |
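Editor's note: a hedged sketch of the batched alternative mentioned in the comments of the record above, using a "fast" tokenizer (BertJapaneseTokenizer did not have a fast version at the time). The model name and column are illustrative.

```python
from datasets import Dataset
from transformers import BertTokenizerFast

ds = Dataset.from_dict({"title": ["a first title", "a second title"]})
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

encoded = ds.map(
    lambda batch: {"tokens": tokenizer(batch["title"], truncation=True, max_length=1000)["input_ids"]},
    batched=True,
    batch_size=1000,
)
```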
https://api.github.com/repos/huggingface/datasets/issues/686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/686/comments | https://api.github.com/repos/huggingface/datasets/issues/686/events | https://github.com/huggingface/datasets/issues/686 | 711,385,739 | MDU6SXNzdWU3MTEzODU3Mzk= | 686 | Dataset browser url is still https://huggingface.co/nlp/viewer/ | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)",
"This was fixed but forgot to close the issue. cc @lhoestq @yjernite \r\n\r\nThanks @jarednielsen!"
] | 1,601,407,312,000 | 1,610,130,566,000 | 1,610,130,566,000 | CONTRIBUTOR | null | Might be worth updating to https://huggingface.co/datasets/viewer/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/686/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/685/comments | https://api.github.com/repos/huggingface/datasets/issues/685/events | https://github.com/huggingface/datasets/pull/685 | 711,182,185 | MDExOlB1bGxSZXF1ZXN0NDk0ODg1NjIz | 685 | Add features parameter to CSV | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,390,616,000 | 1,601,455,196,000 | 1,601,455,194,000 | MEMBER | null | Add support for the `features` parameter when loading a csv dataset:
```python
from datasets import load_dataset, Features
features = Features({...})
csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features)
```
I added tests to make sure that it is also compatible with the caching system
Fix #623 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/685/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/685",
"html_url": "https://github.com/huggingface/datasets/pull/685",
"diff_url": "https://github.com/huggingface/datasets/pull/685.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/685.patch",
"merged_at": 1601455194000
} | true |
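Editor's note: a concrete illustration of the `features` argument added in the PR above. The column names, types, and file path are assumptions about a hypothetical CSV file.

```python
from datasets import load_dataset, Features, Value

features = Features({
    "text": Value("string"),
    "label": Value("int32"),
})
dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features)
```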
https://api.github.com/repos/huggingface/datasets/issues/684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/684/comments | https://api.github.com/repos/huggingface/datasets/issues/684/events | https://github.com/huggingface/datasets/pull/684 | 711,080,947 | MDExOlB1bGxSZXF1ZXN0NDk0ODA2NjE1 | 684 | Fix column order issue in cast | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,383,753,000 | 1,601,395,006,000 | 1,601,395,005,000 | MEMBER | null | Previously, the order of the columns in the features passed to `cast_` mattered.
However even though features passed to `cast_` had the same order as the dataset features, it could fail because the schema that was built was always in alphabetical order.
This issue was reported by @lewtun in #623
To fix that I fixed the schema to follow the order of the arrow table columns.
I also added the possibility to give features that are not ordered the same way as the dataset features. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/684/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/684",
"html_url": "https://github.com/huggingface/datasets/pull/684",
"diff_url": "https://github.com/huggingface/datasets/pull/684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/684.patch",
"merged_at": 1601395005000
} | true |
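Editor's note: a hedged sketch of the `cast_` call affected by the ordering fix above. The toy dataset and label names are illustrative.

```python
from datasets import Dataset, Features, Value, ClassLabel

ds = Dataset.from_dict({"text": ["good", "bad"], "label": [1, 0]})

# The features below are deliberately not in the same order as ds.features
new_features = Features({
    "label": ClassLabel(names=["negative", "positive"]),
    "text": Value("string"),
})
ds.cast_(new_features)
print(ds.features)
```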
https://api.github.com/repos/huggingface/datasets/issues/683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/683/comments | https://api.github.com/repos/huggingface/datasets/issues/683/events | https://github.com/huggingface/datasets/pull/683 | 710,942,704 | MDExOlB1bGxSZXF1ZXN0NDk0NzAwNzY1 | 683 | Fix wrong delimiter in text dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,372,604,000 | 1,620,239,071,000 | 1,601,372,646,000 | MEMBER | null | The delimiter is set to the bell character as it is usually used nowhere in text files.
However, in the text dataset the delimiter was set to `\b`, which is backspace in Python, while the bell character is `\a`.
I replaced \b with \a.
Hopefully it fixes issues mentioned by some users in #622 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/683/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/683",
"html_url": "https://github.com/huggingface/datasets/pull/683",
"diff_url": "https://github.com/huggingface/datasets/pull/683.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/683.patch",
"merged_at": null
} | true |
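Editor's note: a quick demonstration of the two escape sequences discussed in the PR above.

```python
print(repr("\a"))  # '\x07' -> bell character
print(repr("\b"))  # '\x08' -> backspace
```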
https://api.github.com/repos/huggingface/datasets/issues/682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/682/comments | https://api.github.com/repos/huggingface/datasets/issues/682/events | https://github.com/huggingface/datasets/pull/682 | 710,325,399 | MDExOlB1bGxSZXF1ZXN0NDk0MTkzMzEw | 682 | Update navbar chapter titles color | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,303,717,000 | 1,601,314,213,000 | 1,601,314,212,000 | MEMBER | null | Consistency with the color change that was done in transformers at https://github.com/huggingface/transformers/pull/7423
It makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections.
see changes [here](https://691-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/682/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/682",
"html_url": "https://github.com/huggingface/datasets/pull/682",
"diff_url": "https://github.com/huggingface/datasets/pull/682.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/682.patch",
"merged_at": 1601314212000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/681/comments | https://api.github.com/repos/huggingface/datasets/issues/681/events | https://github.com/huggingface/datasets/pull/681 | 710,075,721 | MDExOlB1bGxSZXF1ZXN0NDkzOTkwMjEz | 681 | Adding missing @property (+2 small flake8 fixes). | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,283,233,000 | 1,601,288,773,000 | 1,601,288,769,000 | CONTRIBUTOR | null | Fixes #678 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/681/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/681",
"html_url": "https://github.com/huggingface/datasets/pull/681",
"diff_url": "https://github.com/huggingface/datasets/pull/681.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/681.patch",
"merged_at": 1601288769000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/680/comments | https://api.github.com/repos/huggingface/datasets/issues/680/events | https://github.com/huggingface/datasets/pull/680 | 710,066,138 | MDExOlB1bGxSZXF1ZXN0NDkzOTgyMjY4 | 680 | Fix bug related to boolean in GAP dataset. | {
"login": "otakumesi",
"id": 14996977,
"node_id": "MDQ6VXNlcjE0OTk2OTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/14996977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/otakumesi",
"html_url": "https://github.com/otakumesi",
"followers_url": "https://api.github.com/users/otakumesi/followers",
"following_url": "https://api.github.com/users/otakumesi/following{/other_user}",
"gists_url": "https://api.github.com/users/otakumesi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/otakumesi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/otakumesi/subscriptions",
"organizations_url": "https://api.github.com/users/otakumesi/orgs",
"repos_url": "https://api.github.com/users/otakumesi/repos",
"events_url": "https://api.github.com/users/otakumesi/events{/privacy}",
"received_events_url": "https://api.github.com/users/otakumesi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi !\r\n\r\nGood catch, thanks for creating this PR :)\r\n\r\nCould you also regenerate the metadata for this dataset using \r\n```\r\ndatasets-cli test ./datasets/gap --save_infos --all_configs\r\n```\r\n\r\nThat'd be awesome",
"@lhoestq Thank you for your revieing!!!\r\n\r\nI've performed it and have read CONTRIBUTING.md now!"
] | 1,601,282,379,000 | 1,601,394,887,000 | 1,601,394,887,000 | CONTRIBUTOR | null | ### Why I did
The value in `row["A-coref"]` and `row["B-coref"]` is `'TRUE'` or `'FALSE'`.
The type is `string`, so `bool('FALSE')` evaluates to `True` in Python.
As a result, both values were being transformed into `True`.
So, I fixed this problem.
### What I did
I modified `bool(row["A-coref"])` and `bool(row["B-coref"])` to `row["A-coref"] == "TRUE"` and `row["B-coref"] == "TRUE"`.
Thank you! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/680/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/680",
"html_url": "https://github.com/huggingface/datasets/pull/680",
"diff_url": "https://github.com/huggingface/datasets/pull/680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/680.patch",
"merged_at": 1601394887000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/679/comments | https://api.github.com/repos/huggingface/datasets/issues/679/events | https://github.com/huggingface/datasets/pull/679 | 710,065,838 | MDExOlB1bGxSZXF1ZXN0NDkzOTgyMDMx | 679 | Fix negative ids when slicing with an array | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,282,348,000 | 1,601,304,140,000 | 1,601,304,139,000 | MEMBER | null | ```python
from datasets import Dataset
d = Dataset.from_dict({"a": range(10)})
print(d[[0, -1]])
# OverflowError
```
raises an error because of the negative id.
This PR fixes that.
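A minimal sketch of the kind of index normalization involved (illustrative only, not necessarily the exact code in the patch):
```python
def normalize_indices(indices, length):
    """Map negative indices to their positive equivalents so they fit in an unsigned array."""
    return [i if i >= 0 else length + i for i in indices]

assert normalize_indices([0, -1], 10) == [0, 9]
```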
Fix #668 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/679/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/679",
"html_url": "https://github.com/huggingface/datasets/pull/679",
"diff_url": "https://github.com/huggingface/datasets/pull/679.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/679.patch",
"merged_at": 1601304139000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/678/comments | https://api.github.com/repos/huggingface/datasets/issues/678/events | https://github.com/huggingface/datasets/issues/678 | 710,060,497 | MDU6SXNzdWU3MTAwNjA0OTc= | 678 | The download instructions for c4 datasets are not contained in the error message | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Good catch !\r\nIndeed the `@property` is missing.\r\n\r\nFeel free to open a PR :)",
"Also not that C4 is a dataset that needs an Apache Beam runtime to be generated.\r\nFor example Dataflow, Spark, Flink etc.\r\n\r\nUsually we generate the dataset on our side once and for all, but we haven't done it for C4 yet.\r\nMore info about beam datasets [here](https://huggingface.co/docs/datasets/beam_dataset.html)\r\n\r\nLet me know if you have any questions"
] | 1,601,281,854,000 | 1,601,288,769,000 | 1,601,288,769,000 | CONTRIBUTOR | null | The manual download instructions are not clear
```The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff8c5969760>>.
Manual data can be loaded with `datasets.load_dataset(c4, data_dir='<path/to/manual/data>')
```
Either `@property` could be added to `C4.manual_download_instructions` (or make it a real property), or the `manual_download_instructions` function needs to be called, I think.
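For illustration, the difference the missing `@property` makes (hypothetical class names, not the actual C4 script):
```python
class C4Like:
    def manual_download_instructions(self):
        return "Please download the prepared C4 files manually."

class C4Fixed:
    @property
    def manual_download_instructions(self):
        return "Please download the prepared C4 files manually."

# Without the decorator, the error message interpolates a bound-method repr;
# with it, the actual instructions text is shown.
print(C4Like().manual_download_instructions)   # <bound method C4Like.manual_download_instructions of ...>
print(C4Fixed().manual_download_instructions)  # Please download the prepared C4 files manually.
```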
Let me know if you want a PR for this, but I'm not sure which possible fix is the correct one. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/678/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/677/comments | https://api.github.com/repos/huggingface/datasets/issues/677/events | https://github.com/huggingface/datasets/pull/677 | 710,055,239 | MDExOlB1bGxSZXF1ZXN0NDkzOTczNDE3 | 677 | Move cache dir root creation in builder's init | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,281,366,000 | 1,601,304,163,000 | 1,601,304,162,000 | MEMBER | null | We use lock files in the builder initialization but sometimes the cache directory where they're supposed to be was not created. To fix that I moved the builder's cache dir root creation in the builder's init.
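A minimal sketch of the idea, using hypothetical attribute names (the real change lives in `builder.py`):
```python
import os
from filelock import FileLock

class BuilderSketch:
    def __init__(self, cache_dir_root: str):
        # Create the cache root up front so the lock file never points into a missing directory.
        os.makedirs(cache_dir_root, exist_ok=True)
        self._cache_dir_root = cache_dir_root
        self._lock = FileLock(os.path.join(cache_dir_root, "builder.lock"))
```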
Fix #671 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/677/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/677",
"html_url": "https://github.com/huggingface/datasets/pull/677",
"diff_url": "https://github.com/huggingface/datasets/pull/677.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/677.patch",
"merged_at": 1601304162000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/676/comments | https://api.github.com/repos/huggingface/datasets/issues/676/events | https://github.com/huggingface/datasets/issues/676 | 710,014,319 | MDU6SXNzdWU3MTAwMTQzMTk= | 676 | train_test_split returns empty dataset item | {
"login": "mojave-pku",
"id": 26648528,
"node_id": "MDQ6VXNlcjI2NjQ4NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/26648528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mojave-pku",
"html_url": "https://github.com/mojave-pku",
"followers_url": "https://api.github.com/users/mojave-pku/followers",
"following_url": "https://api.github.com/users/mojave-pku/following{/other_user}",
"gists_url": "https://api.github.com/users/mojave-pku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mojave-pku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mojave-pku/subscriptions",
"organizations_url": "https://api.github.com/users/mojave-pku/orgs",
"repos_url": "https://api.github.com/users/mojave-pku/repos",
"events_url": "https://api.github.com/users/mojave-pku/events{/privacy}",
"received_events_url": "https://api.github.com/users/mojave-pku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The problem still exists after removing the cache files.",
"Can you reproduce this example in a Colab so we can investigate? (or give more information on your software/hardware config)",
"Thanks for reporting.\r\nI just found the issue, I'm creating a PR",
"We'll do a release pretty soon to include the fix :)\r\nIn the meantime you can install the lib from source if you want to "
] | 1,601,277,573,000 | 1,602,078,393,000 | 1,602,077,886,000 | NONE | null | I try to split my dataset with `train_test_split`, but after that the items in the `train` and `test` `Dataset` are empty.
The code:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
print(yelp_data['test'])
print(yelp_data['test'][0])
```
The outputs:
```
{'stars': 2.0, 'text': 'xxxx'}
Loading cached split indices for dataset at /home/ssd4/huanglianzhe/test_yelp/cache-f9b22d8b9d5a7346.arrow and /home/ssd4/huanglianzhe/test_yelp/cache-4aa26fa4005059d1.arrow
DatasetDict({'train': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 7219009), 'test': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)})
Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)
{} # yelp_data['test'][0] is empty
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/676/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/676/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/675/comments | https://api.github.com/repos/huggingface/datasets/issues/675/events | https://github.com/huggingface/datasets/issues/675 | 709,818,725 | MDU6SXNzdWU3MDk4MTg3MjU= | 675 | Add custom dataset to NLP? | {
"login": "timpal0l",
"id": 6556710,
"node_id": "MDQ6VXNlcjY1NTY3MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6556710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timpal0l",
"html_url": "https://github.com/timpal0l",
"followers_url": "https://api.github.com/users/timpal0l/followers",
"following_url": "https://api.github.com/users/timpal0l/following{/other_user}",
"gists_url": "https://api.github.com/users/timpal0l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timpal0l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timpal0l/subscriptions",
"organizations_url": "https://api.github.com/users/timpal0l/orgs",
"repos_url": "https://api.github.com/users/timpal0l/repos",
"events_url": "https://api.github.com/users/timpal0l/events{/privacy}",
"received_events_url": "https://api.github.com/users/timpal0l/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yes you can have a look here: https://huggingface.co/docs/datasets/loading_datasets.html#csv-files",
"No activity, closing"
] | 1,601,241,770,000 | 1,603,184,929,000 | 1,603,184,929,000 | CONTRIBUTOR | null | Is it possible to add a custom dataset such as a .csv to the NLP library?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/675/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/674/comments | https://api.github.com/repos/huggingface/datasets/issues/674/events | https://github.com/huggingface/datasets/issues/674 | 709,661,006 | MDU6SXNzdWU3MDk2NjEwMDY= | 674 | load_dataset() won't download in Windows | {
"login": "ThisDavehead",
"id": 34422661,
"node_id": "MDQ6VXNlcjM0NDIyNjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/34422661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThisDavehead",
"html_url": "https://github.com/ThisDavehead",
"followers_url": "https://api.github.com/users/ThisDavehead/followers",
"following_url": "https://api.github.com/users/ThisDavehead/following{/other_user}",
"gists_url": "https://api.github.com/users/ThisDavehead/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ThisDavehead/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ThisDavehead/subscriptions",
"organizations_url": "https://api.github.com/users/ThisDavehead/orgs",
"repos_url": "https://api.github.com/users/ThisDavehead/repos",
"events_url": "https://api.github.com/users/ThisDavehead/events{/privacy}",
"received_events_url": "https://api.github.com/users/ThisDavehead/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I have the same issue. Tried to download a few of them and not a single one is downloaded successfully.\r\n\r\nThis is the output:\r\n```\r\n>>> dataset = load_dataset('blended_skill_talk', split='train')\r\nUsing custom data configuration default <-- This step never ends\r\n```",
"This was fixed in #644 \r\nI'll do a new release soon :)\r\n\r\nIn the meantime you can run it by installing from source",
"Closing since version 1.1.0 got released with Windows support :) \r\nLet me know if it works for you now"
] | 1,601,178,985,000 | 1,601,886,498,000 | 1,601,886,498,000 | NONE | null | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've waited upwards of 18 hours to download the 'multi-news' dataset (which isn't very big), and still nothing. I've tried running it through different IDE's and the command line, but it had the same behavior. I've also tried it with all virus and malware protection turned off. I've made sure python and all IDE's are exceptions to the firewall and all the requisite permissions are enabled.
Additionally, I checked to see if other packages could download content such as an nltk corpus, and they could. I've also run the same script using Ubuntu and it downloaded fine (and quickly). When I copied the downloaded datasets from my Ubuntu drive to my Windows .cache folder it worked fine by reusing the already-downloaded dataset, but it's cumbersome to do that for every dataset I want to try in my Windows environment.
Could this be a bug, or is there something I'm doing wrong or not thinking of?
Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/674/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/674/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/673/comments | https://api.github.com/repos/huggingface/datasets/issues/673/events | https://github.com/huggingface/datasets/issues/673 | 709,603,989 | MDU6SXNzdWU3MDk2MDM5ODk= | 673 | blog_authorship_corpus crashed | {
"login": "Moshiii",
"id": 7553188,
"node_id": "MDQ6VXNlcjc1NTMxODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7553188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moshiii",
"html_url": "https://github.com/Moshiii",
"followers_url": "https://api.github.com/users/Moshiii/followers",
"following_url": "https://api.github.com/users/Moshiii/following{/other_user}",
"gists_url": "https://api.github.com/users/Moshiii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moshiii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moshiii/subscriptions",
"organizations_url": "https://api.github.com/users/Moshiii/orgs",
"repos_url": "https://api.github.com/users/Moshiii/repos",
"events_url": "https://api.github.com/users/Moshiii/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moshiii/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"Thanks for reporting !\r\nWe'll free some memory"
] | 1,601,151,328,000 | 1,601,280,290,000 | null | NONE | null | This is just to report that when I pick blog_authorship_corpus in
https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus
I get this:
![image](https://user-images.githubusercontent.com/7553188/94349542-4364f300-0013-11eb-897d-b25660a449f0.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/673/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/672 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/672/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/672/comments | https://api.github.com/repos/huggingface/datasets/issues/672/events | https://github.com/huggingface/datasets/issues/672 | 709,575,527 | MDU6SXNzdWU3MDk1NzU1Mjc= | 672 | Questions about XSUM | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"We should try to regenerate the data using the official script.\r\nBut iirc that's what we used in the first place, so not sure why it didn't match in the first place.\r\n\r\nI'll let you know when the dataset is updated",
"Thanks, looking forward to hearing your update on this thread. \r\n\r\nThis is a blocking issue for us; would appreciate any progress on this front. We can also help with the fix, if you deem it appropriately. ",
"I just started the generation on my side, I'll let you know how it goes :) ",
"Hmm after a first run I'm still missing 136668/226711 urls.\r\nI'll relaunch it tomorrow to try to get the remaining ones.",
"Update: I'm missing 36/226711 urls but I haven't managed to download them yet",
"Thanks! That sounds like a reasonable number! ",
"So I managed to download them all but when parsing only 226,181/226,711 worked.\r\nNot sure if it's worth digging and debugging parsing at this point :/ ",
"Maybe @sshleifer can help, I think he's already played with xsum at one point",
"Thanks @lhoestq\r\nIt would be great to improve coverage, but IDs are the really crucial part for us. We'd really appreciate an update to the dataset with IDs either way!",
"I gave up at an even earlier point. The dataset I use has 204,017 train examples.",
"@lhoestq @sshleifer like @jbragg said earlier, the main issue for us is that the current XSUM dataset (in your package) does not have IDs suggested by the original dataset ([here is the file](https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json).) Would appreciate if you update the XSUM dataset to include the instance IDs. \r\n\r\nThe missing instances is also a problem, but likely not worth pursuing given its relatively small scale. ",
">So I managed to download them all but when parsing only 226,181/226,711 worked.\r\n\r\n@lhoestq any chance we could update the HF-hosted dataset with the IDs in your new version? Happy to help if there's something I can do.",
"Well I couldn't parse what I downloaded.\r\nUnfortunately I think I won't be able to take a look at it this week.\r\nI can try to send you what I got if you want to give it a shot @jbragg \r\nOtherwise feel free to re-run the xsum download script, maybe you'll be luckier than me"
] | 1,601,140,584,000 | 1,603,185,367,000 | null | CONTRIBUTOR | null | Hi there,
I'm looking into your `xsum` dataset and I have several questions about it.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 204017)
>>> data['test']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 11333)
```
The first issue is that the instance counts don't match what I see on [the dataset's website](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset#what-builds-the-xsum-dataset) (11,333 vs 11,334 for the test set; 204,017 vs 204,045 for the training set):
```
… training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) set.
```
Any thoughts why? Perhaps @mariamabarham could help here, since she recently had a PR on this dataset: https://github.com/huggingface/datasets/pull/289 (reviewed by @patrickvonplaten).
Another issue is that the instances don't seem to have IDs. The original dataset provides IDs for the instances: https://github.com/EdinburghNLP/XSum/blob/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json but to be able to use them, the dataset sizes need to match.
CC @jbragg
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/672/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/671/comments | https://api.github.com/repos/huggingface/datasets/issues/671/events | https://github.com/huggingface/datasets/issues/671 | 709,093,151 | MDU6SXNzdWU3MDkwOTMxNTE= | 671 | [BUG] No such file or directory | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"repos_url": "https://api.github.com/users/jbragg/repos",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,051,934,000 | 1,601,304,162,000 | 1,601,304,162,000 | CONTRIBUTOR | null | This happens when both
1. The Hugging Face datasets cache dir does not exist
2. You try to load a local dataset script
builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist
https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177
Tested on v1.0.2
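A generic reproduction of the failure mode, with a hypothetical path (not tied to the datasets code):
```python
import os
from filelock import FileLock

missing_dir = "/tmp/nonexistent/datasets-cache"  # hypothetical, does not exist
try:
    with FileLock(os.path.join(missing_dir, "test.lock")):
        pass
except FileNotFoundError as err:
    print(err)  # [Errno 2] No such file or directory: '/tmp/nonexistent/datasets-cache/test.lock'
```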
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/671/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/670/comments | https://api.github.com/repos/huggingface/datasets/issues/670/events | https://github.com/huggingface/datasets/pull/670 | 709,061,231 | MDExOlB1bGxSZXF1ZXN0NDkzMTc4OTQw | 670 | Fix SQuAD metric kwargs description | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,601,050,137,000 | 1,601,395,059,000 | 1,601,395,058,000 | MEMBER | null | The `answer_start` field was missing in the kwargs docstring.
This should fix #657
FYI, another fix was proposed by @tshrjn in #658, which suggests removing this field.
However, IMO `answer_start` is useful for matching the SQuAD dataset format for consistency, even though it is not used in the metric computation. I think it's better to keep it this way, so that you can just pass `references=squad["answers"]` to `.compute()`.
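For reference, a sketch of what calling the metric looks like when `answer_start` is kept in the references (values are illustrative; the field layout follows the SQuAD format):
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "001", "prediction_text": "Denver Broncos"}]
references = [{"id": "001", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```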
Let me know what sounds best to you.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/670/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/670",
"html_url": "https://github.com/huggingface/datasets/pull/670",
"diff_url": "https://github.com/huggingface/datasets/pull/670.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/670.patch",
"merged_at": 1601395057000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/669/comments | https://api.github.com/repos/huggingface/datasets/issues/669/events | https://github.com/huggingface/datasets/issues/669 | 708,857,595 | MDU6SXNzdWU3MDg4NTc1OTU= | 669 | How to skip a example when running dataset.map | {
"login": "xixiaoyao",
"id": 24541791,
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xixiaoyao",
"html_url": "https://github.com/xixiaoyao",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @xixiaoyao,\r\nDepending on what you want to do you can:\r\n- use a first step of `filter` to filter out the invalid examples: https://huggingface.co/docs/datasets/processing.html#filtering-rows-select-and-filter\r\n- or directly detect the invalid examples inside the callable used with `map` and return them unchanged or even remove them at the same time if you are using `map` in batched mode. Here is an example where we use `map` in batched mode to add new rows on the fly but you can also use it to remove examples on the fly (that's what `filter` actually do under-the-hood): https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset",
"Closing this one.\r\nFeel free to re-open if you have other questions"
] | 1,601,032,673,000 | 1,601,915,293,000 | 1,601,915,293,000 | NONE | null | In my processing function, I process examples and detect some invalid ones, which I do not want added to the train dataset. However, I did not find how to skip these recognized invalid examples when doing dataset.map. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/669/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/668/comments | https://api.github.com/repos/huggingface/datasets/issues/668/events | https://github.com/huggingface/datasets/issues/668 | 708,310,956 | MDU6SXNzdWU3MDgzMTA5NTY= | 668 | OverflowError when slicing with an array containing negative ids | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,600,964,834,000 | 1,601,304,139,000 | 1,601,304,139,000 | MEMBER | null | ```python
from datasets import Dataset
d = Dataset.from_dict({"a": range(10)})
print(d[0])
# {'a': 0}
print(d[-1])
# {'a': 9}
print(d[[0, -1]])
# OverflowError
```
results in
```
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-5-863dc3555598> in <module>
----> 1 d[[0, -1]]
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key)
1070 format_columns=self._format_columns,
1071 output_all_columns=self._output_all_columns,
-> 1072 format_kwargs=self._format_kwargs,
1073 )
1074
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1025 indices = key
1026
-> 1027 indices_array = pa.array([int(i) for i in indices], type=pa.uint64())
1028
1029 # Check if we need to convert indices
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
OverflowError: can't convert negative value to unsigned int
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/668/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/667/comments | https://api.github.com/repos/huggingface/datasets/issues/667/events | https://github.com/huggingface/datasets/issues/667 | 708,258,392 | MDU6SXNzdWU3MDgyNTgzOTI= | 667 | Loss not decrease with Datasets and Transformers | {
"login": "wangcongcong123",
"id": 23032865,
"node_id": "MDQ6VXNlcjIzMDMyODY1",
"avatar_url": "https://avatars.githubusercontent.com/u/23032865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangcongcong123",
"html_url": "https://github.com/wangcongcong123",
"followers_url": "https://api.github.com/users/wangcongcong123/followers",
"following_url": "https://api.github.com/users/wangcongcong123/following{/other_user}",
"gists_url": "https://api.github.com/users/wangcongcong123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wangcongcong123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wangcongcong123/subscriptions",
"organizations_url": "https://api.github.com/users/wangcongcong123/orgs",
"repos_url": "https://api.github.com/users/wangcongcong123/repos",
"events_url": "https://api.github.com/users/wangcongcong123/events{/privacy}",
"received_events_url": "https://api.github.com/users/wangcongcong123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"And I tested it on T5ForConditionalGeneration, that works no problem.",
"Hi did you manage to fix your issue ?\r\n\r\nIf so feel free to share your fix and close this thread"
] | 1,600,960,483,000 | 1,609,531,285,000 | 1,609,531,285,000 | NONE | null | HI,
The following script is used to fine-tune a BertForSequenceClassification model on SST2.
The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb), which presents an example of fine-tuning BertForQuestionAnswering on the SQuAD dataset. In that colab, the loss works fine. When I adapt it to SST2, the loss fails to decrease as it should. I attach the adapted script below and would appreciate anyone pointing out what I missed.
```python
import torch
from datasets import load_dataset
from transformers import BertForSequenceClassification
from transformers import BertTokenizerFast
# Load our training dataset and tokenizer
dataset = load_dataset("glue", 'sst2')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
del dataset["test"] # let's remove it in this demo
# Tokenize our training dataset
def convert_to_features(example_batch):
encodings = tokenizer(example_batch["sentence"])
encodings.update({"labels": example_batch["label"]})
return encodings
encoded_dataset = dataset.map(convert_to_features, batched=True)
# Format our dataset to outputs torch.Tensor to train a pytorch model
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'labels']
encoded_dataset.set_format(type='torch', columns=columns)
# Instantiate a PyTorch Dataloader around our dataset
# Let's do dynamic batching (pad on the fly with our own collate_fn)
def collate_fn(examples):
return tokenizer.pad(examples, return_tensors='pt')
dataloader = torch.utils.data.DataLoader(encoded_dataset['train'], collate_fn=collate_fn, batch_size=8)
# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Let's load a pretrained Bert model and a simple optimizer
model = BertForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
model.train().to(device)
for i, batch in enumerate(dataloader):
batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
model.zero_grad()
print(f'Step {i} - loss: {loss:.3}')
```
In case needed.
- datasets == 1.0.2
- transformers == 3.2.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/667/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/666/comments | https://api.github.com/repos/huggingface/datasets/issues/666/events | https://github.com/huggingface/datasets/issues/666 | 707,608,578 | MDU6SXNzdWU3MDc2MDg1Nzg= | 666 | Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT? | {
"login": "wahab4114",
"id": 31090427,
"node_id": "MDQ6VXNlcjMxMDkwNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/31090427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wahab4114",
"html_url": "https://github.com/wahab4114",
"followers_url": "https://api.github.com/users/wahab4114/followers",
"following_url": "https://api.github.com/users/wahab4114/following{/other_user}",
"gists_url": "https://api.github.com/users/wahab4114/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wahab4114/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wahab4114/subscriptions",
"organizations_url": "https://api.github.com/users/wahab4114/orgs",
"repos_url": "https://api.github.com/users/wahab4114/repos",
"events_url": "https://api.github.com/users/wahab4114/events{/privacy}",
"received_events_url": "https://api.github.com/users/wahab4114/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"No they are other similar copies but they are not provided by the official Bert models authors."
] | 1,600,887,745,000 | 1,603,811,965,000 | 1,603,811,965,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/666/timeline | null | null | null | false |
|
https://api.github.com/repos/huggingface/datasets/issues/665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/665/comments | https://api.github.com/repos/huggingface/datasets/issues/665/events | https://github.com/huggingface/datasets/issues/665 | 707,037,738 | MDU6SXNzdWU3MDcwMzc3Mzg= | 665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | {
"login": "xixiaoyao",
"id": 24541791,
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xixiaoyao",
"html_url": "https://github.com/xixiaoyao",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi !\r\nIt works on my side with both the LongFormerTokenizer and the LongFormerTokenizerFast.\r\n\r\nWhich version of transformers/datasets are you using ?",
"transformers and datasets are both the latest",
"Then I guess you need to give us more informations on your setup (OS, python, GPU, etc) or a Google Colab reproducing the error for us to be able to debug this error.",
"And your version of `dill` if possible :)",
"I have the same issue with `transformers/BertJapaneseTokenizer`.\r\n\r\n\r\n\r\n```python\r\n# train_ds = Dataset(features: {\r\n# 'title': Value(dtype='string', id=None), \r\n# 'score': Value(dtype='float64', id=None)\r\n# }, num_rows: 99999)\r\n\r\nt = BertJapaneseTokenizer.from_pretrained('bert-base-japanese-whole-word-masking')\r\nencoded = train_ds.map(lambda examples: {'tokens': t.encode(examples['title'])}, batched=True)\r\n```\r\n\r\n<details><summary>Error Message</summary>\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-35-2b7d66b291c1> in <module>\r\n 2 \r\n 3 encoded = train_ds.map(lambda examples:\r\n----> 4 {'tokens': t.encode(examples['title'])}, batched=True)\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1242 fn_kwargs=fn_kwargs,\r\n 1243 new_fingerprint=new_fingerprint,\r\n-> 1244 update_data=update_data,\r\n 1245 )\r\n 1246 else:\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 151 \"output_all_columns\": self._output_all_columns,\r\n 152 }\r\n--> 153 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 154 if new_format[\"columns\"] is not None:\r\n 155 new_format[\"columns\"] = list(set(new_format[\"columns\"]) & set(out.column_names))\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)\r\n 156 kwargs_for_fingerprint[\"fingerprint_name\"] = fingerprint_name\r\n 157 kwargs[fingerprint_name] = update_fingerprint(\r\n--> 158 self._fingerprint, transform, kwargs_for_fingerprint\r\n 159 )\r\n 160 \r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)\r\n 103 for key in sorted(transform_args):\r\n 104 hasher.update(key)\r\n--> 105 hasher.update(transform_args[key])\r\n 106 return hasher.hexdigest()\r\n 107 \r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in update(self, value)\r\n 55 def update(self, value):\r\n 56 self.m.update(f\"=={type(value)}==\".encode(\"utf8\"))\r\n---> 57 self.m.update(self.hash(value).encode(\"utf-8\"))\r\n 58 \r\n 59 def hexdigest(self):\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in hash(cls, value)\r\n 51 return cls.dispatch[type(value)](cls, value)\r\n 52 else:\r\n---> 53 return cls.hash_default(value)\r\n 54 \r\n 55 def update(self, value):\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in hash_default(cls, value)\r\n 44 @classmethod\r\n 45 def hash_default(cls, value):\r\n---> 46 return cls.hash_bytes(dumps(value))\r\n 47 \r\n 48 @classmethod\r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/utils/py_utils.py in dumps(obj)\r\n 365 file = StringIO()\r\n 366 with _no_cache_fields(obj):\r\n--> 367 dump(obj, file)\r\n 368 return file.getvalue()\r\n 369 \r\n\r\n/usr/local/lib/python3.6/site-packages/datasets/utils/py_utils.py in dump(obj, file)\r\n 337 def dump(obj, file):\r\n 338 \"\"\"pickle an object to a file\"\"\"\r\n--> 339 Pickler(file, recurse=True).dump(obj)\r\n 340 return\r\n 341 \r\n\r\n/usr/local/lib/python3.6/site-packages/dill/_dill.py in dump(self, obj)\r\n 444 raise 
PicklingError(msg)\r\n 445 else:\r\n--> 446 StockPickler.dump(self, obj)\r\n 447 stack.clear() # clear record of 'recursion-sensitive' pickled objects\r\n 448 return\r\n\r\n/usr/local/lib/python3.6/pickle.py in dump(self, obj)\r\n 407 if self.proto >= 4:\r\n 408 self.framer.start_framing()\r\n--> 409 self.save(obj)\r\n 410 self.write(STOP)\r\n 411 self.framer.end_framing()\r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n/usr/local/lib/python3.6/site-packages/dill/_dill.py in save_function(pickler, obj)\r\n 1436 globs, obj.__name__,\r\n 1437 obj.__defaults__, obj.__closure__,\r\n-> 1438 obj.__dict__, fkwdefaults), obj=obj)\r\n 1439 else:\r\n 1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False)\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 608 else:\r\n 609 save(func)\r\n--> 610 save(args)\r\n 611 write(REDUCE)\r\n 612 \r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n/usr/local/lib/python3.6/pickle.py in save_tuple(self, obj)\r\n 749 write(MARK)\r\n 750 for element in obj:\r\n--> 751 save(element)\r\n 752 \r\n 753 if id(obj) in memo:\r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n/usr/local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_dict(self, obj)\r\n 819 \r\n 820 self.memoize(obj)\r\n--> 821 self._batch_setitems(obj.items())\r\n 822 \r\n 823 dispatch[dict] = save_dict\r\n\r\n/usr/local/lib/python3.6/pickle.py in _batch_setitems(self, items)\r\n 850 k, v = tmp[0]\r\n 851 save(k)\r\n--> 852 save(v)\r\n 853 write(SETITEM)\r\n 854 # else tmp is empty, and we're done\r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 519 \r\n 520 # Save the reduce() output and finally memoize the object\r\n--> 521 self.save_reduce(obj=obj, *rv)\r\n 522 \r\n 523 def persistent_id(self, obj):\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 632 \r\n 633 if state is not None:\r\n--> 634 save(state)\r\n 635 write(BUILD)\r\n 636 \r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n/usr/local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_dict(self, obj)\r\n 819 \r\n 820 self.memoize(obj)\r\n--> 821 
self._batch_setitems(obj.items())\r\n 822 \r\n 823 dispatch[dict] = save_dict\r\n\r\n/usr/local/lib/python3.6/pickle.py in _batch_setitems(self, items)\r\n 845 for k, v in tmp:\r\n 846 save(k)\r\n--> 847 save(v)\r\n 848 write(SETITEMS)\r\n 849 elif n:\r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 519 \r\n 520 # Save the reduce() output and finally memoize the object\r\n--> 521 self.save_reduce(obj=obj, *rv)\r\n 522 \r\n 523 def persistent_id(self, obj):\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 632 \r\n 633 if state is not None:\r\n--> 634 save(state)\r\n 635 write(BUILD)\r\n 636 \r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n/usr/local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n/usr/local/lib/python3.6/pickle.py in save_dict(self, obj)\r\n 819 \r\n 820 self.memoize(obj)\r\n--> 821 self._batch_setitems(obj.items())\r\n 822 \r\n 823 dispatch[dict] = save_dict\r\n\r\n/usr/local/lib/python3.6/pickle.py in _batch_setitems(self, items)\r\n 845 for k, v in tmp:\r\n 846 save(k)\r\n--> 847 save(v)\r\n 848 write(SETITEMS)\r\n 849 elif n:\r\n\r\n/usr/local/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)\r\n 494 reduce = getattr(obj, \"__reduce_ex__\", None)\r\n 495 if reduce is not None:\r\n--> 496 rv = reduce(self.proto)\r\n 497 else:\r\n 498 reduce = getattr(obj, \"__reduce__\", None)\r\n\r\nTypeError: can't pickle Tagger objects\r\n```\r\n\r\n</details>\r\n\r\ntrainsformers: 2.10.0\r\ndatasets: 1.0.2\r\ndill: 0.3.2\r\npython: 3.6.8\r\n\r\nOS: ubuntu 16.04 (Docker Image) on [Deep Learning VM](https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning) (GCP)\r\nGPU: Tesla P100 (CUDA 10)\r\n",
"> I have the same issue with `transformers/BertJapaneseTokenizer`.\r\n\r\nIt looks like it this tokenizer is not supported unfortunately.\r\nThis is because `t.word_tokenizer.mecab` is a `fugashi.fugashi.GenericTagger` which is not compatible with pickle nor dill.\r\n\r\nWe need objects passes to `map` to be picklable for our caching system to work properly.\r\nHere it crashes because the caching system is not able to pickle the GenericTagger.\r\n\r\n\\> Maybe you can create an issue on [fugashi](https://github.com/polm/fugashi/issues) 's repo and ask to make `fugashi.fugashi.GenericTagger` compatible with pickle ?\r\n\r\nWhat you can do in the meantime is use a picklable wrapper of the tokenizer:\r\n\r\n\r\n```python\r\nfrom transformers import BertJapaneseTokenizer, MecabTokenizer\r\n\r\nclass PicklableTokenizer(BertJapaneseTokenizer):\r\n\r\n def __getstate__(self):\r\n state = dict(self.__dict__)\r\n state[\"do_lower_case\"] = self.word_tokenizer.do_lower_case\r\n state[\"never_split\"] = self.word_tokenizer.never_split \r\n del state[\"word_tokenizer\"]\r\n return state\r\n\r\n def __setstate__(self, state):\r\n do_lower_case = state.pop(\"do_lower_case\")\r\n never_split = state.pop(\"never_split\")\r\n self.__dict__ = state\r\n self.word_tokenizer = MecabTokenizer(\r\n do_lower_case=do_lower_case, never_split=never_split)\r\n )\r\n\r\nt = PicklableTokenizer.from_pretrained(\"cl-tohoku/bert-base-japanese-whole-word-masking\")\r\nencoded = train_ds.map(lambda examples: {'tokens': t.encode(examples['title'])}, batched=True) # it works\r\n```",
"We can also update the `BertJapaneseTokenizer` in `transformers` as you just shown @lhoestq to make it compatible with pickle. It will be faster than asking on fugashi 's repo and good for the other users of `transformers` as well.\r\n\r\nI'm currently working on `transformers` I'll include it in the https://github.com/huggingface/transformers/pull/7141 PR and the next release of `transformers`.",
"Thank you for the rapid and polite response!\r\n\r\n@lhoestq Thanks for the suggestion! I've passed the pickle phase, but another `ArrowInvalid` problem occored. I created another issue #687 .\r\n\r\n@thomwolf Wow, really fast work. I'm looking forward to the next release π€"
] | 1,600,835,294,000 | 1,602,149,536,000 | 1,602,149,536,000 | NONE | null | I load the SQuAD dataset, then want to process the data using the following function with the `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode_plus(input_pairs, pad_to_max_length=True, max_length=512)
context_encodings = tokenizer.encode_plus(example['context'])
# Compute start and end tokens for labels using Transformers's fast tokenizers alignement methodes.
# this will give us the position of answer span in the context text
start_idx, end_idx = get_correct_alignement(example['context'], example['answers'])
start_positions_context = context_encodings.char_to_token(start_idx)
end_positions_context = context_encodings.char_to_token(end_idx-1)
# here we will compute the start and end position of the answer in the whole example
# as the example is encoded like this <s> question</s></s> context</s>
# and we know the postion of the answer in the context
# we can just find out the index of the sep token and then add that to position + 1 (+1 because there are two sep tokens)
# this will give us the position of the answer span in whole example
sep_idx = encodings['input_ids'].index(tokenizer.sep_token_id)
start_positions = start_positions_context + sep_idx + 1
end_positions = end_positions_context + sep_idx + 1
if end_positions > 512:
start_positions, end_positions = 0, 0
encodings.update({'start_positions': start_positions,
'end_positions': end_positions,
'attention_mask': encodings['attention_mask']})
return encodings
```
Then I run `dataset.map(convert_to_features)`, and it raises:
```
In [59]: a.map(convert_to_features)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-c453b508761d> in <module>
----> 1 a.map(convert_to_features)
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1242 fn_kwargs=fn_kwargs,
1243 new_fingerprint=new_fingerprint,
-> 1244 update_data=update_data,
1245 )
1246 else:
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
151 "output_all_columns": self._output_all_columns,
152 }
--> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
154 if new_format["columns"] is not None:
155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names))
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
157 kwargs[fingerprint_name] = update_fingerprint(
--> 158 self._fingerprint, transform, kwargs_for_fingerprint
159 )
160
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
103 for key in sorted(transform_args):
104 hasher.update(key)
--> 105 hasher.update(transform_args[key])
106 return hasher.hexdigest()
107
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update(self, value)
55 def update(self, value):
56 self.m.update(f"=={type(value)}==".encode("utf8"))
---> 57 self.m.update(self.hash(value).encode("utf-8"))
58
59 def hexdigest(self):
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/opt/conda/lib/python3.7/pickle.py in dump(self, obj)
435 if self.proto >= 4:
436 self.framer.start_framing()
--> 437 self.save(obj)
438 self.write(STOP)
439 self.framer.end_framing()
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_function(pickler, obj)
1436 globs, obj.__name__,
1437 obj.__defaults__, obj.__closure__,
-> 1438 obj.__dict__, fkwdefaults), obj=obj)
1439 else:
1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False)
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
636 else:
637 save(func)
--> 638 save(args)
639 write(REDUCE)
640
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/pickle.py in save_tuple(self, obj)
787 write(MARK)
788 for element in obj:
--> 789 save(element)
790
791 if id(obj) in memo:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
522 reduce = getattr(obj, "__reduce_ex__", None)
523 if reduce is not None:
--> 524 rv = reduce(self.proto)
525 else:
526 reduce = getattr(obj, "__reduce__", None)
TypeError: can't pickle Tokenizer objects
```
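From the traceback, the failure seems to happen before any example is processed: `datasets` fingerprints the function passed to `map` by pickling it with `dill` (the `fingerprint.py` frames above), so everything the function references, including the global `tokenizer`, must be picklable. A rough, hypothetical way to reproduce just that step in isolation:

```python
# Hypothetical check (not part of the datasets API): mimic the fingerprinting
# step, which pickles the mapped function with dill; with recurse=True dill
# also tries to pickle the globals the function references, here the tokenizer.
import dill

try:
    dill.dumps(convert_to_features, recurse=True)
except TypeError as e:
    print(e)  # in this environment: "can't pickle Tokenizer objects"
```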
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/665/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/665/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/664/comments | https://api.github.com/repos/huggingface/datasets/issues/664/events | https://github.com/huggingface/datasets/issues/664 | 707,017,791 | MDU6SXNzdWU3MDcwMTc3OTE= | 664 | load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable | {
"login": "xixiaoyao",
"id": 24541791,
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xixiaoyao",
"html_url": "https://github.com/xixiaoyao",
"followers_url": "https://api.github.com/users/xixiaoyao/followers",
"following_url": "https://api.github.com/users/xixiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/xixiaoyao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xixiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xixiaoyao/subscriptions",
"organizations_url": "https://api.github.com/users/xixiaoyao/orgs",
"repos_url": "https://api.github.com/users/xixiaoyao/repos",
"events_url": "https://api.github.com/users/xixiaoyao/events{/privacy}",
"received_events_url": "https://api.github.com/users/xixiaoyao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi !\r\nThanks for reporting.\r\nIt looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.\r\n\r\nCould you check that there exist at least one dataset builder class ?",
"Hi @xixiaoyao did you manage to fix your issue ?",
"No activity, closing"
] | 1,600,833,216,000 | 1,603,184,773,000 | 1,603,184,773,000 | NONE | null |
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code works. However, when I download `squad.py` from your server and save it locally as `my_squad.py`, running the following raises an error:
```
train_dataset = datasets.load_dataset('./my_squad.py')
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-25a84b4d1581> in <module>
----> 1 train_dataset = nlp.load_dataset('./my_squad.py')
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
TypeError: 'NoneType' object is not callable
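As a side note, `load_dataset` resolves a local script by importing it and looking for a `datasets.DatasetBuilder` subclass, so the script needs to contain at least one builder class. A minimal, hypothetical sketch (illustrative names only, not the real `squad.py`):

```python
# Minimal, hypothetical sketch of what a local dataset script must contain for
# load_dataset to work: at least one datasets.GeneratorBasedBuilder subclass.
# All names and file paths below are illustrative assumptions.
import json
import datasets


class MySquad(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"question": datasets.Value("string"), "context": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": "train.json"}
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, record in enumerate(json.load(f)):
                yield idx, {"question": record["question"], "context": record["context"]}
```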
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/664/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/664/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/663/comments | https://api.github.com/repos/huggingface/datasets/issues/663/events | https://github.com/huggingface/datasets/pull/663 | 706,732,636 | MDExOlB1bGxSZXF1ZXN0NDkxMjI3NzUz | 663 | Created dataset card snli.md | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | closed | false | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Adding a direct link to the rendered markdown:\r\nhttps://github.com/mcmillanmajora/datasets/blob/add_dataset_documentation/datasets/snli/README.md\r\n",
"It would be amazing if we ended up with this much information on all of our datasets :) \r\n\r\nI don't think there's too much repetition, everything that is in here is relevant. The main challenge will be to figure out how to structure the sheet so that all of the information can be presented without overwhelming the reader. We'll also want to have as much of it as possible in structured form so it can be easily navigated.",
"@mcmillanmajora for now can you remove the prompts / quoted blocks so we can see what the datasheet would look like on its own?\r\n\r\nWould also love to hear if @sgugger has some first impressions",
"I removed the prompts. It's definitely a little easier to read without them!",
"Should we name the file `README.md` for consistency with models?",
"Asked @sleepinyourhat for some insights too :) ",
"Thank you for taking the time to look through the card and for all your comments @sleepinyourhat ! I've incorporated them in the latest update. ",
"Be careful to keep the βsaβ term in the license. Itβs something we\ninherited from the Flickr captions.\n\nOn Thu, Oct 1, 2020 at 10:09 AM Julien Chaumond <notifications@github.com>\nwrote:\n\n> *@julien-c* commented on this pull request.\n> ------------------------------\n>\n> In datasets/snli/README.md\n> <https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_huggingface_datasets_pull_663-23discussion-5Fr498273172&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=PHPCew9Xj3CBQrudcaii70ln-wpRtbngE_tj3Ioy3NI&s=WbEkKXCbL6j5Ui3sox_WqvzrbShbJn2WW-51SENL2ZQ&e=>\n> :\n>\n> > +---\n> +language:\n> +- en\n> +task:\n> +- text-classification\n> +purpose:\n> +- NLI\n> +size:\n> +- \">100k\"\n> +language producers:\n> +- crowdsourced\n> +annotation:\n> +- crowdsourced\n> +tags:\n> +- extended-from-other-datasets\n> +license: \"CC BY-SA 4.0\"\n>\n> β¬οΈ Suggested change\n>\n> -license: \"CC BY-SA 4.0\"\n> +license: cc-by-4.0\n>\n> For models (documented at\n> https://huggingface.co/docs#what-metadata-can-i-add-to-my-model-card\n> <https://urldefense.proofpoint.com/v2/url?u=https-3A__huggingface.co_docs-23what-2Dmetadata-2Dcan-2Di-2Dadd-2Dto-2Dmy-2Dmodel-2Dcard&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=PHPCew9Xj3CBQrudcaii70ln-wpRtbngE_tj3Ioy3NI&s=ck3x8c_ujrwKReDTSGuWWgD9W6REHEPbZaO7S4GFRd4&e=>)\n> we use the License keywords listed by GitHub at\n> https://docs.github.com/en/free-pro-team@latest/github/creating-cloning-and-archiving-repositories/licensing-a-repository#searching-github-by-license-type\n> <https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.github.com_en_free-2Dpro-2Dteam-40latest_github_creating-2Dcloning-2Dand-2Darchiving-2Drepositories_licensing-2Da-2Drepository-23searching-2Dgithub-2Dby-2Dlicense-2Dtype&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=PHPCew9Xj3CBQrudcaii70ln-wpRtbngE_tj3Ioy3NI&s=dWBP-ZvtMErD-egoBiBTCKA4500mjDXVSk03oW1g16U&e=>\n>\n> (Hopefully we'll plug some sort of form validation for users at some point)\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_huggingface_datasets_pull_663-23pullrequestreview-2D500386385&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=PHPCew9Xj3CBQrudcaii70ln-wpRtbngE_tj3Ioy3NI&s=HU2Hwi7HH9W2NtMoCIiQlhXxxEULLi8L9gnWU5PBAPY&e=>,\n> or unsubscribe\n> <https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_notifications_unsubscribe-2Dauth_AAJZSWL63W2LB7SBICA2GMTSISEPZANCNFSM4RWKAZRA&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=PHPCew9Xj3CBQrudcaii70ln-wpRtbngE_tj3Ioy3NI&s=086__lKQLxTanHfjE8kOIpaJbaWPzBB9gGIt_prWeH8&e=>\n> .\n>\n",
"@sleepinyourhat You're right, wrong copy/paste",
"Question: Where does this standard come from? It looks similar to both\n'Data Statements' and 'Datasheets for Datasets', but it doesn't look quite\nlike either.\n\nOn Mon, Oct 12, 2020 at 4:27 PM Yacine Jernite <notifications@github.com>\nwrote:\n\n> Merged #663\n> <https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_huggingface_datasets_pull_663&d=DwMCaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=D34WbiHBTYHOdXsI9JV9wJqSieP6zAPGqGKDziM5uKU&s=s4_X-BSEnTKgGg9rPLBt3cyVptyMX_iWD5Ql3UMBi-I&e=>\n> into master.\n>\n> β\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_huggingface_datasets_pull_663-23event-2D3868180429&d=DwMCaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=D34WbiHBTYHOdXsI9JV9wJqSieP6zAPGqGKDziM5uKU&s=elcM4umqReQfIrgHhpey9W_wPaq5QRgq7xNlubM47QI&e=>,\n> or unsubscribe\n> <https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_notifications_unsubscribe-2Dauth_AAJZSWJVGQRCR4OTTV27VTTSKNRBXANCNFSM4RWKAZRA&d=DwMCaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=D34WbiHBTYHOdXsI9JV9wJqSieP6zAPGqGKDziM5uKU&s=NB6nEROnTPgwNyF3ZklOmHnvP7kOkOm7sEa740KbVCs&e=>\n> .\n>\n",
"@sleepinyourhat The schema is definitely drawing from Data Statements and Datasheets for Datasets but we also wanted to include some more general information to introduce the dataset to new users. If you have any suggestions for changes to the schema itself, please let us know!"
] | 1,600,813,777,000 | 1,602,608,720,000 | 1,602,534,412,000 | CONTRIBUTOR | null | First draft of a dataset card using the SNLI corpus as an example.
This is mostly based on the [Google Doc draft](https://docs.google.com/document/d/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos/edit), but I added a few sections and moved some things around.
- I moved **Who Was Involved** to follow **Language**, both because I thought the authors should be presented more towards the front and because I think it makes sense to present the speakers close to the language so it doesn't have to be repeated.
- I created a section I called **Data Characteristics** by pulling some things out of the other sections. I was thinking that this would be more about the language use in context of the specific task construction. That name isn't very descriptive though and could probably be improved.
-- Domain and language type out of **Language**. I particularly wanted to keep the Language section as simple and as abstracted from the task as possible.
-- 'How was the data collected' out of **Who Was Involved**
-- Normalization out of **Features/Dataset Structure**
-- I also added an annotation process section.
- I kept the **Features** section mostly the same as the Google Doc, but I renamed it **Dataset Structure** to more clearly separate it from the language use, and added some links to the documentation pages.
- I also kept **Tasks Supported**, **Known Limitations**, and **Licensing Information** mostly the same. Looking at it again though, maybe **Tasks Supported** should come before **Data Characteristics**?
The trickiest part about writing a dataset card for the SNLI corpus specifically is that it's built on datasets which are themselves built on datasets so I had to dig in a lot of places to find information. I think this will be easier with other datasets and once there is more uptake of dataset cards so they can just link to each other. (Maybe that needs to be an added section?)
I also made an effort not to repeat information across the sections or to refer to a previous section if the information was relevant in a later one. Is there too much repetition still? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/663/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/663/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/663",
"html_url": "https://github.com/huggingface/datasets/pull/663",
"diff_url": "https://github.com/huggingface/datasets/pull/663.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/663.patch",
"merged_at": 1602534412000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/662/comments | https://api.github.com/repos/huggingface/datasets/issues/662/events | https://github.com/huggingface/datasets/pull/662 | 706,689,866 | MDExOlB1bGxSZXF1ZXN0NDkxMTkyNTM3 | 662 | Created dataset card snli.md | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | closed | false | null | [] | null | [
"Resubmitting on a new fork"
] | 1,600,808,417,000 | 1,600,809,981,000 | 1,600,809,981,000 | CONTRIBUTOR | null | First draft of a dataset card using the SNLI corpus as an example | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/662/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/662",
"html_url": "https://github.com/huggingface/datasets/pull/662",
"diff_url": "https://github.com/huggingface/datasets/pull/662.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/662.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/661/comments | https://api.github.com/repos/huggingface/datasets/issues/661/events | https://github.com/huggingface/datasets/pull/661 | 706,465,936 | MDExOlB1bGxSZXF1ZXN0NDkxMDA3NjEw | 661 | Replace pa.OSFile by open | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,600,787,159,000 | 1,620,239,076,000 | 1,600,787,725,000 | MEMBER | null | It should fix #643 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/661/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/661",
"html_url": "https://github.com/huggingface/datasets/pull/661",
"diff_url": "https://github.com/huggingface/datasets/pull/661.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/661.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/660/comments | https://api.github.com/repos/huggingface/datasets/issues/660/events | https://github.com/huggingface/datasets/pull/660 | 706,324,032 | MDExOlB1bGxSZXF1ZXN0NDkwODkyMjQ0 | 660 | add openwebtext | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"BTW, is there a one-line command to make our building scripts pass flake8 test? (included code quality test), I got like trailing space or mixed space and tab warning and error, and fixed them manually.",
"> BTW, is there a one-line command to make our building scripts pass flake8 test? (included code quality test), I got like trailing space or mixed space and tab warning and error, and fixed them manually.\r\n\r\nI don't think so.\r\nWe have a command for black and isort but not flake8 as far as I know.",
"Thanks for your awesome work too.\r\nBTW a little reminder, this solves #132 "
] | 1,600,776,322,000 | 1,601,976,010,000 | 1,601,284,046,000 | CONTRIBUTOR | null | This adds [The OpenWebText Corpus](https://skylion007.github.io/OpenWebTextCorpus/), which is a clean and large text corpus for NLP pretraining. It is an open source effort to reproduce OpenAI's WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA.
It solves #132.
### Besides the dataset building script, I made some changes to the library.
1. Extract a large number of compressed files with multiprocessing
I added a `num_proc` argument to `DownloadManager.extract` and pass this `num_proc` on to `map_nested`, so the 20 thousand compressed files can be decompressed faster. The new `num_proc` defaults to `None`, so it shouldn't break anything else.
2. In `cached_path`, I changed the order in which the different kinds of compressed files (zip, tar, gzip) are handled
Because there is no way to detect with 100% certainty that a file is a zip file (see [this](https://stackoverflow.com/questions/18194688/how-can-i-determine-if-a-file-is-a-zip-file)), it wrongly detected `'./datasets/downloads/extracted/58764bd6898fa339b25d92e7fbbc3d8dbf64fb504edff1a30a1d7d99d1561027/openwebtext/urlsf_subset13-630_data.xz'` as a zip and tried to decompress it as one, which of course failed. So I made it check whether the file is tar or gzip first and only check for zip last (a rough sketch of this ordering is given after the note below).
3. `MockDownloadManager.extract`
Because I pass `num_proc` to `DownloadManager.extract`, I also have to make `MockDownloadManager.extract` accept extra keyword arguments. So I made it `extract(path, *args, **kwargs)`, while still just returning the path as in the original implementation.
**Note**: If there is a better way to handle the points mentioned above, I would like to help, but unless we can solve point 4 (making dataset building fast), I may not be able to afford rebuilding the dataset after a change to the dataset script (building the dataset cost me 4 days).
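To make the ordering in point 2 concrete, here is a rough, hypothetical sketch (not the actual `cached_path` code) of probing tar and gzip before falling back to zip:

```python
# Rough, hypothetical sketch of the detection order described in point 2 above:
# probe tar and gzip first and only fall back to zip, since zip detection can
# produce false positives on other formats such as .xz payloads.
import gzip
import tarfile
import zipfile


def guess_archive_format(path: str) -> str:
    if tarfile.is_tarfile(path):
        return "tar"
    try:
        with gzip.open(path, "rb") as f:
            f.read(1)  # raises OSError if the file is not actually gzip
        return "gzip"
    except OSError:
        pass
    if zipfile.is_zipfile(path):
        return "zip"
    return "unknown"
```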
### There is something I think we can improve
4. Long time to decompress compressed files
Even though I decompress those 20 thousand compressed files with 12 processes on my 16-core 3.x GHz server, it still took about 3 to 4 days to complete the dataset building. Most of the time was spent decompressing those files.
### Info about the source data
The source data is a tar.xz file with the following structure; the files/directories below the compressed file are what we get after decompressing it.
```
openwebtext.tar.xz
|__ openwebtext
|__subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
And this is the structure of the dummy data, the same as the original one.
```
dummy_data.zip
|__ dummy_data
|__ openwebtext
|__fake_subset-1_data-dirxz # actually it is a directory
| |__ ....txt
| |__ ....txt
|__ fake_subset-2_data-dirxz
|__ ....txt
|__ ....txt
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/660/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/660/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/660",
"html_url": "https://github.com/huggingface/datasets/pull/660",
"diff_url": "https://github.com/huggingface/datasets/pull/660.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/660.patch",
"merged_at": 1601284046000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/659/comments | https://api.github.com/repos/huggingface/datasets/issues/659/events | https://github.com/huggingface/datasets/pull/659 | 706,231,506 | MDExOlB1bGxSZXF1ZXN0NDkwODE4NTY1 | 659 | Keep new columns in transmit format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,600,768,043,000 | 1,600,769,242,000 | 1,600,769,240,000 | MEMBER | null | When a dataset is formatted with a list of columns that `__getitem__` should return, then calling `map` to add new columns doesn't add the new columns to this list.
It caused `KeyError` issues in #620
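A minimal, hypothetical illustration of the pattern that used to break (column names and values are made up, not taken from #620):

```python
# Hypothetical illustration of the fixed behavior: a column added by map()
# after set_format restricted the returned columns should also be returned
# by __getitem__ instead of being dropped and causing KeyError downstream.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"]})
ds.set_format(columns=["text"])
ds = ds.map(lambda example: {"length": len(example["text"])})
print(ds[0])  # with this change: {'text': 'foo', 'length': 3}
```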
I changed the logic to add those new columns to the list that `__getitem__` should return. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/659/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/659",
"html_url": "https://github.com/huggingface/datasets/pull/659",
"diff_url": "https://github.com/huggingface/datasets/pull/659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/659.patch",
"merged_at": 1600769240000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/658/comments | https://api.github.com/repos/huggingface/datasets/issues/658/events | https://github.com/huggingface/datasets/pull/658 | 706,206,247 | MDExOlB1bGxSZXF1ZXN0NDkwNzk4MDc0 | 658 | Fix squad metric's Features | {
"login": "tshrjn",
"id": 8372098,
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshrjn",
"html_url": "https://github.com/tshrjn",
"followers_url": "https://api.github.com/users/tshrjn/followers",
"following_url": "https://api.github.com/users/tshrjn/following{/other_user}",
"gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions",
"organizations_url": "https://api.github.com/users/tshrjn/orgs",
"repos_url": "https://api.github.com/users/tshrjn/repos",
"events_url": "https://api.github.com/users/tshrjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshrjn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Closing this one in favor of #670 \r\n\r\nThanks again for reporting the issue and proposing this fix !\r\nLet me know if you have other remarks"
] | 1,600,765,792,000 | 1,601,395,110,000 | 1,601,395,110,000 | NONE | null | Resolves issue [657](https://github.com/huggingface/datasets/issues/657). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/658/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/658",
"html_url": "https://github.com/huggingface/datasets/pull/658",
"diff_url": "https://github.com/huggingface/datasets/pull/658.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/658.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/657/comments | https://api.github.com/repos/huggingface/datasets/issues/657/events | https://github.com/huggingface/datasets/issues/657 | 706,204,383 | MDU6SXNzdWU3MDYyMDQzODM= | 657 | Squad Metric Description & Feature Mismatch | {
"login": "tshrjn",
"id": 8372098,
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshrjn",
"html_url": "https://github.com/tshrjn",
"followers_url": "https://api.github.com/users/tshrjn/followers",
"following_url": "https://api.github.com/users/tshrjn/following{/other_user}",
"gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions",
"organizations_url": "https://api.github.com/users/tshrjn/orgs",
"repos_url": "https://api.github.com/users/tshrjn/repos",
"events_url": "https://api.github.com/users/tshrjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/tshrjn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\nThere indeed a mismatch between the features and the kwargs description\r\n\r\nI believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `references=squad[\"answers\"]` to `.compute()`.\r\nMaybe we can just fix the description then.",
"But then providing the `answer_start` becomes mandatory since the format of the features is checked against the one provided in the squad [file](https://github.com/huggingface/datasets/pull/658/files)."
] | 1,600,765,620,000 | 1,602,555,416,000 | 1,601,395,058,000 | NONE | null | The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/657/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/656/comments | https://api.github.com/repos/huggingface/datasets/issues/656/events | https://github.com/huggingface/datasets/pull/656 | 705,736,319 | MDExOlB1bGxSZXF1ZXN0NDkwNDEwODAz | 656 | Use multiprocess from pathos for multiprocessing | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"We can just install multiprocess actually, I'll change that",
"Just an FYI: I remember that I wanted to try pathos a couple of years back and I ran into issues considering cross-platform; the code would just break on Windows. If I can verify this PR by running CPU tests on Windows, let me know!",
"That's good to know thanks\r\nI guess we can just wait for #644 to be merged first. I'm working on fixing the tests for windows",
"Looks like all the CI jobs on windows passed !\r\nI also tested locally on my windows and it works great :) \r\n\r\nI think this is ready to merge, let me know if you have any remarks @thomwolf @BramVanroy "
] | 1,600,704,739,000 | 1,601,304,340,000 | 1,601,304,339,000 | MEMBER | null | [Multiprocess](https://github.com/uqfoundation/multiprocess) (from the [pathos](https://github.com/uqfoundation/pathos) project) makes it possible to use lambda functions in a multiprocessed `map`.
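For example, something like the following (hypothetical snippet) becomes possible, since the worker pool can now pickle the lambda:

```python
# Hypothetical snippet: multiprocess pickles with dill, so a lambda can be
# passed to a multiprocessed map, which the stdlib multiprocessing pickler rejects.
from datasets import load_dataset

ds = load_dataset("glue", "sst2", split="train")
ds = ds.map(lambda example: {"n_words": len(example["sentence"].split())}, num_proc=2)
```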
It was suggested to use it by @kandorm.
We're already using dill, which is its only dependency. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/656/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/656",
"html_url": "https://github.com/huggingface/datasets/pull/656",
"diff_url": "https://github.com/huggingface/datasets/pull/656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/656.patch",
"merged_at": 1601304339000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/655/comments | https://api.github.com/repos/huggingface/datasets/issues/655/events | https://github.com/huggingface/datasets/pull/655 | 705,672,208 | MDExOlB1bGxSZXF1ZXN0NDkwMzU4OTQ3 | 655 | added Winogrande debiased subset | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"To fix the CI you just have to copy the dummy data to the 1.1.0 folder, and maybe create the dummy ones for the `debiased` configuration",
"Fixed! Thanks @lhoestq "
] | 1,600,699,868,000 | 1,600,705,240,000 | 1,600,704,964,000 | MEMBER | null | The [Winogrande](https://arxiv.org/abs/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/655/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/655",
"html_url": "https://github.com/huggingface/datasets/pull/655",
"diff_url": "https://github.com/huggingface/datasets/pull/655.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/655.patch",
"merged_at": 1600704964000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/654/comments | https://api.github.com/repos/huggingface/datasets/issues/654/events | https://github.com/huggingface/datasets/pull/654 | 705,511,058 | MDExOlB1bGxSZXF1ZXN0NDkwMjI1Nzk3 | 654 | Allow empty inputs in metrics | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,600,687,596,000 | 1,601,956,308,000 | 1,600,704,818,000 | MEMBER | null | There was an arrow error when trying to compute a metric with empty inputs. The error was occurring when reading the arrow file, before calling metric._compute. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/654/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/654",
"html_url": "https://github.com/huggingface/datasets/pull/654",
"diff_url": "https://github.com/huggingface/datasets/pull/654.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/654.patch",
"merged_at": 1600704818000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/653/comments | https://api.github.com/repos/huggingface/datasets/issues/653/events | https://github.com/huggingface/datasets/pull/653 | 705,482,391 | MDExOlB1bGxSZXF1ZXN0NDkwMjAxOTg4 | 653 | handle data alteration when trying type | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,600,684,909,000 | 1,600,704,786,000 | 1,600,704,785,000 | MEMBER | null | Fix #649
The bug came from the type inference that didn't handle a weird case in Pyarrow.
Indeed this code runs without error but alters the data in arrow:
```python
import pyarrow as pa
type = pa.struct({"a": pa.struct({"b": pa.string()})})
array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}] * 10, type=type)
print(array_with_altered_data[0].as_py())
# {'a': {'b': 'foo'}} -> the sub-field "c" is missing
```
(I don't know if this is intended in pyarrow tbh)
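A rough, hypothetical sketch (helper name made up, not the actual patch) of the kind of guard described below:

```python
# Hypothetical helper sketching the guard described below (not the actual patch):
# a candidate struct type is only kept if casting the first element to it does
# not silently drop any data.
import pyarrow as pa

def casts_without_altering(sample, candidate_type) -> bool:
    return pa.array([sample], type=candidate_type)[0].as_py() == sample

sample = {"a": {"b": "foo", "c": "bar"}}
print(casts_without_altering(sample, pa.struct({"a": pa.struct({"b": pa.string()})})))  # False
print(casts_without_altering(sample, pa.struct({"a": pa.struct({"b": pa.string(), "c": pa.string()})})))  # True
```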
We didn't take this case into account during type inference: by default it kept the old features, which could alter the data.
To fix that I added a line that checks that the first element of the array is not altered. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/653/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/653",
"html_url": "https://github.com/huggingface/datasets/pull/653",
"diff_url": "https://github.com/huggingface/datasets/pull/653.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/653.patch",
"merged_at": 1600704785000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/652/comments | https://api.github.com/repos/huggingface/datasets/issues/652/events | https://github.com/huggingface/datasets/pull/652 | 705,390,850 | MDExOlB1bGxSZXF1ZXN0NDkwMTI3MjIx | 652 | handle connection error in download_prepared_from_hf_gcs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,600,676,471,000 | 1,600,676,923,000 | 1,600,676,922,000 | MEMBER | null | Fix #647 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/652/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/652",
"html_url": "https://github.com/huggingface/datasets/pull/652",
"diff_url": "https://github.com/huggingface/datasets/pull/652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/652.patch",
"merged_at": 1600676922000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/651/comments | https://api.github.com/repos/huggingface/datasets/issues/651/events | https://github.com/huggingface/datasets/issues/651 | 705,212,034 | MDU6SXNzdWU3MDUyMTIwMzQ= | 651 | Problem with JSON dataset format | {
"login": "vikigenius",
"id": 12724810,
"node_id": "MDQ6VXNlcjEyNzI0ODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikigenius",
"html_url": "https://github.com/vikigenius",
"followers_url": "https://api.github.com/users/vikigenius/followers",
"following_url": "https://api.github.com/users/vikigenius/following{/other_user}",
"gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions",
"organizations_url": "https://api.github.com/users/vikigenius/orgs",
"repos_url": "https://api.github.com/users/vikigenius/repos",
"events_url": "https://api.github.com/users/vikigenius/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikigenius/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Currently the `json` dataset doesn't support this format unfortunately.\r\nHowever you could load it with\r\n```python\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\n\r\ndf = pd.read_json(\"path_to_local.json\", orient=\"index\")\r\ndataset = Dataset.from_pandas(df)\r\n```",
"or you can make a custom dataset script as explained in doc here: https://huggingface.co/docs/datasets/add_dataset.html"
] | 1,600,646,234,000 | 1,600,690,464,000 | null | NONE | null | I have a local json dataset with the following form.
{
'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
.
.
.
'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
Note that instead of a list of records it's basically a dictionary of key value pairs with the keys being the record_ids and the values being the corresponding record.
Reading this with json:
```
data = datasets.load_dataset('json', data_files='path_to_local.json')
```
This throws an error and asks me to choose a field. What's the right way to handle this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/651/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/650/comments | https://api.github.com/repos/huggingface/datasets/issues/650/events | https://github.com/huggingface/datasets/issues/650 | 704,861,844 | MDU6SXNzdWU3MDQ4NjE4NDQ= | 650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi :) \r\nIn your dummy data zip file you can just have `subset000.xz` as directories instead of compressed files.\r\nLet me know if it helps",
"Thanks for your comment @lhoestq ,\r\nJust for confirmation, changing dummy data like this won't make dummy test test the functionality to extract `subsetxxx.xz` but actually kind of circumvent it. But since we will test the real data so it is ok ?",
"Yes it's fine for now. We plan to add a job for slow tests.\r\nAnd at one point we'll also do another pass on the dummy data handling and consider extracting files.",
"Thanks for the confirmation.\r\nAlso the suggestion works. Thank you."
] | 1,600,513,623,000 | 1,600,775,650,000 | 1,600,775,649,000 | CONTRIBUTOR | null | Hi, I recently wanted to add a dataset whose source data looks like this:
```
openwebtext.tar.xz
 |__ openwebtext
     |__ subset000.xz
     |    |__ ....txt
     |    |__ ....txt
     |    ...
     |__ subset001.xz
     |
     ....
```
So I wrote `openwebtext.py` like this
```
def _split_generators(self, dl_manager):
    dl_dir = dl_manager.download_and_extract(_URL)
    owt_dir = os.path.join(dl_dir, 'openwebtext')
    subset_xzs = [
        os.path.join(owt_dir, file_name) for file_name in os.listdir(owt_dir) if file_name.endswith('xz')  # filter out ...xz.lock
    ]
    ex_dirs = dl_manager.extract(subset_xzs, num_proc=round(os.cpu_count()*0.75))
    nested_txt_files = [
        [
            os.path.join(ex_dir, txt_file_name) for txt_file_name in os.listdir(ex_dir) if txt_file_name.endswith('txt')
        ] for ex_dir in ex_dirs
    ]
    txt_files = chain(*nested_txt_files)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN, gen_kwargs={"txt_files": txt_files}
        ),
    ]
```
All went well: I can load and use the real openwebtext, except when I try to test with dummy data. The problem is that `MockDownloadManager.extract` does nothing, so `ex_dirs = dl_manager.extract(subset_xzs)` won't decompress the `subset_xxx.xz` files for me.
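For reference, the dummy-data layout suggested in the comments sidesteps this: if each `subsetXXX.xz` entry inside the dummy zip is already a plain directory holding a few `.txt` files, the no-op `extract` still returns paths that can be listed. A rough sketch of building such a layout, where every path and file name is a placeholder rather than the real dummy-data convention:
```python
import zipfile
from pathlib import Path

# Make "subset000.xz" a real directory with a dummy text file inside,
# so the mock extract, which doesn't actually decompress anything,
# still leaves you with a directory that can be listed.
root = Path("dummy_data/openwebtext")
subset_dir = root / "subset000.xz"
subset_dir.mkdir(parents=True, exist_ok=True)
(subset_dir / "dummy.txt").write_text("some dummy text\n")

# Zip the layout so it can play the role of the downloaded archive.
with zipfile.ZipFile("dummy_data.zip", "w") as zf:
    for path in root.rglob("*"):
        zf.write(path, arcname=path.relative_to(root.parent))
```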
What should I do? Or could you modify `MockDownloadManager` so that it behaves like a real `DownloadManager`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/650/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/649/comments | https://api.github.com/repos/huggingface/datasets/issues/649/events | https://github.com/huggingface/datasets/issues/649 | 704,838,415 | MDU6SXNzdWU3MDQ4Mzg0MTU= | 649 | Inconsistent behavior in map | {
"login": "krandiash",
"id": 10166085,
"node_id": "MDQ6VXNlcjEwMTY2MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/10166085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krandiash",
"html_url": "https://github.com/krandiash",
"followers_url": "https://api.github.com/users/krandiash/followers",
"following_url": "https://api.github.com/users/krandiash/following{/other_user}",
"gists_url": "https://api.github.com/users/krandiash/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krandiash/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krandiash/subscriptions",
"organizations_url": "https://api.github.com/users/krandiash/orgs",
"repos_url": "https://api.github.com/users/krandiash/repos",
"events_url": "https://api.github.com/users/krandiash/events{/privacy}",
"received_events_url": "https://api.github.com/users/krandiash/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting !\r\n\r\nThis issue must have appeared when we refactored type inference in `nlp`\r\nBy default the library tries to keep the same feature types when applying `map` but apparently it has troubles with nested structures. I'll try to fix that next week"
] | 1,600,504,872,000 | 1,600,704,785,000 | 1,600,704,785,000 | NONE | null | I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem.
```python
import datasets
# Dataset with a single feature called 'field' consisting of two examples
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
print(dataset[0])
# outputs
{'field': 'a'}
# Map this dataset to create another feature called 'otherfield', which is a dictionary containing a key called 'capital'
dataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}})
print(dataset[0])
# output is okay
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Now I want to map again to modify 'otherfield', by adding another key called 'append_x' to the dictionary under 'otherfield'
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x'}})[0])
# printing out the first example after applying the map shows that the new key 'append_x' doesn't get added
# it also messes up the value stored at 'capital'
{'field': 'a', 'otherfield': {'capital': None}}
# Instead, I try to do the same thing by using a different mapped fn
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}})[0])
# this preserves the value under capital, but still no 'append_x'
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Instead, I try to pass 'otherfield' to remove_columns
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}}, remove_columns=['otherfield'])[0])
# this still doesn't fix the problem
{'field': 'a', 'otherfield': {'capital': 'A'}}
# Alternately, here's what happens if I just directly map both 'capital' and 'append_x' on a fresh dataset.
# Recreate the dataset
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
# Now map the entire 'otherfield' dict directly, instead of incrementally as before
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['field'].capitalize()}})[0])
# This looks good!
{'field': 'a', 'otherfield': {'append_x': 'ax', 'capital': 'A'}}
```
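One way to see what is going on (in line with the type-inference explanation in the comments) is to inspect the schema that `datasets` keeps after each `map`. A quick check building on the example above; the exact output depends on the library version:
```python
import datasets

# Rebuild the incremental case and look at the schema that was inferred/kept.
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
dataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}})
mapped = dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x'}})

# If the nested 'otherfield' struct still only lists 'capital', the new
# 'append_x' key was dropped while the old feature types were being kept.
print(mapped.features)
```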
This might be a new issue, because I didn't see this behavior in the `nlp` library.
Any help is appreciated! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/649/timeline | null | null | null | false |