| column | dtype | lengths / values |
| --- | --- | --- |
| url | stringlengths | 58 - 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72 - 75 |
| comments_url | stringlengths | 67 - 70 |
| events_url | stringlengths | 65 - 68 |
| html_url | stringlengths | 46 - 51 |
| id | int64 | 599M - 1.47B |
| node_id | stringlengths | 18 - 32 |
| number | int64 | 1 - 5.33k |
| title | stringlengths | 1 - 276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | stringlengths | 20 - 20 |
| updated_at | stringlengths | 20 - 20 |
| closed_at | stringlengths | 20 - 20 |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | stringlengths | 0 - 228k |
| reactions | dict | |
| timeline_url | stringlengths | 67 - 70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/550
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/550/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/550/comments
https://api.github.com/repos/huggingface/datasets/issues/550/events
https://github.com/huggingface/datasets/pull/550
689,775,914
MDExOlB1bGxSZXF1ZXN0NDc2NzgyNDY1
550
[BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539)
{ "avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4", "events_url": "https://api.github.com/users/gaguilar/events{/privacy}", "followers_url": "https://api.github.com/users/gaguilar/followers", "following_url": "https://api.github.com/users/gaguilar/following{/other_user}", "gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gaguilar", "id": 5833357, "login": "gaguilar", "node_id": "MDQ6VXNlcjU4MzMzNTc=", "organizations_url": "https://api.github.com/users/gaguilar/orgs", "received_events_url": "https://api.github.com/users/gaguilar/received_events", "repos_url": "https://api.github.com/users/gaguilar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions", "type": "User", "url": "https://api.github.com/users/gaguilar" }
[]
closed
false
null
[]
null
[ "Thanks a lot for that!\r\nThe line you are mentioning is a bug indeed, do you mind fixing it at the same time?", "No worries! \r\n\r\nI pushed right away the fix, but then I realized that the master branch already had it, so I ended up merging the master branch with lince locally and then overwriting the previous commit in origin/lince. Hopefully, this is not too messy :)\r\n" ]
2020-09-01T03:27:03Z
2020-09-03T09:06:01Z
2020-09-03T09:06:01Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/550.diff", "html_url": "https://github.com/huggingface/datasets/pull/550", "merged_at": "2020-09-03T09:06:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/550.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/550" }
Hi, I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I run this command from the nlp root directory: ``` python nlp-cli test ./datasets/lince --save_infos --all_configs ``` **NOTE**: I needed to change [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/commands/dummy_data.py#L8) from: `from .utils.logging import get_logger` to `from nlp.utils.logging import get_logger`, otherwise the script was not able to import `get_logger`. However, I did not include that in this PR since that could have been just my environment (and another PR could be fixing this already if it is actually an issue).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/550/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/550/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/549
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/549/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/549/comments
https://api.github.com/repos/huggingface/datasets/issues/549/events
https://github.com/huggingface/datasets/pull/549
689,766,465
MDExOlB1bGxSZXF1ZXN0NDc2Nzc0OTI1
549
Fix bleurt logging import
{ "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jbragg", "id": 2238344, "login": "jbragg", "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "organizations_url": "https://api.github.com/users/jbragg/orgs", "received_events_url": "https://api.github.com/users/jbragg/received_events", "repos_url": "https://api.github.com/users/jbragg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "type": "User", "url": "https://api.github.com/users/jbragg" }
[]
closed
false
null
[]
null
[ "That’s a good point that we started to discuss internally as well. We should pin the dataset en metrics code by default indeed.\r\nLet’s update this in the coming release.", "Ok closed this with #567 and we are working on a more general solution to pin dataset version in #562 (should be in the coming release)." ]
2020-09-01T03:01:25Z
2020-09-03T18:04:46Z
2020-09-03T09:04:20Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/549.diff", "html_url": "https://github.com/huggingface/datasets/pull/549", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/549.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/549" }
Bleurt started throwing an error in some code we have. This looks like the fix but... It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems). Any way for us to pin your metrics code so that they are guaranteed not to to change and possibly fail on repository changes? Thanks (and also for your continued work on the lib...)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/549/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/549/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/548/comments
https://api.github.com/repos/huggingface/datasets/issues/548/events
https://github.com/huggingface/datasets/pull/548
689,285,996
MDExOlB1bGxSZXF1ZXN0NDc2MzYzMjU1
548
[Breaking] Switch text loading to multi-threaded PyArrow loading
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[ "Awesome !\r\nAlso I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` tag no ? Apparently we can get this tag with `os.path.getmtime(path)`", "I just rebased from master to include the hashing changes from #573 ", "I think this is ready to merge, no?", "Indeed it's ready to merge :)", "Ok added the breaking change info and we can merge indeed.\r\n" ]
2020-08-31T15:15:41Z
2020-09-08T10:19:58Z
2020-09-08T10:19:57Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/548.diff", "html_url": "https://github.com/huggingface/datasets/pull/548", "merged_at": "2020-09-08T10:19:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/548.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/548" }
Test if we can get better performances for large-scale text datasets by using multi-threaded text file loading based on Apache Arrow multi-threaded CSV loader. If it works ok, it would fix #546. **Breaking change**: The text lines now do not include final line-breaks anymore.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/548/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/548/timeline
null
null
true
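#548 above switches text loading to Apache Arrow's multi-threaded CSV reader. As a rough sketch of that approach (not the library's actual `text` loader), a plain-text file can be read into a one-column Arrow table with `pyarrow.csv`; the `\x1e` delimiter, the disabled quoting, and the `read_text_file` helper name are assumptions made for illustration. Reading lines this way also drops trailing line-breaks, which matches the breaking change noted in the PR description.

```python
import pyarrow as pa
from pyarrow import csv

def read_text_file(path: str) -> pa.Table:
    """Read a plain-text file into a one-column Arrow table with the multi-threaded CSV reader."""
    read_options = csv.ReadOptions(use_threads=True, column_names=["text"])
    # Use a delimiter assumed not to occur in the data and disable quoting,
    # so every physical line becomes exactly one "text" value.
    parse_options = csv.ParseOptions(delimiter="\x1e", quote_char=False)
    convert_options = csv.ConvertOptions(column_types={"text": pa.string()})
    return csv.read_csv(
        path,
        read_options=read_options,
        parse_options=parse_options,
        convert_options=convert_options,
    )

# table = read_text_file("my_corpus.txt")  # rows no longer carry a trailing "\n"
```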
https://api.github.com/repos/huggingface/datasets/issues/547
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/547/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/547/comments
https://api.github.com/repos/huggingface/datasets/issues/547/events
https://github.com/huggingface/datasets/pull/547
689,268,589
MDExOlB1bGxSZXF1ZXN0NDc2MzQ4OTk5
547
[Distributed] Making loading distributed datasets a bit safer
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-08-31T14:51:34Z
2020-08-31T15:16:30Z
2020-08-31T15:16:29Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/547.diff", "html_url": "https://github.com/huggingface/datasets/pull/547", "merged_at": "2020-08-31T15:16:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/547.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/547" }
Add some file-locks during dataset loading
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/547/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/547/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/546
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/546/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/546/comments
https://api.github.com/repos/huggingface/datasets/issues/546/events
https://github.com/huggingface/datasets/issues/546
689,186,526
MDU6SXNzdWU2ODkxODY1MjY=
546
Very slow data loading on large dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/agemagician", "id": 6087313, "login": "agemagician", "node_id": "MDQ6VXNlcjYwODczMTM=", "organizations_url": "https://api.github.com/users/agemagician/orgs", "received_events_url": "https://api.github.com/users/agemagician/received_events", "repos_url": "https://api.github.com/users/agemagician/repos", "site_admin": false, "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "type": "User", "url": "https://api.github.com/users/agemagician" }
[]
closed
false
null
[]
null
[ "When you load a text file for the first time with `nlp`, the file is converted into Apache Arrow format. Arrow allows to use memory-mapping, which means that you can load an arbitrary large dataset.\r\n\r\nNote that as soon as the conversion has been done once, the next time you'll load the dataset it will be much faster.\r\n\r\nHowever for a 1TB dataset, the conversion can indeed take time. You could try to load parts of it in parallel, and then use `nlp.concatenate_datasets` to get your full dataset.", "Humm, we can give a look at these large scale datasets indeed.\r\n\r\nDo you mind sharing a few stats on your dataset so I can try to test on a similar one?\r\n\r\nIn particular some orders of magnitudes for the number of files, number of lines per files, line lengths.", "@lhoestq Yes, I understand that the first time requires more time. The concatenate_datasets seems to be a workaround, but I believe a multi-processing method should be integrated into load_dataset to make it easier and more efficient for users.\r\n\r\n@thomwolf Sure, here are the statistics:\r\nNumber of lines: 4.2 Billion\r\nNumber of files: 6K\r\nNumber of tokens: 800 Billion\r\nThe number of lines is distributed equally across these 6k files.\r\nThe line length varies between 100 tokens to 40k tokens.\r\n", "@agemagician you can give a try at a multithreaded version if you want (currently on the #548).\r\n\r\nTo test it, you just need to copy the new `text` processing script which is [here](https://github.com/huggingface/nlp/blob/07d92a82b7594498ff702f3cca55c074e2052257/datasets/text/text.py) somewhere on your drive and give it's local path instead of `text` to `load_dataset`. E.g. in your example:\r\n```python\r\ntrain_files = glob.glob(\"xxx/*.txt\",recursive=True)\r\nrandom.shuffle(train_files)\r\n\r\nprint(train_files)\r\n\r\ndataset = nlp.load_dataset('./datasets/text.py', # path to where you've dowloaded the multi-threaded text loading script\r\n data_files=train_files,\r\n name=\"customDataset\",\r\n version=\"1.0.0\",\r\n cache_dir=\"xxx/nlp\")\r\n```", "I have already generated the dataset, but now I tried to reload it and it is still very slow.\r\n\r\nI also have installed your commit and it is slow, even after the dataset was already generated.\r\n`pip install git+https://github.com/huggingface/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257`\r\n\r\nIt uses only a single thread.\r\n\r\nDid I miss something ?", "As mentioned in #548 , each time you call `load_dataset` with `data_files=`, they are hashed to get the cache directory name. Hashing can be too slow with 1TB of data. 
I feel like we should have a faster way of getting a hash that identifies the input data files", "I believe this is really a very important feature, otherwise, we will still have the issue of too slow loading problems even if the data cache generation is fast.", "Hmm ok then maybe it's the hashing step indeed.\r\n\r\nLet's see if we can improve this as well.\r\n\r\n(you will very likely have to regenerate your dataset if we change this part of the lib though since I expect modifications on this part of the lib to results in new hashes)", "Also, @agemagician you have to follow the step I indicate in my previous message [here](https://github.com/huggingface/nlp/issues/546#issuecomment-684648927) to use the new text loading script.\r\n\r\nJust doing `pip install git+https://github.com/huggingface/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257` like you did won't use the new script (they are not inside the library but hosted on our hub).", "No problem, I will regenerate it. This will make us see if we solved both issues and now both the data generation step, as well as the hashing step, is fast.", "Any news for the hashing ?", "I'm working on it today :)", "Ok so now the text files won't be hashed.\r\n\r\nI also updated #548 to include this change.\r\nLet us know if it helps @agemagician :)", "Perfect thanks for your amazing work.", "Right now, for caching 18Gb data, it is taking 1 hour 10 minute. Is that proper expected time? @lhoestq @agemagician \r\nIn this rate (assuming large file will caching at the same rate) caching full mC4 (27TB) requires a month (~26 days). \r\n", "Hi ! Currently it is that slow because we haven't implemented parallelism for the dataset generation yet.\r\nThough we will definitely work on this :)\r\n\r\nFor now I'd recommend loading the dataset shard by shard in parallel, and then concatenate them:\r\n```python\r\n# in one process, load first 100 files for english\r\nshard1 = load_dataset(\"allenai/c4\", data_files=\"multilingual/c4-en.tfrecord-000**.json.gz\")\r\n# in another process load next 100 files for english\r\nshard2 = load_dataset(\"allenai/c4\", data_files=\"multilingual/c4-en.tfrecord-001**.json.gz\")\r\n\r\n# finally\r\nconcatenate_datasets([shard1, shard2, ...])", "Thanks for the help..!!!", "Sorry to write on a closed issue but, has there been any progress on parallelizing the `load_dataset` function?", "Hi ! No but this is in our plans (probably a few weeks)", "I'm literally crying waiting for the trainer to restart from checkpoint. It's getting stuck at `get_train_dataloader` and I think this is to do with the same issue... has there been any progress on this?", "> I'm literally crying waiting for the trainer to restart from checkpoint. It's getting stuck at get_train_dataloader and I think this is to do with the same issue...\r\n\r\nOnce the dataset is cached once, it's not regenerated again. Your issue seems different", "hmmm, yes. I'll come back with details on this, fairly easy to reproduce. Takes about 30 minutes to get from checkpoint loading to starting training..." ]
2020-08-31T12:57:23Z
2022-06-17T17:06:51Z
2020-09-08T10:19:57Z
NONE
null
null
null
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_files = glob.glob("xxx/*.txt",recursive=True) random.shuffle(train_files) print(train_files) dataset = nlp.load_dataset('text', data_files=train_files, name="customDataset", version="1.0.0", cache_dir="xxx/nlp") ``` Is there something that I am missing ?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/546/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/546/timeline
null
completed
false
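One suggestion in the #546 thread above is to fingerprint `data_files` from cheap metadata (path, size, `os.path.getmtime`) rather than hashing file contents. The sketch below only illustrates that suggestion; the function name and the choice of `hashlib.sha256` are arbitrary, and it is not what the library ended up shipping.

```python
import glob
import hashlib
import os

def quick_data_files_fingerprint(paths):
    """Fingerprint a list of files from path + size + mtime, without reading their contents."""
    h = hashlib.sha256()
    for path in sorted(paths):
        stat = os.stat(path)
        h.update(path.encode("utf-8"))
        h.update(str(stat.st_size).encode("utf-8"))
        h.update(str(stat.st_mtime).encode("utf-8"))
    return h.hexdigest()

# cache_key = quick_data_files_fingerprint(glob.glob("xxx/*.txt"))
```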
https://api.github.com/repos/huggingface/datasets/issues/545
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/545/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/545/comments
https://api.github.com/repos/huggingface/datasets/issues/545/events
https://github.com/huggingface/datasets/issues/545
689,138,878
MDU6SXNzdWU2ODkxMzg4Nzg=
545
New release coming up for this library
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[ "Update: release is planed mid-next week." ]
2020-08-31T11:37:38Z
2021-01-13T10:59:04Z
2021-01-13T10:59:04Z
MEMBER
null
null
null
Hi all, A few words on the roadmap for this library. The next release will be a big one and is planed at the end of this week. In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval technics), it will: - have support for multi-modal datasets - include various significant improvements on speed for standard processing (map, shuffling, ...) - have a better support for metrics (better caching, and a robust API) and a bigger focus on reproductibility - change the name to the final name (voted by the community): `datasets` - be the 1.0.0 release as we think the API will be mostly stabilized from now on
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 4, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/545/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/545/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/544
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/544/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/544/comments
https://api.github.com/repos/huggingface/datasets/issues/544/events
https://github.com/huggingface/datasets/pull/544
689,062,519
MDExOlB1bGxSZXF1ZXN0NDc2MTc4MDM2
544
[Distributed] Fix load_dataset error when multiprocessing + add test
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-08-31T09:30:10Z
2020-08-31T11:15:11Z
2020-08-31T11:15:10Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/544.diff", "html_url": "https://github.com/huggingface/datasets/pull/544", "merged_at": "2020-08-31T11:15:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/544.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/544" }
Fix #543 + add test
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/544/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/544/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/543/comments
https://api.github.com/repos/huggingface/datasets/issues/543/events
https://github.com/huggingface/datasets/issues/543
688,644,407
MDU6SXNzdWU2ODg2NDQ0MDc=
543
nlp.load_dataset is not safe for multi processes when loading from local files
{ "avatar_url": "https://avatars.githubusercontent.com/u/55288513?v=4", "events_url": "https://api.github.com/users/luyug/events{/privacy}", "followers_url": "https://api.github.com/users/luyug/followers", "following_url": "https://api.github.com/users/luyug/following{/other_user}", "gists_url": "https://api.github.com/users/luyug/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/luyug", "id": 55288513, "login": "luyug", "node_id": "MDQ6VXNlcjU1Mjg4NTEz", "organizations_url": "https://api.github.com/users/luyug/orgs", "received_events_url": "https://api.github.com/users/luyug/received_events", "repos_url": "https://api.github.com/users/luyug/repos", "site_admin": false, "starred_url": "https://api.github.com/users/luyug/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luyug/subscriptions", "type": "User", "url": "https://api.github.com/users/luyug" }
[]
closed
false
null
[]
null
[ "I'll take a look!" ]
2020-08-30T03:20:34Z
2020-08-31T11:15:10Z
2020-08-31T11:15:10Z
NONE
null
null
null
Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])` concurrently from multiple processes, will raise `FileExistsError` from builder's line 430, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438 Likely because multiple processes step into download_and_prepare, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/load.py#L550-L554 This can happen when launching distributed training with commands like `python -m torch.distributed.launch --nproc_per_node 4` on a new collection of files never loaded before. I can create a PR that puts in some file locks. It would be helpful if I can be informed of the convention for naming and placement of the lock.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/543/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/543/timeline
null
completed
false
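Before the file locks from #544/#547 landed, a common user-side workaround for the race in #543 above was to serialize the first (cache-building) `load_dataset` call. A minimal sketch assuming the third-party `filelock` package is available; the lock path and helper name are arbitrary and not the library's locking convention.

```python
import nlp
from filelock import FileLock  # assumption: filelock is installed

def load_csv_safely(data_files, lock_path="/tmp/nlp_local_csv.lock"):
    """Let only one process build the Arrow cache; the others wait and then reuse it."""
    with FileLock(lock_path):
        return nlp.load_dataset("csv", data_files=data_files)

# dataset = load_csv_safely(["file_1.csv", "file_2.csv"])
```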
https://api.github.com/repos/huggingface/datasets/issues/542
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/542/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/542/comments
https://api.github.com/repos/huggingface/datasets/issues/542/events
https://github.com/huggingface/datasets/pull/542
688,555,036
MDExOlB1bGxSZXF1ZXN0NDc1NzkyNTY0
542
Add TensorFlow example
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
[]
closed
false
null
[]
null
[]
2020-08-29T15:39:27Z
2020-08-31T09:49:20Z
2020-08-31T09:49:19Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/542.diff", "html_url": "https://github.com/huggingface/datasets/pull/542", "merged_at": "2020-08-31T09:49:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/542.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/542" }
Update the Quick Tour documentation in order to add the TensorFlow equivalent source code for the classification example. Now it is possible to select either the code in PyTorch or in TensorFlow in the Quick tour.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/542/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/542/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/541/comments
https://api.github.com/repos/huggingface/datasets/issues/541/events
https://github.com/huggingface/datasets/issues/541
688,521,224
MDU6SXNzdWU2ODg1MjEyMjQ=
541
Best practices for training tokenizers with nlp
{ "avatar_url": "https://avatars.githubusercontent.com/u/11806234?v=4", "events_url": "https://api.github.com/users/moskomule/events{/privacy}", "followers_url": "https://api.github.com/users/moskomule/followers", "following_url": "https://api.github.com/users/moskomule/following{/other_user}", "gists_url": "https://api.github.com/users/moskomule/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/moskomule", "id": 11806234, "login": "moskomule", "node_id": "MDQ6VXNlcjExODA2MjM0", "organizations_url": "https://api.github.com/users/moskomule/orgs", "received_events_url": "https://api.github.com/users/moskomule/received_events", "repos_url": "https://api.github.com/users/moskomule/repos", "site_admin": false, "starred_url": "https://api.github.com/users/moskomule/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moskomule/subscriptions", "type": "User", "url": "https://api.github.com/users/moskomule" }
[]
closed
false
null
[]
null
[ "Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library" ]
2020-08-29T12:06:49Z
2022-10-04T17:28:04Z
2022-10-04T17:28:04Z
NONE
null
null
null
Hi, thank you for developing this library. What do you think are the best practices for training tokenizers using `nlp`? In the document and examples, I could only find pre-trained tokenizers used.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/541/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/541/timeline
null
completed
false
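The answer to #541 above points to training a tokenizer directly from a loaded dataset. A minimal sketch of that pattern, assuming a `tokenizers` release that provides `train_from_iterator` and using `wikitext-2-raw-v1` only as an example corpus.

```python
import nlp
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=25_000, special_tokens=["[UNK]", "[PAD]"])

def batch_iterator(batch_size=1_000):
    # Stream the "text" column in batches instead of materializing it all in memory.
    for i in range(0, len(dataset), batch_size):
        yield dataset[i : i + batch_size]["text"]

tokenizer.train_from_iterator(batch_iterator(), trainer=trainer)
tokenizer.save("tokenizer.json")
```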
https://api.github.com/repos/huggingface/datasets/issues/540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/540/comments
https://api.github.com/repos/huggingface/datasets/issues/540/events
https://github.com/huggingface/datasets/pull/540
688,475,884
MDExOlB1bGxSZXF1ZXN0NDc1NzMzNzMz
540
[BUGFIX] Fix Race Dataset Checksum bug
{ "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abarbosa94", "id": 6608232, "login": "abarbosa94", "node_id": "MDQ6VXNlcjY2MDgyMzI=", "organizations_url": "https://api.github.com/users/abarbosa94/orgs", "received_events_url": "https://api.github.com/users/abarbosa94/received_events", "repos_url": "https://api.github.com/users/abarbosa94/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions", "type": "User", "url": "https://api.github.com/users/abarbosa94" }
[]
closed
false
null
[]
null
[ "I'm not sure this would fix #537 .\r\nHowever your point about the missing `middle` data is right and we probably want to include these data as well.\r\nDo you think it would we worth having different configurations for this dataset for users who want to only load part of it (`high school` or `middle` or `all`) ?", "This has fixed #537 at least on my machine hahaha.\r\n\r\nNice point! I think it would totally worth it :) What the best implementation approach would you suggest?\r\n\r\nWould it be possible to have `high school`, `middle` and `all` inside each portion of `train`, `validation` and `test`? Would this make sense?", "I think we could have one dataset configuration for `high school`, one for `middle` and one for `all`.\r\nYou just need to add\r\n```python\r\n BUILDER_CONFIGS = [\r\n nlp.BuilderConfig(\r\n name=\"high school\",\r\n description=\"insert description here\",\r\n ),\r\n nlp.BuilderConfig(\r\n name=\"middle\",\r\n description=\"insert description here\",\r\n ),\r\n nlp.BuilderConfig(\r\n name=\"all\",\r\n description=\"insert description here\",\r\n ),\r\n ]\r\n```\r\nas a class attribute for the `Race` class.\r\n\r\nThen in `generate_examples` you can check the value of `self.config.name` and choose which files to include when generating examples.\r\n\r\nYou can check [mlsum](https://github.com/huggingface/nlp/blob/master/datasets/mlsum/mlsum.py) for example if you want to see how it done in general, it's a dataset that has five configurations, and each config has train/val/test splits.", "Hi @lhoestq sorry for the delay in addressing your comments. Thanks for your assistance :)\r\n\r\nYou were correct as well, as I was using the script without the `datasets/race/dataset_infos.json` file, it did not verify the checksum. I already fix it as well :)\r\n\r\nI managed to get everything running smoothly by now. Please let me know if you think that I could improve my solution" ]
2020-08-29T07:00:10Z
2020-09-18T11:42:20Z
2020-09-18T11:42:20Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/540.diff", "html_url": "https://github.com/huggingface/datasets/pull/540", "merged_at": "2020-09-18T11:42:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/540.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/540" }
In #537 I noticed that there was a bug in checksum checking when I have tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :) Moreover, I have added some descriptions.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/540/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/540/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/539
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/539/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/539/comments
https://api.github.com/repos/huggingface/datasets/issues/539/events
https://github.com/huggingface/datasets/issues/539
688,323,602
MDU6SXNzdWU2ODgzMjM2MDI=
539
[Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data
{ "avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4", "events_url": "https://api.github.com/users/gaguilar/events{/privacy}", "followers_url": "https://api.github.com/users/gaguilar/followers", "following_url": "https://api.github.com/users/gaguilar/following{/other_user}", "gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gaguilar", "id": 5833357, "login": "gaguilar", "node_id": "MDQ6VXNlcjU4MzMzNTc=", "organizations_url": "https://api.github.com/users/gaguilar/orgs", "received_events_url": "https://api.github.com/users/gaguilar/received_events", "repos_url": "https://api.github.com/users/gaguilar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions", "type": "User", "url": "https://api.github.com/users/gaguilar" }
[]
closed
false
null
[]
null
[ "Hi @gaguilar \r\n\r\nIf you want to take care of this, it very simple, you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https://huggingface.co/nlp/share_dataset.html#adding-metadata) by [installing from source](https://huggingface.co/nlp/installation.html#installing-from-source) and running the following command from the root of the repo:\r\n```bash\r\npython nlp-cli test ./datasets/lince --save_infos --all_configs\r\n```\r\nAnd then you can open a pull-request with the updated json file.\r\n\r\nOtherwise we'll do it sometime this week.", "Hi @thomwolf \r\n\r\nThanks for the details! I just created a PR with the updated `dataset_infos.json` file (#550).", "Thanks for updating the json file. Closing this one" ]
2020-08-28T19:55:51Z
2020-09-03T16:34:02Z
2020-09-03T16:34:01Z
CONTRIBUTOR
null
null
null
Hi, There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset. How can I update the checksum of the library to solve this issue? The error is below and it also appears in the [nlp viewer](https://huggingface.co/nlp/viewer/?dataset=lince&config=lid_msaea): ```python import nlp nlp.load_dataset('lince', 'lid_msaea') ``` Output: ``` NonMatchingChecksumError: ['https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/lid_msaea.zip'] Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 196, in <module> dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None) File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp-viewer/run.py", line 150, in get builder_instance.download_and_prepare() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare download_config.force_download = download_mode == FORCE_REDOWNLOAD File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 469, in _download_and_prepare File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 36, in verify_checksums raise NonMatchingChecksumError(str(bad_urls)) ``` Thank you in advance! @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/539/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/539/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/538/comments
https://api.github.com/repos/huggingface/datasets/issues/538/events
https://github.com/huggingface/datasets/pull/538
688,015,912
MDExOlB1bGxSZXF1ZXN0NDc1MzU3MjY2
538
[logging] Add centralized logging - Bump-up cache loads to warnings
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-08-28T11:42:29Z
2020-08-31T11:42:51Z
2020-08-31T11:42:51Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/538.diff", "html_url": "https://github.com/huggingface/datasets/pull/538", "merged_at": "2020-08-31T11:42:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/538.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/538" }
Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO). You can use: ``` nlp.logging.set_verbosity(verbosity: int) nlp.logging.set_verbosity_info() nlp.logging.set_verbosity_warning() nlp.logging.set_verbosity_debug() nlp.logging.set_verbosity_error() nlp.logging.get_verbosity() -> int ``` And use the levels: ``` nlp.logging.CRITICAL nlp.logging.DEBUG nlp.logging.ERROR nlp.logging.FATAL nlp.logging.INFO nlp.logging.NOTSET nlp.logging.WARN nlp.logging.WARNING ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/538/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/538/timeline
null
null
true
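A short usage note for the `nlp.logging` module introduced in #538 above, based on the API listed in the PR description itself.

```python
import nlp

# Only show warnings and errors; per the PR, levels above INFO also disable the tqdm bars.
nlp.logging.set_verbosity_warning()
assert nlp.logging.get_verbosity() == nlp.logging.WARNING

# Re-enable informational messages (and the progress bars) when debugging cache behaviour.
nlp.logging.set_verbosity_info()
```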
https://api.github.com/repos/huggingface/datasets/issues/537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/537/comments
https://api.github.com/repos/huggingface/datasets/issues/537/events
https://github.com/huggingface/datasets/issues/537
687,614,699
MDU6SXNzdWU2ODc2MTQ2OTk=
537
[Dataset] RACE dataset Checksums error
{ "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abarbosa94", "id": 6608232, "login": "abarbosa94", "node_id": "MDQ6VXNlcjY2MDgyMzI=", "organizations_url": "https://api.github.com/users/abarbosa94/orgs", "received_events_url": "https://api.github.com/users/abarbosa94/received_events", "repos_url": "https://api.github.com/users/abarbosa94/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions", "type": "User", "url": "https://api.github.com/users/abarbosa94" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "`NonMatchingChecksumError` means that the checksum of the downloaded file is not the expected one.\r\nEither the file you downloaded was corrupted along the way, or the host updated the file.\r\nCould you try to clear your cache and run `load_dataset` again ? If the error is still there, it means that there was an update in the data, and we may have to update the expected checksum value.", "I just cleared the cache an run it again. The error persists ):\r\n\r\n```\r\n nlp (master) $ rm -rf /Users/abarbosa/.cache/huggingface/\r\n nlp (master) $ python\r\nPython 3.8.5 (default, Aug 5 2020, 03:39:04)\r\n[Clang 10.0.0 ] :: Anaconda, Inc. on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import nlp\r\n>>> dataset = nlp.load_dataset(\"race\")\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.39k/4.39k [00:00<00:00, 661kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.81k/1.81k [00:00<00:00, 644kB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset race/default (download: 84.52 MiB, generated: 132.61 MiB, post-processed: Unknown size, total: 217.13 MiB) to /Users/abarbosa/.cache/huggingface/datasets/race/default/0.1.0/5461327f1a83549ca0d845a3159c806d2baf4f8d0d8f7d657157ce7cdf3899c2...\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25.4M/25.4M [01:03<00:00, 401kB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/abarbosa/Documents/nlp/src/nlp/load.py\", line 550, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/abarbosa/Documents/nlp/src/nlp/builder.py\", line 471, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/abarbosa/Documents/nlp/src/nlp/builder.py\", line 530, in _download_and_prepare\r\n verify_checksums(\r\n File \"/Users/abarbosa/Documents/nlp/src/nlp/utils/info_utils.py\", line 38, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\nnlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz']\r\n>>>\r\n```", "Dealing with the same issue please update the checksum on nlp library end. The data seems to have changed on their end.", "We have a discussion on this datasets here: https://github.com/huggingface/nlp/pull/540\r\n\r\nFeel free to participate if you have some opinion on the scope of data which should be included in this dataset.", "At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n\r\n", "> At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. 
Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n\r\nCould you upload this please?", "> > At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n> \r\n> Could you upload this please?\r\n\r\nNot sure if I can upload it according to their license (\"You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose, any portion of the contexts and any portion of derived data.\").", "I managed to fix it in #540 :)", "Closing since @540 is merged\r\n\r\nThanks again @abarbosa94 " ]
2020-08-27T23:58:16Z
2020-09-18T12:07:04Z
2020-09-18T12:07:04Z
CONTRIBUTOR
null
null
null
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-15-8bf7603ce0ed> in <module> ----> 1 dataset = nlp.load_dataset("race") 2 len(dataset["train"]), len(dataset["validation"]) ~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 546 547 # Download and prepare data --> 548 builder_instance.download_and_prepare( 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, 550 ) ~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 460 logger.info("Dataset not on Hf google storage. Downloading and preparing it from source") 461 if not downloaded_from_gcs: --> 462 self._download_and_prepare( 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 464 ) ~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 519 # Checksums verification 520 if verify_infos: --> 521 verify_checksums( 522 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" 523 ) ~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 39 logger.info("All the checksums matched successfully" + for_verification_name) 40 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz'] ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/537/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/537/timeline
null
completed
false
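Until the corrected RACE checksums from #540 were merged, the `ignore_verifications` argument visible in the `load_dataset` signature quoted in the traceback above could be used to bypass the failing check. A sketch of that workaround; note it also disables the integrity verification it complains about.

```python
import nlp

# Temporary workaround for the stale checksum: skip verification of the downloaded files.
dataset = nlp.load_dataset("race", ignore_verifications=True)
print(len(dataset["train"]), len(dataset["validation"]))
```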
https://api.github.com/repos/huggingface/datasets/issues/536
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/536/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/536/comments
https://api.github.com/repos/huggingface/datasets/issues/536/events
https://github.com/huggingface/datasets/pull/536
687,378,332
MDExOlB1bGxSZXF1ZXN0NDc0ODE0NzY1
536
Fingerprint
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I changed the way I implemented fingerprint updates to use decorator functions.\r\n\r\nI also added a new attribute called `_inplace_history` that stores the in-place history of transforms (like cast_, rename_columns, etc.). This history is useful to replay the changes that were done in-place when unpickling a dataset that is memory mapped from a file.\r\n\r\nLet me know what you think @thomwolf " ]
2020-08-27T16:27:09Z
2020-08-31T14:20:40Z
2020-08-31T14:20:39Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/536.diff", "html_url": "https://github.com/huggingface/datasets/pull/536", "merged_at": "2020-08-31T14:20:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/536.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/536" }
This PR is a continuation of #513 , in which many in-place functions were introduced or updated (cast_, flatten_) etc. However the caching didn't handle these changes. Indeed the caching took into account only the previous cache file name of the table, and not the possible in-place transforms of the table. To fix that, I added the concept of dataset fingerprint, that is updated after each transform (in place or not), and stored inside the table metadata. When a dataset is created, an initial fingerprint is computed. If the dataset is memory-mapped, then the fingerprint generator doesn't read the table and only looks at the filename. However if the table is in-memory, then the fingerprint generator reads the content of the table using a batched non-crypto hashing. I added a utility class to compute hashes of arbitrary python objects in `fingerprint.py` : `Hasher`. The API is close to standard hashing tools (`.update`, `.hexdigest`). It also supports custom hashing functions depending on object types using a registry like pickle. I added a custom hashing function to hash a `pa.Table` in a batched way, and also for `nlp.DatasetInfo` to leverage its json serialization feature. Note about this PR: This is a draft PR because #513 needs to be merged first. The diff that is shown is for branches fingerprint -> indices (and not master, for now)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/536/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/536/timeline
null
null
true
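#536 above describes a `Hasher` with a standard `.update`/`.hexdigest` interface and a per-type registry of hashing functions. The sketch below only mimics that shape; it is not the library's implementation, and the use of `xxhash` as the non-cryptographic hash is an assumption.

```python
import pickle
import xxhash  # assumption: any fast non-cryptographic hash would do

class TinyHasher:
    """Toy stand-in for a fingerprinting hasher: per-type registry plus update/hexdigest."""
    dispatch = {}  # type -> function producing stable bytes

    def __init__(self):
        self._hash = xxhash.xxh64()

    @classmethod
    def register(cls, value_type):
        def wrapper(func):
            cls.dispatch[value_type] = func
            return func
        return wrapper

    def update(self, value):
        # Fall back to pickle for unregistered types.
        to_bytes = self.dispatch.get(type(value), pickle.dumps)
        self._hash.update(to_bytes(value))

    def hexdigest(self) -> str:
        return self._hash.hexdigest()

@TinyHasher.register(str)
def _hash_str(value: str) -> bytes:
    return value.encode("utf-8")

hasher = TinyHasher()
hasher.update("train")
hasher.update({"in_memory": False})
fingerprint = hasher.hexdigest()
```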
https://api.github.com/repos/huggingface/datasets/issues/535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/535/comments
https://api.github.com/repos/huggingface/datasets/issues/535/events
https://github.com/huggingface/datasets/pull/535
686,238,315
MDExOlB1bGxSZXF1ZXN0NDczODM3Njg0
535
Benchmarks
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-08-26T11:21:26Z
2020-08-27T08:40:00Z
2020-08-27T08:39:59Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/535.diff", "html_url": "https://github.com/huggingface/datasets/pull/535", "merged_at": "2020-08-27T08:39:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/535.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/535" }
Adding some benchmarks with DVC/CML To add a new tracked benchmark: - create a new python benchmarking script in `./benchmarks/`. The script can use the utilities in `./benchmarks/utils.py` and should output a JSON file with results in `./benchmarks/results/`. - add a new pipeline stage in [dvc.yaml](./dvc.yaml) with the name of your new benchmark. That's it
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/535/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/535/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/534/comments
https://api.github.com/repos/huggingface/datasets/issues/534/events
https://github.com/huggingface/datasets/issues/534
686,115,912
MDU6SXNzdWU2ODYxMTU5MTI=
534
`list_datasets()` is broken.
{ "avatar_url": "https://avatars.githubusercontent.com/u/314169?v=4", "events_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/events{/privacy}", "followers_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/followers", "following_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/following{/other_user}", "gists_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ashutosh-dwivedi-e3502", "id": 314169, "login": "ashutosh-dwivedi-e3502", "node_id": "MDQ6VXNlcjMxNDE2OQ==", "organizations_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/orgs", "received_events_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/received_events", "repos_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashutosh-dwivedi-e3502/subscriptions", "type": "User", "url": "https://api.github.com/users/ashutosh-dwivedi-e3502" }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nThis has been fixed in #475 and the fix will be available in the next release", "What you can do instead to get the list of the datasets is call\r\n\r\n```python\r\nprint([dataset.id for dataset in nlp.list_datasets()])\r\n```", "Thanks @lhoestq . " ]
2020-08-26T08:19:01Z
2020-08-27T06:31:11Z
2020-08-27T06:31:11Z
NONE
null
null
null
version = '0.4.0' `list_datasets()` is broken. It results in the following error : ``` In [3]: nlp.list_datasets() Out[3]: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj) 700 type_pprinters=self.type_printers, 701 deferred_pprinters=self.deferred_printers) --> 702 printer.pretty(obj) 703 printer.flush() 704 return stream.getvalue() ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj) 375 if cls in self.type_pprinters: 376 # printer registered in self.type_pprinters --> 377 return self.type_pprinters[cls](obj, self, cycle) 378 else: 379 # deferred printer ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in inner(obj, p, cycle) 553 p.text(',') 554 p.breakable() --> 555 p.pretty(x) 556 if len(obj) == 1 and type(obj) is tuple: 557 # Special case for 1-item tuples. ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj) 392 if cls is not object \ 393 and callable(cls.__dict__.get('__repr__')): --> 394 return _repr_pprint(obj, self, cycle) 395 396 return _default_pprint(obj, self, cycle) ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle) 698 """A pprint that just redirects to the normal repr function.""" 699 # Find newlines and replace them with p.break_() --> 700 output = repr(obj) 701 lines = output.splitlines() 702 with p.group(): ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/nlp/hf_api.py in __repr__(self) 110 111 def __repr__(self): --> 112 single_line_description = self.description.replace("\n", "") 113 return f"nlp.ObjectInfo(id='{self.id}', description='{single_line_description}', files={self.siblings})" 114 AttributeError: 'NoneType' object has no attribute 'replace' ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/534/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/534/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/533/comments
https://api.github.com/repos/huggingface/datasets/issues/533/events
https://github.com/huggingface/datasets/pull/533
685,585,914
MDExOlB1bGxSZXF1ZXN0NDczMjg4OTgx
533
Fix ArrayXD for pyarrow 0.17.1 by using non fixed length list arrays
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-25T15:32:44Z
2020-08-26T08:02:24Z
2020-08-26T08:02:23Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/533.diff", "html_url": "https://github.com/huggingface/datasets/pull/533", "merged_at": "2020-08-26T08:02:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/533.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/533" }
It should fix the CI problems in #513
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/533/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/533/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/532/comments
https://api.github.com/repos/huggingface/datasets/issues/532/events
https://github.com/huggingface/datasets/issues/532
685,540,614
MDU6SXNzdWU2ODU1NDA2MTQ=
532
File exists error when used with TPU
{ "avatar_url": "https://avatars.githubusercontent.com/u/20531705?v=4", "events_url": "https://api.github.com/users/go-inoue/events{/privacy}", "followers_url": "https://api.github.com/users/go-inoue/followers", "following_url": "https://api.github.com/users/go-inoue/following{/other_user}", "gists_url": "https://api.github.com/users/go-inoue/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/go-inoue", "id": 20531705, "login": "go-inoue", "node_id": "MDQ6VXNlcjIwNTMxNzA1", "organizations_url": "https://api.github.com/users/go-inoue/orgs", "received_events_url": "https://api.github.com/users/go-inoue/received_events", "repos_url": "https://api.github.com/users/go-inoue/repos", "site_admin": false, "starred_url": "https://api.github.com/users/go-inoue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/go-inoue/subscriptions", "type": "User", "url": "https://api.github.com/users/go-inoue" }
[]
open
false
null
[]
null
[ "I am facing probably facing similar issues with \r\n\r\n`wiki40b_en_100_0`", "Could you try to run `dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")` once before calling the script ?\r\n\r\nIt looks like several processes try to create the dataset in arrow format at the same time. If the dataset is already created it should be fine", "Thanks! I tested on 328MB text data on `n1-standard-8 (8 vCPUs, 30 GB memory)`. The main script ran without any issue, but it seems to require a huge space in the drive.\r\n\r\nAs suggested, I ran the following script before running the pre-training command with `xla_spawn.py`.\r\n\r\n```python\r\nfrom nlp import load_dataset\r\n\r\nfile_path=\"your_file_name\"\r\nload_dataset(\"text\", data_files=file_path, split=\"train\")\r\n```\r\nThis will create `text-train.arrow` under the default cache directory. Then, I run the script with `xla_spawn.py`. It will load data from the cached file. My understanding is that there's no other way but to do this two-step process with the current version (0.4) of `nlp`.\r\n\r\nDuring another caching process that happens in the main script:\r\n\r\n```\r\n08/26/2020 09:19:51 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 09:19:53 - INFO - nlp.arrow_dataset - Caching processed dataset at /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d/cache-f90f341e5308a7469\r\n8d872bcc88f9c0e.arrow\r\n```\r\n\r\n`nlp` generates a temporary file per core, each of which is three times larger than the original text data. If each process is actually writing on the disk, you will need a huge amount of space in your drive. (Maybe I'm missing something.)\r\n\r\n```\r\n-rw-r--r-- 1 ***** ***** 674 Aug 26 09:19 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 09:19 LICENSE\r\n-rw-r--r-- 1 ***** ***** 332M Aug 26 09:10 text-train.arrow\r\n-rw------- 1 ***** ***** 940M Aug 26 09:31 tmp0k43sazw\r\n-rw------- 1 ***** ***** 940M Aug 26 09:31 tmp7sxs9mj5\r\n-rw------- 1 ***** ***** 939M Aug 26 09:31 tmpbbiqw2vp\r\n-rw------- 1 ***** ***** 937M Aug 26 09:31 tmpjxb5ptyu\r\n-rw------- 1 ***** ***** 933M Aug 26 09:31 tmpk3hkdh0e\r\n-rw------- 1 ***** ***** 944M Aug 26 09:31 tmpnoalwftz\r\n-rw------- 1 ***** ***** 931M Aug 26 09:31 tmpuxdr_dz3\r\n-rw------- 1 ***** ***** 945M Aug 26 09:31 tmpxjyuy6dk\r\n```\r\nAfter the caching process, they seem to be merged into one file.\r\n\r\n```\r\n-rw------- 1 ***** ***** 989M Aug 26 09:32 cache-f90f341e5308a74698d872bcc88f9c0e.arrow\r\n-rw-r--r-- 1 ***** ***** 674 Aug 26 09:19 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 09:19 LICENSE\r\n-rw-r--r-- 1 ***** ***** 332M Aug 26 09:10 text-train.arrow\r\n```", "Again it looks like every process tries to tokenize the full dataset at the same time.\r\nIf you do the tokenization before calling `xla_spawn.py` once, then each process will then use the tokenized cached file `cache-f90f341e5308a74698d872bcc88f9c0e.arrow` and not recompute it.\r\n\r\nNot sure if there's a better way to do that cc @julien-c @thomwolf ", "I wrote a separate script just for preparing a cached file, including tokenization. Each process did use the tokenized cached file.\r\n\r\nCurrently I'm testing the pipeline on 24GB text data. It took about 1.5 hour to create a cached file on `n1-highmem-16 (16 vCPUs, 104 GB memory)`. 
I assume loading this cached file in the main script with `xla_spawn.py` won't be an issue (even if there are 8 processes).\r\n\r\n```\r\ntotal 98G\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 13:38 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 12:24 ..\r\n-rw------- 1 ***** ***** 74G Aug 26 13:38 cache-a7aa04134ba7b1aff5d9710f14a4e334.arrow\r\n-rw-r--r-- 1 ***** ***** 681 Aug 26 12:24 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 12:24 LICENSE\r\n-rw-r--r-- 1 ***** ***** 25G Aug 26 12:24 text-train.arrow\r\n```", "Yes loading the cached file should be fine from different processes", "Sorry, I thought it was working, but actually the second call doesn't use the cached file that was generated separately, and it will generate another cache-****.arrorw file with a different name. If I run the training script again (with `xla_spawn.py`), it will use the second cached file, which was generated by the training script itself in the previous run.\r\n\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 15:35 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:29 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 15:35 cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n-rw------- 1 ***** ***** 99M Aug 26 15:29 cache-69633651476e943b93c89ace715f9487.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 15:33 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 15:33 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:29 text-train.arrow\r\n```", "So if I understand correctly it means that the cached file generated by your separated script is different by the one used by the training script ?", "Yes.\r\n\r\n1. `cache-69633651476e943b93c89ace715f9487.arrow` was generated with a separate script. \r\n2. I ran the entire script with `xla_spawn.py`.\r\n3. `cache-69633651476e943b93c89ace715f9487.arrow` is not used.\r\n4. `cache-0d77dfce704493dbe63f071eed6a5431.arrow` is created.\r\n5. training starts...\r\n\r\nNow, if I kill the process at step 5, and do the step 2 again, it will use `cache-0d77dfce704493dbe63f071eed6a5431.arrow` (cached file created at step 4) without any issue.\r\n\r\nI used the following to generate the first cached file.\r\n```python\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\n```", "1. Here's the log from the first step.\r\n```\r\nDownloading and preparing dataset text/default-e84dd29acc4ad9ef (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nDataset text downloaded and prepared to /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d. Subsequent calls will reuse this data.\r\n```\r\nThere's a file named `cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow`, so it did create a cached file.\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 15:59 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:58 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 15:59 cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 15:58 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 15:58 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:58 text-train.arrow\r\n```\r\n2. 
Ideally, `cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow` should be used in `run_language_modeling.py` (modified version using `nlp`) with `xla_spawn.py`. But it looks like it's creating a new cached file.\r\n\r\n```\r\n08/26/2020 16:13:03 - INFO - filelock - Lock 139635836351096 released on /home/*****/.cache/huggingface/datasets/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.202fa4f84f552bff1f5400ae012663839c61efb3de068c6c8722d34ac0ea6192\r\n.py.lock\r\n08/26/2020 16:13:03 - WARNING - nlp.builder - Using custom data configuration default\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08/26/2020 16:13:03 - INFO - nlp.info - Loading Dataset info from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Reusing dataset text (/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Constructing Dataset for split train, from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08/26/2020 16:13:03 - INFO - nlp.info - Loading Dataset info from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Reusing dataset text (/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08/26/2020 16:13:03 - INFO - nlp.builder - Constructing Dataset for split train, from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:13:03 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 16:13:05 - INFO - nlp.arrow_dataset - Caching processed dataset at /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d/cache-0d77dfce704493dbe\r\n63f071eed6a5431.arrow\r\n^M 0%| | 0/100 [00:00<?, ?it/s]08/26/2020 16:13:05 - INFO - nlp.arrow_dataset - Caching processed dataset at /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6\r\nfe661fe4d070d380d/cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n```\r\n\r\nThere are two cached files in the directory:\r\n\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 16:14 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:58 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 16:14 cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n-rw------- 1 ***** ***** 99M Aug 26 15:59 cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 16:13 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 16:13 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:58 text-train.arrow\r\n```\r\n\r\nIf I kill the process, and run it again, it will use the second cached file.\r\n\r\n```\r\n08/26/2020 16:19:52 - WARNING - nlp.builder - 
Using custom data configuration default\r\n08/26/2020 16:19:52 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08/26/2020 16:19:52 - INFO - nlp.info - Loading Dataset info from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:19:52 - INFO - nlp.builder - Reusing dataset text (/home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08/26/2020 16:19:52 - INFO - nlp.builder - Constructing Dataset for split train, from /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08/26/2020 16:19:52 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08/26/2020 16:19:53 - INFO - nlp.arrow_dataset - Loading cached processed dataset at /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d/cache-0d77dfce70\r\n4493dbe63f071eed6a5431.arrow\r\n08/26/2020 16:19:53 - INFO - nlp.arrow_dataset - Set __getitem__(key) output type to torch for ['input_ids'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\n```", "Thanks for all the details.\r\nThe two cached files are supposed to be the same. I suspect that the caching has a problem with the tokenizer.\r\nWhich tokenizer did you use ?", "I trained a byte-level BPE tokenizer on my data with `tokenziers` library following this [example](https://github.com/huggingface/tokenizers/blob/master/bindings/python/examples/train_bytelevel_bpe.py).\r\n\r\nAnd I put these model files in a directory named `\"model_name\"`. I also put config.json, which is the original RoBERTa config file.\r\n\r\n```bash\r\n%ls model_name\r\nconfig.json merges.txt vocab.json\r\n```\r\n\r\n[This](https://github.com/huggingface/transformers/blob/4bd7be9a4268221d2a0000c7e8033aaeb365c03b/examples/language-modeling/run_language_modeling.py#L196) is the line where `run_language_modeling.py` loads the tokenier.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n```\r\n\r\nI use `\"model_name\"` for `model_args.tokenizer_name`. I don't specify `model_args.cache_dir`. It is 'None' by default.", "In my separated script for caching, I'm using `use_fast=True` when initializing a tokenizer.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(args.config_name, use_fast=True)\r\n```\r\nI wasn't using that option in the main script. That could be the reason...", "Yea it could definitely explain why you have two different cache files.\r\nLet me know if using the same tokenizers on both sides fixes the issue", "It still creates a new file even if I remove `use_fast=True`... 
\r\n\r\nHere's the script used to create a cached file.\r\n```python \r\n#!/usr/bin/env python3\r\n\r\nimport argparse\r\n\r\nfrom transformers import AutoTokenizer\r\n\r\nfrom nlp import load_dataset\r\n\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description='description')\r\n parser.add_argument('--config_name', type=str, help='Pretrained config name or path if not the same as model_name')\r\n parser.add_argument('--data_file', type=str, help='The input data file (a text file).')\r\n parser.add_argument('--block_size', type=int, default=-1, help='The training dataset will be truncated in block of this size for training')\r\n args = parser.parse_args()\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(args.config_name)\r\n\r\n dataset = load_dataset(\"text\", data_files=args.data_file, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nHere's how the data is loaded in the modified `run_language_modeling.py`. [[original function](https://github.com/huggingface/transformers/blob/971d1802d009d9996b36a34a34477cee849ef39f/examples/language-modeling/run_language_modeling.py#L128-L135)]\r\n\r\n```python\r\ndef get_dataset(args: DataTrainingArguments, tokenizer: PreTrainedTokenizer, evaluate=False):\r\n file_path = args.eval_data_file if evaluate else args.train_data_file\r\n split = \"validation\" if evaluate else \"train\"\r\n if args.line_by_line:\r\n # return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n return dataset\r\n\r\n else:\r\n return TextDataset(\r\n tokenizer=tokenizer, file_path=file_path, block_size=args.block_size, overwrite_cache=args.overwrite_cache\r\n )\r\n```\r\n\r\nProbably I don't need this part in the main script,\r\n\r\n```python\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n```\r\nand simply do this?\r\n```python\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\nreturn dataset\r\n```", "You need this part in the main script or it will use the dataset that is not tokenized\r\n\r\n", "I can see that the tokenizer in `run_language_modeling.py` is not instantiated the same way as in your separated script.\r\nIndeed we can see L196:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n```\r\nCould you try to make it so they are instantiated the exact same way please ?", "I updated my separated script, but it's creating a cached file again. 
If I don't use the `model_args.cache_dir`, both will get `None`, so they should be the same.\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport argparse\r\n\r\nfrom transformers import AutoTokenizer\r\nfrom nlp import load_dataset\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description='description')\r\n parser.add_argument('--tokenizer_name', type=str, help='Pretrained tokenizer name or path if not the same as model_name')\r\n parser.add_argument('--data_file', type=str, help='The input data file (a text file).')\r\n parser.add_argument('--cache_dir', type=str, default=None, help='Where do you want to store the pretrained models downloaded from s3')\r\n parser.add_argument('--block_size', type=int, default=-1, help='The training dataset will be truncated in block of this size for training')\r\n\r\n model_args = parser.parse_args()\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n\r\n dataset = load_dataset(\"text\", data_files=model_args.data_file, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=model_args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nIs there a way to specify the cache file to load, and skip the re-computation?", "Could you also check that the `args.block_size` used in the lambda function is the same as well ?", "Here's a minimal working example to reproduce this issue.\r\n\r\nAssumption:\r\n- You have access to TPU.\r\n- You have installed `transformers` and `nlp`.\r\n- You have tokenizer files (`config.json`, `merges.txt`, `vocab.json`) under the directory named `model_name`.\r\n- You have `xla_spawn.py` (Download from https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py).\r\n- You have saved the following script as `prepare_cached_dataset.py`.\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport argparse\r\nfrom transformers import AutoTokenizer\r\nfrom nlp import load_dataset\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description='description')\r\n parser.add_argument('--tokenizer_name', type=str, help='Pretrained tokenizer name or path if not the same as model_name')\r\n parser.add_argument('--data_file', type=str, help='The input data file (a text file).')\r\n parser.add_argument('--cache_dir', type=str, default=None, help='Where do you want to store the pretrained models downloaded from s3')\r\n parser.add_argument('--block_size', type=int, default=-1, help='The training dataset will be truncated in block of this size for training')\r\n parser.add_argument('--tpu_num_cores', type=int, default=1, help='Number of TPU cores to use (1 or 8). For xla_apwan.py')\r\n model_args = parser.parse_args()\r\n \r\n tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir, use_fast=True)\r\n \r\n dataset = load_dataset(\"text\", data_files=model_args.data_file, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=model_args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n\r\ndef _mp_fn(index):\r\n # For xla_spawn (TPUs)\r\n main()\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n- Run the following command. 
Replace `your_training_data` with some text file.\r\n\r\n```bash\r\nexport TRAIN_DATA=your_training_data\r\n\r\npython prepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n- Check the cached directory.\r\n```bash\r\nls -lha /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 132M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:08 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:08 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:08 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n\r\n- Run the same script again. (The output should be just `Using custom data configuration default`.)\r\n```\r\npython prepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n- Check the cached directory.\r\n```bash\r\nls -lha /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 132M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:08 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:20 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:20 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n- The cached file (`cache-bfc7cb0702426d19242db5e8c079f04b.arrow`) is reused.\r\n- Now, run this script with `xla_spawn.py`. Ideally, it should reuse the cached file, however, you will see each process is creating a cache file again.\r\n\r\n```bash\r\npython xla_spawn.py --num_cores 8 \\\r\nprepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n\r\n- Check the cached directory. There are two arrrow files.\r\n```bash\r\nls -lha /home/*****/.cache/huggingface/datasets/text/default-e84dd29acc4ad9ef/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 230M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:25 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw------- 1 ***** ***** 99M Aug 28 13:25 cache-e0e2313e49c8a110aafcc8133154c19a.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:24 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:24 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n", "I ended up specifying the `cache_file_name` argument when I call `map` function.\r\n\r\n```python\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True, truncation=True, max_length=args.block_size),\r\n batched=True,\r\n cache_file_name=cache_file_name)\r\n```\r\n\r\nNote:\r\n- `text` dataset in `nlp` does not strip `\"\\n\"`. If you want the same output as in [`LineByLineTextDataset`](https://github.com/huggingface/transformers/blob/afc4ece462ad83a090af620ff4da099a0272e171/src/transformers/data/datasets/language_modeling.py#L88-L111), you would need to create your own dataset class where you replace `line` to `line.strip()` [here](https://github.com/huggingface/nlp/blob/master/datasets/text/text.py#L35).\r\n" ]
2020-08-25T14:36:38Z
2020-09-01T12:14:56Z
null
NONE
null
null
null
Hi, I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8). I modified [line 131 in the original `run_language_modeling.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L131) as follows: ```python # line 131: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size) dataset = load_dataset("text", data_files=file_path, split="train") dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True) dataset.set_format(type='torch', columns=['input_ids']) return dataset ``` When I run this with [`xla_spawn.py`](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py), I get the following error (it produces one message per core in TPU, which I believe is fine). It seems the current version doesn't take into account distributed training processes as in [this example](https://github.com/huggingface/transformers/blob/a573777901e662ec2e565be312ffaeedef6effec/src/transformers/data/datasets/language_modeling.py#L35-L38)? ``` 08/25/2020 13:59:41 - WARNING - nlp.builder - Using custom data configuration default 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... 
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Exception in device=TPU:6: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Exception in device=TPU:4: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Exception in device=TPU:1: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Exception in device=TPU:7: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Exception in device=TPU:3: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... 
Exception in device=TPU:2: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Exception in device=TPU:0: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in 
incomplete_dir os.makedirs(tmp_dir) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) File 
"/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) Traceback (most recent call last): FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/532/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/532/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/531/comments
https://api.github.com/repos/huggingface/datasets/issues/531/events
https://github.com/huggingface/datasets/pull/531
685,291,036
MDExOlB1bGxSZXF1ZXN0NDczMDM4ODc4
531
add concatenate_datasets to the docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-25T08:40:05Z
2020-08-25T09:02:20Z
2020-08-25T09:02:19Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/531.diff", "html_url": "https://github.com/huggingface/datasets/pull/531", "merged_at": "2020-08-25T09:02:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/531.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/531" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/531/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/531/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/530
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/530/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/530/comments
https://api.github.com/repos/huggingface/datasets/issues/530/events
https://github.com/huggingface/datasets/pull/530
684,825,612
MDExOlB1bGxSZXF1ZXN0NDcyNjQ5NTk2
530
use ragged tensor by default
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Yes I agree. Maybe something that lets specify different format depending on the column ? Especially to better control dtype and shape (and ragged for tf)\r\n\r\nOh and I forgot: this one should also fix the second issue found in #477 for the next release", "I am running into the same issue with the error message on my local windows machine -\r\nAttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_tensor'. Tensorflow version is 2.6. Anything that I can do to fix it?\r\ntrain_features = {x: tf_train_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\ntrain_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset[\"label\"]))\r\ntrain_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)\r\n\r\neval_features = {x: tf_eval_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\neval_tf_dataset = tf.data.Dataset.from_tensor_slices((eval_features, tf_eval_dataset[\"label\"]))\r\neval_tf_dataset = eval_tf_dataset.batch(8)\r\n\r\nttributeError Traceback (most recent call last)\r\n<ipython-input-59-f50e45c2c0dc> in <module>\r\n----> 1 train_features = {x: tf_train_dataset[x].convert_to_tensor() for x in tokenizer.model_input_names}\r\n 2 train_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset[\"label\"]))\r\n 3 train_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)\r\n 4 \r\n 5 eval_features = {x: tf_eval_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\n\r\n<ipython-input-59-f50e45c2c0dc> in <dictcomp>(.0)\r\n----> 1 train_features = {x: tf_train_dataset[x].convert_to_tensor() for x in tokenizer.model_input_names}\r\n 2 train_tf_dataset = tf.data.Dataset.from_tensor_slices((train_features, tf_train_dataset[\"label\"]))\r\n 3 train_tf_dataset = train_tf_dataset.shuffle(len(tf_train_dataset)).batch(8)\r\n 4 \r\n 5 eval_features = {x: tf_eval_dataset[x].to_tensor() for x in tokenizer.model_input_names}\r\n\r\n~\\AppData\\Roaming\\Python\\Python38\\site-packages\\tensorflow\\python\\framework\\ops.py in __getattr__(self, name)\r\n 399 from tensorflow.python.ops.numpy_ops import np_config\r\n 400 np_config.enable_numpy_behavior()\"\"\".format(type(self).__name__, name))\r\n--> 401 self.__getattribute__(name)\r\n 402 \r\n 403 @staticmethod\r\n\r\nAttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'convert_to_tensor'\r\n\r\n", "Hi ! Before calling `to_tensor`, make sure that your object is a RaggedTensor, because it may already be a regular Tensor if the shapes of your examples are all the same", "Okay. i am not familiar with how to check the difference between the two. I will research on this." ]
2020-08-24T17:06:15Z
2021-10-22T19:38:40Z
2020-08-24T19:22:25Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/530.diff", "html_url": "https://github.com/huggingface/datasets/pull/530", "merged_at": "2020-08-24T19:22:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/530.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/530" }
I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow. Previously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of examples you take), which made things difficult to handle, as it could sometimes return a ragged tensor and sometimes not. Therefore I reverted this behavior to always return a ragged tensor as we used to do.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/530/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/530/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/529
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/529/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/529/comments
https://api.github.com/repos/huggingface/datasets/issues/529/events
https://github.com/huggingface/datasets/pull/529
684,797,157
MDExOlB1bGxSZXF1ZXN0NDcyNjI2MDY4
529
Add MLSUM
{ "avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4", "events_url": "https://api.github.com/users/RachelKer/events{/privacy}", "followers_url": "https://api.github.com/users/RachelKer/followers", "following_url": "https://api.github.com/users/RachelKer/following{/other_user}", "gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RachelKer", "id": 36986299, "login": "RachelKer", "node_id": "MDQ6VXNlcjM2OTg2Mjk5", "organizations_url": "https://api.github.com/users/RachelKer/orgs", "received_events_url": "https://api.github.com/users/RachelKer/received_events", "repos_url": "https://api.github.com/users/RachelKer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions", "type": "User", "url": "https://api.github.com/users/RachelKer" }
[]
closed
false
null
[]
null
[ "Could you test to run the test using the changes in #527 and let me know if it fixes the issue ? If so I'll merge it and we'll be good to go :)", "Hello, it does work on the fixing real dataset branch. Merci Quentin :)", "Nice, glad to hear that :)\r\nde rien !" ]
2020-08-24T16:18:35Z
2020-08-26T08:04:11Z
2020-08-26T08:04:11Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/529.diff", "html_url": "https://github.com/huggingface/datasets/pull/529", "merged_at": "2020-08-26T08:04:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/529.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/529" }
Hello (again :) !), So, I started a new branch because of a [rebase issue](https://github.com/huggingface/nlp/pull/463), sorry for the mess. However, the command `pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mlsum` still fails because there is no default language dataset : the script throws an error as a specific config language is necessary. I think that setting a default language would be a bad workaround for this so I kept it as it is. Putting all the train files across languages together would also be a bad idea because of the size. Thanks for your help, Rachel
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/529/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/529/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/528
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/528/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/528/comments
https://api.github.com/repos/huggingface/datasets/issues/528/events
https://github.com/huggingface/datasets/pull/528
684,673,673
MDExOlB1bGxSZXF1ZXN0NDcyNTIzNDI1
528
fix missing variable names in docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The problem came from `default: ` that is rendered differently and hides the parameter names. I changed `default: ...` to `defaults to ...`" ]
2020-08-24T13:31:48Z
2020-08-25T09:04:04Z
2020-08-25T09:04:03Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/528.diff", "html_url": "https://github.com/huggingface/datasets/pull/528", "merged_at": "2020-08-25T09:04:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/528.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/528" }
fix #524
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/528/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/528/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/527
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/527/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/527/comments
https://api.github.com/repos/huggingface/datasets/issues/527/events
https://github.com/huggingface/datasets/pull/527
684,632,930
MDExOlB1bGxSZXF1ZXN0NDcyNDg4MzUy
527
Fix config used for slow test on real dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-24T12:39:34Z
2020-08-25T09:20:45Z
2020-08-25T09:20:44Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/527.diff", "html_url": "https://github.com/huggingface/datasets/pull/527", "merged_at": "2020-08-25T09:20:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/527.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/527" }
As noticed in #470, #474, #476 and #504, the slow test `test_load_real_dataset` couldn't run on datasets that require config parameters. To fix that, I replaced it with one test that uses the first config of BUILDER_CONFIGS (`test_load_real_dataset`), and another test that runs all of the configs in BUILDER_CONFIGS (`test_load_real_dataset_all_configs`).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/527/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/527/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/526
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/526/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/526/comments
https://api.github.com/repos/huggingface/datasets/issues/526/events
https://github.com/huggingface/datasets/pull/526
684,615,455
MDExOlB1bGxSZXF1ZXN0NDcyNDczNjcw
526
Returning None instead of "python" if dataset is unformatted
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" }
[]
closed
false
null
[]
null
[ "We have to change the tests to expect `None` instead of `python` then", "Merging!" ]
2020-08-24T12:10:35Z
2020-08-24T12:50:43Z
2020-08-24T12:50:42Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/526.diff", "html_url": "https://github.com/huggingface/datasets/pull/526", "merged_at": "2020-08-24T12:50:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/526.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/526" }
Following the discussion on Slack, this small fix ensures that calling `dataset.set_format(type=dataset.format["type"])` works properly. Slightly breaking as calling `dataset.format` when the dataset is unformatted will return `None` instead of `python`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/526/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/526/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/525
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/525/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/525/comments
https://api.github.com/repos/huggingface/datasets/issues/525/events
https://github.com/huggingface/datasets/issues/525
683,875,483
MDU6SXNzdWU2ODM4NzU0ODM=
525
wmt download speed example
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
[]
closed
false
null
[]
null
[ "Thanks for creating the issue :)\r\nThe download link for wmt-en-de raw looks like a mirror. We should use that instead of the current url.\r\nIs this mirror official ?\r\n\r\nAlso it looks like for `ro-en` it tried to download other languages. If we manage to only download the one that is asked it'd be cool\r\n\r\nAlso cc @patrickvonplaten ", "Mirror is not official.", "Shall we host the files ourselves or it is fine to use this mirror in your opinion ?", "Should we add an argument in `load_dataset` to override some URL with a custom URL (e.g. mirror) or a local path?\r\n\r\nThis could also be used to provide local files instead of the original files as requested by some users (e.g. when you made a dataset with the same format than SQuAD and what to use it instead of the official dataset files).", "@lhoestq I think we should host it ourselves. I'll put the subset of wmt (without preprocessed files) that we need on s3 and post a link over the weekend.", "Is there a solution yet? The download speed is still too slow. 60-70kbps download for wmt16 and around 100kbps for wmt19. @sshleifer ", "I'm working on mirror links which will provide high download speed :)\r\nSee https://github.com/huggingface/datasets/issues/1892", "Resolved via https://github.com/huggingface/datasets/pull/1912" ]
2020-08-21T23:29:06Z
2022-10-04T17:45:39Z
2022-10-04T17:45:39Z
CONTRIBUTOR
null
null
null
Continuing from the slack 1.0 roadmap thread with @lhoestq, I realized the slow downloads are only an issue sometimes. Here are a few examples; I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine. ``` import nlp nlp.load_dataset('wmt16', 'de-en') ``` Downloads at 49.1 KB/s. Whereas ``` pip install gdown # download from google drive !gdown https://drive.google.com/uc?id=1iO7um-HWoNoRKDtw27YUSgyeubn9uXqj ``` Downloads at 127 MB/s. (The file is a copy of wmt-en-de raw). ``` nlp.load_dataset('wmt16', 'ro-en') ``` goes at 27 MB/s, much faster. If we wget the same data from s3, the download speed is the same, but the file is ¼ the size: ``` wget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_ro_packed_200_rand.tgz ``` Finally, ``` nlp.load_dataset('wmt19', 'zh-en') ``` Starts fast, but broken. (duplicate of #493)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/525/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/525/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/524/comments
https://api.github.com/repos/huggingface/datasets/issues/524/events
https://github.com/huggingface/datasets/issues/524
683,686,359
MDU6SXNzdWU2ODM2ODYzNTk=
524
Some docs are missing parameter names
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
[]
closed
false
null
[]
null
[ "Indeed, good catch!" ]
2020-08-21T16:47:34Z
2020-08-25T09:04:03Z
2020-08-25T09:04:03Z
CONTRIBUTOR
null
null
null
See https://huggingface.co/nlp/master/package_reference/main_classes.html#nlp.Dataset.map. I believe this is because the parameter names are enclosed in backticks in the docstrings, maybe it's an old docstring format that doesn't work with the current Sphinx version.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/524/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/524/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/523
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/523/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/523/comments
https://api.github.com/repos/huggingface/datasets/issues/523/events
https://github.com/huggingface/datasets/pull/523
682,573,232
MDExOlB1bGxSZXF1ZXN0NDcwNzkxMjA1
523
Speed up Tokenization by optimizing cast_to_python_objects
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I took your comments into account and added tests for `cast_to_python_objects`" ]
2020-08-20T09:42:02Z
2020-08-24T08:54:15Z
2020-08-24T08:54:14Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/523.diff", "html_url": "https://github.com/huggingface/datasets/pull/523", "merged_at": "2020-08-24T08:54:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/523.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/523" }
I changed how `cast_to_python_objects` works to make it faster. It is used to cast numpy/pytorch/tensorflow/pandas objects to python lists, and it works recursively. To avoid iterating over possibly long lists, it first checks if the first element that is not None has to be cast. If the first element needs to be cast, then all the elements of the list will be cast, otherwise they'll stay the same. This trick allows casting objects that contain tokenizer outputs without iterating over every single token, for example. Speed improvement: ```python import transformers import nlp tok = transformers.BertTokenizerFast.from_pretrained("bert-base-uncased") txt = ["a " * 512] * 1000 dataset = nlp.Dataset.from_dict({"txt": txt}) # Tokenization using .map is now faster. Previously it was taking 3.5s %time _ = dataset.map(lambda x: tok(x["txt"]), batched=True, load_from_cache_file=False) # 450ms # for comparison %time _ = tok(txt) # 280ms ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/523/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/523/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/522/comments
https://api.github.com/repos/huggingface/datasets/issues/522/events
https://github.com/huggingface/datasets/issues/522
682,478,833
MDU6SXNzdWU2ODI0Nzg4MzM=
522
dictionnary typo in docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4", "events_url": "https://api.github.com/users/yonigottesman/events{/privacy}", "followers_url": "https://api.github.com/users/yonigottesman/followers", "following_url": "https://api.github.com/users/yonigottesman/following{/other_user}", "gists_url": "https://api.github.com/users/yonigottesman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yonigottesman", "id": 4004127, "login": "yonigottesman", "node_id": "MDQ6VXNlcjQwMDQxMjc=", "organizations_url": "https://api.github.com/users/yonigottesman/orgs", "received_events_url": "https://api.github.com/users/yonigottesman/received_events", "repos_url": "https://api.github.com/users/yonigottesman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yonigottesman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yonigottesman/subscriptions", "type": "User", "url": "https://api.github.com/users/yonigottesman" }
[]
closed
false
null
[]
null
[ "Thanks!" ]
2020-08-20T07:11:05Z
2020-08-20T07:52:14Z
2020-08-20T07:52:13Z
CONTRIBUTOR
null
null
null
In many places dictionary is spelled dictionnary; not sure if it's on purpose or not. Fixed in this PR: https://github.com/huggingface/nlp/pull/521
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/522/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/522/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/521
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/521/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/521/comments
https://api.github.com/repos/huggingface/datasets/issues/521/events
https://github.com/huggingface/datasets/pull/521
682,477,648
MDExOlB1bGxSZXF1ZXN0NDcwNzEyNzgz
521
Fix dictionnary (dictionary) typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4", "events_url": "https://api.github.com/users/yonigottesman/events{/privacy}", "followers_url": "https://api.github.com/users/yonigottesman/followers", "following_url": "https://api.github.com/users/yonigottesman/following{/other_user}", "gists_url": "https://api.github.com/users/yonigottesman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yonigottesman", "id": 4004127, "login": "yonigottesman", "node_id": "MDQ6VXNlcjQwMDQxMjc=", "organizations_url": "https://api.github.com/users/yonigottesman/orgs", "received_events_url": "https://api.github.com/users/yonigottesman/received_events", "repos_url": "https://api.github.com/users/yonigottesman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yonigottesman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yonigottesman/subscriptions", "type": "User", "url": "https://api.github.com/users/yonigottesman" }
[]
closed
false
null
[]
null
[ "Hahah thanks Yonatan. It was not on purpose, we are just not very good at spelling :)" ]
2020-08-20T07:09:02Z
2020-08-20T07:52:04Z
2020-08-20T07:52:04Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/521.diff", "html_url": "https://github.com/huggingface/datasets/pull/521", "merged_at": "2020-08-20T07:52:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/521.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/521" }
This error happens many times, so I'm thinking maybe it's spelled like this on purpose?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/521/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/521/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/520
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/520/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/520/comments
https://api.github.com/repos/huggingface/datasets/issues/520/events
https://github.com/huggingface/datasets/pull/520
682,264,839
MDExOlB1bGxSZXF1ZXN0NDcwNTI4MDE0
520
Transform references for sacrebleu
{ "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jbragg", "id": 2238344, "login": "jbragg", "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "organizations_url": "https://api.github.com/users/jbragg/orgs", "received_events_url": "https://api.github.com/users/jbragg/received_events", "repos_url": "https://api.github.com/users/jbragg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "type": "User", "url": "https://api.github.com/users/jbragg" }
[]
closed
false
null
[]
null
[ "I think I agree @lhoestq so I pushed a change.\r\nThanks for your work on the library!" ]
2020-08-20T00:26:55Z
2020-08-20T09:30:54Z
2020-08-20T09:30:53Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/520.diff", "html_url": "https://github.com/huggingface/datasets/pull/520", "merged_at": "2020-08-20T09:30:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/520.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/520" }
Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and references. If one uses a more standard format where predictions and references are lists of the same length, sacrebleu throws an error. This PR transforms reference data in a more standard format into the [unusual format](https://github.com/mjpost/sacreBLEU#using-sacrebleu-from-python) expected by sacrebleu.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/520/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/520/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/519
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/519/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/519/comments
https://api.github.com/repos/huggingface/datasets/issues/519/events
https://github.com/huggingface/datasets/issues/519
682,193,882
MDU6SXNzdWU2ODIxOTM4ODI=
519
[BUG] Metrics throwing new error on master since 0.4.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jbragg", "id": 2238344, "login": "jbragg", "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "organizations_url": "https://api.github.com/users/jbragg/orgs", "received_events_url": "https://api.github.com/users/jbragg/received_events", "repos_url": "https://api.github.com/users/jbragg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "type": "User", "url": "https://api.github.com/users/jbragg" }
[]
closed
false
null
[]
null
[ "Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric", "Closing - seems to be just forgetting to tokenize. And found the helpful discussion in huggingface/evaluate#105 " ]
2020-08-19T21:29:15Z
2022-06-02T16:41:01Z
2020-08-19T22:04:40Z
CONTRIBUTOR
null
null
null
The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu. Wasn't happening on 0.4.0 but happening now on master. ``` File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute self.add_batch(predictions=predictions, references=references) File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 242, in add_batch batch = self.info.features.encode_batch(batch) File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in encode_batch encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column] File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in <listcomp> encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column] File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 456, in encode_nested_example raise ValueError("Got a string but expected a list instead: '{}'".format(obj)) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/519/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/519/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/518/comments
https://api.github.com/repos/huggingface/datasets/issues/518/events
https://github.com/huggingface/datasets/pull/518
682,131,165
MDExOlB1bGxSZXF1ZXN0NDcwNDE0ODE1
518
[METRICS, breaking] Refactor caching behavior, pickle/cloudpickle metrics and dataset, add tests on metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[ "(test failure is unrelated)", "As discussed with @thomwolf merging since the hyperparameter-search has been merged in transformers." ]
2020-08-19T19:43:08Z
2020-08-24T16:01:40Z
2020-08-24T16:01:39Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/518.diff", "html_url": "https://github.com/huggingface/datasets/pull/518", "merged_at": "2020-08-24T16:01:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/518.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/518" }
Move the acquisition of the filelock to a later stage during metrics processing so it can be pickled/cloudpickled after instantiation. Also add some tests on pickling, concurrent but separate metric instances, and concurrent and distributed metric instances. This significantly changes the caching behavior for the metrics: - if the metric is used in a non-distributed setup (most common case) we try to find a free cache file using a UUID instead of asking for an `experiment_id` if we can't lock the cache file; this allows using several instances of the same metric in parallel. - if the metric is used in a distributed setup we ask for an `experiment_id` if we can't lock the cache file (because all the nodes need to have related cache file names for the final sync). - after the computation, we free the locks and delete all the cache files. Breaking: Some arguments for Metrics initialization have been removed for simplicity (`version`...) and some have been renamed for consistency with the rest of the library (`in_memory` => `keep_in_memory`). Also remove the `_has_transformers` detection in utils to avoid importing transformers every time during loading.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/518/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/518/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/517/comments
https://api.github.com/repos/huggingface/datasets/issues/517/events
https://github.com/huggingface/datasets/issues/517
681,896,944
MDU6SXNzdWU2ODE4OTY5NDQ=
517
add MLDoc dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[ "Any updates on this?", "This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies." ]
2020-08-19T14:41:59Z
2021-08-03T05:59:33Z
null
CONTRIBUTOR
null
null
null
Hi, I am recommending that someone add MLDoc, a multilingual news topic classification dataset. - Here's a link to the Github: https://github.com/facebookresearch/MLDoc - and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf Looks like the dataset contains news stories in multiple languages that can be classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). There are 13 languages: Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/517/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/517/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/516/comments
https://api.github.com/repos/huggingface/datasets/issues/516/events
https://github.com/huggingface/datasets/pull/516
681,846,032
MDExOlB1bGxSZXF1ZXN0NDcwMTY5NTA0
516
[Breaking] Rename formated to formatted
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-19T13:35:23Z
2020-08-20T08:41:17Z
2020-08-20T08:41:16Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/516.diff", "html_url": "https://github.com/huggingface/datasets/pull/516", "merged_at": "2020-08-20T08:41:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/516.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/516" }
`formated` is not correct but `formatted` is
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/516/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/516/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/515/comments
https://api.github.com/repos/huggingface/datasets/issues/515/events
https://github.com/huggingface/datasets/pull/515
681,845,619
MDExOlB1bGxSZXF1ZXN0NDcwMTY5MTQ0
515
Fix batched map for formatted dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-19T13:34:50Z
2020-08-20T20:30:43Z
2020-08-20T20:30:42Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/515.diff", "html_url": "https://github.com/huggingface/datasets/pull/515", "merged_at": "2020-08-20T20:30:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/515.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/515" }
If you had a dataset formatted as numpy, for example, and tried to do a batched map, then it would crash because one of the elements from the inputs was missing for unchanged columns (e.g. a batch of length 999 instead of 1000). This happened during the creation of the `pa.Table`, since columns had different lengths.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/515/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/515/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/514/comments
https://api.github.com/repos/huggingface/datasets/issues/514/events
https://github.com/huggingface/datasets/issues/514
681,256,348
MDU6SXNzdWU2ODEyNTYzNDg=
514
dataset.shuffle(keep_in_memory=True) is never allowed
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" }, { "color": "DF8D62", "default": false, "description": "", "id": 4614514401, "name": "hacktoberfest", "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest" } ]
closed
false
null
[]
null
[ "This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf ", "Maybe I'm a bit tired but I fail to see the issue here.\r\n\r\nSince `cache_file_name` is `None` by default, if you set `keep_in_memory` to `True`, the assert should pass, no?", "I failed to realise that this only applies to `shuffle()`. Whenever `keep_in_memory` is set to True, this is passed on to the `select()` function. However, if `cache_file_name` is None, it will be defined in the `shuffle()` function before it is passed on to `select()`. \r\n\r\nThus, `select()` is called with `keep_in_memory=True` and a not None value for `cache_file_name`. \r\nThis is essentially fixed in #513 \r\n\r\nEasily reproducible:\r\n```python\r\n>>> import nlp\r\n>>> data = nlp.load_dataset(\"cosmos_qa\", split=\"train\")\r\nUsing custom data configuration default\r\n>>> data.shuffle(keep_in_memory=True)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 1398, in shuffle\r\n verbose=verbose,\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 1178, in select\r\n ), \"Please use either `keep_in_memory` or `cache_file_name` but not both.\"\r\nAssertionError: Please use either `keep_in_memory` or `cache_file_name` but not both.\r\n>>>data.select([0], keep_in_memory=True)\r\n# No error\r\n```", "Oh yes ok got it thanks. Should be fixed if we are happy with #513 indeed.", "My bad. This is actually not fixed in #513. Sorry about that...\r\nThe new `indices_cache_file_name` is set to a non-None value in the new `shuffle()` as well. \r\n\r\nThe buffer and caching mechanisms used in the `select()` function are too intricate for me to understand why the check is there at all. I've removed it in my local build and it seems to be working fine for my project, without really considering other implications of the change. \r\n\r\n", "Ok I'll investigate and add a series of tests on the `keep_in_memory=True` settings which is under-tested atm", "Hey, still seeing this issue with the latest version.", "The same :(", "These are the steps needed to fix this issue:\r\n1. add the following check to `Dataset.shuffle`:\r\n```python\r\nif keep_in_memory and indices_cache_file_name is not None:\r\n raise ValueError(\"Please use either `keep_in_memory` or `indices_cache_file_name` but not both.\")\r\n```\r\n2. set `indices_cache_file_name` to `None` if `keep_in_memory` is True in the call to `select`\r\n3. add a test with `shuffle(keep_in_memory=True)`", "Hi @mariosasko , I have opened this PR #5082 " ]
2020-08-18T18:47:40Z
2022-10-10T12:21:58Z
2022-10-10T12:21:58Z
CONTRIBUTOR
null
null
null
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)` The commit added the lines ```python # lines 994-996 in src/nlp/arrow_dataset.py assert ( not keep_in_memory or cache_file_name is None ), "Please use either `keep_in_memory` or `cache_file_name` but not both." ``` This affects both `shuffle()`, as `select()` is a sub-routine, and `map()`, which has the same check. I'd love to fix this myself, but I'm unsure what the intention of the assert is given the rest of the logic in the function concerning `cache_file_name` and `keep_in_memory`.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/514/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/514/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/513
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/513/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/513/comments
https://api.github.com/repos/huggingface/datasets/issues/513/events
https://github.com/huggingface/datasets/pull/513
681,215,612
MDExOlB1bGxSZXF1ZXN0NDY5NjQxMjg1
513
[speedup] Use indices mappings instead of deepcopy for all the samples reordering methods
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[ "Ok I fixed `concatenate_datasets` and added tests\r\nFeel free to merge if it's good for you @thomwolf ", "Ok, adding some benchmarks for map/filters and then I'll merge", "Warning from pytorch that we should maybe consider at some point @lhoestq:\r\n```\r\n/__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,\r\nand PyTorch does not support non-writeable tensors. This means you can write to the underlying\r\n(supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to\r\nprotect its data or make it writeable before converting it to a tensor. This type of warning will be\r\nsuppressed for the rest of this program.\r\n(Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)\r\n532\r\n return torch.tensor(x, **format_kwargs)\r\n```", "> Warning from pytorch that we should maybe consider at some point @lhoestq:\r\n> \r\n> ```\r\n> /__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,\r\n> and PyTorch does not support non-writeable tensors. This means you can write to the underlying\r\n> (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to\r\n> protect its data or make it writeable before converting it to a tensor. This type of warning will be\r\n> suppressed for the rest of this program.\r\n> (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:141.)\r\n> 532\r\n> return torch.tensor(x, **format_kwargs)\r\n> ```\r\n\r\nNot sure why we have that, it's probably linked to zero copy from arrow to numpy" ]
2020-08-18T17:36:02Z
2020-08-28T08:41:51Z
2020-08-28T08:41:50Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/513.diff", "html_url": "https://github.com/huggingface/datasets/pull/513", "merged_at": "2020-08-28T08:41:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/513.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/513" }
Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). Added a `flatten_indices` method, which copies the dataset to a new table to remove the indices mapping, with tests. All the samples re-ordering/selection methods should be a lot faster. The downside is that iterating over a very large batch of the dataset might be a little slower when we have changed the order of the samples, since in that case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch, but the speed of iterating over the dataset is rarely the bottleneck. *Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name` on purpose, to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/513/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/513/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/512/comments
https://api.github.com/repos/huggingface/datasets/issues/512/events
https://github.com/huggingface/datasets/pull/512
681,137,164
MDExOlB1bGxSZXF1ZXN0NDY5NTc2NzE3
512
Delete CONTRIBUTING.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/56394989?v=4", "events_url": "https://api.github.com/users/ChenZehong13/events{/privacy}", "followers_url": "https://api.github.com/users/ChenZehong13/followers", "following_url": "https://api.github.com/users/ChenZehong13/following{/other_user}", "gists_url": "https://api.github.com/users/ChenZehong13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ChenZehong13", "id": 56394989, "login": "ChenZehong13", "node_id": "MDQ6VXNlcjU2Mzk0OTg5", "organizations_url": "https://api.github.com/users/ChenZehong13/orgs", "received_events_url": "https://api.github.com/users/ChenZehong13/received_events", "repos_url": "https://api.github.com/users/ChenZehong13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ChenZehong13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenZehong13/subscriptions", "type": "User", "url": "https://api.github.com/users/ChenZehong13" }
[]
closed
false
null
[]
null
[ "😱", "Yeah, this is spammy behavior. I've reported the user handle." ]
2020-08-18T15:33:25Z
2020-08-18T15:48:21Z
2020-08-18T15:39:07Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/512.diff", "html_url": "https://github.com/huggingface/datasets/pull/512", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/512.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/512" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/512/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/512/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/511
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/511/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/511/comments
https://api.github.com/repos/huggingface/datasets/issues/511/events
https://github.com/huggingface/datasets/issues/511
681,055,553
MDU6SXNzdWU2ODEwNTU1NTM=
511
dataset.shuffle() and select() resets format. Intended?
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
[]
closed
false
null
[]
null
[ "Hi @vegarab yes feel free to open a discussion here.\r\n\r\nThis design choice was not very much thought about.\r\n\r\nSince `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table and infos).\r\n\r\nThinking about it I don't see a strong reason against transmitting the format from the parent dataset to its newly created child. It's probably what's expected by the user in most cases. What do you think @lhoestq?\r\n\r\nBy the way, I've been working today on a refactoring of all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). The idea is to speed them up by a lot (like, really a lot) by working as much as possible with an indices mapping table instead of doing a deep copy of the full dataset as we've been doing currently. You can give it a look and try it here: https://github.com/huggingface/nlp/pull/513\r\nFeedbacks are very much welcome", "I think it's ok to keep the format.\r\nIf we want to have this behavior for `.map` too we just have to make sure it doesn't keep a column that's been removed.", "Shall we have this in the coming release by the way @lhoestq ?", "Yes sure !", "Since datasets 1.0.0 the format is not reset anymore.\r\nClosing this one, but feel free to re-open if you have other questions" ]
2020-08-18T13:46:01Z
2020-09-14T08:45:38Z
2020-09-14T08:45:38Z
CONTRIBUTOR
null
null
null
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight? When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later loading the dataset object using `torch.load("dataset.pt")`, which conserves the defined format before saving. I do shuffling and selecting (for controlling dataset size) after loading the data from .pt-file, as it's convenient whenever you train multiple models with varying sizes of the same dataset. The obvious workaround for this is to set the format again after using `dataset.select()` or `dataset.shuffle()`. _I guess this is more of a discussion on the design philosophy of the functions. Please let me know if this is not the right channel for these kinds of discussions or if they are not wanted at all!_ #### How to reproduce: ```python import nlp from transformers import T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("t5-base") def create_features(batch): context_encoding = tokenizer.batch_encode_plus(batch["context"]) return {"input_ids": context_encoding["input_ids"]} dataset = nlp.load_dataset("cosmos_qa", split="train") dataset = dataset.map(create_features, batched=True) dataset.set_format(type="torch", columns=["input_ids"]) dataset[0] # {'input_ids': tensor([ 1804, 3525, 1602, ... 0, 0])} dataset = dataset.shuffle() dataset[0] # {'id': '3Q9(...)20', 'context': "Good Old War an (...) play ?', 'answer0': 'None of the above choices .', 'answer1': 'This person likes music and likes to see the show , they will see other bands play .', (...) 'input_ids': [1804, 3525, 1602, ... , 0, 0]} ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/511/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/511/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/510
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/510/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/510/comments
https://api.github.com/repos/huggingface/datasets/issues/510/events
https://github.com/huggingface/datasets/issues/510
680,823,644
MDU6SXNzdWU2ODA4MjM2NDQ=
510
Version of numpy to use the library
{ "avatar_url": "https://avatars.githubusercontent.com/u/6966175?v=4", "events_url": "https://api.github.com/users/isspek/events{/privacy}", "followers_url": "https://api.github.com/users/isspek/followers", "following_url": "https://api.github.com/users/isspek/following{/other_user}", "gists_url": "https://api.github.com/users/isspek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/isspek", "id": 6966175, "login": "isspek", "node_id": "MDQ6VXNlcjY5NjYxNzU=", "organizations_url": "https://api.github.com/users/isspek/orgs", "received_events_url": "https://api.github.com/users/isspek/received_events", "repos_url": "https://api.github.com/users/isspek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/isspek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/isspek/subscriptions", "type": "User", "url": "https://api.github.com/users/isspek" }
[]
closed
false
null
[]
null
[ "Seems like this method was added in 1.17. I'll add a requirement on this.", "Thank you so much. After upgrading the numpy library, it worked." ]
2020-08-18T08:59:13Z
2020-08-19T18:35:56Z
2020-08-19T18:35:56Z
NONE
null
null
null
Thank you so much for your excellent work! I would like to use nlp library in my project. While importing nlp, I am receiving the following error `AttributeError: module 'numpy.random' has no attribute 'Generator'` Numpy version in my project is 1.16.0. May I learn which numpy version is used for the nlp library. Thanks in advance.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/510/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/510/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/509
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/509/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/509/comments
https://api.github.com/repos/huggingface/datasets/issues/509/events
https://github.com/huggingface/datasets/issues/509
679,711,585
MDU6SXNzdWU2Nzk3MTE1ODU=
509
Converting TensorFlow dataset example
{ "avatar_url": "https://avatars.githubusercontent.com/u/22762845?v=4", "events_url": "https://api.github.com/users/saareliad/events{/privacy}", "followers_url": "https://api.github.com/users/saareliad/followers", "following_url": "https://api.github.com/users/saareliad/following{/other_user}", "gists_url": "https://api.github.com/users/saareliad/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/saareliad", "id": 22762845, "login": "saareliad", "node_id": "MDQ6VXNlcjIyNzYyODQ1", "organizations_url": "https://api.github.com/users/saareliad/orgs", "received_events_url": "https://api.github.com/users/saareliad/received_events", "repos_url": "https://api.github.com/users/saareliad/repos", "site_admin": false, "starred_url": "https://api.github.com/users/saareliad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saareliad/subscriptions", "type": "User", "url": "https://api.github.com/users/saareliad" }
[]
closed
false
null
[]
null
[ "Do you want to convert a dataset script to the tfds format ?\r\nIf so, we currently have a comversion script nlp/commands/convert.py but it is a conversion script that goes from tfds to nlp.\r\nI think it shouldn't be too hard to do the changes in reverse (at some manual adjustments).\r\nIf you manage to make it work in reverse, feel free to open a PR to share it with the community :)", "In our docs: [Using a Dataset with PyTorch/Tensorflow](https://huggingface.co/docs/datasets/torch_tensorflow.html)." ]
2020-08-16T08:05:20Z
2021-08-03T06:01:18Z
2021-08-03T06:01:17Z
NONE
null
null
null
Hi, I want to use TensorFlow datasets with this repo, I noticed you made some conversion script, can you give a simple example of using it? Thanks
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/509/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/509/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/508
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/508/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/508/comments
https://api.github.com/repos/huggingface/datasets/issues/508/events
https://github.com/huggingface/datasets/issues/508
679,705,734
MDU6SXNzdWU2Nzk3MDU3MzQ=
508
TypeError: Receiver() takes no arguments
{ "avatar_url": "https://avatars.githubusercontent.com/u/1225851?v=4", "events_url": "https://api.github.com/users/sebastiantomac/events{/privacy}", "followers_url": "https://api.github.com/users/sebastiantomac/followers", "following_url": "https://api.github.com/users/sebastiantomac/following{/other_user}", "gists_url": "https://api.github.com/users/sebastiantomac/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sebastiantomac", "id": 1225851, "login": "sebastiantomac", "node_id": "MDQ6VXNlcjEyMjU4NTE=", "organizations_url": "https://api.github.com/users/sebastiantomac/orgs", "received_events_url": "https://api.github.com/users/sebastiantomac/received_events", "repos_url": "https://api.github.com/users/sebastiantomac/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sebastiantomac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sebastiantomac/subscriptions", "type": "User", "url": "https://api.github.com/users/sebastiantomac" }
[]
closed
false
null
[]
null
[ "Which version of Apache Beam do you have (can you copy your full environment info here)?", "apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ", "Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a dummy pipeline with [this code](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/wordcount_minimal.py)\r\n\r\nIf you get the same error, it means that the issue comes from apache beam.\r\nOtherwise we'll investigate what went wrong here", "Still, same error, so I guess it is on apache beam then. \r\nThanks for the investigation.", "Thanks for trying\r\nLet us know if you find clues of what caused this issue, or if you find a fix" ]
2020-08-16T07:18:16Z
2020-09-01T14:53:33Z
2020-09-01T14:49:03Z
NONE
null
null
null
I am trying to load a wikipedia data set ``` import nlp from nlp import load_dataset dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner') #dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner') ``` This fails in the apache beam runner. ``` Traceback (most recent call last): File "D:/ML/wikiembedding/gpt2_sv.py", line 36, in <module> dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=my_cache_dir, beam_runner='DirectRunner') File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare self._download_and_prepare( File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 969, in _download_and_prepare pipeline_results = pipeline.run() File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\pipeline.py", line 534, in run return self.runner.run_pipeline(self, self._options) .... File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 218, in process_encoded self.output(decoded_value) File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\operations.py", line 332, in output cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value) File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\Cython\Shadow.py", line 167, in cast return type(*args) TypeError: Receiver() takes no arguments ``` This is run on a Windows 10 machine with python 3.8. I get the same error loading the swedish wikipedia dump.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/508/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/508/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/507/comments
https://api.github.com/repos/huggingface/datasets/issues/507/events
https://github.com/huggingface/datasets/issues/507
679,400,683
MDU6SXNzdWU2Nzk0MDA2ODM=
507
Errors when I use
{ "avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4", "events_url": "https://api.github.com/users/mchari/events{/privacy}", "followers_url": "https://api.github.com/users/mchari/followers", "following_url": "https://api.github.com/users/mchari/following{/other_user}", "gists_url": "https://api.github.com/users/mchari/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mchari", "id": 30506151, "login": "mchari", "node_id": "MDQ6VXNlcjMwNTA2MTUx", "organizations_url": "https://api.github.com/users/mchari/orgs", "received_events_url": "https://api.github.com/users/mchari/received_events", "repos_url": "https://api.github.com/users/mchari/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchari/subscriptions", "type": "User", "url": "https://api.github.com/users/mchari" }
[]
closed
false
null
[]
null
[ "Looks like an issue with 3.0.2 transformers version. Works fine when I use \"master\" version of transformers." ]
2020-08-14T21:03:57Z
2020-08-14T21:39:10Z
2020-08-14T21:39:10Z
NONE
null
null
null
I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors I am using **transformers 3.0.2** code . from transformers.pipelines import pipeline from transformers.modeling_auto import AutoModelForQuestionAnswering from transformers.tokenization_auto import AutoTokenizer model_name = "deepset/roberta-base-squad2" nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) The errors are : res = nlp(QA_input) File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in __call__ for s, e, score in zip(starts, ends, scores) File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in <listcomp> for s, e, score in zip(starts, ends, scores) KeyError: 0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/507/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/507/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/506
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/506/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/506/comments
https://api.github.com/repos/huggingface/datasets/issues/506/events
https://github.com/huggingface/datasets/pull/506
679,164,788
MDExOlB1bGxSZXF1ZXN0NDY3OTkwNjc2
506
fix dataset.map for function without outputs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-14T13:40:22Z
2020-08-17T11:24:39Z
2020-08-17T11:24:38Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/506.diff", "html_url": "https://github.com/huggingface/datasets/pull/506", "merged_at": "2020-08-17T11:24:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/506.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/506" }
As noticed in #505 , giving a function that doesn't return anything in `.map` raises an error because of an unreferenced variable. I fixed that and added tests. Thanks @avloss for reporting
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/506/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/506/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/505/comments
https://api.github.com/repos/huggingface/datasets/issues/505/events
https://github.com/huggingface/datasets/pull/505
678,791,400
MDExOlB1bGxSZXF1ZXN0NDY3NjgxMjY4
505
tmp_file referenced before assignment
{ "avatar_url": "https://avatars.githubusercontent.com/u/17853685?v=4", "events_url": "https://api.github.com/users/avloss/events{/privacy}", "followers_url": "https://api.github.com/users/avloss/followers", "following_url": "https://api.github.com/users/avloss/following{/other_user}", "gists_url": "https://api.github.com/users/avloss/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avloss", "id": 17853685, "login": "avloss", "node_id": "MDQ6VXNlcjE3ODUzNjg1", "organizations_url": "https://api.github.com/users/avloss/orgs", "received_events_url": "https://api.github.com/users/avloss/received_events", "repos_url": "https://api.github.com/users/avloss/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avloss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avloss/subscriptions", "type": "User", "url": "https://api.github.com/users/avloss" }
[]
closed
false
null
[]
null
[ "Thanks for reporting the issue ! I'm creating a new PR to fix it and add tests.\r\n(I'm doing a new PR because I know there's some other place where it needs to be fixed)", "I'm closing this one as I created the other PR." ]
2020-08-13T23:27:33Z
2020-08-14T13:42:46Z
2020-08-14T13:42:46Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/505.diff", "html_url": "https://github.com/huggingface/datasets/pull/505", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/505.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/505" }
Just learning about this library - so might've not set up all the flags correctly, but was getting this error about "tmp_file".
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/505/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/505/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/504
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/504/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/504/comments
https://api.github.com/repos/huggingface/datasets/issues/504/events
https://github.com/huggingface/datasets/pull/504
678,756,211
MDExOlB1bGxSZXF1ZXN0NDY3NjUxOTA5
504
Added downloading to Hyperpartisan news detection
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson" }
[]
closed
false
null
[]
null
[ "Thank you @ghomasHudson for making our dataset available! This is great!", "The test passes since #527 :)" ]
2020-08-13T21:53:46Z
2020-08-27T08:18:41Z
2020-08-27T08:18:41Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/504.diff", "html_url": "https://github.com/huggingface/datasets/pull/504", "merged_at": "2020-08-27T08:18:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/504.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/504" }
Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel ! Currently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `default` in this test. Might be related to #474
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/504/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/504/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/503/comments
https://api.github.com/repos/huggingface/datasets/issues/503/events
https://github.com/huggingface/datasets/pull/503
678,726,538
MDExOlB1bGxSZXF1ZXN0NDY3NjI3MTEw
503
CompGuessWhat?! 0.2.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "events_url": "https://api.github.com/users/aleSuglia/events{/privacy}", "followers_url": "https://api.github.com/users/aleSuglia/followers", "following_url": "https://api.github.com/users/aleSuglia/following{/other_user}", "gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aleSuglia", "id": 1479733, "login": "aleSuglia", "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "organizations_url": "https://api.github.com/users/aleSuglia/orgs", "received_events_url": "https://api.github.com/users/aleSuglia/received_events", "repos_url": "https://api.github.com/users/aleSuglia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions", "type": "User", "url": "https://api.github.com/users/aleSuglia" }
[]
closed
false
null
[]
null
[ "I don't see any significant change in the dataset script (except the version value update), can you check that again please ?", "Hi @aleSuglia , can you check that all the changes you wanted to do are in the dataset script ?", "Hey sorry but I'm in the middle of a conference deadline. I'll let you know asap!", "Ok np :)\r\nGood luck with your work for the conference", "I finally managed to find some time to complete this. The only weird thing about this release is that I had to run the tests with the ignore checksum flag. Could it be because the Dropbox link doesn't change but the file does? Sorry didn't have the time to check the code to see what's happening behind the scenes.\r\n", "Yes if the file changed, then the checksum verification won't pass as it expects to see the checksum of the old file.\r\nThe checksum is computed by hashing the complete file.\r\nYou can update the checksum by doing \r\n\r\n```\r\nnlp-cli test ./datasets/compguesswhat --save_infos --all_configs\r\n```", "Any updates on this?", "Hi :)\r\n\r\nI think what's left to do is\r\n1- rebase from master, since we changed the name of the library\r\n2- update the metadata file of the dataset using the command \r\n```\r\ndatasets-cli test ./datasets/compguesswhat --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\nThis command should update the checksum of the dropbox file", "That's perfect. I'll have a look at it later today!", "Nice thanks !", "@lhoestq not sure why the quality check doesn't pass. Unfortunately CircleCI doesn't show the actual error. If I run `black` on my machine it works just fine. Ideas?", "@lhoestq any updates? :) ", "Your version of `black` might be outdated, or you run using `black` instead of `make style` since it reformatted 100+ files.\r\nCould you try to update black, then `make style` ?", "Yes I think my versions of isort and black were outdated. Thanks @lhoestq :)\r\n", "It still doesn't look right in terms of line-length.\r\nAre you running `black` or `make style` ?", "I'm running `make style`. This is the output of the command:\r\n\r\n```\r\nblack --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! ✨ 🍰 ✨\r\n250 files left unchanged.\r\nisort tests src benchmarks datasets metrics\r\n```", "Weird I have the same output without file changes with black `20.8b1` and isort `5.6.4` using `make style` too", "I think that's because black doesn't revert the changes you first did with the old version.\r\nCould you open a new PR with only the ComGuessWhat files updated ? Hopefully now that black is up to date it should work directly (and to avoid 100+ files changes)", "I will have a look at it tomorrow. Thanks for your help!", "I'm closing this one and I'll open a new one." ]
2020-08-13T20:51:26Z
2020-10-21T06:54:29Z
2020-10-21T06:54:29Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/503.diff", "html_url": "https://github.com/huggingface/datasets/pull/503", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/503.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/503" }
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/503/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/503/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/502/comments
https://api.github.com/repos/huggingface/datasets/issues/502/events
https://github.com/huggingface/datasets/pull/502
678,546,070
MDExOlB1bGxSZXF1ZXN0NDY3NDc1MDg0
502
Fix tokenizers caching
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "This should fix #501 and also the issue you sent me on slack @sgugger ." ]
2020-08-13T15:53:37Z
2020-08-19T13:37:19Z
2020-08-19T13:37:18Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/502.diff", "html_url": "https://github.com/huggingface/datasets/pull/502", "merged_at": "2020-08-19T13:37:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/502.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/502" }
I've found some cases where the caching didn't work properly for tokenizers: 1. if a tokenizer has a regex pattern, then the caching would be inconsistent across sessions 2. if a tokenizer has a cache attribute that changes after some calls, the caching would not work after cache updates 3. if a tokenizer is used inside a function, the caching of this function would result in the same cache file for different tokenizers 4. if `unique_no_split_tokens`'s attribute is not the same across sessions (after loading a tokenizer) then the caching could be inconsistent. To fix that, this is what I did: 1. register a specific `save_regex` function for pickle that makes regex dumps deterministic 2. ignore the cache attribute of some tokenizers before dumping 3. enable recursive dump by default for all dumps 4. make `unique_no_split_tokens` deterministic in https://github.com/huggingface/transformers/pull/6461 I also added tests to make sure that tokenizer hashing works as expected. In the future we should find a way to test if hashing also works across sessions (maybe using two CI jobs ? or by hardcoding a tokenizer's hash ?)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/502/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/502/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/501/comments
https://api.github.com/repos/huggingface/datasets/issues/501/events
https://github.com/huggingface/datasets/issues/501
677,952,893
MDU6SXNzdWU2Nzc5NTI4OTM=
501
Caching doesn't work for map (non-deterministic)
{ "avatar_url": "https://avatars.githubusercontent.com/u/8149933?v=4", "events_url": "https://api.github.com/users/wulu473/events{/privacy}", "followers_url": "https://api.github.com/users/wulu473/followers", "following_url": "https://api.github.com/users/wulu473/following{/other_user}", "gists_url": "https://api.github.com/users/wulu473/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wulu473", "id": 8149933, "login": "wulu473", "node_id": "MDQ6VXNlcjgxNDk5MzM=", "organizations_url": "https://api.github.com/users/wulu473/orgs", "received_events_url": "https://api.github.com/users/wulu473/received_events", "repos_url": "https://api.github.com/users/wulu473/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wulu473/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wulu473/subscriptions", "type": "User", "url": "https://api.github.com/users/wulu473" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Thanks for reporting !\r\n\r\nTo store the cache file, we compute a hash of the function given in `.map`, using our own hashing function.\r\nThe hash doesn't seem to stay the same over sessions for the tokenizer.\r\nApparently this is because of the regex at `tokenizer.pat` is not well supported by our hashing function.\r\n\r\nI'm working on a fix", "Thanks everyone. Works great now.", "Hi. I believe the fix was for the nlp library. Is there a solution to handle compiled regex expressions in .map() with the caching. I want to run a simple regex pattern on a big dataset, but I am running into the issue of compiled expression not being cached. \r\n\r\nInstead of opening a new issue, I thought I would put my query here. Let me know if a new issue would be more suitable. Thanks", "Hi @MaveriQ! This fix is also included in the `datasets` library. Can you provide a reproducer?" ]
2020-08-12T20:20:07Z
2022-08-08T11:02:23Z
2020-08-24T16:34:35Z
NONE
null
null
null
The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it. ```python import nlp import transformers def main(): ds = nlp.load_dataset("reddit", split="train[:500]") tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2") def convert_to_features(example_batch): input_str = example_batch["body"] encodings = tokenizer(input_str, add_special_tokens=True, truncation=True) return encodings ds = ds.map(convert_to_features, batched=True) if __name__ == "__main__": main() ``` Roughly 3/10 times, this example recomputes the tokenization. Is this expected behaviour?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/501/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/501/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/500/comments
https://api.github.com/repos/huggingface/datasets/issues/500/events
https://github.com/huggingface/datasets/pull/500
677,841,708
MDExOlB1bGxSZXF1ZXN0NDY2ODk0NTk0
500
Use hnsw in wiki_dpr
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-12T16:58:07Z
2020-08-20T07:59:19Z
2020-08-20T07:59:18Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/500.diff", "html_url": "https://github.com/huggingface/datasets/pull/500", "merged_at": "2020-08-20T07:59:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/500.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/500" }
The HNSW faiss index is much faster than the regular Flat index.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/500/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/500/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/499/comments
https://api.github.com/repos/huggingface/datasets/issues/499/events
https://github.com/huggingface/datasets/pull/499
677,709,938
MDExOlB1bGxSZXF1ZXN0NDY2Nzg1MjAy
499
Narrativeqa (with full text)
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson" }
[]
closed
false
null
[]
null
[ "I took a look at the dummy data creation for this dataset.\r\n\r\nMaybe it didn't work on your side might be because `master.zip` and `narrativeqa_full_text.zip` are supposed to be directories and not acutal zip files in the dummy data folder.\r\n\r\nI managed to make it work with this `dummy_data.zip` file:\r\nhttps://drive.google.com/file/d/1G9ZHAjelazNApbFI0ep2dnSAWklXgGMd/view?usp=sharing", "@lhoestq Hmmm wasn't that. Must have been something else I missed.\r\n\r\nHave committed your working version though now.", "Ok thanks.\r\nCould you rebase from master to fix the CI please ?", "Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ?", "> Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ?\r\n\r\nHave added the test set code but getting an OverflowError when trying to regen the dataset_infos.json:\r\n\r\n---\r\nOverflowError: There was an overflow in the <class 'pyarrow.lib.StructArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB\r\n\r\n---\r\n", "Thanks for reporting @ghomasHudson , I'll look into it", "It looks like it's an issue with Pyarrow.\r\nBy changing the `DEFAULT_MAX_BATCH_SIZE` to 1000 instead of 10 000 in `arrow_writer.py` I was able to run the command.\r\n\r\nBasically it seems that is an Arrow StructArray has more than 1-2GB of data, then it shuffles some of its content.\r\nI can't find any issue on Apache Arrow's JIRA about this problem. It will require more investigation.\r\n\r\nMaybe we can simply automatically decrease the writer's batch size when this happens. We can just check if the arrow array is more than a certain amount of bytes. ", "@lhoestq I've finally got round to regenerating the `dataset_infos.json` for this and adding all 3 splits. I've done this and updated for the new version of datasets.\r\n\r\nThe CI tests still aren't passing though (they pass on my machine). `test_load_dataset_narrativeqa` seems to fail but I have no idea how. Would appreciate if you have any ideas - would be great to finally finish this one!", "The dummy data test fails, apparently it's because no examples are yielded for the dummy data.\r\n\r\nAlso it looks like the PR now show changes in many other files than the ones for NarrativeQA, could you create another branch and another PR please ?\r\n\r\nFeel free to ping me on the new PR so we can fi the dummy data together" ]
2020-08-12T13:49:43Z
2020-12-09T11:21:02Z
2020-12-09T11:21:02Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/499.diff", "html_url": "https://github.com/huggingface/datasets/pull/499", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/499.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/499" }
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset. Few notes: - Had some encoding issues using the default `open` so am using `open(encoding="latin-1"...` which seems to fix it. Looks fine. - Can't get the dummy data to work. Currently putting stuff at: ``` dummy |---- 0.0.0 |- dummy_data.zip |-master.zip | |- narrativeqa-master | |- documents.csv | |- qaps.csv | |- third_party ...... | | - narrativeqa_full_text.zip | | - 001.content | | - .... ``` Not sure what I'm messing up here (probably something obvious).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/499/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/499/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/498/comments
https://api.github.com/repos/huggingface/datasets/issues/498/events
https://github.com/huggingface/datasets/pull/498
677,597,479
MDExOlB1bGxSZXF1ZXN0NDY2Njg5NTcy
498
dont use beam fs to save info for local cache dir
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-12T11:00:00Z
2020-08-14T13:17:21Z
2020-08-14T13:17:20Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/498.diff", "html_url": "https://github.com/huggingface/datasets/pull/498", "merged_at": "2020-08-14T13:17:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/498.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/498" }
If the cache dir is local, then we shouldn't use beam's filesystem to save the dataset info Fix #490
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/498/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/498/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/497/comments
https://api.github.com/repos/huggingface/datasets/issues/497/events
https://github.com/huggingface/datasets/pull/497
677,057,116
MDExOlB1bGxSZXF1ZXN0NDY2MjQ2NDQ3
497
skip header in PAWS-X
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-11T17:26:25Z
2020-08-19T09:50:02Z
2020-08-19T09:50:01Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/497.diff", "html_url": "https://github.com/huggingface/datasets/pull/497", "merged_at": "2020-08-19T09:50:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/497.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/497" }
This should fix #485. I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one). Note that there are new fields in `dataset_infos.json` introduced in the latest release 0.4.0 corresponding to post processing info. I removed them in this case when I ran `nlp-cli ./datasets/xtreme --save_infos` to keep backward compatibility (versions 0.3.0 can't load these fields). I think I'll change the logic so that `nlp-cli test` doesn't create these fields for datasets with no post processing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/497/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/497/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/496/comments
https://api.github.com/repos/huggingface/datasets/issues/496/events
https://github.com/huggingface/datasets/pull/496
677,016,998
MDExOlB1bGxSZXF1ZXN0NDY2MjE1Mjg1
496
fix bad type in overflow check
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-11T16:24:58Z
2020-08-14T13:29:35Z
2020-08-14T13:29:34Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/496.diff", "html_url": "https://github.com/huggingface/datasets/pull/496", "merged_at": "2020-08-14T13:29:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/496.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/496" }
When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field. This is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example). This should fix #482
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/496/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/496/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/495/comments
https://api.github.com/repos/huggingface/datasets/issues/495/events
https://github.com/huggingface/datasets/pull/495
676,959,289
MDExOlB1bGxSZXF1ZXN0NDY2MTY5MTA3
495
stack vectors in pytorch and tensorflow
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-08-11T15:12:53Z
2020-08-12T09:30:49Z
2020-08-12T09:30:48Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/495.diff", "html_url": "https://github.com/huggingface/datasets/pull/495", "merged_at": "2020-08-12T09:30:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/495.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/495" }
When the format of a dataset is set to pytorch or tensorflow, and if the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`. I added support for stacked tensors for both pytorch and tensorflow. For ragged tensors, they are stacked only for tensorflow as pytorch doesn't support ragged tensors.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/495/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/495/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/494/comments
https://api.github.com/repos/huggingface/datasets/issues/494/events
https://github.com/huggingface/datasets/pull/494
676,886,955
MDExOlB1bGxSZXF1ZXN0NDY2MTExOTQz
494
Fix numpy stacking
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "This PR also fixed a bug where numpy arrays were returned instead of pytorch tensors when getting with a clumn as a key." ]
2020-08-11T13:40:30Z
2020-08-11T14:56:50Z
2020-08-11T13:49:52Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/494.diff", "html_url": "https://github.com/huggingface/datasets/pull/494", "merged_at": "2020-08-11T13:49:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/494.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/494" }
When getting items using a column name as a key, numpy arrays were not stacked. I fixed that and added some tests. There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help to fix this issue.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/494/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/494/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/493/comments
https://api.github.com/repos/huggingface/datasets/issues/493/events
https://github.com/huggingface/datasets/pull/493
676,527,351
MDExOlB1bGxSZXF1ZXN0NDY1ODIxOTA0
493
Fix wmt zh-en url
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
[]
closed
false
null
[]
null
[ "this doesn't work. I can decompress the file after download locally." ]
2020-08-11T02:14:52Z
2020-08-11T02:22:28Z
2020-08-11T02:22:12Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/493.diff", "html_url": "https://github.com/huggingface/datasets/pull/493", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/493.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/493" }
I verified that ``` wget https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 ``` runs in 2 minutes.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/493/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/493/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/492/comments
https://api.github.com/repos/huggingface/datasets/issues/492/events
https://github.com/huggingface/datasets/issues/492
676,495,064
MDU6SXNzdWU2NzY0OTUwNjQ=
492
nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
[]
closed
false
null
[]
null
[ "In 0.4.0, the assertion in `concatenate_datasets ` is on the features, and not the schema.\r\nCould you try to update `nlp` ?\r\n\r\nAlso, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema cast hack.", "Or maybe the assertion comes from elsewhere ?", "I'm using the master branch. The assertion failure comes from the underlying `pa.concat_tables()`, which is in the pyarrow package. That method does check schemas.\r\n\r\nSince `features.type` does not contain information about nullable vs non-nullable features, the `cast_()` method won't resolve the schema mismatch. There is information in a schema which is not stored in features.", "I'm doing a refactor of type inference in #363 . Both text fields should match after that", "By default nullable will be set to True", "It should be good now. I was able to run\r\n\r\n```python\r\n>>> from nlp import concatenate_datasets, load_dataset\r\n>>>\r\n>>> bookcorpus = load_dataset(\"bookcorpus\", split=\"train\")\r\n>>> wiki = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\")\r\n>>> wiki.remove_columns_(\"title\") # only keep the text\r\n>>>\r\n>>> assert bookcorpus.features.type == wiki.features.type\r\n>>> bert_dataset = concatenate_datasets([bookcorpus, wiki])\r\n```", "Thanks!" ]
2020-08-11T00:27:46Z
2020-08-26T16:17:19Z
2020-08-26T16:17:19Z
CONTRIBUTOR
null
null
null
Here's the code I'm trying to run: ```python dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir) dset_wikipedia.drop(columns=["title"]) dset_wikipedia.features.pop("title") dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir) dset = nlp.concatenate_datasets([dset_wikipedia, dset_books]) ``` This fails because they have different schemas, despite having identical features. ```python assert dset_wikipedia.features == dset_books.features # True assert dset_wikipedia._data.schema == dset_books._data.schema # False ``` The Wikipedia dataset has 'text: string', while the BookCorpus dataset has 'text: string not null'. Currently I hack together a working schema match with the following line, but it would be better if this was handled in Features themselves. ```python dset_wikipedia._data = dset_wikipedia.data.cast(dset_books._data.schema) ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/492/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/491/comments
https://api.github.com/repos/huggingface/datasets/issues/491/events
https://github.com/huggingface/datasets/issues/491
676,486,275
MDU6SXNzdWU2NzY0ODYyNzU=
491
No 0.4.0 release on GitHub
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
[]
closed
false
null
[]
null
[ "I did the release on github, and updated the doc :)\r\nSorry for the delay", "Thanks!" ]
2020-08-10T23:59:57Z
2020-08-11T16:50:07Z
2020-08-11T16:50:07Z
CONTRIBUTOR
null
null
null
0.4.0 was released on PyPi, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) is still displaying from 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/491/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/491/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/490/comments
https://api.github.com/repos/huggingface/datasets/issues/490/events
https://github.com/huggingface/datasets/issues/490
676,482,242
MDU6SXNzdWU2NzY0ODIyNDI=
490
Loading preprocessed Wikipedia dataset requires apache_beam
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
[]
closed
false
null
[]
null
[]
2020-08-10T23:46:50Z
2020-08-14T13:17:20Z
2020-08-14T13:17:20Z
CONTRIBUTOR
null
null
null
Running `nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")` gives an error if apache_beam is not installed, stemming from https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988 This succeeded without the dependency in version 0.3.0. This seems like an unnecessary dependency to process some dataset info if you're using the already-preprocessed version. Could it be removed?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/490/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/490/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/489/comments
https://api.github.com/repos/huggingface/datasets/issues/489/events
https://github.com/huggingface/datasets/issues/489
676,456,257
MDU6SXNzdWU2NzY0NTYyNTc=
489
ug
{ "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timothyjlaurent", "id": 2000204, "login": "timothyjlaurent", "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "type": "User", "url": "https://api.github.com/users/timothyjlaurent" }
[]
closed
false
null
[]
null
[ "whoops", "please delete this" ]
2020-08-10T22:33:03Z
2020-08-10T22:55:14Z
2020-08-10T22:33:40Z
NONE
null
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/489/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/489/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/488/comments
https://api.github.com/repos/huggingface/datasets/issues/488/events
https://github.com/huggingface/datasets/issues/488
676,299,993
MDU6SXNzdWU2NzYyOTk5OTM=
488
issues with downloading datasets for wmt16 and wmt19
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[]
closed
false
null
[]
null
[ "I found `UNv1.0.en-ru.tar.gz` here: https://conferences.unite.un.org/uncorpus/en/downloadoverview, so it can be reconstructed with:\r\n```\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.01\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.02\r\ncat UNv1.0.en-ru.tar.gz.0* > UNv1.0.en-ru.tar.gz\r\n```\r\nit has other languages as well, in case https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/ is gone", "Further, `nlp.load_dataset('wmt19', 'ru-en')` has only the `train` and `val` datasets. `test` is missing.\r\n\r\nFixed locally for summarization needs, by running:\r\n```\r\npip install sacrebleu\r\nsacrebleu -t wmt19 -l ru-en --echo src > test.source\r\nsacrebleu -t wmt19 -l ru-en --echo ref > test.target\r\n```\r\nh/t @sshleifer ", "Fixed in https://github.com/huggingface/datasets/pull/1912" ]
2020-08-10T17:32:51Z
2022-10-04T17:46:59Z
2022-10-04T17:46:58Z
MEMBER
null
null
null
I have encountered multiple issues while trying to: ``` import nlp dataset = nlp.load_dataset('wmt16', 'ru-en') metric = nlp.load_metric('wmt16') ``` 1. I had to do `pip install -e ".[dev]" ` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and now it worked. So it must have been some outdated dependencies that `pip install -e ".[dev]" ` fixed. 2. it was downloading at 60kbs - almost 5 hours to get the dataset. It was downloading all pairs and not just the one I asked for. I tried the same code with `wmt19` in parallel and it took a few secs to download and it only fetched data for the requested pair. (but it failed too, see below) 3. my machine has crushed and when I retried I got: ``` Traceback (most recent call last): File "./download.py", line 9, in <module> dataset = nlp.load_dataset('wmt16', 'ru-en') File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 549, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 449, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py", line 112, in __enter__ return next(self.gen) File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/home/stas/anaconda3/envs/main/lib/python3.7/os.py", line 221, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/stas/.cache/huggingface/datasets/wmt16/ru-en/1.0.0/4d8269cdd971ed26984a9c0e4a158e0c7afc8135fac8fb8ee43ceecf38fd422d.incomplete' ``` it can't handle resumes. but neither allows a new start. Had to delete it manually. 4. and finally when it downloaded the dataset, it then failed to fetch the metrics: ``` Traceback (most recent call last): File "./download.py", line 15, in <module> metric = nlp.load_metric('wmt16') File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 442, in load_metric module_path, hash = prepare_module(path, download_config=download_config, dataset=False) File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 258, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 198, in cached_path local_files_only=download_config.local_files_only, File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 356, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/metrics/wmt16/wmt16.py ``` 5. If I run the same code with `wmt19`, it fails too: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/488/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/488/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/487/comments
https://api.github.com/repos/huggingface/datasets/issues/487/events
https://github.com/huggingface/datasets/pull/487
676,143,029
MDExOlB1bGxSZXF1ZXN0NDY1NTA1NjQy
487
Fix elasticsearch result ids returning as strings
{ "avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4", "events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}", "followers_url": "https://api.github.com/users/sai-prasanna/followers", "following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}", "gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sai-prasanna", "id": 3595526, "login": "sai-prasanna", "node_id": "MDQ6VXNlcjM1OTU1MjY=", "organizations_url": "https://api.github.com/users/sai-prasanna/orgs", "received_events_url": "https://api.github.com/users/sai-prasanna/received_events", "repos_url": "https://api.github.com/users/sai-prasanna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions", "type": "User", "url": "https://api.github.com/users/sai-prasanna" }
[]
closed
false
null
[]
null
[ "It looks like you need to rebase from master to fix the CI. Could you do that please ?" ]
2020-08-10T13:37:11Z
2020-08-31T10:42:46Z
2020-08-31T10:42:46Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/487.diff", "html_url": "https://github.com/huggingface/datasets/pull/487", "merged_at": "2020-08-31T10:42:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/487.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/487" }
I am using the latest elasticsearch binary and master of nlp. For me elasticsearch searches failed because the resultant "id_" returned for searches are strings, but our library assumes them to be integers.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/487/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/487/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/486/comments
https://api.github.com/repos/huggingface/datasets/issues/486/events
https://github.com/huggingface/datasets/issues/486
675,649,034
MDU6SXNzdWU2NzU2NDkwMzQ=
486
Bookcorpus data contains pretokenized text
{ "avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4", "events_url": "https://api.github.com/users/orsharir/events{/privacy}", "followers_url": "https://api.github.com/users/orsharir/followers", "following_url": "https://api.github.com/users/orsharir/following{/other_user}", "gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/orsharir", "id": 99543, "login": "orsharir", "node_id": "MDQ6VXNlcjk5NTQz", "organizations_url": "https://api.github.com/users/orsharir/orgs", "received_events_url": "https://api.github.com/users/orsharir/received_events", "repos_url": "https://api.github.com/users/orsharir/repos", "site_admin": false, "starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orsharir/subscriptions", "type": "User", "url": "https://api.github.com/users/orsharir" }
[]
closed
false
null
[]
null
[ "Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`).\r\nDo you know if there exist some copies without this issue ?\r\nHow would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do it. Could you provide more details ?", "I'm afraid that I don't know how to obtain the original BookCorpus data. I believe this version came from an anonymous Google Drive link posted in another issue.\r\n\r\nGoing through the raw text in this version, it's apparent that NLTK's TreebankWordTokenizer was applied on it (I gave some examples in my original post), followed by:\r\n`' '.join(tokens)`\r\nYou can retrieve the tokenization by splitting on whitespace. You can then \"detokenize\" it with TreebankWordDetokenizer class of NLTK (though, as I suggested, use the fixed version in my repo). This will bring the text closer to its original form, but some steps of TreebankWordTokenizer are destructive, so it wouldn't be one-to-one. Something along the lines of the following should work:\r\n```\r\ntreebank_detokenizer = nltk.tokenize.treebank.TreebankWordDetokenizer()\r\ndb = nlp.load_dataset('bookcorpus', split=nlp.Split.TRAIN)\r\ndb = db.map(lambda x: treebank_detokenizer.detokenize(x['text'].split()))\r\n```\r\n\r\nRegarding other issues beyond the above, I'm afraid that I can't help with that.", "Ok I get it, that would be very cool indeed\r\n\r\nWhat kinds of patterns the detokenizer can't retrieve ?", "The TreebankTokenizer makes some assumptions about whitespace, parentheses, quotation marks, etc. For instance, while tokenizing the following text:\r\n```\r\nDwayne \"The Rock\" Johnson\r\n```\r\nwill result in:\r\n```\r\nDwayne `` The Rock '' Johnson\r\n```\r\nwhere the left and right quotation marks are turned into distinct symbols. Upon reconstruction, we can attach the left part to its token on the right, and respectively for the right part. However, the following texts would be tokenized exactly the same:\r\n```\r\nDwayne \" The Rock \" Johnson\r\nDwayne \" The Rock\" Johnson\r\nDwayne \" The Rock\" Johnson\r\n...\r\n```\r\nIn the above examples, the detokenizer would correct these inputs into the canonical text\r\n```\r\nDwayne \"The Rock\" Johnson\r\n```\r\nHowever, there are cases where there the solution cannot easily be inferred (at least without a true LM - this tokenizer is just a bunch of regexes). For instance, in cases where you have a fragment that contains the end of quote, but not its beginning, plus an accidental space:\r\n```\r\n... and it sounds fantastic, \" he said.\r\n```\r\nIn the above case, the tokenizer would assume that the quotes refer to the next token, and so upon detokenization it will result in the following mistake:\r\n```\r\n... and it sounds fantastic, \"he said.\r\n```\r\n\r\nWhile these are all odd edge cases (the basic assumptions do make sense), in noisy data they can occur, which is why I mentioned that the detokenizer cannot restore the original perfectly.\r\n", "To confirm, since this is preprocessed, this was not the exact version of the Book Corpus used to actually train the models described here (particularly Distilbert)? https://huggingface.co/datasets/bookcorpus\r\n\r\nOr does this preprocessing exactly match that of the papers?", "I believe these are just artifacts of this particular source. 
It might be better to crawl it again, or use another preprocessed source, as found here: https://github.com/soskek/bookcorpus ", "Yes actually the BookCorpus on hugginface is based on [this](https://github.com/soskek/bookcorpus/issues/24#issuecomment-643933352). And I kind of regret naming it as \"BookCorpus\" instead of something like \"BookCorpusLike\".\r\n\r\nBut there is a good news ! @shawwn has replicated BookCorpus in his way, and also provided a link to download the plain text files. see [here](https://github.com/soskek/bookcorpus/issues/27). There is chance we can have a \"OpenBookCorpus\" !", "Resolved via #856" ]
2020-08-09T06:53:24Z
2022-10-04T17:44:33Z
2022-10-04T17:44:33Z
CONTRIBUTOR
null
null
null
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways that are incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end quotes, respectively. On my own projects, I just run the data through NLTK's TreebankWordDetokenizer to reverse the tokenization (as best as possible). I think it would be beneficial to apply this transformation directly on your remote cached copy of the dataset. If you choose to do so, I would also suggest using my fork of NLTK, which fixes several bugs in their detokenizer (I've opened a pull request, but they've yet to respond): https://github.com/nltk/nltk/pull/2575
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/486/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/486/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/485/comments
https://api.github.com/repos/huggingface/datasets/issues/485/events
https://github.com/huggingface/datasets/issues/485
675,595,393
MDU6SXNzdWU2NzU1OTUzOTM=
485
PAWS dataset first item is header
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[]
closed
false
null
[]
null
[]
2020-08-08T22:05:25Z
2020-08-19T09:50:01Z
2020-08-19T09:50:01Z
CONTRIBUTOR
null
null
null
``` import nlp dataset = nlp.load_dataset('xtreme', 'PAWS-X.en') dataset['test'][0] ``` prints the following ``` {'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'} ``` dataset['test'][0] should probably be the first item in the dataset, not just a dictionary mapping the column names to themselves. Probably just need to ignore the first row in the dataset by default or something like that.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/485/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/485/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/484/comments
https://api.github.com/repos/huggingface/datasets/issues/484/events
https://github.com/huggingface/datasets/pull/484
675,088,983
MDExOlB1bGxSZXF1ZXN0NDY0NjY1NTU4
484
update mirror for RT dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[]
closed
false
null
[]
null
[ "Thanks for adding this mirror link :)\r\n\r\nCould you run the following command to update the json file `dataset_infos.json` used to verify the integrity of the downloaded file ?\r\n\r\n```\r\nnlp-cli test ./datasets/rotten_tomatoes --save_infos --ignore_verifications\r\n```", "done! @lhoestq ", "the build_doc CI fail comes from master and has been fixed on master", "done @thomwolf @lhoestq " ]
2020-08-07T15:25:45Z
2020-08-24T13:33:37Z
2020-08-24T13:33:37Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/484.diff", "html_url": "https://github.com/huggingface/datasets/pull/484", "merged_at": "2020-08-24T13:33:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/484.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/484" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/484/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/484/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/483/comments
https://api.github.com/repos/huggingface/datasets/issues/483/events
https://github.com/huggingface/datasets/issues/483
675,080,694
MDU6SXNzdWU2NzUwODA2OTQ=
483
rotten tomatoes movie review dataset taken down
{ "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "events_url": "https://api.github.com/users/jxmorris12/events{/privacy}", "followers_url": "https://api.github.com/users/jxmorris12/followers", "following_url": "https://api.github.com/users/jxmorris12/following{/other_user}", "gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxmorris12", "id": 13238952, "login": "jxmorris12", "node_id": "MDQ6VXNlcjEzMjM4OTUy", "organizations_url": "https://api.github.com/users/jxmorris12/orgs", "received_events_url": "https://api.github.com/users/jxmorris12/received_events", "repos_url": "https://api.github.com/users/jxmorris12/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions", "type": "User", "url": "https://api.github.com/users/jxmorris12" }
[]
closed
false
null
[]
null
[ "found a mirror: https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz", "fixed in #484 ", "Closing this one. Thanks again @jxmorris12 for taking care of this :)" ]
2020-08-07T15:12:01Z
2020-09-08T09:36:34Z
2020-09-08T09:36:33Z
CONTRIBUTOR
null
null
null
In an interesting twist of events, the individual who created the movie review dataset seems to have left Cornell, and their webpage has been removed, along with the dataset itself (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's not downloadable anymore.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/483/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/483/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/482/comments
https://api.github.com/repos/huggingface/datasets/issues/482/events
https://github.com/huggingface/datasets/issues/482
674,851,147
MDU6SXNzdWU2NzQ4NTExNDc=
482
Bugs : dataset.map() is frozen on ELI5
{ "avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4", "events_url": "https://api.github.com/users/ratthachat/events{/privacy}", "followers_url": "https://api.github.com/users/ratthachat/followers", "following_url": "https://api.github.com/users/ratthachat/following{/other_user}", "gists_url": "https://api.github.com/users/ratthachat/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ratthachat", "id": 56621342, "login": "ratthachat", "node_id": "MDQ6VXNlcjU2NjIxMzQy", "organizations_url": "https://api.github.com/users/ratthachat/orgs", "received_events_url": "https://api.github.com/users/ratthachat/received_events", "repos_url": "https://api.github.com/users/ratthachat/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ratthachat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ratthachat/subscriptions", "type": "User", "url": "https://api.github.com/users/ratthachat" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "This comes from an overflow in pyarrow's array.\r\nIt is stuck inside the loop that reduces the batch size to avoid the overflow.\r\nI'll take a look", "I created a PR to fix the issue.\r\nIt was due to an overflow check that handled badly an empty list.\r\n\r\nYou can try the changes by using \r\n```\r\n!pip install git+https://github.com/huggingface/nlp.git@fix-bad-type-in-overflow-check\r\n```\r\n\r\nAlso I noticed that the first 1000 examples have an empty list in the `title_urls` field. The feature type inference in `.map` will consider it `null` because of that, and it will crash when it encounter the next example with a `title_urls` that is not empty.\r\n\r\nTherefore to fix that, what you can do for now is increase the writer batch size so that the feature inference will take into account at least one example with a non-empty `title_urls`:\r\n\r\n```python\r\n# default batch size is 1_000 and it's not enough for feature type inference because of empty lists\r\nvalid_dataset = valid_dataset.map(make_input_target, writer_batch_size=3_000) \r\n```\r\n\r\nI was able to run the frozen cell with these changes.", "@lhoestq Perfect and thank you very much!!\r\nClose the issue.", "@lhoestq mapping the function `make_input_target` was passed by your fixing.\r\n\r\nHowever, there is another error in the final step of `valid_dataset.map(convert_to_features, batched=True)`\r\n\r\n`ArrowInvalid: Could not convert Thepiratebay.vg with type str: converting to null type`\r\n(The [same colab notebook above with new error message](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing#scrollTo=5sRrJ3_C8rLt))\r\n\r\nDo you have some ideas? (I am really sorry I could not debug it by myself since I never used `pyarrow` before) \r\nNote that `train_dataset.map(convert_to_features, batched=True)` can be run successfully even though train_dataset is 27x bigger than `valid_dataset` so I believe the problem lies in some field of `valid_dataset` again .", "I got this issue too and fixed it by specifying `writer_batch_size=3_000` in `.map`.\r\nThis is because Arrow didn't expect `Thepiratebay.vg` in `title_urls `, as all previous examples have empty lists in `title_urls `", "I am clear now . Thank so much again Quentin!" ]
2020-08-07T08:23:35Z
2020-08-12T14:13:46Z
2020-08-11T23:55:15Z
NONE
null
null
null
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset.map()` on ELI5 to prepare `input_text, target_text`, `dataset.map` is **frozen** in the first hundreds examples. On the contrary, this works totally fine on SQUAD (80,000 examples). Both `nlp` version 0.3.0 and 0.4.0 cause frozen process . Also try various `pyarrow` versions from 0.16.0 / 0.17.0 / 1.0.0 also have the same frozen process. Reproducible code can be found on [this colab notebook ](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing), where I also show that the same mapping function works fine on SQUAD, so the problem is likely due to ELI5 somehow. ---------------------------------------- **More Info :** instead of `map`, if I run `for` loop and apply function by myself, there's no error and can finish within 10 seconds. However, `nlp dataset` is immutable (I couldn't manually assign a new key-value to `dataset `object) I also notice that SQUAD texts are quite clean while ELI5 texts contain many special characters, not sure if this is the cause ?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/482/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/482/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/481/comments
https://api.github.com/repos/huggingface/datasets/issues/481/events
https://github.com/huggingface/datasets/pull/481
674,567,389
MDExOlB1bGxSZXF1ZXN0NDY0MjM2MTA1
481
Apply utf-8 encoding to all datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
[ "Not sure why the AWS test is failing - perhaps I made too many concurrent CI builds 😢. Can someone please rerun the CI to check the error is not on my end?", "I pushed an improved docstring and the unit tests now pass, which suggests the previous failure on AWS was simply a timeout error. \r\n\r\nFor some reason the docs are now failing to build, but does not seem related to my changes:\r\n```\r\nWarning, treated as error:\r\n/home/circleci/nlp/src/nlp/dataset_dict.py:docstring of nlp.DatasetDict.filter:27:Inline interpreted text or phrase reference start-string without end-string.\r\nmake: *** [Makefile:20: html] Error 2\r\n```\r\n\r\nAny ideas what's going wrong?", "The build_doc fail has been fixed on master.\r\nIt was due to the latest update of sphinx that has some issues, so I pinned the previous version for now.", "I noticed that you also changed the Apache Beam `open` to also use utf-8. However it doesn't have an `encoding` parameter.\r\nTherefore you should ignore lines like\r\n\r\n```python\r\nbeam.io.filesystems.FileSystems.open(filepath)\r\n```\r\n\r\nI guess you could add a rule to your regex to only include the `open` call that have a space right before it.", "Good catch @lhoestq! Your suggestion to match on `open(...)` with a whitespace was a great idea - it allowed me to simplify the regexp considerably 😄.\r\n\r\nI fixed the Apache Beam false positives and also caught a few problems in `json.load()`, e.g.\r\n```python\r\nrelation_name_map = json.load(open(rel_info), encoding='utf-8')\r\n```\r\n\r\nI've tested that the new regexp doesn't reintroduce these false positives, so I think the PR is ready for another review.", "Ok to merge this @lhoestq ?" ]
2020-08-06T20:02:09Z
2020-08-20T08:16:08Z
2020-08-20T08:16:08Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/481.diff", "html_url": "https://github.com/huggingface/datasets/pull/481", "merged_at": "2020-08-20T08:16:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/481.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/481" }
## Description This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function ```python def apply_encoding_on_file_open(filepath: str): """Apply UTF-8 encoding for all instances where a non-binary file is opened.""" with open(filepath, 'r', encoding='utf-8') as input_file: regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)") input_text = input_file.read() match = regexp.search(input_text) if match: output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text) with open(filepath, 'w', encoding='utf-8') as output_file: output_file.write(output) ``` to perform the replacement. Note: 1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly 2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time. 3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/` 4. I have implemented a unit test that should catch missing encodings in future CI runs Closes #468 and possibly #347
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/481/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/481/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/480
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/480/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/480/comments
https://api.github.com/repos/huggingface/datasets/issues/480/events
https://github.com/huggingface/datasets/pull/480
674,245,959
MDExOlB1bGxSZXF1ZXN0NDYzOTcwNjQ2
480
Column indexing hotfix
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" }
[]
closed
false
null
[]
null
[ "Looks good to me as well but we'll want to add a test indeed.\r\nYou can add one if you have time @TevenLeScao.\r\nOtherwise, we'll do it when we are back with Quentin. ", "I fixed it in #494 " ]
2020-08-06T11:37:05Z
2020-08-12T08:36:10Z
2020-08-12T08:36:10Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/480.diff", "html_url": "https://github.com/huggingface/datasets/pull/480", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/480.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/480" }
As observed for example in #469, currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates the functional 0.3.0 code. In the future it'd probably be nice to have a test there.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/480/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/480/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/479/comments
https://api.github.com/repos/huggingface/datasets/issues/479/events
https://github.com/huggingface/datasets/pull/479
673,905,407
MDExOlB1bGxSZXF1ZXN0NDYzNjkxMjA0
479
add METEOR metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "events_url": "https://api.github.com/users/vegarab/events{/privacy}", "followers_url": "https://api.github.com/users/vegarab/followers", "following_url": "https://api.github.com/users/vegarab/following{/other_user}", "gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vegarab", "id": 24683907, "login": "vegarab", "node_id": "MDQ6VXNlcjI0NjgzOTA3", "organizations_url": "https://api.github.com/users/vegarab/orgs", "received_events_url": "https://api.github.com/users/vegarab/received_events", "repos_url": "https://api.github.com/users/vegarab/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vegarab/subscriptions", "type": "User", "url": "https://api.github.com/users/vegarab" }
[]
closed
false
null
[]
null
[ "Really nice !\r\nThanks for adding this one.\r\n\r\nI noticed that there are some '-' that are left in the description in the middle of some workds. It migh come from copy-pasting the pdf paper. ex: `im-provement`. Could you fix that please ?", "@lhoestq \r\nLinebreaks have been removed! Note that there are still a few compound words that are hyphenated intentionally. ", "I think you just need to rebase from master to fix the CI :)", "Yes I made the mistake of simply merging master into this branch. A rebase seems to be neater :) Although all the commits ended up being added twice. I assume you just squash them into a single one on merge anyways?", "Yes indeed they'll be squashed" ]
2020-08-05T23:13:00Z
2020-08-19T13:39:09Z
2020-08-19T13:39:09Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/479.diff", "html_url": "https://github.com/huggingface/datasets/pull/479", "merged_at": "2020-08-19T13:39:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/479.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/479" }
Added the METEOR metric. Can be used like this: ```python import nlp meteor = nlp.load_metric('metrics/meteor') meteor.compute(["some string", "some string"], ["some string", "some similar string"]) # {'meteor': 0.6411637931034483} meteor.add("some string", "some string") meteor.add("some string", "some similar string") meteor.compute() # {'meteor': 0.6411637931034483} ``` Uses [NLTK's implementation](https://www.nltk.org/api/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https://github.com/nltk/nltk/blob/develop/nltk/translate/meteor_score.py)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/479/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/479/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/478/comments
https://api.github.com/repos/huggingface/datasets/issues/478/events
https://github.com/huggingface/datasets/issues/478
673,178,317
MDU6SXNzdWU2NzMxNzgzMTc=
478
Export TFRecord to GCP bucket
{ "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/astariul", "id": 43774355, "login": "astariul", "node_id": "MDQ6VXNlcjQzNzc0MzU1", "organizations_url": "https://api.github.com/users/astariul/orgs", "received_events_url": "https://api.github.com/users/astariul/received_events", "repos_url": "https://api.github.com/users/astariul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "type": "User", "url": "https://api.github.com/users/astariul" }
[]
closed
false
null
[]
null
[ "Nevermind, I restarted my python session and it worked fine...\r\n\r\n---\r\n\r\nI had an authentification error, and I authenticated from another terminal. After that, no more error but it was not working. Restarting the sessions makes it work :)" ]
2020-08-05T01:08:32Z
2020-08-05T01:21:37Z
2020-08-05T01:21:36Z
NONE
null
null
null
Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')` Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket. `dataset.export('local.tfrecord')` works fine, but `dataset.export('gs://my_bucket/x.tfrecord')` does not work. There is no error message, I just can't find the file on my bucket... --- Looking at the code, `nlp` is using `tf.data.experimental.TFRecordWriter`, while I was using `tf.io.TFRecordWriter`. **What's the difference between those 2 ? How can I write TFRecords files directly to GCP bucket ?** @jarednielsen @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/478/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/478/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/477/comments
https://api.github.com/repos/huggingface/datasets/issues/477/events
https://github.com/huggingface/datasets/issues/477
673,142,143
MDU6SXNzdWU2NzMxNDIxNDM=
477
Overview.ipynb throws exceptions with nlp 0.4.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/23109219?v=4", "events_url": "https://api.github.com/users/mandy-li/events{/privacy}", "followers_url": "https://api.github.com/users/mandy-li/followers", "following_url": "https://api.github.com/users/mandy-li/following{/other_user}", "gists_url": "https://api.github.com/users/mandy-li/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mandy-li", "id": 23109219, "login": "mandy-li", "node_id": "MDQ6VXNlcjIzMTA5MjE5", "organizations_url": "https://api.github.com/users/mandy-li/orgs", "received_events_url": "https://api.github.com/users/mandy-li/received_events", "repos_url": "https://api.github.com/users/mandy-li/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mandy-li/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mandy-li/subscriptions", "type": "User", "url": "https://api.github.com/users/mandy-li" }
[]
closed
false
null
[]
null
[ "Thanks for reporting this issue\r\n\r\nThere was a bug where numpy arrays would get returned instead of tensorflow tensors.\r\nThis is fixed on master.\r\n\r\nI tried to re-run the colab and encountered this error instead:\r\n\r\n```\r\nAttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_tensor'\r\n```\r\n\r\nThis is because the dataset returns a Tensor and not a RaggedTensor.\r\nBut I think we should always return a RaggedTensor unless the length of the sequence is fixed (it that case they can be stack into a Tensor).", "Hi, I got another error (on Colab):\r\n\r\n```python\r\n# You can read a few attributes of the datasets before loading them (they are python dataclasses)\r\nfrom dataclasses import asdict\r\n\r\nfor key, value in asdict(datasets[6]).items():\r\n print('👉 ' + key + ': ' + str(value))\r\n\r\n---------------------------------------------------------------------------\r\n\r\nTypeError Traceback (most recent call last)\r\n\r\n<ipython-input-6-b8ace6c227a2> in <module>()\r\n 2 from dataclasses import asdict\r\n 3 \r\n----> 4 for key, value in asdict(datasets[6]).items():\r\n 5 print('👉 ' + key + ': ' + str(value))\r\n\r\n/usr/local/lib/python3.6/dist-packages/dataclasses.py in asdict(obj, dict_factory)\r\n 1008 \"\"\"\r\n 1009 if not _is_dataclass_instance(obj):\r\n-> 1010 raise TypeError(\"asdict() should be called on dataclass instances\")\r\n 1011 return _asdict_inner(obj, dict_factory)\r\n 1012 \r\n\r\nTypeError: asdict() should be called on dataclass instances\r\n```", "Indeed we'll update the cola with the new release coming up this week." ]
2020-08-04T23:18:15Z
2021-08-03T06:02:15Z
2021-08-03T06:02:15Z
NONE
null
null
null
with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-48907f2ad433> in <module> ----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]} 2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])} 3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1]) 4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8) <ipython-input-5-48907f2ad433> in <dictcomp>(.0) ----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]} 2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])} 3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1]) 4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8) AttributeError: 'numpy.ndarray' object has no attribute 'to_tensor'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/477/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/477/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/476/comments
https://api.github.com/repos/huggingface/datasets/issues/476/events
https://github.com/huggingface/datasets/pull/476
672,991,854
MDExOlB1bGxSZXF1ZXN0NDYyOTMyMTgx
476
CheckList
{ "avatar_url": "https://avatars.githubusercontent.com/u/698010?v=4", "events_url": "https://api.github.com/users/marcotcr/events{/privacy}", "followers_url": "https://api.github.com/users/marcotcr/followers", "following_url": "https://api.github.com/users/marcotcr/following{/other_user}", "gists_url": "https://api.github.com/users/marcotcr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marcotcr", "id": 698010, "login": "marcotcr", "node_id": "MDQ6VXNlcjY5ODAxMA==", "organizations_url": "https://api.github.com/users/marcotcr/orgs", "received_events_url": "https://api.github.com/users/marcotcr/received_events", "repos_url": "https://api.github.com/users/marcotcr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marcotcr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcotcr/subscriptions", "type": "User", "url": "https://api.github.com/users/marcotcr" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "> Also, a little out of my depth there, but would there be a way to have the default pip install checklist command not require mysql and mariadb to be installed? Feels like that might be a source of confusion for users.\r\n\r\nI removed the pattern dependency, mysql is not a requirement anymore. I'm not sure where mariadb is coming from. ", "Thanks for your contribution, @marcotcr. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
2020-08-04T18:32:05Z
2022-10-03T09:43:37Z
2022-10-03T09:43:37Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/476.diff", "html_url": "https://github.com/huggingface/datasets/pull/476", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/476.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/476" }
Sorry for the large pull request. - Added checklists as datasets. I can't run `test_load_real_dataset` (see #474), but I can load the datasets successfully as shown in the example notebook - Added a checklist wrapper
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/476/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/476/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/475/comments
https://api.github.com/repos/huggingface/datasets/issues/475/events
https://github.com/huggingface/datasets/pull/475
672,884,595
MDExOlB1bGxSZXF1ZXN0NDYyODQzMzQz
475
misc. bugs and quality of life
{ "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "events_url": "https://api.github.com/users/joeddav/events{/privacy}", "followers_url": "https://api.github.com/users/joeddav/followers", "following_url": "https://api.github.com/users/joeddav/following{/other_user}", "gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joeddav", "id": 9353833, "login": "joeddav", "node_id": "MDQ6VXNlcjkzNTM4MzM=", "organizations_url": "https://api.github.com/users/joeddav/orgs", "received_events_url": "https://api.github.com/users/joeddav/received_events", "repos_url": "https://api.github.com/users/joeddav/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joeddav/subscriptions", "type": "User", "url": "https://api.github.com/users/joeddav" }
[]
closed
false
null
[]
null
[ "Cool thanks, I made those changes. LMK if you think it's ready for merge.", "Ok to merge for me" ]
2020-08-04T15:32:29Z
2020-08-17T21:14:08Z
2020-08-17T21:14:07Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/475.diff", "html_url": "https://github.com/huggingface/datasets/pull/475", "merged_at": "2020-08-17T21:14:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/475.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/475" }
A few misc. bugs and QOL improvements that I've come across in using the library. Let me know if you don't like any of them and I can adjust/remove them. 1. Printing datasets without a description field throws an error when formatting the `single_line_description`. This fixes that, and also adds some formatting to the repr to make it slightly more readable. ``` >>> print(list_datasets()[0]) nlp.ObjectInfo( id='aeslc', description='A collection of email messages of employees in the Enron Corporation.There are two features: - email_body: email body text. - subject_line: email subject text.', files=[nlp.S3Object('aeslc.py'), nlp.S3Object('dataset_infos.json'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/dev/allen-p_inbox_29.subject'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/test/allen-p_inbox_24.subject'), nlp.S3Object('dummy/1.0.0/dummy_data-zip-extracted/dummy_data/AESLC-master/enron_subject_line/train/allen-p_inbox_20.subject'), nlp.S3Object('dummy/1.0.0/dummy_data.zip'), nlp.S3Object('urls_checksums/checksums.txt')] ) ``` 2. Add id-only option to `list_datasets` and `list_metrics` to allow the user to easily print out just the names of the datasets & metrics. I often found myself annoyed that this took so many strokes to do. ```python [dataset.id for dataset in list_datasets()] # before list_datasets(id_only=True) # after ``` 3. Fix null-seed randomization caching. When using `train_test_split` and `shuffle`, the computation was being cached even without a seed or generator being passed. The result was that calling `.shuffle` more than once on the same dataset didn't do anything without passing a distinct seed or generator. Likewise with `train_test_split`. 4. Indexing by iterables of bool. I added support for passing an iterable of type bool to `_getitem` as a numpy/pandas-like indexing method. Let me know if you think it's redundant with `filter` (I know it's not optimal memory-wise), but I think it's nice to have as a lightweight alternative to do simple things without having to create a copy of the entire dataset, e.g. ```python dataset[dataset['label'] == 0] # numpy-like bool indexing to look at instances with labels of 0 ``` 5. Add an `input_column` argument to `map` and `filter`, which allows you to filter/map on a particular column rather than passing the whole dict to the function. Also adds `fn_kwargs` to be passed to the function. I think these together make mapping much cleaner in many cases such as mono-column tokenization: ```python # before dataset = dataset.map(lambda batch: tokenizer(batch["text"]) # after dataset = dataset.map(tokenizer, input_column="text") dataset = dataset.map(tokenizer, input_column="text", fn_kwargs={"truncation": True, "padding": True}) ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/475/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/475/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/474
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/474/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/474/comments
https://api.github.com/repos/huggingface/datasets/issues/474/events
https://github.com/huggingface/datasets/issues/474
672,407,330
MDU6SXNzdWU2NzI0MDczMzA=
474
test_load_real_dataset when config has BUILDER_CONFIGS that matter
{ "avatar_url": "https://avatars.githubusercontent.com/u/698010?v=4", "events_url": "https://api.github.com/users/marcotcr/events{/privacy}", "followers_url": "https://api.github.com/users/marcotcr/followers", "following_url": "https://api.github.com/users/marcotcr/following{/other_user}", "gists_url": "https://api.github.com/users/marcotcr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marcotcr", "id": 698010, "login": "marcotcr", "node_id": "MDQ6VXNlcjY5ODAxMA==", "organizations_url": "https://api.github.com/users/marcotcr/orgs", "received_events_url": "https://api.github.com/users/marcotcr/received_events", "repos_url": "https://api.github.com/users/marcotcr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marcotcr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcotcr/subscriptions", "type": "User", "url": "https://api.github.com/users/marcotcr" }
[]
closed
false
null
[]
null
[ "The `data_dir` parameter has been removed. Now the error is `ValueError: Config name is missing`\r\n\r\nAs mentioned in #470 I think we can have one test with the first config of BUILDER_CONFIGS, and another test that runs all of the configs in BUILDER_CONFIGS", "This was fixed in #527 \r\n\r\nClosing this one, but feel free to re-open if you have other questions" ]
2020-08-03T23:46:36Z
2020-09-07T14:53:13Z
2020-09-07T14:53:13Z
NONE
null
null
null
It a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non default values), the config is not loaded during the test and causes an error. I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingface/nlp/blob/master/tests/test_dataset_common.py#L200)). This causes [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/builder.py#L201) to always be false because `config_kwargs` is not `None`. [This line](https://github.com/huggingface/nlp/blob/master/src/nlp/builder.py#L222) will be run instead, which doesn't use `BUILDER_CONFIGS`. For an example, you can try running the test for lince: ` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lince` which yields > E TypeError: __init__() missing 3 required positional arguments: 'colnames', 'classes', and 'label_column'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/474/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/474/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/473/comments
https://api.github.com/repos/huggingface/datasets/issues/473/events
https://github.com/huggingface/datasets/pull/473
672,007,247
MDExOlB1bGxSZXF1ZXN0NDYyMTIwNzU4
473
add DoQA dataset (ACL 2020)
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
[]
2020-08-03T11:26:52Z
2020-09-10T17:19:11Z
2020-09-03T11:44:15Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/473.diff", "html_url": "https://github.com/huggingface/datasets/pull/473", "merged_at": "2020-09-03T11:44:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/473.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/473" }
add DoQA dataset (ACL 2020) http://ixa.eus/node/12931
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/473/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/473/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/472/comments
https://api.github.com/repos/huggingface/datasets/issues/472/events
https://github.com/huggingface/datasets/pull/472
672,000,745
MDExOlB1bGxSZXF1ZXN0NDYyMTE1MjA4
472
add crd3 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
[ "This PR was already approved by @lhoestq in #456 . This one just make style to remove some typos" ]
2020-08-03T11:15:02Z
2020-08-03T11:22:10Z
2020-08-03T11:22:09Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/472.diff", "html_url": "https://github.com/huggingface/datasets/pull/472", "merged_at": "2020-08-03T11:22:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/472.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/472" }
opening new PR for CRD3 dataset (ACL2020) to fix the circle CI problems
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/472/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/472/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/471/comments
https://api.github.com/repos/huggingface/datasets/issues/471/events
https://github.com/huggingface/datasets/pull/471
671,996,423
MDExOlB1bGxSZXF1ZXN0NDYyMTExNTU1
471
add reuters21578 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
[]
2020-08-03T11:07:14Z
2022-08-04T08:39:11Z
2020-09-03T09:58:50Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/471.diff", "html_url": "https://github.com/huggingface/datasets/pull/471", "merged_at": "2020-09-03T09:58:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/471.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/471" }
new PR to add the reuters21578 dataset and fix the circle CI problems. Fix partially: - #353 Subsequent PR after: - #449
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/471/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/471/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/470/comments
https://api.github.com/repos/huggingface/datasets/issues/470/events
https://github.com/huggingface/datasets/pull/470
671,952,276
MDExOlB1bGxSZXF1ZXN0NDYyMDc0MzQ0
470
Adding IWSLT 2017 dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Narsil", "id": 204321, "login": "Narsil", "node_id": "MDQ6VXNlcjIwNDMyMQ==", "organizations_url": "https://api.github.com/users/Narsil/orgs", "received_events_url": "https://api.github.com/users/Narsil/received_events", "repos_url": "https://api.github.com/users/Narsil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "type": "User", "url": "https://api.github.com/users/Narsil" }
[]
closed
false
null
[]
null
[ "Ok I tried to add the dummy dataset (I actually modified the dummy_data command to generate them for me because it was too painful to do that manually).\r\n\r\nThe dummy_data test seems to work:\r\n```bash\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_iwslt2017\r\n```\r\n\r\nHowever the test on the full data fails, because the `**config_kwargs` don't include `pair, multilingual`.\r\nI could add a default parameter for the Config (but that feels broken, how can one config be the \"default\" ?). If I do I still have errors, saying that something within the downloader is a directory so I'm not sure where that comes from.\r\n\r\nI can share my auto_zip dummy data code if you want (I tried to keep it clean). [Edit: it's [here](https://github.com/Narsil/nlp/tree/auto_zip)]. \r\nThe way it works is that it just keeps X line from the beginning of the original files, and Y lines at the end. It's good enough for my usage, but I guess it could work for most data files out there (as long as they're real text and not binary format)", "The slow test doesn't support dataset that require config parameters that don't have default values.\r\n\r\nTo improve that we can replace it by two tests:\r\n- one test that loads the default config (it can simply be the first config of the config lists for example)\r\n- one tests that iterate over all configs and load them all one by one\r\n\r\nBy using the configs inside the builder config lists, there is no need to instantiate new configs, so the missing parameter error doesn't happen.\r\n\r\nDoes that sound good to you ?", "Seems fair.\r\nHowever I'm unsure what I should do ?\r\n\r\nShould I wait for #527 to pass and rebase and the command will be the same ?\r\nShould I update something ?", "I think everything is fine on your side. Thanks for adding this dataset :)\r\n\r\nI think it's better to wait for the slow test to be updated if you don't mind.\r\n", "Sure ! :)", "Thanks for fixing the isort/black changes :)\r\nFeel free to merge if it's good for you @Narsil " ]
2020-08-03T09:52:39Z
2020-09-07T12:33:30Z
2020-09-07T12:33:30Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/470.diff", "html_url": "https://github.com/huggingface/datasets/pull/470", "merged_at": "2020-09-07T12:33:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/470.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/470" }
Created a [IWSLT 2017](https://sites.google.com/site/iwsltevaluation2017/TED-tasks) dataset script for the *multilingual data*. ``` Bilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English Multilingual data: German, English, Italian, Dutch, Romanian. (Any pair) ``` I'm unsure how to handle bilingual vs multilingual. Given `nlp` architecture a Config option seems to be the way to go, however, it might be a bit confusing to have different language pairs with different option. Using just language pairs is not viable as English to German exists in both. Any opinion on how that should be done ? EDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist. EDIT : Could be interesting for #438
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/470/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/470/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/469/comments
https://api.github.com/repos/huggingface/datasets/issues/469/events
https://github.com/huggingface/datasets/issues/469
671,876,963
MDU6SXNzdWU2NzE4NzY5NjM=
469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/30617486?v=4", "events_url": "https://api.github.com/users/Murgates/events{/privacy}", "followers_url": "https://api.github.com/users/Murgates/followers", "following_url": "https://api.github.com/users/Murgates/following{/other_user}", "gists_url": "https://api.github.com/users/Murgates/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Murgates", "id": 30617486, "login": "Murgates", "node_id": "MDQ6VXNlcjMwNjE3NDg2", "organizations_url": "https://api.github.com/users/Murgates/orgs", "received_events_url": "https://api.github.com/users/Murgates/received_events", "repos_url": "https://api.github.com/users/Murgates/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Murgates/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Murgates/subscriptions", "type": "User", "url": "https://api.github.com/users/Murgates" }
[]
open
false
null
[]
null
[ "Hi ! Did you try to set the output format to pytorch ? (or tensorflow if you're using tensorflow)\r\nIt can be done with `dataset.set_format(\"torch\", columns=columns)` (or \"tensorflow\").\r\n\r\nNote that for pytorch, string columns can't be converted to `torch.Tensor`, so you have to specify in `columns=` the list of columns you want to keep (`input_ids` for example)", "Hello . Yes, I did set the output format as below for the two columns \r\n\r\n `train_dataset.set_format('torch',columns=['Text','Label'])`\r\n ", "I think you're having this issue because you try to format strings as pytorch tensors, which is not possible.\r\nIndeed by having \"Text\" in `columns=['Text','Label']`, you try to convert the text values to pytorch tensors.\r\n\r\nInstead I recommend you to first tokenize your dataset using a tokenizer from transformers. For example\r\n\r\n```python\r\nfrom transformers import BertTokenizer\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\ntrain_dataset.map(lambda x: tokenizer(x[\"Text\"]), batched=True)\r\ntrain_dataset.set_format(\"torch\", column=[\"input_ids\"])\r\n```\r\n\r\nAnother way to fix your issue would be to not set the format to pytorch, and leave the dataset as it is by default. In that case, the strings are returned normally when you get examples from your dataloader. It means that you would have to tokenize the examples in the training loop (or using a data collator) though.\r\n\r\nLet me know if you have other questions", "Hi, actually the thing is I am getting the same error and even after tokenizing them I am passing them through batch_encode_plus.\r\nI dont know what seems to be the problem is. I even converted it into 'pt' while passing them through batch_encode_plus but when I am evaluating my model , i am getting this error\r\n\r\n\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-145-ca218223c9fc> in <module>()\r\n----> 1 val_loss, predictions, true_val = evaluate(dataloader_validation)\r\n 2 val_f1 = f1_score_func(predictions, true_val)\r\n 3 tqdm.write(f'Validation loss: {val_loss}')\r\n 4 tqdm.write(f'F1 Score (Weighted): {val_f1}')\r\n\r\n6 frames\r\n/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py in <genexpr>(.0)\r\n 160 \r\n 161 def __getitem__(self, index):\r\n--> 162 return tuple(tensor[index] for tensor in self.tensors)\r\n 163 \r\n 164 def __len__(self):\r\n\r\nTypeError: new(): invalid data type 'str' ", "> Hi, actually the thing is I am getting the same error and even after tokenizing them I am passing them through batch_encode_plus.\r\n> I dont know what seems to be the problem is. 
I even converted it into 'pt' while passing them through batch_encode_plus but when I am evaluating my model , i am getting this error\r\n> \r\n> TypeError Traceback (most recent call last)\r\n> in ()\r\n> ----> 1 val_loss, predictions, true_val = evaluate(dataloader_validation)\r\n> 2 val_f1 = f1_score_func(predictions, true_val)\r\n> 3 tqdm.write(f'Validation loss: {val_loss}')\r\n> 4 tqdm.write(f'F1 Score (Weighted): {val_f1}')\r\n> \r\n> 6 frames\r\n> /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py in (.0)\r\n> 160\r\n> 161 def **getitem**(self, index):\r\n> --> 162 return tuple(tensor[index] for tensor in self.tensors)\r\n> 163\r\n> 164 def **len**(self):\r\n> \r\n> TypeError: new(): invalid data type 'str'\r\n\r\nI got the same error and fix it .\r\nyou can check your input where there may be string contained.\r\nsuch as\r\n```\r\na = [1,2,3,4,'<unk>']\r\ntorch.tensor(a)\r\n```", "I didn't know tokenizers could return strings in the token ids. Which tokenizer are you using to get this @Doragd ?", "> I didn't know tokenizers could return strings in the token ids. Which tokenizer are you using to get this @Doragd ?\r\n\r\ni'm sorry that i met this issue in another place (not in huggingface repo). ", "@akhilkapil do you have strings in your dataset ? When you set the dataset format to \"pytorch\" you should exclude columns with strings as pytorch can't make tensors out of strings" ]
2020-08-03T07:48:29Z
2020-10-22T09:04:26Z
null
NONE
null
null
null
I trying to build multi label text classifier model using Transformers lib. I'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type 'str' I'm using pyarrow 1.0.0. And I have simple custom data set with Text and Integer Label. Ex: Data Text , Label #Column Header I'm facing an Network issue, 1 I forgot my password, 2 Error StackTrace: File "C:\**\transformers\trainer.py", line 492, in train for step, inputs in enumerate(epoch_iterator): File "C:\**\tqdm\std.py", line 1104, in __iter__ for obj in iterable: File "C:\**\torch\utils\data\dataloader.py", line 345, in __next__ data = self._next_data() File "C:\**\torch\utils\data\dataloader.py", line 385, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "C:\**\torch\utils\data\_utils\fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "C:\**\nlp\arrow_dataset.py", line 414, in __getitem__ output_all_columns=self._output_all_columns, File "C:\**\nlp\arrow_dataset.py", line 403, in _getitem outputs, format_type=format_type, format_columns=format_columns, output_all_columns=output_all_columns File "C:\**\nlp\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type 'str'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/469/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/469/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/468/comments
https://api.github.com/repos/huggingface/datasets/issues/468/events
https://github.com/huggingface/datasets/issues/468
671,622,441
MDU6SXNzdWU2NzE2MjI0NDE=
468
UnicodeDecodeError while loading PAN-X task of XTREME dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
[ "Indeed. Solution 1 is the simplest.\r\n\r\nThis is actually a recurring problem.\r\nI think we should scan all the datasets with regexpr to fix the use of `open()` without encodings.\r\nAnd probably add a test in the CI to forbid using this in the future.", "I'm happy to tackle the broader problem - will open a PR when it's ready!", "That would be awesome!", "I've created a simple function that seems to do the trick:\r\n\r\n```python\r\ndef apply_encoding_on_file_open(filepath: str):\r\n \"\"\"Apply UTF-8 encoding for all instances where a non-binary file is opened.\"\"\"\r\n \r\n with open(filepath, 'r', encoding='utf-8') as input_file:\r\n regexp = re.compile(r\"\"\"\r\n (?!.*\\b(?:encoding|rb|wb|wb+|ab|ab+)\\b)\r\n (open)\r\n \\((.*)\\)\r\n \"\"\")\r\n input_text = input_file.read()\r\n match = regexp.search(input_text)\r\n \r\n if match:\r\n print('Found match!', match.group())\r\n # append utf-8 encoding to matching groups in-place\r\n output = regexp.sub(lambda m: m.group()[:-1]+', encoding=\"utf-8\")', input_text)\r\n with open(filepath, 'w', encoding='utf-8') as output_file:\r\n output_file.write(output)\r\n else:\r\n print(\"No match found!\")\r\n```\r\n\r\nThe regexp does a negative lookahead to avoid matching on cases where the encoding is already specified or when binary files are involved.\r\n\r\nFrom an implementation perspective:\r\n\r\n* Would it make sense to include this function in `nlp-cli` so that we can run something like\r\n```\r\nnlp-cli fix_encoding path/to/folder\r\n```\r\nand the command recursively fixes all files in the target?\r\n* What is the desired behaviour in the CI test? Here we could either have a simple script that we run as a `job` in the CI and raises an error if a missing encoding is detected. Alternatively we could incorporate this behaviour into the CLI and run that in the CI.\r\n\r\nPlease let me know what you prefer among the alternatives.\r\n", "I realised I was overthinking the problem, so decided to just run the regexp over the codebase and make the PR. In other words, we can ignore my comments about using the CLI 😸 " ]
2020-08-02T14:05:10Z
2020-08-20T08:16:08Z
2020-08-20T08:16:08Z
MEMBER
null
null
null
Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-5-1d61f439b843> in <module> ----> 1 dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 528 ignore_verifications = ignore_verifications or save_infos 529 # Download/copy dataset processing script --> 530 module_path, hash = prepare_module(path, download_config=download_config, dataset=True) 531 532 # Get dataset builder class from the processing script /usr/local/lib/python3.6/dist-packages/nlp/load.py in prepare_module(path, download_config, dataset, force_local_path, **download_kwargs) 265 266 # Download external imports if needed --> 267 imports = get_imports(local_path) 268 local_imports = [] 269 library_imports = [] /usr/local/lib/python3.6/dist-packages/nlp/load.py in get_imports(file_path) 156 lines = [] 157 with open(file_path, mode="r") as f: --> 158 lines.extend(f.readlines()) 159 160 logger.info("Checking %s for additional imports.", file_path) /usr/lib/python3.6/encodings/ascii.py in decode(self, input, final) 24 class IncrementalDecoder(codecs.IncrementalDecoder): 25 def decode(self, input, final=False): ---> 26 return codecs.ascii_decode(input, self.errors)[0] 27 28 class StreamWriter(Codec,codecs.StreamWriter): UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 111: ordinal not in range(128) ``` ## Steps to reproduce Install from nlp's master branch ```python pip install git+https://github.com/huggingface/nlp.git ``` then run ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') ``` ## OS / platform details - `nlp` version: latest from master - Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.1.0 (True) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ## Proposed solution Either change [line 762](https://github.com/huggingface/nlp/blob/7ada00b1d62f94eee22a7df38c6b01e3f27194b7/datasets/xtreme/xtreme.py#L762) in `xtreme.py` to include UTF-8 encoding: ``` # old with open(filepath) as f # new with open(filepath, encoding='utf-8') as f ``` or raise a warning that suggests setting the locale explicitly, e.g. ```python import locale locale.setlocale(locale.LC_ALL, 'C.UTF-8') ``` I have a preference for the first solution. Let me know if you agree and I'll be happy to implement the simple fix!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/468/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/468/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/467/comments
https://api.github.com/repos/huggingface/datasets/issues/467/events
https://github.com/huggingface/datasets/pull/467
671,580,010
MDExOlB1bGxSZXF1ZXN0NDYxNzgwMzUy
467
DOCS: Fix typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4", "events_url": "https://api.github.com/users/Bharat123rox/events{/privacy}", "followers_url": "https://api.github.com/users/Bharat123rox/followers", "following_url": "https://api.github.com/users/Bharat123rox/following{/other_user}", "gists_url": "https://api.github.com/users/Bharat123rox/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Bharat123rox", "id": 13381361, "login": "Bharat123rox", "node_id": "MDQ6VXNlcjEzMzgxMzYx", "organizations_url": "https://api.github.com/users/Bharat123rox/orgs", "received_events_url": "https://api.github.com/users/Bharat123rox/received_events", "repos_url": "https://api.github.com/users/Bharat123rox/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bharat123rox/subscriptions", "type": "User", "url": "https://api.github.com/users/Bharat123rox" }
[]
closed
false
null
[]
null
[ "Thanks!" ]
2020-08-02T08:59:37Z
2020-08-02T13:52:27Z
2020-08-02T09:18:54Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/467.diff", "html_url": "https://github.com/huggingface/datasets/pull/467", "merged_at": "2020-08-02T09:18:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/467.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/467" }
Fix typo from dictionnary -> dictionary
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/467/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/467/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/466/comments
https://api.github.com/repos/huggingface/datasets/issues/466/events
https://github.com/huggingface/datasets/pull/466
670,766,891
MDExOlB1bGxSZXF1ZXN0NDYxMDEzOTM0
466
[METRICS] Various improvements on metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[ "The cast function is now called inside `features.encode_example`.\r\nI also added `encode_batch` that was missing.\r\n\r\nMoreover I used the cast function in `Dataset.map` to support torch/tensorflow tensors or numpy arrays inputs.\r\n\r\nThere are tests for tensors inputs in metrics and in .map", "I think we can merge" ]
2020-08-01T11:03:45Z
2020-08-17T15:15:00Z
2020-08-17T15:14:59Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/466.diff", "html_url": "https://github.com/huggingface/datasets/pull/466", "merged_at": "2020-08-17T15:14:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/466.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/466" }
- Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes - Allow to directly feed numpy/pytorch/tensorflow/pandas objects in metrics
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/466/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/466/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/465/comments
https://api.github.com/repos/huggingface/datasets/issues/465/events
https://github.com/huggingface/datasets/pull/465
669,889,779
MDExOlB1bGxSZXF1ZXN0NDYwMjEwODYw
465
Keep features after transform
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "One note on features inference:\r\n\r\nif an arrow type is `struct of items` where each item is a `list`, then we return a `dict` in which each item is a `Sequence`.\r\nIt means that we don't use the Sequence <-> dict swap when we infer features.\r\n\r\nIt's fine because the swap is generally used in dataset scripts, in which features are defined (inferred features are discarded)", "If it's fine for you @thomwolf we can merge this one :) ", "Yes this is fine I think!" ]
2020-07-31T14:43:21Z
2020-07-31T18:27:33Z
2020-07-31T18:27:32Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/465.diff", "html_url": "https://github.com/huggingface/datasets/pull/465", "merged_at": "2020-07-31T18:27:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/465.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/465" }
When applying a transform like `map`, some features were lost (and inferred features were used). It was the case for ClassLabel, Translation, etc. To fix that, I did some modifications in the `ArrowWriter`: - added the `update_features` parameter. When it's `True`, then the features specified by the user (if any) can be updated with inferred features if their type don't match. `map` transform sets `update_features=True` when writing to cache file or buffer. Features won't change by default in `map`. - added the `with_metadata` parameter. If `True`, the `features` (after update) will be written inside the metadata of the schema in this format: ``` { "huggingface": {"features" : <serialized Features exactly like dataset_info.json>} } ``` Then, once a dataset is instantiated without info/features, these metadata are used to set the features of the dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/465/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/465/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/464/comments
https://api.github.com/repos/huggingface/datasets/issues/464/events
https://github.com/huggingface/datasets/pull/464
669,767,381
MDExOlB1bGxSZXF1ZXN0NDYwMTAxNDYz
464
Add rename, remove and cast in-place operations
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-07-31T12:30:21Z
2020-07-31T15:50:02Z
2020-07-31T15:50:00Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/464.diff", "html_url": "https://github.com/huggingface/datasets/pull/464", "merged_at": "2020-07-31T15:50:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/464.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/464" }
Add a bunch of in-place operation leveraging the Arrow back-end to rename and remove columns and cast to new features without using the more expensive `map` method. These methods are added to `Dataset` as well as `DatasetDict`. Added tests for these new methods and add the methods to the doc. Naming follows the new pattern with a trailing underscore indicating in-place methods.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/464/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/464/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/463/comments
https://api.github.com/repos/huggingface/datasets/issues/463/events
https://github.com/huggingface/datasets/pull/463
669,735,455
MDExOlB1bGxSZXF1ZXN0NDYwMDcyNjQ1
463
Add dataset/mlsum
{ "avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4", "events_url": "https://api.github.com/users/RachelKer/events{/privacy}", "followers_url": "https://api.github.com/users/RachelKer/followers", "following_url": "https://api.github.com/users/RachelKer/following{/other_user}", "gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RachelKer", "id": 36986299, "login": "RachelKer", "node_id": "MDQ6VXNlcjM2OTg2Mjk5", "organizations_url": "https://api.github.com/users/RachelKer/orgs", "received_events_url": "https://api.github.com/users/RachelKer/received_events", "repos_url": "https://api.github.com/users/RachelKer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions", "type": "User", "url": "https://api.github.com/users/RachelKer" }
[]
closed
false
null
[]
null
[ "I think the problem is related to `wiki_dpr` dataset which is making the circle CI failed as you can see:\r\n```\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/dummy_psgs_w100_no_embeddings\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/dummy_psgs_w100_with_nq_embeddings\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/psgs_w100_no_embeddings\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/psgs_w100_with_nq_embeddings\r\n\r\n```\r\nI'm facing the same issues with my last commits, I tried to rebase from master but it still not working. Maybe @lhoestq can help with.", "Hello, I am confused about the next steps I need to do. Did the forced merge solve the issue ?", "Hello :)\r\nI think you can just rebase from master and it should solve the CI error" ]
2020-07-31T11:50:52Z
2020-08-24T14:54:42Z
2020-08-24T14:54:42Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/463.diff", "html_url": "https://github.com/huggingface/datasets/pull/463", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/463.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/463" }
New pull request that should correct the previous errors. The load_real_data stills fails because it is looking for a default dataset URL that does not exists, this does not happen when loading the dataset with load_dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/463/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/463/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/462/comments
https://api.github.com/repos/huggingface/datasets/issues/462/events
https://github.com/huggingface/datasets/pull/462
669,715,547
MDExOlB1bGxSZXF1ZXN0NDYwMDU0NDgz
462
add DoQA (ACL 2020) dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
[]
2020-07-31T11:25:56Z
2020-08-03T11:28:27Z
2020-08-03T11:28:27Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/462.diff", "html_url": "https://github.com/huggingface/datasets/pull/462", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/462.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/462" }
adds DoQA (ACL 2020) dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/462/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/462/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/461/comments
https://api.github.com/repos/huggingface/datasets/issues/461/events
https://github.com/huggingface/datasets/pull/461
669,703,508
MDExOlB1bGxSZXF1ZXN0NDYwMDQzNDY5
461
Doqa
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
[]
2020-07-31T11:11:12Z
2020-07-31T11:13:15Z
2020-07-31T11:13:15Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/461.diff", "html_url": "https://github.com/huggingface/datasets/pull/461", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/461.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/461" }
add DoQA (ACL 2020) dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/461/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/461/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/460/comments
https://api.github.com/repos/huggingface/datasets/issues/460/events
https://github.com/huggingface/datasets/pull/460
669,585,256
MDExOlB1bGxSZXF1ZXN0NDU5OTM2OTU2
460
Fix KeyboardInterrupt in map and bad indices in select
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "Thanks @TevenLeScao for finding this issue", "Thanks @lhoestq for catching this ❤️" ]
2020-07-31T08:57:15Z
2020-07-31T11:32:19Z
2020-07-31T11:32:18Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/460.diff", "html_url": "https://github.com/huggingface/datasets/pull/460", "merged_at": "2020-07-31T11:32:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/460.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/460" }
If you interrupted a map function while it was writing, the cached file was not discarded. Therefore the next time you called map, it was loading an incomplete arrow file. We had the same issue with select if there was a bad indice at one point. To fix that I used temporary files that are renamed once everything is finished.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/460/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/460/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/459/comments
https://api.github.com/repos/huggingface/datasets/issues/459/events
https://github.com/huggingface/datasets/pull/459
669,545,437
MDExOlB1bGxSZXF1ZXN0NDU5OTAxMjEy
459
[Breaking] Update Dataset and DatasetDict API
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-07-31T08:11:33Z
2020-08-26T08:28:36Z
2020-08-26T08:28:35Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/459.diff", "html_url": "https://github.com/huggingface/datasets/pull/459", "merged_at": "2020-08-26T08:28:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/459.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/459" }
This PR contains a few breaking changes so it's probably good to keep it for the next (major) release: - rename the `flatten`, `drop` and `dictionary_encode_column` methods to `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects, as discussed in #166. From now on we should keep the convention of having a trailing underscore for methods which have an in-place effect. I also adopt the convention of not returning the (self) dataset for these methods. This is different than what PyTorch does for instance (`model.to()` is in-place but returns the self model) but I feel like it's a safer approach in terms of UX. - remove the `dataset.columns` property, which returns a low-level Apache Arrow object and should not be used by users. Similarly, remove `dataset.nbytes`, which we don't really want to expose in this bare-bone format. - add a few more properties and methods to `DatasetDict`
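An illustrative sketch of the trailing-underscore convention argued for above, using a hypothetical `Table` class rather than the real `Dataset`:

```python
class Table:
    def __init__(self, columns):
        self._columns = dict(columns)

    def drop_(self, column_name: str) -> None:
        # Trailing underscore: modifies the object in place and returns None,
        # so accidental chaining fails loudly instead of silently reusing self.
        del self._columns[column_name]

    def drop(self, column_name: str) -> "Table":
        # No underscore: returns a new object and leaves self untouched.
        remaining = {k: v for k, v in self._columns.items() if k != column_name}
        return Table(remaining)
```

Returning `None` from the in-place variant is the UX trade-off described in the PR body: it differs from the PyTorch style where `model.to()` mutates and still returns `self`.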
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/459/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/459/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/458/comments
https://api.github.com/repos/huggingface/datasets/issues/458/events
https://github.com/huggingface/datasets/pull/458
668,972,666
MDExOlB1bGxSZXF1ZXN0NDU5Mzk5ODg2
458
Install CoVal metric from github
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[]
closed
false
null
[]
null
[]
2020-07-30T16:59:25Z
2020-07-31T13:56:33Z
2020-07-31T13:56:33Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/458.diff", "html_url": "https://github.com/huggingface/datasets/pull/458", "merged_at": "2020-07-31T13:56:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/458.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/458" }
Changed the import statements in `coval.py` to direct the user to install the original package from GitHub if it's not already installed (the warning will only display properly after merging [PR455](https://github.com/huggingface/nlp/pull/455)). Also changed the function call to use named rather than positional arguments.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/458/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/458/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/457/comments
https://api.github.com/repos/huggingface/datasets/issues/457/events
https://github.com/huggingface/datasets/pull/457
668,898,386
MDExOlB1bGxSZXF1ZXN0NDU5MzMyOTM1
457
add set_format to DatasetDict + tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomwolf", "id": 7353373, "login": "thomwolf", "node_id": "MDQ6VXNlcjczNTMzNzM=", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "repos_url": "https://api.github.com/users/thomwolf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "type": "User", "url": "https://api.github.com/users/thomwolf" }
[]
closed
false
null
[]
null
[]
2020-07-30T15:53:20Z
2020-07-30T17:34:36Z
2020-07-30T17:34:34Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/457.diff", "html_url": "https://github.com/huggingface/datasets/pull/457", "merged_at": "2020-07-30T17:34:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/457.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/457" }
Add the `set_format`, `formated_as` and `reset_format` methods to `DatasetDict`. Add tests for these for `Dataset` and `DatasetDict`. Fix some bugs uncovered by the tests for `pandas` formatting.
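A hedged usage sketch of the `DatasetDict`-level formatting methods added here (the dataset name and column are placeholders, and the `type`/`columns` keyword arguments are assumed to mirror the existing `Dataset.set_format` signature):

```python
from nlp import load_dataset

dsets = load_dataset("glue", "mrpc")  # a DatasetDict with train/validation/test splits

# Apply the same output format to every split at once instead of split by split.
dsets.set_format(type="torch", columns=["label"])
print(dsets["train"][0])  # only the selected columns, returned as torch tensors

# Go back to returning plain python objects for all splits.
dsets.reset_format()
```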
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/457/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/457/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/456/comments
https://api.github.com/repos/huggingface/datasets/issues/456/events
https://github.com/huggingface/datasets/pull/456
668,723,785
MDExOlB1bGxSZXF1ZXN0NDU5MTc1MTY0
456
add crd3(ACL 2020) dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "events_url": "https://api.github.com/users/mariamabarham/events{/privacy}", "followers_url": "https://api.github.com/users/mariamabarham/followers", "following_url": "https://api.github.com/users/mariamabarham/following{/other_user}", "gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariamabarham", "id": 38249783, "login": "mariamabarham", "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "organizations_url": "https://api.github.com/users/mariamabarham/orgs", "received_events_url": "https://api.github.com/users/mariamabarham/received_events", "repos_url": "https://api.github.com/users/mariamabarham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions", "type": "User", "url": "https://api.github.com/users/mariamabarham" }
[]
closed
false
null
[]
null
[]
2020-07-30T13:28:35Z
2020-08-03T11:28:52Z
2020-08-03T11:28:52Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/456.diff", "html_url": "https://github.com/huggingface/datasets/pull/456", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/456.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/456" }
This PR adds the **Critical Role Dungeons and Dragons Dataset** (CRD3), published at ACL 2020.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/456/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/456/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/455
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/455/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/455/comments
https://api.github.com/repos/huggingface/datasets/issues/455/events
https://github.com/huggingface/datasets/pull/455
668,037,965
MDExOlB1bGxSZXF1ZXN0NDU4NTk4NTUw
455
Add bleurt
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[]
closed
false
null
[]
null
[ "Sorry one nit: Could we use named arguments for the call to BLEURT?\r\n\r\ni.e. \r\n scores = self.scorer.score(references=references, candidates=predictions)\r\n\r\n(i.e. so it is less bug prone)\r\n", "Following up on Ankur's comment---we are going to drop support for\npositional (not named) arguments in the future releases because it seems to\ncause bugs and confusion. I hope it doesn't create too much of a mess.\n\nLe jeu. 30 juil. 2020 à 10:44, ankparikh <notifications@github.com> a\nécrit :\n\n> Sorry one nit: Could we use named arguments for the call to BLEURT?\n>\n> i.e.\n> scores = self.scorer.score(references=references, candidates=predictions)\n>\n> (i.e. so it is less bug prone)\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/nlp/pull/455#issuecomment-666414514>, or\n> unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABTMRNGAN2PMECS5K4DIHJDR6GBMLANCNFSM4PL323FA>\n> .\n>\n", "> Following up on Ankur's comment---we are going to drop support for positional (not named) arguments in the future releases because it seems to cause bugs and confusion. I hope it doesn't create too much of a mess. Le jeu. 30 juil. 2020 à 10:44, ankparikh <notifications@github.com> a écrit :\r\n> […](#)\r\n> Sorry one nit: Could we use named arguments for the call to BLEURT? i.e. scores = self.scorer.score(references=references, candidates=predictions) (i.e. so it is less bug prone) — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub <[#455 (comment)](https://github.com/huggingface/nlp/pull/455#issuecomment-666414514)>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/ABTMRNGAN2PMECS5K4DIHJDR6GBMLANCNFSM4PL323FA> .\r\n\r\nChanged @ankparikh @tsellam, thanks for taking a look!", "We should avoid positional arguments in metrics on our side as well. It's a dangerous source of errors indeed." ]
2020-07-29T18:08:32Z
2020-07-31T13:56:14Z
2020-07-31T13:56:14Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/455.diff", "html_url": "https://github.com/huggingface/datasets/pull/455", "merged_at": "2020-07-31T13:56:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/455.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/455" }
This PR adds the BLEURT metric to the library. The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). The default is set to `bleurt-base-128`. Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. I think it's safer to have our users get a functioning metric when they call the default behavior; we'll address discrepancies in the issues/discussions if they come up. In addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPI. cc @ankparikh @tsellam
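A hedged usage sketch (assuming the usual `load_metric(path, config_name)` call and keyword arguments to `compute`; the exact keywords may differ between releases, and the checkpoint is downloaded when the metric is created):

```python
import nlp

# Pick a specific checkpoint; leaving out the second argument falls back to
# the default described above (`bleurt-base-128`).
bleurt = nlp.load_metric("bleurt", "bleurt-tiny-128")

scores = bleurt.compute(
    predictions=["hello there", "general kenobi"],
    references=["hello there", "general kenobi"],
)
print(scores)
```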
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/455/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/455/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/454/comments
https://api.github.com/repos/huggingface/datasets/issues/454/events
https://github.com/huggingface/datasets/pull/454
668,011,577
MDExOlB1bGxSZXF1ZXN0NDU4NTc3MzA3
454
Create SECURITY.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/56394989?v=4", "events_url": "https://api.github.com/users/ChenZehong13/events{/privacy}", "followers_url": "https://api.github.com/users/ChenZehong13/followers", "following_url": "https://api.github.com/users/ChenZehong13/following{/other_user}", "gists_url": "https://api.github.com/users/ChenZehong13/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ChenZehong13", "id": 56394989, "login": "ChenZehong13", "node_id": "MDQ6VXNlcjU2Mzk0OTg5", "organizations_url": "https://api.github.com/users/ChenZehong13/orgs", "received_events_url": "https://api.github.com/users/ChenZehong13/received_events", "repos_url": "https://api.github.com/users/ChenZehong13/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ChenZehong13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenZehong13/subscriptions", "type": "User", "url": "https://api.github.com/users/ChenZehong13" }
[]
closed
false
null
[]
null
[]
2020-07-29T17:23:34Z
2020-07-29T21:45:52Z
2020-07-29T21:45:52Z
NONE
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/454.diff", "html_url": "https://github.com/huggingface/datasets/pull/454", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/454.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/454" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/454/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/454/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/453
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/453/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/453/comments
https://api.github.com/repos/huggingface/datasets/issues/453/events
https://github.com/huggingface/datasets/pull/453
667,728,247
MDExOlB1bGxSZXF1ZXN0NDU4MzQwNzky
453
add builder tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2020-07-29T10:22:07Z
2020-07-29T11:14:06Z
2020-07-29T11:14:05Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/453.diff", "html_url": "https://github.com/huggingface/datasets/pull/453", "merged_at": "2020-07-29T11:14:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/453.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/453" }
I added `as_dataset` and `download_and_prepare` to the tests.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/453/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/453/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/452/comments
https://api.github.com/repos/huggingface/datasets/issues/452/events
https://github.com/huggingface/datasets/pull/452
667,498,295
MDExOlB1bGxSZXF1ZXN0NDU4MTUzNjQy
452
Guardian authorship dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/25109412?v=4", "events_url": "https://api.github.com/users/malikaltakrori/events{/privacy}", "followers_url": "https://api.github.com/users/malikaltakrori/followers", "following_url": "https://api.github.com/users/malikaltakrori/following{/other_user}", "gists_url": "https://api.github.com/users/malikaltakrori/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/malikaltakrori", "id": 25109412, "login": "malikaltakrori", "node_id": "MDQ6VXNlcjI1MTA5NDEy", "organizations_url": "https://api.github.com/users/malikaltakrori/orgs", "received_events_url": "https://api.github.com/users/malikaltakrori/received_events", "repos_url": "https://api.github.com/users/malikaltakrori/repos", "site_admin": false, "starred_url": "https://api.github.com/users/malikaltakrori/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/malikaltakrori/subscriptions", "type": "User", "url": "https://api.github.com/users/malikaltakrori" }
[]
closed
false
null
[]
null
[ "Hi ! Glad you managed to fix the version issue.\r\n\r\nThe command `\r\npython nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs` is supposed to generate a json file `dataset_infos.json` next to your dataset script, but I can't see it in the PR.\r\nCan you make sure you have the json file on your side and that you have pushed it ?", "Done!", "Is there anything else that I should do? and would the new dataset be available via the NLP package now? ", "Sorry I forgot to merge this one ! Doing it now", "Thanks for the heads up ;)", "No worries, this is my first contribution to an online package, and I feel very proud it's part of this library :) Thank you very much!" ]
2020-07-29T02:23:57Z
2020-08-20T15:09:57Z
2020-08-20T15:07:56Z
CONTRIBUTOR
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/452.diff", "html_url": "https://github.com/huggingface/datasets/pull/452", "merged_at": "2020-08-20T15:07:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/452.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/452" }
A new dataset: Guardian news articles for authorship attribution **Tests passed:** python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship **Tests failed:** Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...' Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with 2 other datasets and they failed: * _glue - OSError: Cannot find data file. *_newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist Thank you for letting us contribute to such a huge and important library! EDIT: I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence: they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/452/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/452/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/451/comments
https://api.github.com/repos/huggingface/datasets/issues/451/events
https://github.com/huggingface/datasets/pull/451
667,210,468
MDExOlB1bGxSZXF1ZXN0NDU3OTIxNDMx
451
Fix csv/json/txt cache dir
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I think this is the way to go but I’m afraid this might be a little slow. I was thinking that we could use a high quality very fast non crypto hash like xxhash for these stuff (hashing data files)", "Yep good idea, I'll take a look", "I tested the hashing speed [here](https://colab.research.google.com/drive/1hlhP84kLIHmOzMRQN1h8x10hKWpXXyud?usp=sharing).\r\nI was able to get 8x speed with `xxhashlib` (42ms vs 345ms for 100MiB of data).\r\nWhat do you think @thomwolf ?", "I added xxhash and some tests" ]
2020-07-28T16:30:51Z
2020-07-29T13:57:23Z
2020-07-29T13:57:22Z
MEMBER
null
false
{ "diff_url": "https://github.com/huggingface/datasets/pull/451.diff", "html_url": "https://github.com/huggingface/datasets/pull/451", "merged_at": "2020-07-29T13:57:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/451.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/451" }
The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user. To fix that, I added a line that uses the hash of the data files provided by the user to define the cache dir. This should fix #444.
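A minimal sketch of the general idea (illustrative only, not the library's actual code; the function and directory names are hypothetical). The follow-up comments on this PR suggest a fast non-cryptographic hash such as `xxhash`, but `hashlib` keeps this sketch dependency-free:

```python
import hashlib
import os

def cache_dir_for(data_files, base_cache_dir="~/.cache/my_datasets"):
    # Hash the user's data file paths and contents so that different inputs
    # end up in different cache directories instead of colliding.
    m = hashlib.sha256()
    for path in sorted(data_files):
        m.update(path.encode("utf-8"))
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                m.update(chunk)
    return os.path.join(os.path.expanduser(base_cache_dir), m.hexdigest()[:16])
```

Swapping `hashlib.sha256()` for `xxhash.xxh64()` would keep the same structure while speeding up the hashing step, which is the optimization discussed in the PR comments.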
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/451/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/451/timeline
null
null
true