url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-2.51B) | node_id (stringlengths 18-32) | number (int64 1-7.14k) | title (stringlengths 1-290) | user (dict) | labels (listlengths 0-4) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (dict) | assignees (listlengths 0-4) | milestone (dict) | comments (sequencelengths 0-30 ⌀) | created_at (timestamp[ns]) | updated_at (timestamp[ns]) | closed_at (timestamp[ns]) | author_association (stringclasses 4 values) | active_lock_reason (float64) | draft (float64 0-1 ⌀) | pull_request (dict) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (float64) | state_reason (stringclasses 3 values) | is_pull_request (bool 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/297/comments | https://api.github.com/repos/huggingface/datasets/issues/297/events | https://github.com/huggingface/datasets/issues/297 | 643,444,625 | MDU6SXNzdWU2NDM0NDQ2MjU= | 297 | Error in Demo for Specific Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/60150701?v=4",
"events_url": "https://api.github.com/users/s-jse/events{/privacy}",
"followers_url": "https://api.github.com/users/s-jse/followers",
"following_url": "https://api.github.com/users/s-jse/following{/other_user}",
"gists_url": "https://api.github.com/users/s-jse/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/s-jse",
"id": 60150701,
"login": "s-jse",
"node_id": "MDQ6VXNlcjYwMTUwNzAx",
"organizations_url": "https://api.github.com/users/s-jse/orgs",
"received_events_url": "https://api.github.com/users/s-jse/received_events",
"repos_url": "https://api.github.com/users/s-jse/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/s-jse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s-jse/subscriptions",
"type": "User",
"url": "https://api.github.com/users/s-jse"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | [
"Thanks for reporting these errors :)\r\n\r\nI can actually see two issues here.\r\n\r\nFirst, datasets like `natural_questions` require apache_beam to be processed. Right now the import is not at the right place so we have this error message. However, even the imports are fixed, the nlp viewer doesn't actually have the resources to process NQ right now so we'll have to wait until we have a version that we've already processed on our google storage (that's what we've done for wikipedia for example).\r\n\r\nSecond, datasets like `newsroom` require manual downloads as we're not allowed to redistribute the data ourselves (if I'm not wrong). An error message should be displayed saying that we're not allowed to show the dataset.\r\n\r\nI can fix the first issue with the imports but for the second one I think we'll have to see with @srush to show a message for datasets that require manual downloads (it can be checked whether a dataset requires manual downloads if `dataset_builder_instance.manual_download_instructions is not None`).\r\n\r\n",
"I added apache-beam to the viewer. We can think about how to add newsroom. ",
"We don't plan to host the source files of newsroom ourselves for now.\r\nYou can still get the dataset if you follow the download instructions given by `dataset = load_dataset('newsroom')` though.\r\nThe viewer also shows the instructions now.\r\n\r\nClosing this one. If you have other questions, feel free to re-open :)"
] | 2020-06-23T00:38:42 | 2020-07-17T17:43:06 | 2020-07-17T17:43:06 | NONE | null | null | null | Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following.
![image](https://user-images.githubusercontent.com/60150701/85347842-ac861900-b4ae-11ea-98c4-a53a00934783.png)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/297/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/297/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/296/comments | https://api.github.com/repos/huggingface/datasets/issues/296/events | https://github.com/huggingface/datasets/issues/296 | 643,423,717 | MDU6SXNzdWU2NDM0MjM3MTc= | 296 | snli -1 labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | [
"@jxmorris12 , we use `-1` to label examples for which `gold label` is missing (`gold label = -` in the original dataset). ",
"Thanks @mariamabarham! so the original dataset is missing some labels? That is weird. Is standard practice just to discard those examples training/eval?",
"Yes the original dataset is missing some labels maybe @sleepinyourhat , @gangeli can correct me if I'm wrong \r\nFor my personal opinion at least if you want your model to learn to predict no answer (-1) you can leave it their but otherwise you can discard them. ",
"thanks @mariamabarham :)"
] | 2020-06-22T23:33:30 | 2020-06-23T14:41:59 | 2020-06-23T14:41:58 | CONTRIBUTOR | null | null | null | I'm trying to train a model on the SNLI dataset. Why does it have so many -1 labels?
```
import nlp
from collections import Counter
data = nlp.load_dataset('snli')['train']
print(Counter(data['label']))
Counter({0: 183416, 2: 183187, 1: 182764, -1: 785})
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/296/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/296/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/295/comments | https://api.github.com/repos/huggingface/datasets/issues/295/events | https://github.com/huggingface/datasets/issues/295 | 643,245,412 | MDU6SXNzdWU2NDMyNDU0MTI= | 295 | Improve input warning for evaluation metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/19514537?v=4",
"events_url": "https://api.github.com/users/Tiiiger/events{/privacy}",
"followers_url": "https://api.github.com/users/Tiiiger/followers",
"following_url": "https://api.github.com/users/Tiiiger/following{/other_user}",
"gists_url": "https://api.github.com/users/Tiiiger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tiiiger",
"id": 19514537,
"login": "Tiiiger",
"node_id": "MDQ6VXNlcjE5NTE0NTM3",
"organizations_url": "https://api.github.com/users/Tiiiger/orgs",
"received_events_url": "https://api.github.com/users/Tiiiger/received_events",
"repos_url": "https://api.github.com/users/Tiiiger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tiiiger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tiiiger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tiiiger"
} | [] | closed | false | null | [] | null | [] | 2020-06-22T17:28:57 | 2020-06-23T14:47:37 | 2020-06-23T14:47:37 | NONE | null | null | null | Hi,
I am the author of `bert_score`. Recently, we received [ an issue ](https://github.com/Tiiiger/bert_score/issues/62) reporting a problem in using `bert_score` from the `nlp` package (also see #238 in this repo). After looking into this, I realized that the problem arises from the format `nlp.Metric` takes input.
Here is a minimal example:
```python
import nlp
scorer = nlp.load_metric("bertscore")
with open("pred.txt") as p, open("ref.txt") as g:
for lp, lg in zip(p, g):
scorer.add(lp, lg)
score = scorer.compute(lang="en")
```
The problem in the above code is that `scorer.add()` expects a list of strings as input for the references. As a result, the `scorer` here would take a list of characters in `lg` to be the references. The correct implementation would be calling
```python
scorer.add(lp, [lg])
```
I just want to raise this issue to you to prevent future user errors of a similar kind. I assume some simple type checking can prevent this from happening?
Thanks! | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/295/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/295/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/294/comments | https://api.github.com/repos/huggingface/datasets/issues/294/events | https://github.com/huggingface/datasets/issues/294 | 643,181,179 | MDU6SXNzdWU2NDMxODExNzk= | 294 | Cannot load arxiv dataset on MacOS? | {
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JohnGiorgi",
"id": 8917831,
"login": "JohnGiorgi",
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JohnGiorgi"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | [
"I couldn't replicate this issue on my macbook :/\r\nCould you try to play with different encodings in `with open(path, encoding=...) as f` in scientific_papers.py:L108 ?",
"I was able to track down the file causing the problem by adding the following to `scientific_papers.py` (starting at line 116):\r\n\r\n```python\r\n from json import JSONDecodeError\r\n try:\r\n d = json.loads(line)\r\n summary = \"\\n\".join(d[\"abstract_text\"])\r\n except JSONDecodeError:\r\n print(path, line)\r\n```\r\n\r\n\r\n\r\nFor me it was at: `/Users/johngiorgi/.cache/huggingface/datasets/f87fd498c5003cbe253a2af422caa1e58f87a4fd74cb3e67350c635c8903b259/arxiv-dataset/train.txt` with `\"article_id\": \"1407.3051\"`.\r\n\r\nNot really 100% sure at the moment, but it looks like this specific substring from `\"article_text\"` may be causing the problem?\r\n\r\n```\r\n\"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas\r\n```\r\n\r\nperhaps because it appears to be truncated. I (think) I can recreate the problem by doing the following:\r\n\r\n```python\r\nimport json\r\n\r\n# A minimal example of the json file that causes the error\r\ninvalid_json = '{\"article_id\": \"1407.3051\", \"article_text\": [\"the missing - mass resolution was obtained to be 2.8 @xmath3 0.1 mev/@xmath4 ( fwhm ) , which corresponds to the missing - mass resolution of 3.2 @xmath3 0.2 mev/@xmath4 ( fwhm ) at the @xmath6 cusp region in the @xmath0 reaction .\", \"this resolution is at least by a factor of 2 better than the previous measurement with the same reaction ( 3.2@xmath595.5 mev/@xmath4 in @xmath84 ) @xcite .\", \"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas' \r\n# The line of code from `scientific_papers.py` which appears to cause the error\r\njson.loads(invalid_json)\r\n```\r\n\r\nThis is as far as I get before I am stumped.",
"I just checked inside `train.txt` and this line isn't truncated for me (line 163577).\r\nCould you try to clear your cache and re-download the dataset ?",
"Ah the turn-it-off-turn-it-on again solution! That did it, thanks a lot :) "
] | 2020-06-22T15:46:55 | 2020-06-30T15:25:10 | 2020-06-30T15:25:10 | CONTRIBUTOR | null | null | null | I am having trouble loading the `"arxiv"` config from the `"scientific_papers"` dataset on MacOS. When I try loading the dataset with:
```python
arxiv = nlp.load_dataset("scientific_papers", "arxiv")
```
I get the following stack trace:
```bash
JSONDecodeError Traceback (most recent call last)
<ipython-input-2-8e00c55d5a59> in <module>
----> 1 arxiv = nlp.load_dataset("scientific_papers", "arxiv")
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
481 try:
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _prepare_split(self, split_generator)
662
663 generator = self._generate_examples(**split_generator.gen_kwargs)
--> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
665 example = self.info.features.encode_example(record)
666 writer.write(example)
~/miniconda3/envs/t2t/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1106 fp_write=getattr(self.fp, 'write', sys.stderr.write))
1107
-> 1108 for obj in iterable:
1109 yield obj
1110 # Update and possibly print the progressbar.
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/datasets/scientific_papers/107a416c0e1958cb846f5934b5aae292f7884a5b27e86af3f3ef1a093e058bbc/scientific_papers.py in _generate_examples(self, path)
114 # "section_names": list[str], list of section names.
115 # "sections": list[list[str]], list of sections (list of paragraphs)
--> 116 d = json.loads(line)
117 summary = "\n".join(d["abstract_text"])
118 # In original paper, <S> and </S> are not used in vocab during training
~/miniconda3/envs/t2t/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
346 parse_int is None and parse_float is None and
347 parse_constant is None and object_pairs_hook is None and not kw):
--> 348 return _default_decoder.decode(s)
349 if cls is None:
350 cls = JSONDecoder
~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in decode(self, s, _w)
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in raw_decode(self, s, idx)
351 """
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
355 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 46983 (char 46982)
163502 examples [02:10, 2710.68 examples/s]
```
I am not sure how to trace back to the specific JSON file that has the "Unterminated string". Also, I do not get this error on colab so I suspect it may be MacOS specific. Copy pasting the relevant lines from `transformers-cli env` below:
- Platform: Darwin-19.5.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
Any ideas? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/294/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/293/comments | https://api.github.com/repos/huggingface/datasets/issues/293/events | https://github.com/huggingface/datasets/pull/293 | 642,942,182 | MDExOlB1bGxSZXF1ZXN0NDM3ODM1ODI4 | 293 | Don't test community datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-06-22T10:15:33 | 2020-06-22T11:07:00 | 2020-06-22T11:06:59 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/293.diff",
"html_url": "https://github.com/huggingface/datasets/pull/293",
"merged_at": "2020-06-22T11:06:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/293.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/293"
} | This PR disables testing for community datasets on aws.
It should fix the CI that is currently failing. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/293/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/293/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/292/comments | https://api.github.com/repos/huggingface/datasets/issues/292/events | https://github.com/huggingface/datasets/pull/292 | 642,897,797 | MDExOlB1bGxSZXF1ZXN0NDM3Nzk4NTM2 | 292 | Update metadata for x_stance dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/5830820?v=4",
"events_url": "https://api.github.com/users/jvamvas/events{/privacy}",
"followers_url": "https://api.github.com/users/jvamvas/followers",
"following_url": "https://api.github.com/users/jvamvas/following{/other_user}",
"gists_url": "https://api.github.com/users/jvamvas/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jvamvas",
"id": 5830820,
"login": "jvamvas",
"node_id": "MDQ6VXNlcjU4MzA4MjA=",
"organizations_url": "https://api.github.com/users/jvamvas/orgs",
"received_events_url": "https://api.github.com/users/jvamvas/received_events",
"repos_url": "https://api.github.com/users/jvamvas/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jvamvas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvamvas/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jvamvas"
} | [] | closed | false | null | [] | null | [
"Great! Thanks @jvamvas for these updates.\r\n",
"I have fixed a warning. The remaining test failure is due to an unrelated dataset.",
"We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?"
] | 2020-06-22T09:13:26 | 2020-06-23T08:07:24 | 2020-06-23T08:07:24 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/292.diff",
"html_url": "https://github.com/huggingface/datasets/pull/292",
"merged_at": "2020-06-23T08:07:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/292.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/292"
} | Thank you for featuring the x_stance dataset in your library. This PR updates some metadata:
- Citation: Replace preprint with proceedings
- URL: Use a URL with long-term availability
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/292/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/291/comments | https://api.github.com/repos/huggingface/datasets/issues/291/events | https://github.com/huggingface/datasets/pull/291 | 642,688,450 | MDExOlB1bGxSZXF1ZXN0NDM3NjM1NjMy | 291 | break statement not required | {
"avatar_url": "https://avatars.githubusercontent.com/u/12967587?v=4",
"events_url": "https://api.github.com/users/mayurnewase/events{/privacy}",
"followers_url": "https://api.github.com/users/mayurnewase/followers",
"following_url": "https://api.github.com/users/mayurnewase/following{/other_user}",
"gists_url": "https://api.github.com/users/mayurnewase/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mayurnewase",
"id": 12967587,
"login": "mayurnewase",
"node_id": "MDQ6VXNlcjEyOTY3NTg3",
"organizations_url": "https://api.github.com/users/mayurnewase/orgs",
"received_events_url": "https://api.github.com/users/mayurnewase/received_events",
"repos_url": "https://api.github.com/users/mayurnewase/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mayurnewase/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mayurnewase/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mayurnewase"
} | [] | closed | false | null | [] | null | [
"I guess,test failing due to connection error?",
"We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?",
"If I'm not wrong this function returns None if no main class was found.\r\nI think it makes things less clear not to have a return at the end of the function.\r\nI guess we can have one return in the for loop instead of the break statement, AND one return at the end to explicitly return None.\r\nWhat do you think ?"
] | 2020-06-22T01:40:55 | 2020-06-23T17:57:58 | 2020-06-23T09:37:02 | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/291.diff",
"html_url": "https://github.com/huggingface/datasets/pull/291",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/291.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/291"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/291/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/290/comments | https://api.github.com/repos/huggingface/datasets/issues/290/events | https://github.com/huggingface/datasets/issues/290 | 641,978,286 | MDU6SXNzdWU2NDE5NzgyODY= | 290 | ConnectionError - Eli5 dataset download | {
"avatar_url": "https://avatars.githubusercontent.com/u/8490096?v=4",
"events_url": "https://api.github.com/users/JovanNj/events{/privacy}",
"followers_url": "https://api.github.com/users/JovanNj/followers",
"following_url": "https://api.github.com/users/JovanNj/following{/other_user}",
"gists_url": "https://api.github.com/users/JovanNj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JovanNj",
"id": 8490096,
"login": "JovanNj",
"node_id": "MDQ6VXNlcjg0OTAwOTY=",
"organizations_url": "https://api.github.com/users/JovanNj/orgs",
"received_events_url": "https://api.github.com/users/JovanNj/received_events",
"repos_url": "https://api.github.com/users/JovanNj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JovanNj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JovanNj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JovanNj"
} | [] | closed | false | null | [] | null | [
"It should ne fixed now, thanks for reporting this one :)\r\nIt was an issue on our google storage.\r\n\r\nLet me now if you're still facing this issue.",
"It works now, thanks for prompt help!"
] | 2020-06-19T13:40:33 | 2020-06-20T13:22:24 | 2020-06-20T13:22:24 | NONE | null | null | null | Hi, I have a problem with downloading Eli5 dataset. When typing `nlp.load_dataset('eli5')`, I get ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/eli5/LFQA_reddit/1.0.0/explain_like_im_five-train_eli5.arrow
I would appreciate if you could help me with this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/290/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/289/comments | https://api.github.com/repos/huggingface/datasets/issues/289/events | https://github.com/huggingface/datasets/pull/289 | 641,934,194 | MDExOlB1bGxSZXF1ZXN0NDM3MDc0MTM3 | 289 | update xsum | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [
"Looks cool!\r\n@mariamabarham can you add a detailed description here what exactly is changed and how the user can load xsum now?",
"And a rebase should solve the conflicts",
"This is a super useful PR :-) @sshleifer - maybe you can take a look at the updated version of xsum if you can use it for your use case. Now, one should be able to just load it with:\r\n\r\n```python \r\nnlp.load_datasets(\"xsum\", ....) # no manual dir required anymore\r\n```\r\n"
] | 2020-06-19T12:28:32 | 2020-06-22T13:27:26 | 2020-06-22T07:20:07 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/289.diff",
"html_url": "https://github.com/huggingface/datasets/pull/289",
"merged_at": "2020-06-22T07:20:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/289.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/289"
} | This PR makes the following update to the xsum dataset:
- Manual download is not required anymore
- dataset can be loaded as follow: `nlp.load_dataset('xsum')`
**Important**
Instead of using on outdated url to download the data: "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json"
a more up-to-date url stored here: https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz is used
, so that the user does not need to manually download the data anymore.
There might be slight breaking changes here for xsum. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/289/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/289/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/288/comments | https://api.github.com/repos/huggingface/datasets/issues/288/events | https://github.com/huggingface/datasets/issues/288 | 641,888,610 | MDU6SXNzdWU2NDE4ODg2MTA= | 288 | Error at the first example in README: AttributeError: module 'dill' has no attribute '_dill' | {
"avatar_url": "https://avatars.githubusercontent.com/u/14964542?v=4",
"events_url": "https://api.github.com/users/wutong8023/events{/privacy}",
"followers_url": "https://api.github.com/users/wutong8023/followers",
"following_url": "https://api.github.com/users/wutong8023/following{/other_user}",
"gists_url": "https://api.github.com/users/wutong8023/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wutong8023",
"id": 14964542,
"login": "wutong8023",
"node_id": "MDQ6VXNlcjE0OTY0NTQy",
"organizations_url": "https://api.github.com/users/wutong8023/orgs",
"received_events_url": "https://api.github.com/users/wutong8023/received_events",
"repos_url": "https://api.github.com/users/wutong8023/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wutong8023/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wutong8023/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wutong8023"
} | [] | closed | false | null | [] | null | [
"It looks like the bug comes from `dill`. Which version of `dill` are you using ?",
"Thank you. It is version 0.2.6, which version is better?",
"0.2.6 is three years old now, maybe try a more recent one, e.g. the current 0.3.2 if you can?",
"Thanks guys! I upgraded dill and it works.",
"Awesome"
] | 2020-06-19T11:01:22 | 2020-06-21T09:05:11 | 2020-06-21T09:05:11 | NONE | null | null | null | /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:469: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:470: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:471: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:472: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:473: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:476: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/Users/parasol_tree/anaconda3/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "/Users/parasol_tree/Resource/019 - Github/AcademicEnglishToolkit /test.py", line 7, in <module>
import nlp
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/__init__.py", line 27, in <module>
from .arrow_dataset import Dataset
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/arrow_dataset.py", line 31, in <module>
from nlp.utils.py_utils import dumps
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/__init__.py", line 20, in <module>
from .download_manager import DownloadManager, GenerateMode
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/download_manager.py", line 25, in <module>
from .py_utils import flatten_nested, map_nested, size_str
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 244, in <module>
class Pickler(dill.Pickler):
File "/Users/parasol_tree/anaconda3/lib/python3.6/site-packages/nlp/utils/py_utils.py", line 247, in Pickler
dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy())
AttributeError: module 'dill' has no attribute '_dill' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/288/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/288/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/287/comments | https://api.github.com/repos/huggingface/datasets/issues/287/events | https://github.com/huggingface/datasets/pull/287 | 641,800,227 | MDExOlB1bGxSZXF1ZXN0NDM2OTY0NTg0 | 287 | fix squad_v2 metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-06-19T08:24:46 | 2020-06-19T08:33:43 | 2020-06-19T08:33:41 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/287.diff",
"html_url": "https://github.com/huggingface/datasets/pull/287",
"merged_at": "2020-06-19T08:33:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/287.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/287"
} | Fix #280
The imports were wrong | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/287/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/287/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/286/comments | https://api.github.com/repos/huggingface/datasets/issues/286/events | https://github.com/huggingface/datasets/pull/286 | 641,585,758 | MDExOlB1bGxSZXF1ZXN0NDM2NzkzMjI4 | 286 | Add ANLI dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4",
"events_url": "https://api.github.com/users/easonnie/events{/privacy}",
"followers_url": "https://api.github.com/users/easonnie/followers",
"following_url": "https://api.github.com/users/easonnie/following{/other_user}",
"gists_url": "https://api.github.com/users/easonnie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/easonnie",
"id": 11016329,
"login": "easonnie",
"node_id": "MDQ6VXNlcjExMDE2MzI5",
"organizations_url": "https://api.github.com/users/easonnie/orgs",
"received_events_url": "https://api.github.com/users/easonnie/received_events",
"repos_url": "https://api.github.com/users/easonnie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/easonnie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/easonnie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/easonnie"
} | [] | closed | false | null | [] | null | [
"Awesome!! Thanks @easonnie.\r\nLet's wait for additional reviews maybe from @lhoestq @patrickvonplaten @jplu"
] | 2020-06-18T22:27:30 | 2020-06-22T12:23:27 | 2020-06-22T12:23:27 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/286.diff",
"html_url": "https://github.com/huggingface/datasets/pull/286",
"merged_at": "2020-06-22T12:23:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/286.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/286"
} | I completed all the steps in https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset and push the code for ANLI. Please let me know if there are any errors. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/286/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/286/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/285/comments | https://api.github.com/repos/huggingface/datasets/issues/285/events | https://github.com/huggingface/datasets/pull/285 | 641,360,702 | MDExOlB1bGxSZXF1ZXN0NDM2NjAyMjk4 | 285 | Consistent formatting of citations | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [
"Circle CI shuold be green :-) "
] | 2020-06-18T16:25:23 | 2020-06-22T08:09:25 | 2020-06-22T08:09:24 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/285.diff",
"html_url": "https://github.com/huggingface/datasets/pull/285",
"merged_at": "2020-06-22T08:09:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/285.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/285"
} | #283 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/285/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/285/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/284/comments | https://api.github.com/repos/huggingface/datasets/issues/284/events | https://github.com/huggingface/datasets/pull/284 | 641,337,217 | MDExOlB1bGxSZXF1ZXN0NDM2NTgxODQ2 | 284 | Fix manual download instructions | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [
"Verified that this works, thanks!",
"But I get\r\n```python\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/./datasets/wmt16/wmt16.py\r\n```\r\nWhen I try from jupyter on brutasse or my mac. (the jupyter server is run from transformers).\r\n\r\n\r\nBoth machines can run\r\n```bash\r\naws s3 ls s3://datasets.huggingface.co/nlp/datasets/wmt16/\r\n```\r\nbut it seems one must be in the nlp directory to run the command?\r\n\r\n(I ran `pip install -e . ` on this branch in both situations.)\r\n\r\n\r\n",
"`https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/./datasets/wmt16/wmt16.py` looks very weird.\r\n\r\n(Also, S3 is not a file-system, it's a flat key-value store)",
"Good to merge I think @lhoestq ",
"> But I get\r\n> \r\n> ```python\r\n> ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/./datasets/wmt16/wmt16.py\r\n> ```\r\n> \r\n> When I try from jupyter on brutasse or my mac. (the jupyter server is run from transformers).\r\n> \r\n> Both machines can run\r\n> \r\n> ```shell\r\n> aws s3 ls s3://datasets.huggingface.co/nlp/datasets/wmt16/\r\n> ```\r\n> \r\n> but it seems one must be in the nlp directory to run the command?\r\n> \r\n> (I ran `pip install -e . ` on this branch in both situations.)\r\n\r\nAs soon as it is on master, the dataset script wmt16.py will be synced on S3 and you'll be able to do `load_dataset(\"wmt16\")`"
] | 2020-06-18T15:59:57 | 2020-06-19T08:24:21 | 2020-06-19T08:24:19 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/284.diff",
"html_url": "https://github.com/huggingface/datasets/pull/284",
"merged_at": "2020-06-19T08:24:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/284.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/284"
} | This PR replaces the static `DatasetBulider` variable `MANUAL_DOWNLOAD_INSTRUCTIONS` by a property function `manual_download_instructions()`.
Some datasets like XTREME and all WMT need the manual data dir only for a small fraction of the possible configs.
After some brainstorming with @mariamabarham and @lhoestq, we came to the conclusion that having a property function `manual_download_instructions()` gives us more flexibility to decide on a per config basis in the dataset builder if manual download instructions are needed.
Also this PR should unblock solves a bug with `wmt16 - ro-en`
@sshleifer from this branch you should be able to succesfully run
```python
import nlp
ds = nlp.load_dataset('./datasets/wmt16', 'ro-en')
```
and once this PR is merged S3 should be synched so that
```python
import nlp
ds = nlp.load_dataset("wmt16", "ro-en")
```
works as well.
**Important**: Since `MANUAL_DOWNLOAD_INSTRUCTIONS` was not really exposed to the user, this PR should not be a problem regarding backward compatibility. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/284/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/284/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/283/comments | https://api.github.com/repos/huggingface/datasets/issues/283/events | https://github.com/huggingface/datasets/issues/283 | 641,270,439 | MDU6SXNzdWU2NDEyNzA0Mzk= | 283 | Consistent formatting of citations | {
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srush",
"id": 35882,
"login": "srush",
"node_id": "MDQ6VXNlcjM1ODgy",
"organizations_url": "https://api.github.com/users/srush/orgs",
"received_events_url": "https://api.github.com/users/srush/received_events",
"repos_url": "https://api.github.com/users/srush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srush"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
}
] | null | [] | 2020-06-18T14:48:45 | 2020-06-22T17:30:46 | 2020-06-22T17:30:46 | CONTRIBUTOR | null | null | null | The citations are all of a different format, some have "```" and have text inside, others are proper bibtex.
Can we make it so that they all are proper citations, i.e. parse by the bibtex spec:
https://bibtexparser.readthedocs.io/en/master/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/283/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/283/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/282/comments | https://api.github.com/repos/huggingface/datasets/issues/282/events | https://github.com/huggingface/datasets/pull/282 | 641,217,759 | MDExOlB1bGxSZXF1ZXN0NDM2NDgxNzMy | 282 | Update dataset_info from gcs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-06-18T13:41:15 | 2020-06-18T16:24:52 | 2020-06-18T16:24:51 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/282",
"merged_at": "2020-06-18T16:24:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/282"
} | Some datasets are hosted on gcs (wikipedia for example). In this PR I make sure that, when a user loads such datasets, the file_instructions are built using the dataset_info.json from gcs and not from the info extracted from the local `dataset_infos.json` (the one that contain the info for each config). Indeed local files may end up outdated.
Furthermore, to avoid outdated dataset_infos.json, I now make sure that each time you run `load_dataset` it also tries to update the file locally.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/282/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/281/comments | https://api.github.com/repos/huggingface/datasets/issues/281/events | https://github.com/huggingface/datasets/issues/281 | 641,067,856 | MDU6SXNzdWU2NDEwNjc4NTY= | 281 | Private/sensitive data | {
"avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4",
"events_url": "https://api.github.com/users/MFreidank/events{/privacy}",
"followers_url": "https://api.github.com/users/MFreidank/followers",
"following_url": "https://api.github.com/users/MFreidank/following{/other_user}",
"gists_url": "https://api.github.com/users/MFreidank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MFreidank",
"id": 6368040,
"login": "MFreidank",
"node_id": "MDQ6VXNlcjYzNjgwNDA=",
"organizations_url": "https://api.github.com/users/MFreidank/orgs",
"received_events_url": "https://api.github.com/users/MFreidank/received_events",
"repos_url": "https://api.github.com/users/MFreidank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MFreidank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MFreidank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MFreidank"
} | [] | closed | false | null | [] | null | [
"Hi @MFreidank, you should already be able to load a dataset from local sources, indeed. (ping @lhoestq and @jplu)\r\n\r\nWe're also thinking about the ability to host private datasets on a hosted bucket with permission management, but that's further down the road.",
"Hi @MFreidank, it is possible to load a dataset from your local storage, but only CSV/TSV and JSON are supported. To load a dataset in JSON format:\r\n\r\n```\r\nnlp.load_dataset(path=\"json\", data_files={nlp.Split.TRAIN: [\"path/to/train.json\"], nlp.Split.TEST: [\"path/to/test.json\"]})\r\n```\r\n\r\nFor CSV/TSV datasets, you have to replace `json` by `csv`.",
"Hi @julien-c @jplu,\r\nThanks for sharing this solution with me, it helps, this is what I was looking for. \r\nIf not already there and only missed by me, this could be a great addition in the docs.\r\n\r\nClosing my issue as resolved, thanks again."
] | 2020-06-18T09:47:27 | 2020-06-20T13:15:12 | 2020-06-20T13:15:12 | CONTRIBUTOR | null | null | null | Hi all,
Thanks for this fantastic library; it makes it very easy to prototype NLP projects interchangeably between TF and PyTorch.
Unfortunately, there is data that cannot easily be shared publicly as it may contain sensitive information.
Is there support/a plan to support such data with NLP, e.g. by reading it from local sources?
A use-case flow could look like this: use NLP to prototype an approach on similar, public data, then apply the resulting prototype to sensitive/private data without the need to rethink data processing pipelines.
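For concreteness, here is a minimal sketch of the kind of local loading flow I have in mind, following the CSV/JSON pattern suggested in the replies on this issue (the file paths are placeholders; for CSV/TSV files, "json" would be replaced by "csv"):
```python
import nlp

# Minimal sketch (paths are placeholders): load sensitive data from local JSON files
# instead of a publicly hosted dataset.
private_dataset = nlp.load_dataset(
    "json",
    data_files={
        nlp.Split.TRAIN: ["path/to/train.json"],
        nlp.Split.TEST: ["path/to/test.json"],
    },
)
```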
Many thanks for your responses ahead of time and kind regards,
MFreidank | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/281/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/281/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/280/comments | https://api.github.com/repos/huggingface/datasets/issues/280/events | https://github.com/huggingface/datasets/issues/280 | 640,677,615 | MDU6SXNzdWU2NDA2Nzc2MTU= | 280 | Error with SquadV2 Metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/32203792?v=4",
"events_url": "https://api.github.com/users/avinregmi/events{/privacy}",
"followers_url": "https://api.github.com/users/avinregmi/followers",
"following_url": "https://api.github.com/users/avinregmi/following{/other_user}",
"gists_url": "https://api.github.com/users/avinregmi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avinregmi",
"id": 32203792,
"login": "avinregmi",
"node_id": "MDQ6VXNlcjMyMjAzNzky",
"organizations_url": "https://api.github.com/users/avinregmi/orgs",
"received_events_url": "https://api.github.com/users/avinregmi/received_events",
"repos_url": "https://api.github.com/users/avinregmi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avinregmi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinregmi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avinregmi"
} | [] | closed | false | null | [] | null | [] | 2020-06-17T19:10:54 | 2020-06-19T08:33:41 | 2020-06-19T08:33:41 | NONE | null | null | null | I can't seem to import squad v2 metrics.
**squad_metric = nlp.load_metric('squad_v2')**
**This throws me an error.:**
```
ImportError Traceback (most recent call last)
<ipython-input-8-170b6a170555> in <module>
----> 1 squad_metric = nlp.load_metric('squad_v2')
~/env/lib64/python3.6/site-packages/nlp/load.py in load_metric(path, name, process_id, num_process, data_dir, experiment_id, in_memory, download_config, **metric_init_kwargs)
426 """
427 module_path = prepare_module(path, download_config=download_config, dataset=False)
--> 428 metric_cls = import_main_class(module_path, dataset=False)
429 metric = metric_cls(
430 name=name,
~/env/lib64/python3.6/site-packages/nlp/load.py in import_main_class(module_path, dataset)
55 """
56 importlib.invalidate_caches()
---> 57 module = importlib.import_module(module_path)
58
59 if dataset:
/usr/lib64/python3.6/importlib/__init__.py in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
128
/usr/lib64/python3.6/importlib/_bootstrap.py in _gcd_import(name, package, level)
/usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_)
/usr/lib64/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
/usr/lib64/python3.6/importlib/_bootstrap.py in _load_unlocked(spec)
/usr/lib64/python3.6/importlib/_bootstrap_external.py in exec_module(self, module)
/usr/lib64/python3.6/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
~/env/lib64/python3.6/site-packages/nlp/metrics/squad_v2/a15e787c76889174874386d3def75321f0284c11730d2a57e28fe1352c9b5c7a/squad_v2.py in <module>
16
17 import nlp
---> 18 from .evaluate import evaluate
19
20 _CITATION = """\
ImportError: cannot import name 'evaluate'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/280/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/279/comments | https://api.github.com/repos/huggingface/datasets/issues/279/events | https://github.com/huggingface/datasets/issues/279 | 640,611,692 | MDU6SXNzdWU2NDA2MTE2OTI= | 279 | Dataset Preprocessing Cache with .map() function not working as expected | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
} | [] | closed | false | null | [] | null | [
"When you're processing a dataset with `.map`, it checks whether it has already done this computation using a hash based on the function and the input (using some fancy serialization with `dill`). If you found that it doesn't work as expected in some cases, let us know !\r\n\r\nGiven that, you can still force to re-process using `.map(my_func, load_from_cache_file=False)` if you want to.\r\n\r\nI am curious about the problem you have with splits. It makes me think about #160 that was an issue of version 0.1.0. What version of `nlp` are you running ? Could you give me more details ?",
"Thanks, that's helpful! I was running 0.1.0, but since upgraded to 0.2.1. I can't reproduce the issue anymore as I've cleared the cache & everything now seems to be running fine since the upgrade. I've added some checks to my code, so if I do encounter it again I will reopen this issue.",
"Just checking in, the cache sometimes still does not work when I make changes in my processing function in version `1.2.1`. The changes made to my data processing function only propagate to the dataset when I use `load_from_cache_file=False` or clear the cache. Is this a system-specific issue?",
"Hi @sarahwie \r\nThe data are reloaded from the cache if the hash of the function you provide is the same as a computation you've done before. The hash is computed by recursively looking at the python objects of the function you provide.\r\n\r\nIf you think there's an issue, can you share the function you used or a google colab please ?",
"I can't reproduce it, so I'll close for now."
] | 2020-06-17T17:17:21 | 2021-07-06T21:43:28 | 2021-04-18T23:43:49 | NONE | null | null | null | I've been having issues with reproducibility when loading and processing datasets with the `.map` function. I was only able to resolve them by clearing all of the cache files on my system.
Is there a way to disable using the cache when processing a dataset? As I make minor processing changes on the same dataset, I want to be certain the data is being re-processed rather than loaded from a cached file.
Could you also help me understand a bit more about how the caching functionality is used for pre-processing? E.g. how is it determined when to load from a cache vs. reprocess.
I was particularly having an issue where the correct dataset splits were loaded, but as soon as I applied the `.map()` function to each split independently, they somehow all came out of that process having been converted to the test set.
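To make the request concrete, here is a minimal sketch of the behaviour I'm after, using the `load_from_cache_file` flag mentioned in the replies (the dataset choice and the processing function are just placeholders):
```python
import nlp

def my_processing_fn(example):
    # placeholder for whatever minor preprocessing change is being iterated on
    example["sentence1"] = example["sentence1"].lower()
    return example

dataset = nlp.load_dataset("glue", "mrpc", split="train")
# load_from_cache_file=False forces the map to re-run instead of reusing a cached file
reprocessed = dataset.map(my_processing_fn, load_from_cache_file=False)
```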
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/279/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/279/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/278/comments | https://api.github.com/repos/huggingface/datasets/issues/278/events | https://github.com/huggingface/datasets/issues/278 | 640,518,917 | MDU6SXNzdWU2NDA1MTg5MTc= | 278 | MemoryError when loading German Wikipedia | {
"avatar_url": "https://avatars.githubusercontent.com/u/4698028?v=4",
"events_url": "https://api.github.com/users/gregburman/events{/privacy}",
"followers_url": "https://api.github.com/users/gregburman/followers",
"following_url": "https://api.github.com/users/gregburman/following{/other_user}",
"gists_url": "https://api.github.com/users/gregburman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gregburman",
"id": 4698028,
"login": "gregburman",
"node_id": "MDQ6VXNlcjQ2OTgwMjg=",
"organizations_url": "https://api.github.com/users/gregburman/orgs",
"received_events_url": "https://api.github.com/users/gregburman/received_events",
"repos_url": "https://api.github.com/users/gregburman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gregburman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gregburman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gregburman"
} | [] | closed | false | null | [] | null | [
"Hi !\r\n\r\nAs you noticed, \"big\" datasets like Wikipedia require apache beam to be processed.\r\nHowever users usually don't have an apache beam runtime available (spark, dataflow, etc.) so our goal for this library is to also make available processed versions of these datasets, so that users can just download and use them right away.\r\n\r\nThis is the case for english and french wikipedia right now: we've processed them ourselves and now they are available from our google storage. However we've not processed the german one (yet).",
"Hi @lhoestq \r\n\r\nThank you for your quick reply. I thought this might be the case, that the processing was done for some languages and not for others. Is there any set timeline for when other languages (German, Italian) will be processed?\r\n\r\nGiven enough memory, is it possible to process the data ourselves by specifying the `beam_runner`?",
"Adding them is definitely in our short term objectives. I'll be working on this early next week :)\r\n\r\nAlthough if you have an apache beam runtime feel free to specify the beam runner. You can find more info [here](https://github.com/huggingface/nlp/blob/master/docs/beam_dataset.md) on how to make it work on Dataflow but you can adapt it for Spark or any other beam runtime (by changing the `runner`).\r\n\r\nHowever if you don't have a beam runtime and even if you have enough memory, I discourage you to use the `DirectRunner` on the german or italian wikipedia. According to Apache Beam documentation it was made for testing purposes and therefore it is memory-inefficient.",
"German is [almost] done @gregburman",
"I added the German and the Italian Wikipedia to our google cloud storage:\r\nFirst update the `nlp` package to 0.3.0:\r\n```bash\r\npip install nlp --upgrade\r\n```\r\nand then\r\n```python\r\nfrom nlp import load_dataset\r\nwiki_de = load_dataset(\"wikipedia\", \"20200501.de\")\r\nwiki_it = load_dataset(\"wikipedia\", \"20200501.it\")\r\n```\r\nThe datasets are downloaded and directly ready to use (no processing).",
"Hi @lhoestq \r\n\r\nWow, thanks so much, that's **really** incredible! I was considering looking at creating my own Beam Dataset, as per the doc you linked, but instead opted to process the data myself using `wikiextractor`. However, now that this is available, I'll definitely switch across and use it.\r\n\r\nThanks so much for the incredible work, this really helps out our team considerably!\r\n\r\nHave a great (and well-deserved ;) weekend ahead!\r\n\r\nP.S. I'm not sure if I should close the issue here - if so I'm happy to do so.",
"Thanks for your message, glad I could help :)\r\nClosing this one."
] | 2020-06-17T15:06:21 | 2020-06-19T12:53:02 | 2020-06-19T12:53:02 | NONE | null | null | null | Hi, first off let me say thank you for all the awesome work you're doing at Hugging Face across all your projects (NLP, Transformers, Tokenizers) - they're all amazing contributions to us working with NLP models :)
I'm trying to download the German Wikipedia dataset as follows:
```
wiki = nlp.load_dataset("wikipedia", "20200501.de", split="train")
```
However, when I do so, I get the following error:
```
Downloading and preparing dataset wikipedia/20200501.de (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/ubuntu/.cache/huggingface/datasets/wikipedia/20200501.de/1.0.0...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset
save_infos=save_infos,
File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 433, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 824, in _download_and_prepare
"\n\t`{}`".format(usage_example)
nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.de', beam_runner='DirectRunner')`
```
So, following on from the example usage at the bottom, I tried specifying `beam_runner='DirectRunner'`; however, when I do this, about 20 minutes after the data has all downloaded I get a `MemoryError` as warned.
This isn't an issue for the English or French Wikipedia datasets (I've tried both), as neither seems to require that `beam_runner` be specified. Can you please clarify why this is an issue for the German dataset?
My nlp version is 0.2.1.
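For anyone hitting the same wall, a short sketch of the resolution described in the replies above: once the preprocessed German dump is hosted (reportedly from `nlp` 0.3.0 onwards), it can be loaded directly and no Apache Beam runtime is needed.
```python
# Sketch based on the maintainers' replies above; requires the preprocessed dump
# that ships with nlp >= 0.3.0 (pip install --upgrade nlp).
from nlp import load_dataset

wiki_de = load_dataset("wikipedia", "20200501.de")
```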
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/278/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/278/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/277/comments | https://api.github.com/repos/huggingface/datasets/issues/277/events | https://github.com/huggingface/datasets/issues/277 | 640,163,053 | MDU6SXNzdWU2NDAxNjMwNTM= | 277 | Empty samples in glue/qqp | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | [
"We are only wrapping the original dataset.\r\n\r\nMaybe try to ask on the GLUE mailing list or reach out to the original authors?",
"Tanks for the suggestion, I'll try to ask GLUE benchmark.\r\nI'll first close the issue, post the following up here afterwards, and reopen the issue if needed. "
] | 2020-06-17T05:54:52 | 2020-06-21T00:21:45 | 2020-06-21T00:21:45 | CONTRIBUTOR | null | null | null | ```
qqp = nlp.load_dataset('glue', 'qqp')
print(qqp['train'][310121])
print(qqp['train'][362225])
```
```
{'question1': 'How can I create an Android app?', 'question2': '', 'label': 0, 'idx': 310137}
{'question1': 'How can I develop android app?', 'question2': '', 'label': 0, 'idx': 362246}
```
Notice that question 2 is empty string.
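A small sketch of how such empty rows can be located across all splits (roughly the kind of scan I ran; exact indices may differ between versions):
```python
import nlp

qqp = nlp.load_dataset('glue', 'qqp')
for split_name, split in qqp.items():
    # collect indices of examples where either question is an empty string
    empty = [i for i in range(len(split))
             if not split[i]['question1'] or not split[i]['question2']]
    print(split_name, empty)
```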
BTW, I have checked and these two are the only naughty ones in all splits of qqp. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/277/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/277/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/276/comments | https://api.github.com/repos/huggingface/datasets/issues/276/events | https://github.com/huggingface/datasets/pull/276 | 639,490,858 | MDExOlB1bGxSZXF1ZXN0NDM1MDY5Nzg5 | 276 | Fix metric compute (original_instructions missing) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Awesome! This is working now:\r\n\r\n```python\r\nimport nlp \r\nseqeval = nlp.load_metric(\"seqeval\") \r\ny_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] \r\ny_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] \r\n\r\nresults = seqeval.compute(y_true, y_pred)\r\n```\r\n\r\nI heavily need this fix for an upcoming `nlp` integration PR for Transformers (token classification example) 😅",
"Haha nice ! We'll ship this fix with the next release that will probably come out on thursday :)"
] | 2020-06-16T08:52:01 | 2020-06-18T07:41:45 | 2020-06-18T07:41:44 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/276.diff",
"html_url": "https://github.com/huggingface/datasets/pull/276",
"merged_at": "2020-06-18T07:41:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/276.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/276"
} | When loading arrow data we added in cc8d250 a way to specify the instructions that were used to store them with the loaded dataset.
However, metrics load data the same way but don't need instructions (we use a single file).
In this PR I just make `original_instructions` optional when reading files to load a `Dataset` object.
This should fix #269 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/276/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/276/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/275/comments | https://api.github.com/repos/huggingface/datasets/issues/275/events | https://github.com/huggingface/datasets/issues/275 | 639,439,052 | MDU6SXNzdWU2Mzk0MzkwNTI= | 275 | NonMatchingChecksumError when loading pubmed dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/48441753?v=4",
"events_url": "https://api.github.com/users/DavideStenner/events{/privacy}",
"followers_url": "https://api.github.com/users/DavideStenner/followers",
"following_url": "https://api.github.com/users/DavideStenner/following{/other_user}",
"gists_url": "https://api.github.com/users/DavideStenner/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DavideStenner",
"id": 48441753,
"login": "DavideStenner",
"node_id": "MDQ6VXNlcjQ4NDQxNzUz",
"organizations_url": "https://api.github.com/users/DavideStenner/orgs",
"received_events_url": "https://api.github.com/users/DavideStenner/received_events",
"repos_url": "https://api.github.com/users/DavideStenner/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DavideStenner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavideStenner/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DavideStenner"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | [
"For some reason the files are not available for unauthenticated users right now (like the download service of this package). Instead of downloading the right files, it downloads the html of the error.\r\nAccording to the error it should be back again in 24h.\r\n\r\n![image](https://user-images.githubusercontent.com/42851186/84751599-096c6580-afbd-11ea-97f3-ee4aef791711.png)\r\n"
] | 2020-06-16T07:31:51 | 2020-06-19T07:37:07 | 2020-06-19T07:37:07 | NONE | null | null | null | I get this error when I run `nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')`.
The error is:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-2-7742dea167d0> in <module>()
----> 1 df = nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')
2 df = pd.DataFrame(df)
3 gc.collect()
3 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
431 verify_infos = not save_infos and not ignore_verifications
432 self._download_and_prepare(
--> 433 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
434 )
435 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
468 # Checksums verification
469 if verify_infos:
--> 470 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())
471 for split_generator in split_generators:
472 if str(split_generator.split_info.name).lower() == "all":
/usr/local/lib/python3.6/dist-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums)
34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]]
35 if len(bad_urls) > 0:
---> 36 raise NonMatchingChecksumError(str(bad_urls))
37 logger.info("All the checksums matched successfully.")
38
NonMatchingChecksumError: ['https://drive.google.com/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https://drive.google.com/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download']
```
I'm currently working on google colab.
That is quite strange because yesterday it was fine.
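For completeness, the only user-side workaround I can think of is the `ignore_verifications` flag visible in the `load_dataset` signature in the traceback above, and it only helps if the downloaded files are actually intact (the reply above suggests the downloads were temporarily returning an error page, so waiting is the real fix here):
```python
import nlp

# Sketch only: skip the checksum check. This does NOT help if the download itself
# is broken, as appears to be the case in this issue.
df = nlp.load_dataset(
    'scientific_papers', 'pubmed', split='train[:50%]', ignore_verifications=True
)
```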
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/275/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/275/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/274/comments | https://api.github.com/repos/huggingface/datasets/issues/274/events | https://github.com/huggingface/datasets/issues/274 | 639,156,625 | MDU6SXNzdWU2MzkxNTY2MjU= | 274 | PG-19 | {
"avatar_url": "https://avatars.githubusercontent.com/u/108653?v=4",
"events_url": "https://api.github.com/users/lucidrains/events{/privacy}",
"followers_url": "https://api.github.com/users/lucidrains/followers",
"following_url": "https://api.github.com/users/lucidrains/following{/other_user}",
"gists_url": "https://api.github.com/users/lucidrains/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lucidrains",
"id": 108653,
"login": "lucidrains",
"node_id": "MDQ6VXNlcjEwODY1Mw==",
"organizations_url": "https://api.github.com/users/lucidrains/orgs",
"received_events_url": "https://api.github.com/users/lucidrains/received_events",
"repos_url": "https://api.github.com/users/lucidrains/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lucidrains/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucidrains/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lucidrains"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"Sounds good! Do you want to give it a try?",
"Ok, I'll see if I can figure it out tomorrow!",
"Got around to this today, and so far so good, I'm able to download and load pg19 locally. However, I think there may be an issue with the dummy data, and testing in general.\r\n\r\nThe problem lies in the fact that each book from pg19 actually resides as its own text file in a google cloud folder that denotes the split, where the book id is the name of the text file. https://console.cloud.google.com/storage/browser/deepmind-gutenberg/train/ I don't believe there's anywhere else (even in the supplied metadata), where the mapping of id -> split can be found.\r\n\r\nTherefore I end up making a network call `tf.io.gfile.listdir` to get all the files within each of the split directories. https://github.com/lucidrains/nlp/commit/adbacbd85decc80db2347d0882e7dab4faa6fd03#diff-cece8f166a85dd927caf574ba303d39bR78\r\n\r\nDoes this network call need to be eventually stubbed out for testing?",
"Ohh nevermind, I think I can use `download_custom` here with `listdir` as the custom function. Ok, I'll keep trying to make the dummy data work!"
] | 2020-06-15T21:02:26 | 2020-07-06T15:35:02 | 2020-07-06T15:35:02 | CONTRIBUTOR | null | null | null | Hi, and thanks for all your open-sourced work, as always!
I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/274/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/274/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/273/comments | https://api.github.com/repos/huggingface/datasets/issues/273/events | https://github.com/huggingface/datasets/pull/273 | 638,968,054 | MDExOlB1bGxSZXF1ZXN0NDM0NjM0MzU4 | 273 | update cos_e to add cos_e v1.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [] | 2020-06-15T16:03:22 | 2020-06-16T08:25:54 | 2020-06-16T08:25:52 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/273.diff",
"html_url": "https://github.com/huggingface/datasets/pull/273",
"merged_at": "2020-06-16T08:25:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/273.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/273"
} | This PR updates the cos_e dataset to add v1.0 as requested in #163
@nazneenrajani | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/273/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/273/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/272/comments | https://api.github.com/repos/huggingface/datasets/issues/272/events | https://github.com/huggingface/datasets/pull/272 | 638,307,313 | MDExOlB1bGxSZXF1ZXN0NDM0MTExOTQ3 | 272 | asd | {
"avatar_url": "https://avatars.githubusercontent.com/u/66900970?v=4",
"events_url": "https://api.github.com/users/sn696/events{/privacy}",
"followers_url": "https://api.github.com/users/sn696/followers",
"following_url": "https://api.github.com/users/sn696/following{/other_user}",
"gists_url": "https://api.github.com/users/sn696/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sn696",
"id": 66900970,
"login": "sn696",
"node_id": "MDQ6VXNlcjY2OTAwOTcw",
"organizations_url": "https://api.github.com/users/sn696/orgs",
"received_events_url": "https://api.github.com/users/sn696/received_events",
"repos_url": "https://api.github.com/users/sn696/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sn696/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sn696/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sn696"
} | [] | closed | false | null | [] | null | [] | 2020-06-14T08:20:38 | 2020-06-14T09:16:41 | 2020-06-14T09:16:41 | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/272.diff",
"html_url": "https://github.com/huggingface/datasets/pull/272",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/272.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/272"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/272/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/272/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/271/comments | https://api.github.com/repos/huggingface/datasets/issues/271/events | https://github.com/huggingface/datasets/pull/271 | 638,135,754 | MDExOlB1bGxSZXF1ZXN0NDMzOTg3NDkw | 271 | Fix allociné dataset configuration | {
"avatar_url": "https://avatars.githubusercontent.com/u/37028092?v=4",
"events_url": "https://api.github.com/users/TheophileBlard/events{/privacy}",
"followers_url": "https://api.github.com/users/TheophileBlard/followers",
"following_url": "https://api.github.com/users/TheophileBlard/following{/other_user}",
"gists_url": "https://api.github.com/users/TheophileBlard/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TheophileBlard",
"id": 37028092,
"login": "TheophileBlard",
"node_id": "MDQ6VXNlcjM3MDI4MDky",
"organizations_url": "https://api.github.com/users/TheophileBlard/orgs",
"received_events_url": "https://api.github.com/users/TheophileBlard/received_events",
"repos_url": "https://api.github.com/users/TheophileBlard/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TheophileBlard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheophileBlard/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TheophileBlard"
} | [] | closed | false | null | [] | null | [
"Actually when there is only one configuration, then you don't need to specify the configuration in `load_dataset`. You can run:\r\n```python\r\ndataset = load_dataset('allocine')\r\n```\r\nand it works.\r\n\r\nMaybe we should take that into account in the nlp viewer @srush ?",
"@lhoestq Just to understand the exact semantics. Are you suggesting that if there is exactly 1 configuration I should not show the configuration menu and just treat it as if there were 0 configurations? ",
"The configuration menu is fine imo.\r\nIt was more about the code snippet presented in the viewer.\r\nFor example for Allociné it currently shows this snippet to load the dataset:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('allocine', 'allocine')\r\n```\r\nHowever for datasets with one or zero configurations, the second argument in `load_dataset` is optional. For Allociné, that has one configuration, we can expect to show instead:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('allocine')\r\n```",
"> Actually when there is only one configuration, then you don't need to specify the configuration in `load_dataset`. You can run:\r\n> \r\n> ```python\r\n> dataset = load_dataset('allocine')\r\n> ```\r\n> \r\n> and it works.\r\n> \r\n> Maybe we should take that into account in the nlp viewer @srush ?\r\n\r\nOh ok, I didn't expect it would work! \r\n\r\nAnyway, I think it's intrinsically better to simply remove the optional parameter. \r\nThe dummy data folder architecture seems also more logical this way.\r\n",
"Fixed in the viewer. Checked that allocine works.",
"Awesome thanks :)\r\n\r\nClosing this."
] | 2020-06-13T10:12:10 | 2020-06-18T07:41:21 | 2020-06-18T07:41:20 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/271.diff",
"html_url": "https://github.com/huggingface/datasets/pull/271",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/271.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/271"
} | This is a patch for #244. According to the [live nlp viewer](url), the Allociné dataset must be loaded with:
```python
dataset = load_dataset('allocine', 'allocine')
```
This is redundant, as there is only one "dataset configuration", and the call should simply be:
```python
dataset = load_dataset('allocine')
```
This is my mistake, because the code for [`allocine.py`](https://github.com/huggingface/nlp/blob/master/datasets/allocine/allocine.py) was inspired by [`imdb.py`](https://github.com/huggingface/nlp/blob/master/datasets/imdb/imdb.py), which also forces the user to specify the "dataset configuration" (even if there is only one).
I believe this PR should solve this issue, making the Allociné dataset more convenient to use. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/271/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/271/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/270/comments | https://api.github.com/repos/huggingface/datasets/issues/270/events | https://github.com/huggingface/datasets/issues/270 | 638,121,617 | MDU6SXNzdWU2MzgxMjE2MTc= | 270 | c4 dataset is not viewable in nlpviewer demo | {
"avatar_url": "https://avatars.githubusercontent.com/u/6441313?v=4",
"events_url": "https://api.github.com/users/rajarsheem/events{/privacy}",
"followers_url": "https://api.github.com/users/rajarsheem/followers",
"following_url": "https://api.github.com/users/rajarsheem/following{/other_user}",
"gists_url": "https://api.github.com/users/rajarsheem/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rajarsheem",
"id": 6441313,
"login": "rajarsheem",
"node_id": "MDQ6VXNlcjY0NDEzMTM=",
"organizations_url": "https://api.github.com/users/rajarsheem/orgs",
"received_events_url": "https://api.github.com/users/rajarsheem/received_events",
"repos_url": "https://api.github.com/users/rajarsheem/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rajarsheem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajarsheem/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rajarsheem"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | [
"C4 is too large to be shown in the viewer"
] | 2020-06-13T08:26:16 | 2020-10-27T15:35:29 | 2020-10-27T15:35:13 | NONE | null | null | null | I get the following error when I try to view the c4 dataset in [nlpviewer](https://huggingface.co/nlp/viewer/)
```python
ModuleNotFoundError: No module named 'langdetect'
Traceback:
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp_viewer/run.py", line 54, in <module>
configs = get_confs(option.id)
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp_viewer/run.py", line 48, in get_confs
builder_cls = nlp.load.import_main_class(module_path, dataset=True)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/load.py", line 57, in import_main_class
module = importlib.import_module(module_path)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4.py", line 29, in <module>
from .c4_utils import (
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4_utils.py", line 29, in <module>
import langdetect
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/270/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/270/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/269/comments | https://api.github.com/repos/huggingface/datasets/issues/269/events | https://github.com/huggingface/datasets/issues/269 | 638,106,774 | MDU6SXNzdWU2MzgxMDY3NzQ= | 269 | Error in metric.compute: missing `original_instructions` argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zphang",
"id": 1668462,
"login": "zphang",
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"repos_url": "https://api.github.com/users/zphang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zphang"
} | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2020-06-13T06:26:54 | 2020-06-18T07:41:44 | 2020-06-18T07:41:44 | NONE | null | null | null | I'm running into an error using metrics for computation in the latest master as well as version 0.2.1. Here is a minimal example:
```python
import nlp
rte_metric = nlp.load_metric('glue', name="rte")
rte_metric.compute(
[0, 0, 1, 1],
[0, 1, 0, 1],
)
```
```
181 # Read the predictions and references
182 reader = ArrowReader(path=self.data_dir, info=None)
--> 183 self.data = reader.read_files(node_files)
184
185 # Release all of our locks
TypeError: read_files() missing 1 required positional argument: 'original_instructions'
```
I believe this might have been introduced with cc8d2508b75f7ba0e5438d0686ee02dcec43c7f4, which added the `original_instructions` argument. Elsewhere, an empty-string default is provided; perhaps that could be done here too?
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/269/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/269/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/268/comments | https://api.github.com/repos/huggingface/datasets/issues/268/events | https://github.com/huggingface/datasets/pull/268 | 637,848,056 | MDExOlB1bGxSZXF1ZXN0NDMzNzU5NzQ1 | 268 | add Rotten Tomatoes Movie Review sentences sentiment dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | [
"@jplu @thomwolf @patrickvonplaten @lhoestq -- How do I request reviewers? Thanks."
] | 2020-06-12T15:53:59 | 2020-06-18T07:46:24 | 2020-06-18T07:46:23 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/268.diff",
"html_url": "https://github.com/huggingface/datasets/pull/268",
"merged_at": "2020-06-18T07:46:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/268.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/268"
} | Sentence-level movie reviews v1.0 from here: http://www.cs.cornell.edu/people/pabo/movie-review-data/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/268/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/268/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/267/comments | https://api.github.com/repos/huggingface/datasets/issues/267/events | https://github.com/huggingface/datasets/issues/267 | 637,415,545 | MDU6SXNzdWU2Mzc0MTU1NDU= | 267 | How can I load/find WMT en-romanian? | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
}
] | null | [
"I will take a look :-) "
] | 2020-06-12T01:09:37 | 2020-06-19T08:24:19 | 2020-06-19T08:24:19 | CONTRIBUTOR | null | null | null | I believe it is from `wmt16`.
When I run
```python
wmt = nlp.load_dataset('wmt16')
```
I get:
```python
AssertionError: The dataset wmt16 with config cs-en requires manual data.
Please follow the manual download instructions: Some of the wmt configs here, require a manual download.
Please look into wmt.py to see the exact path (and file name) that has to
be downloaded.
.
Manual data can be loaded with `nlp.load(wmt16, data_dir='<path/to/manual/data>')
```
There is no wmt.py, as the error message suggests, and wmt16.py doesn't have manual download instructions.
Any idea how to do this?
Thanks in advance!
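In case it helps, my best guess at the intended call, assuming the Romanian-English pair is exposed as its own config (the exact config name, e.g. 'ro-en', is not confirmed) and that any manually downloaded files live in a local directory:
```python
import nlp

# Hypothetical sketch: both the 'ro-en' config name and the ./wmt16_manual_data path
# are assumptions, not confirmed against wmt16.py.
wmt_enro = nlp.load_dataset('wmt16', 'ro-en', data_dir='./wmt16_manual_data')
```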
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/267/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/267/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/266/comments | https://api.github.com/repos/huggingface/datasets/issues/266/events | https://github.com/huggingface/datasets/pull/266 | 637,156,392 | MDExOlB1bGxSZXF1ZXN0NDMzMTk1NDgw | 266 | Add sort, shuffle, test_train_split and select methods | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | [
"Nice !\r\n\r\nAlso it looks like we can have a train_test_split method for free:\r\n```python\r\ntrain_indices, test_indices = train_test_split(range(len(dataset)))\r\ntrain = dataset.sort(indices=train_indices)\r\ntest = dataset.sort(indices=test_indices)\r\n```\r\n\r\nand a shuffling method for free:\r\n```python\r\nshuffled_indices = shuffle(range(len(dataset)))\r\nshuffled_dataset = dataset.sort(indices=shuffled_indices)\r\n```\r\n\r\nMaybe we can have a specific API for train_test_split and shuffle. They are two features asked quite often (see #147, #166)",
"Ok, I think this one is ready to merge.\r\n\r\n@patrickvonplaten @jplu @mariamabarham @joeddav @n1t0 @julien-c you may want to give it a look, it adds a bunch of methods to reorder/split/select rows in a dataset:\r\n- `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constrain is that all the integers in the list must be smaller than the dataset size, otherwise we're indexing outside the dataset...)\r\n- `dataset.sort(column_name)`: sort a dataset according to a column (has to be a column with a numpy compatible type)\r\n- `dataset.shuffle(seed)`: shuffle a dataset rows\r\n- `dataset.train_test_split(test_size, train_size)`: Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits)\r\n\r\nAll these methods are **not** in-place which means they return new ``Dataset``, which is the default behavior in the library.",
"> Might be a solution to put 0.25 and 0.75 as default values for respectively `test_size` and `train_size`. WDYT?\r\n\r\nAccording to sklearn documentation, it is indeed set to 0.25 and 0.75 if both `test_size` and `train_size` are None.\r\nLet me add it.",
"I think we're good to go now :) @joeddav @thomwolf @jplu "
] | 2020-06-11T16:22:20 | 2020-06-18T16:23:25 | 2020-06-18T16:23:24 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/266.diff",
"html_url": "https://github.com/huggingface/datasets/pull/266",
"merged_at": "2020-06-18T16:23:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/266.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/266"
} | Add a bunch of methods to reorder/split/select rows in a dataset:
- `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constraint is that all the integers in the list must be smaller than the dataset size, otherwise we're indexing outside the dataset...)
- `dataset.sort(column_name)`: sort a dataset according to a column (has to be a column with a numpy compatible type)
- `dataset.shuffle(seed)`: shuffle a dataset rows
- `dataset.train_test_split(test_size, train_size)`: Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits)
All these methods are **not** in-place, which means they return a new ``Dataset``.
This is the default behavior in the library.
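A quick usage sketch of these methods (the dataset name, the column name and the sizes are only illustrative):
```python
import nlp

dataset = nlp.load_dataset("imdb", split="train")

# select: build a new dataset from a list/array of indices (duplicates and reordering are allowed)
subset = dataset.select([0, 2, 2, 5])

# sort: according to a column with a numpy compatible type (the column name is illustrative)
sorted_dataset = dataset.sort("label")

# shuffle: reorder all rows, reproducibly thanks to the seed
shuffled_dataset = dataset.shuffle(seed=42)

# train_test_split: returns a dict with two Dataset splits, "train" and "test"
splits = dataset.train_test_split(test_size=0.1)
train_dataset, test_dataset = splits["train"], splits["test"]
```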
Fix #147 #166 #259 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/266/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/266/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/265/comments | https://api.github.com/repos/huggingface/datasets/issues/265/events | https://github.com/huggingface/datasets/pull/265 | 637,139,220 | MDExOlB1bGxSZXF1ZXN0NDMzMTgxNDMz | 265 | Add pyarrow warning colab | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-06-11T15:57:51 | 2020-08-02T18:14:36 | 2020-06-12T08:14:16 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/265.diff",
"html_url": "https://github.com/huggingface/datasets/pull/265",
"merged_at": "2020-06-12T08:14:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/265.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/265"
} | When a user installs `nlp` on Google Colab, the pyarrow package is updated, but the previously imported version stays loaded, so the runtime needs to be restarted to use the updated version of pyarrow.
This is an issue because `nlp` requires the updated version to work correctly.
In this PR I added an error that is shown to the user in Google Colab if the user tries to `import nlp` without having restarted the runtime. The error tells the user to restart the runtime. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/265/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/265/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/264/comments | https://api.github.com/repos/huggingface/datasets/issues/264/events | https://github.com/huggingface/datasets/pull/264 | 637,106,170 | MDExOlB1bGxSZXF1ZXN0NDMzMTU0ODQ4 | 264 | Fix small issues creating dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-06-11T15:20:16 | 2020-06-12T08:15:57 | 2020-06-12T08:15:56 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/264.diff",
"html_url": "https://github.com/huggingface/datasets/pull/264",
"merged_at": "2020-06-12T08:15:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/264.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/264"
} | Fix many small issues mentioned in #249:
- don't require installing apache beam to run the commands
- fix None cache dir when using `dl_manager.download_custom`
- added new extras in `setup.py` named `dev` that contains tests and quality dependencies
- mock dataset sizes when running tests with dummy data
- add a note about the naming convention of datasets (camel case - snake case) in CONTRIBUTING.md
This should help users create their datasets.
Next step is the `add_dataset.md` docs :) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/264/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/264/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/263/comments | https://api.github.com/repos/huggingface/datasets/issues/263/events | https://github.com/huggingface/datasets/issues/263 | 637,028,015 | MDU6SXNzdWU2MzcwMjgwMTU= | 263 | [Feature request] Support for external modality for language datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aleSuglia",
"id": 1479733,
"login": "aleSuglia",
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aleSuglia"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | closed | false | null | [] | null | [
"Thanks a lot, @aleSuglia for the very detailed and introductive feature request.\r\nIt seems like we could build something pretty useful here indeed.\r\n\r\nOne of the questions here is that Arrow doesn't have built-in support for generic \"tensors\" in records but there might be ways to do that in a clean way. We'll probably try to tackle this during the summer.",
"I was looking into Facebook MMF and apparently they decided to use LMDB to store additional features associated with every example: https://github.com/facebookresearch/mmf/blob/master/mmf/datasets/databases/features_database.py\r\n\r\n",
"I saw the Mozilla common_voice dataset in model hub, which has mp3 audio recordings as part it. It's use predominantly maybe in ASR and TTS, but dataset is a Language + Voice Dataset similar to @aleSuglia's point about Language + Vision. \r\n\r\nhttps://huggingface.co/datasets/common_voice",
"Hey @thomwolf, are there any updates on this? I would love to contribute if possible!\r\n\r\nThanks, \r\nAlessandro ",
"Hi @aleSuglia :) In today's new release 1.17 of `datasets` we introduce a new feature type `Image` that allows to store images directly in a dataset, next to text features and labels for example. There is also an `Audio` feature type, for datasets containing audio data. For tensors there are `Array2D`, `Array3D`, etc. feature types\r\n\r\nNote that both Image and Audio feature types take care of decoding the images/audio data if needed. The returned images are PIL images, and the audio signals are decoded as numpy arrays.\r\n\r\nAnd `datasets` also leverage end-to-end zero copy from the arrow data for all of them, for maximum speed :)"
] | 2020-06-11T13:42:18 | 2022-02-10T13:26:35 | 2022-02-10T13:26:35 | CONTRIBUTOR | null | null | null | # Background
In recent years many researchers have advocated that learning meanings from text-based only datasets is just like asking a human to "learn to speak by listening to the radio" [[E. Bender and A. Koller, 2020](https://openreview.net/forum?id=GKTvAcb12b), [Y. Bisk et al., 2020](https://arxiv.org/abs/2004.10151)]. Therefore, multi-modal datasets are of paramount importance to the NLP community and to next-generation models. For this reason, I raised a [concern](https://github.com/huggingface/nlp/pull/236#issuecomment-639832029) related to the best way to integrate external features in NLP datasets (e.g., visual features associated with an image, audio features associated with a recording, etc.). This would be of great importance for a more systematic way of representing data for ML models that are learning from multi-modal data.
# Language + Vision
## Use case
Typically, people working on Language+Vision tasks have a reference dataset (either in JSON or JSONL format) and for each example, they have an identifier that specifies the reference image. For a practical example, you can refer to the [GQA](https://cs.stanford.edu/people/dorarad/gqa/download.html#seconddown) dataset.
Currently, images are represented by either pooling-based features (average pooling of ResNet or VGGNet features, see [DeVries et.al, 2017](https://arxiv.org/abs/1611.08481), [Shekhar et.al, 2019](https://www.aclweb.org/anthology/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et.al, 2015](https://arxiv.org/abs/1502.03044)). A more recent option, especially with large-scale multi-modal transformers [Li et. al, 2019](https://arxiv.org/abs/1908.03557), is to use FastRCNN features.
For all these types of features, people use one of the following formats:
1. [HDF5](https://pypi.org/project/h5py/)
2. [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.savez.html)
3. [LMDB](https://lmdb.readthedocs.io/en/release/)
## Implementation considerations
I was thinking about possible ways of implementing this feature. As mentioned above, depending on the model, different visual features can be used. This step usually relies on another model (say ResNet-101) that is used to generate the visual features for each image used in the dataset. Typically, this step is done in a separate script that completes the feature generation procedure. The usual processing steps for these datasets are the following:
1. Download dataset
2. Download images associated with the dataset
3. Write a script that generates the visual features for every image and store them in a specific file
4. Create a DataLoader that maps the visual features to the corresponding language example
In my personal projects, I've decided to ignore HDF5 because it doesn't have out-of-the-box support for multi-processing (see this PyTorch [issue](https://github.com/pytorch/pytorch/issues/11929)). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it.
For ease of use of all these Language+Vision datasets, it would be really handy to have a way to associate the visual features with the text and store them in an efficient way. That's why I immediately thought about the HuggingFace NLP backend based on Apache Arrow. The assumption here is that the external modality will be mapped to an N-dimensional tensor that is easily represented by a NumPy array.
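To make this concrete, here is a rough sketch of the kind of workflow I have in mind (the dataset name, the `image_id` field, the `features/` folder and the `pooled` key are all hypothetical, and the features are stored as plain lists since Arrow doesn't seem to have built-in support for generic tensors):
```python
import numpy as np
import nlp

# assumption: step 3 above produced one compressed .npz file per image,
# keyed by the same identifier that appears in the language examples
def attach_visual_features(example):
    feats = np.load(f"features/{example['image_id']}.npz")
    # store as a plain list of floats for now, e.g. a pooled ResNet vector
    example["visual_features"] = feats["pooled"].tolist()
    return example

dataset = nlp.load_dataset("some_language_vision_dataset", split="train")  # hypothetical dataset name
dataset = dataset.map(attach_visual_features)
```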
Looking forward to hearing your thoughts about it! | {
"+1": 18,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/263/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/262/comments | https://api.github.com/repos/huggingface/datasets/issues/262/events | https://github.com/huggingface/datasets/pull/262 | 636,702,849 | MDExOlB1bGxSZXF1ZXN0NDMyODI3Mzcz | 262 | Add new dataset ANLI Round 1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4",
"events_url": "https://api.github.com/users/easonnie/events{/privacy}",
"followers_url": "https://api.github.com/users/easonnie/followers",
"following_url": "https://api.github.com/users/easonnie/following{/other_user}",
"gists_url": "https://api.github.com/users/easonnie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/easonnie",
"id": 11016329,
"login": "easonnie",
"node_id": "MDQ6VXNlcjExMDE2MzI5",
"organizations_url": "https://api.github.com/users/easonnie/orgs",
"received_events_url": "https://api.github.com/users/easonnie/received_events",
"repos_url": "https://api.github.com/users/easonnie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/easonnie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/easonnie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/easonnie"
} | [] | closed | false | null | [] | null | [
"Hello ! Thanks for adding this one :)\r\n\r\nThis looks great, you just have to do the last steps to make the CI pass.\r\nI can see that two things are missing:\r\n1. the dummy data that is used to test that the script is working as expected\r\n2. the json file with all the infos about the dataset\r\n\r\nYou can see the steps to help you create the dummy data and generate the dataset_infos.json file right [here](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset)"
] | 2020-06-11T04:14:57 | 2020-06-12T22:03:03 | 2020-06-12T22:03:03 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/262.diff",
"html_url": "https://github.com/huggingface/datasets/pull/262",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/262.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/262"
} | Adding new dataset [ANLI](https://github.com/facebookresearch/anli/).
I'm not familiar with how to add a new dataset, so let me know if there is any issue. I only include round 1 data here. There will be round 2, round 3, and more in the future, potentially with different formats; I think it will be better to keep them separate. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/262/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/262/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/261/comments | https://api.github.com/repos/huggingface/datasets/issues/261/events | https://github.com/huggingface/datasets/issues/261 | 636,372,380 | MDU6SXNzdWU2MzYzNzIzODA= | 261 | Downloading dataset error with pyarrow.lib.RecordBatch | {
"avatar_url": "https://avatars.githubusercontent.com/u/5248968?v=4",
"events_url": "https://api.github.com/users/cuent/events{/privacy}",
"followers_url": "https://api.github.com/users/cuent/followers",
"following_url": "https://api.github.com/users/cuent/following{/other_user}",
"gists_url": "https://api.github.com/users/cuent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cuent",
"id": 5248968,
"login": "cuent",
"node_id": "MDQ6VXNlcjUyNDg5Njg=",
"organizations_url": "https://api.github.com/users/cuent/orgs",
"received_events_url": "https://api.github.com/users/cuent/received_events",
"repos_url": "https://api.github.com/users/cuent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cuent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cuent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cuent"
} | [] | closed | false | null | [] | null | [
"When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.\r\nIf you don't restart, then it breaks like in your message.",
"Yeah, that worked! Thanks :) "
] | 2020-06-10T16:04:19 | 2020-06-11T14:35:12 | 2020-06-11T14:35:12 | NONE | null | null | null | I am trying to download `sentiment140` and I have the following error
```
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
418 verify_infos = not save_infos and not ignore_verifications
419 self._download_and_prepare(
--> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
421 )
422 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
472 try:
473 # Prepare split will record examples associated to the split
--> 474 self._prepare_split(split_generator, **prepare_split_kwargs)
475 except OSError:
476 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or ""))
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)
652 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
653 example = self.info.features.encode_example(record)
--> 654 writer.write(example)
655 num_examples, num_bytes = writer.finalize()
656
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write(self, example, writer_batch_size)
143 self._build_writer(pa_table=pa.Table.from_pydict(example))
144 if writer_batch_size is not None and len(self.current_rows) >= writer_batch_size:
--> 145 self.write_on_file()
146
147 def write_batch(
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in write_on_file(self)
127 else:
128 # All good
--> 129 self._write_array_on_file(pa_array)
130 self.current_rows = []
131
/usr/local/lib/python3.6/dist-packages/nlp/arrow_writer.py in _write_array_on_file(self, pa_array)
96 def _write_array_on_file(self, pa_array):
97 """Write a PyArrow Array"""
---> 98 pa_batch = pa.RecordBatch.from_struct_array(pa_array)
99 self._num_bytes += pa_array.nbytes
100 self.pa_writer.write_batch(pa_batch)
AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
```
I installed the latest version and ran the following command:
```python
import nlp
sentiment140 = nlp.load_dataset('sentiment140', cache_dir='/content')
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/261/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/261/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/260/comments | https://api.github.com/repos/huggingface/datasets/issues/260/events | https://github.com/huggingface/datasets/pull/260 | 636,261,118 | MDExOlB1bGxSZXF1ZXN0NDMyNDY3NDM5 | 260 | Consistency fixes | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c"
} | [] | closed | false | null | [] | null | [] | 2020-06-10T13:44:42 | 2020-06-11T10:34:37 | 2020-06-11T10:34:36 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/260.diff",
"html_url": "https://github.com/huggingface/datasets/pull/260",
"merged_at": "2020-06-11T10:34:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/260.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/260"
} | A few bugs I've found while hacking | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/260/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/260/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/259/comments | https://api.github.com/repos/huggingface/datasets/issues/259/events | https://github.com/huggingface/datasets/issues/259 | 636,239,529 | MDU6SXNzdWU2MzYyMzk1Mjk= | 259 | documentation missing how to split a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/2873355?v=4",
"events_url": "https://api.github.com/users/fotisj/events{/privacy}",
"followers_url": "https://api.github.com/users/fotisj/followers",
"following_url": "https://api.github.com/users/fotisj/following{/other_user}",
"gists_url": "https://api.github.com/users/fotisj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fotisj",
"id": 2873355,
"login": "fotisj",
"node_id": "MDQ6VXNlcjI4NzMzNTU=",
"organizations_url": "https://api.github.com/users/fotisj/orgs",
"received_events_url": "https://api.github.com/users/fotisj/received_events",
"repos_url": "https://api.github.com/users/fotisj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fotisj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fotisj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fotisj"
} | [] | closed | false | null | [] | null | [
"this seems to work for my specific problem:\r\n\r\n`self.train_ds, self.test_ds, self.val_ds = map(_prepare_ds, ('train', 'test[:25%]+test[50%:75%]', 'test[75%:]'))`",
"Currently you can indeed split a dataset using `ds_test = nlp.load_dataset('imdb, split='test[:5000]')` (works also with percentages).\r\n\r\nHowever right now we don't have a way to shuffle a dataset but we are thinking about it in the discussion in #166. Feel free to share your thoughts about it.\r\n\r\nOne trick that you can do until we have a better solution is to shuffle and split the indices of your dataset:\r\n```python\r\nimport nlp\r\nfrom sklearn.model_selection import train_test_split\r\n\r\nimdb = nlp.load_dataset('imbd', split='test')\r\ntest_indices, val_indices = train_test_split(range(len(imdb)))\r\n```\r\n\r\nand then to iterate each split:\r\n```python\r\nfor i in test_indices:\r\n example = imdb[i]\r\n ...\r\n```\r\n",
"I added a small guide [here](https://github.com/huggingface/nlp/tree/master/docs/splits.md) that explains how to split a dataset. It is very similar to the tensorflow datasets guide, as we kept the same logic.",
"Thanks a lot, the new explanation is very helpful!\r\n\r\nAbout using train_test_split from sklearn: I stumbled across the [same error message as this user ](https://github.com/huggingface/nlp/issues/147 )and thought it can't be used at the moment in this context. Will check it out again.\r\n\r\nOne of the problems is how to shuffle very large datasets, which don't fit into the memory. Well, one strategy could be shuffling data in sections. But in a case where the data is sorted by the labels you have to swap larger sections first. \r\n",
"We added a way to shuffle datasets (shuffle the indices and then reorder to make a new dataset).\r\nYou can do `shuffled_dset = dataset.shuffle(seed=my_seed)`. It shuffles the whole dataset.\r\nThere is also `dataset.train_test_split()` which if very handy (with the same signature as sklearn).\r\n\r\nClosing this issue as we added the docs for splits and tools to split datasets. Thanks again for your feedback !",
"https://huggingface.co/docs/datasets/v1.0.1/package_reference/builder_classes.html#datasets.Split still links to https://github.com/huggingface/datasets/tree/main/docs/splits.md which is a 404\r\n",
"The updated documentation doesn't link to this anymore: https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/builder_classes#datasets.Split"
] | 2020-06-10T13:18:13 | 2023-03-14T13:56:07 | 2020-06-18T22:20:24 | NONE | null | null | null | I am trying to understand how to split a dataset ( as arrow_dataset).
I know I can do something like this to access a split which is already in the original dataset :
`ds_test = nlp.load_dataset('imdb', split='test')`
But how can I split ds_test into a test and a validation set (without reading the data into memory and keeping the arrow_dataset as container)?
I guess it has something to do with the module split :-) but there is no real documentation in the code but only a reference to a longer description:
> See the [guide on splits](https://github.com/huggingface/nlp/tree/master/docs/splits.md) for more information.
But the guide seems to be missing.
To clarify: I know that this has been modelled after TensorFlow Datasets and that some of the documentation there can be used [like this one](https://www.tensorflow.org/datasets/splits). But to come back to the example above: I cannot simply split the test set by doing this:
`ds_test = nlp.load_dataset('imdb', split='test[:5000]')`
`ds_val = nlp.load_dataset('imdb', split='test[5000:]')`
because the imdb test data is sorted by class (probably not a good idea anyway)
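For completeness, this is the kind of split-slicing syntax I was hoping for (the percentages are just examples, and it alone would not fix the class-sorting problem without some shuffling):
```python
import nlp

ds_test = nlp.load_dataset('imdb', split='test[:50%]')
ds_val = nlp.load_dataset('imdb', split='test[50%:]')

# slices can also be combined, e.g. to mix different parts of the test set
ds_mixed = nlp.load_dataset('imdb', split='test[:25%]+test[75%:]')
```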
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/259/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/259/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/258/comments | https://api.github.com/repos/huggingface/datasets/issues/258/events | https://github.com/huggingface/datasets/issues/258 | 635,859,525 | MDU6SXNzdWU2MzU4NTk1MjU= | 258 | Why is dataset after tokenization far more larger than the orginal one ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | [
"Hi ! This is because `.map` added the new column `input_ids` to the dataset, and so all the other columns were kept. Therefore the dataset size increased a lot.\r\n If you want to only keep the `input_ids` column, you can stash the other ones by specifying `remove_columns=[\"title\", \"text\"]` in the arguments of `.map`",
"Hi ! Thanks for your reply.\r\n\r\nBut since size of `input_ids` < size of `text`, I am wondering why\r\nsize of `input_ids` + `text` > 2x the size of `text` 🤔",
"Hard to tell... This is probably related to the way apache arrow compresses lists of integers, that may be different from the compression of strings.",
"Thanks for your point. 😀, It might be answer.\r\nSince this is hard to know, I'll close this issue.\r\nBut if somebody knows more details, please comment below ~ 😁"
] | 2020-06-10T01:27:07 | 2020-06-10T12:46:34 | 2020-06-10T12:46:34 | CONTRIBUTOR | null | null | null | I tokenize wiki dataset by `map` and cache the results.
```
def tokenize_tfm(example):
example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text']))
return example
wiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train']
wiki.map(tokenize_tfm, cache_file_name=cache_dir/"wikipedia/20200501.en/1.0.0/tokenized_wiki.arrow")
```
and when I see their size
```
ls -l --block-size=M
17460M wikipedia-train.arrow
47511M tokenized_wiki.arrow
```
The tokenized one is over 2x the size of the original one.
Is there something I did wrong? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/258/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/258/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/257/comments | https://api.github.com/repos/huggingface/datasets/issues/257/events | https://github.com/huggingface/datasets/issues/257 | 635,620,979 | MDU6SXNzdWU2MzU2MjA5Nzk= | 257 | Tokenizer pickling issue fix not landed in `nlp` yet? | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
} | [] | closed | false | null | [] | null | [
"Yes, the new release of tokenizers solves this and should be out soon.\r\nIn the meantime, you can install it with `pip install tokenizers==0.8.0-dev2`",
"If others run into this issue, a quick fix is to use python 3.6 instead of 3.7+. Serialization differences between the 3rd party `dataclasses` package for 3.6 and the built in `dataclasses` in 3.7+ cause the issue.\r\n\r\nProbably a dumb fix, but it works for me."
] | 2020-06-09T17:12:34 | 2020-06-10T21:45:32 | 2020-06-09T17:26:53 | NONE | null | null | null | Unless I recreate an arrow_dataset from my loaded nlp dataset myself (which I think does not use the cache by default), I get the following error when applying the map function:
```
dataset = nlp.load_dataset('cos_e')
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2', cache_dir=cache_dir)
for split in dataset.keys():
dataset[split].map(lambda x: some_function(x, tokenizer))
```
```
06/09/2020 10:09:19 - INFO - nlp.builder - Constructing Dataset for split train[:10], from /home/sarahw/.cache/huggingface/datasets/cos_e/default/0.0.1
Traceback (most recent call last):
File "generation/input_to_label_and_rationale.py", line 390, in <module>
main()
File "generation/input_to_label_and_rationale.py", line 263, in main
dataset[split] = dataset[split].map(lambda x: input_to_explanation_plus_label(x, tokenizer, max_length, datasource=data_args.task_name, wt5=(model_class=='t5'), expl_only=model_args.rationale_only), batched=False)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 522, in map
cache_file_name = self._get_cache_file_path(function, cache_kwargs)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/arrow_dataset.py", line 381, in _get_cache_file_path
function_bytes = dumps(function)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 257, in dumps
dump(obj, file)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/nlp/utils/py_utils.py", line 250, in dump
Pickler(file).dump(obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 445, in dump
StockPickler.dump(self, obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 485, in dump
self.save(obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1410, in save_function
pickler.save_reduce(_create_function, (obj.__code__,
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce
save(args)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple
save(element)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 899, in save_tuple
save(element)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 1147, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 690, in save_reduce
save(args)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 884, in save_tuple
save(element)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save
self.save_reduce(obj=obj, *rv)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce
save(state)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict
self._batch_setitems(obj.items())
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems
save(v)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 601, in save
self.save_reduce(obj=obj, *rv)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 715, in save_reduce
save(state)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 558, in save
f(self, obj) # Call unbound method with explicit self
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/site-packages/dill/_dill.py", line 912, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 969, in save_dict
self._batch_setitems(obj.items())
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 995, in _batch_setitems
save(v)
File "/home/sarahw/miniconda3/envs/project_huggingface/lib/python3.8/pickle.py", line 576, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'Tokenizer' object
```
The fix seems to be in the tokenizers [`0.8.0.dev1 pre-release`](https://github.com/huggingface/tokenizers/issues/87), which I can't install with any package manager. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/257/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/257/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/256/comments | https://api.github.com/repos/huggingface/datasets/issues/256/events | https://github.com/huggingface/datasets/issues/256 | 635,596,295 | MDU6SXNzdWU2MzU1OTYyOTU= | 256 | [Feature request] Add a feature to dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sarahwie",
"id": 8027676,
"login": "sarahwie",
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sarahwie"
} | [] | closed | false | null | [] | null | [
"Do you have an example of what you would like to do? (you can just add a field in the output of the unction you give to map and this will add this field in the output table)",
"Given another source of data loaded in, I want to pre-add it to the dataset so that it aligns with the indices of the arrow dataset prior to performing map.\r\n\r\nE.g. \r\n```\r\nnew_info = list of length dataset['train']\r\n\r\ndataset['train'] = dataset['train'].map(lambda x: some_function(x, new_info[index of x]))\r\n\r\ndef some_function(x, new_info_x):\r\n # adds new_info[index of x] as a field to x\r\n x['new_info'] = new_info_x\r\n return x\r\n```\r\nI was thinking to instead create a new field in the arrow dataset so that instance x contains all the necessary information when map function is applied (since I don't have index information to pass to map function).",
"This is what I have so far: \r\n\r\n```\r\nimport pyarrow as pa\r\nfrom nlp.arrow_dataset import Dataset\r\n\r\naug_dataset = dataset['train'][:]\r\naug_dataset['new_info'] = new_info\r\n\r\n#reformat as arrow-table\r\nschema = dataset['train'].schema\r\n\r\n# this line doesn't work:\r\nschema.append(pa.field('new_info', pa.int32()))\r\n\r\ntable = pa.Table.from_pydict(\r\n aug_dataset,\r\n schema=schema\r\n)\r\ndataset['train'] = Dataset(table) \r\n```",
"Maybe you can use `with_indices`?\r\n\r\n```python\r\nnew_info = list of length dataset['train']\r\n\r\ndef some_function(indice, x):\r\n # adds new_info[index of x] as a field to x\r\n x['new_info'] = new_info_x[indice]\r\n return x\r\n\r\ndataset['train'] = dataset['train'].map(some_function, with_indices=True)\r\n```",
"Oh great. That should work. I missed that in the documentation- thanks :) "
] | 2020-06-09T16:38:12 | 2020-06-09T16:51:42 | 2020-06-09T16:51:42 | NONE | null | null | null | Is there a straightforward way to add a field to the arrow_dataset, prior to performing map? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/256/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/256/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/255/comments | https://api.github.com/repos/huggingface/datasets/issues/255/events | https://github.com/huggingface/datasets/pull/255 | 635,300,822 | MDExOlB1bGxSZXF1ZXN0NDMxNjg3MDM0 | 255 | Add dataset/piaf | {
"avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4",
"events_url": "https://api.github.com/users/RachelKer/events{/privacy}",
"followers_url": "https://api.github.com/users/RachelKer/followers",
"following_url": "https://api.github.com/users/RachelKer/following{/other_user}",
"gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RachelKer",
"id": 36986299,
"login": "RachelKer",
"node_id": "MDQ6VXNlcjM2OTg2Mjk5",
"organizations_url": "https://api.github.com/users/RachelKer/orgs",
"received_events_url": "https://api.github.com/users/RachelKer/received_events",
"repos_url": "https://api.github.com/users/RachelKer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RachelKer"
} | [] | closed | false | null | [] | null | [
"Very nice !"
] | 2020-06-09T10:16:01 | 2020-06-12T08:31:27 | 2020-06-12T08:31:27 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/255.diff",
"html_url": "https://github.com/huggingface/datasets/pull/255",
"merged_at": "2020-06-12T08:31:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/255.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/255"
} | Small SQuAD-like French QA dataset [PIAF](https://www.aclweb.org/anthology/2020.lrec-1.673.pdf) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/255/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/255/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/254/comments | https://api.github.com/repos/huggingface/datasets/issues/254/events | https://github.com/huggingface/datasets/issues/254 | 635,057,568 | MDU6SXNzdWU2MzUwNTc1Njg= | 254 | [Feature request] Be able to remove a specific sample of the dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [] | closed | false | null | [] | null | [
"Oh yes you can now do that with the `dataset.filter()` method that was added in #214 "
] | 2020-06-09T02:22:13 | 2020-06-09T08:41:38 | 2020-06-09T08:41:38 | NONE | null | null | null | As mentioned in #117, it's currently not possible to remove a sample of the dataset.
But it is an important use case: after applying some preprocessing, some samples might be empty, for example. We should be able to remove these samples from the dataset, or at least mark them as `removed` so that when iterating over the dataset, we skip these samples.
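Concretely, a sketch of the kind of call I have in mind (the `text` column and the dataset are only examples):
```python
import nlp

dataset = nlp.load_dataset('imdb', split='train')

# drop samples whose text became empty after some preprocessing step
dataset = dataset.filter(lambda example: len(example['text'].strip()) > 0)
```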
I think it should be a feature. What do you think ?
---
Any work-around in the meantime ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/254/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/253/comments | https://api.github.com/repos/huggingface/datasets/issues/253/events | https://github.com/huggingface/datasets/pull/253 | 634,791,939 | MDExOlB1bGxSZXF1ZXN0NDMxMjgwOTYz | 253 | add flue dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [
"The dummy data file was wrong. I only fixed it for the book config. Even though the tests are all green here, this should also be fixed for all other configs. Could you take a look there @mariamabarham ? ",
"Hi @mariamabarham \r\n\r\nFLUE can indeed become a very interesting benchmark for french NLP !\r\nUnfortunately, it seems that we've both been working on adding it to the repo...\r\nI was going to open a pull request before I came across yours.\r\nI didn't want to open a duplicate, that's why I'm commenting here (I hope it's not rude).\r\n\r\nWhen I look at your code there is one issue that jump out at me: for both `vsd` and `nsd`, the labels are missing. I believe this is more a data issue, as they were not kept in the cleaned dataframes of #223. I think the *word sense disambiguation* task was a bit misunderstood. \r\n\r\nMaybe you should directly use the data provided by FLUE for these ?",
"Hi @TheophileBlard thanks for pointing this out. I will give a look at it or maybe if you already done it you can update this PR. Also I haven't added yet the parsing datasets, I submited a request to get access to them. If you already have them, you can also add them.",
"Hi,\r\n\r\nAs @TheophileBlard pointed out, the labels for the vsd and nsd stains are missing.\r\n\r\nFor the wsd, it is my mistake, I added the files containing the labels on the drive.\r\nThere is still the join to do between the files that I didn't have time to do. It can be done after importing the two files, however if you wish to have a single dataframe already containing all the information, I could do it but only when I have free time because I have a lot of work at the moment at INSERM with the covid.\r\n\r\nFor the nsd, I've downloaded the files at https://zenodo.org/record/3549806, and if you do the same you'll see that they don't contain any labels.\r\nIn the files, you can see that some words have a WN code. I don't know what it corresponds to. On the FLUE github, they say to use the disambiguate tool (https://github.com/getalp/disambiguate) but I don't understand what he's doing.\r\n\r\n@mariamabarham for the parsing datasets, I have them in my possession. What it does that I haven't shared them is that they are licensed and you have to make a request to their creators. They give them away very easily for research purposes. For another use, you have to ask a commercial licence. All this means that if the data is freely available on your librairy, their licence and their application form are no longer of interest, which is why I did not add them.\r\nAfterwards, maybe the authors will change their policies and decide to make the data freely available through your librairy",
"@mariamabarham @lbourdois, Yea I don't think we can had the parsing datasets without asking the authors permission first. I also hope they'll change their policy.\r\n\r\nRegarding `vsd` and `nsd`, if I understand well the task, the labels are \"word senses\" and the goal is to find the correct word sense for each ambiguous word. For `vsd` there is one ambiguous verb per sentence, and the labels we manually annotated with \"wiktionary senses\". For `nsd`, there are multiple ambiguous word per sentence, and the labels are WordNet Princeton Identifiers (hence the WN tag). This dataset was translated in french & automatically aligned.\r\n\r\nImo, for these 2 datasets, each example should be made of:\r\n- a list of string tokens (the words of the sentence)\r\n- a list of string labels (the word senses or 'O' when the word is not ambiguous.\r\n\r\nIn fact, for `vsd` it could be even simpler, with a single string label (as there is only one ambiguous verb), + some \"idx\" feature to indicate the location of the ambiguous verb.\r\n\r\nUnfortunately, I cannot update your PR as I'm not a maintainer of the project. Maybe we could work together on a fork ? Here's [mine](https://github.com/TheophileBlard/nlp/commits/flue-benchmark).\r\n",
"Hi\r\n\r\nAny news about this PR ?\r\nBecause thinking back FLUE basically offers only two new datasets : those for the Word Sense Disambiguation task (vsd and nsd).\r\n\r\nWouldn't it be more clever to make separate PRs to add the datasets of the other tasks which are multi-lingual (and therefore can be used for other languages) ?\r\n\r\nXNLI being already present on your library, there would only be PAWS-X (datasets and bibtex available here : https://github.com/google-research-datasets/paws/tree/master/pawsx) and the Webis-CLS-10 dataset (dataset : https://zenodo.org/record/3251672#.XvCXN-d8taQ and bibtex : https://zenodo.org/record/3251672/export/hx#.XvCXZ-d8taQ) to do.\r\n\r\nAnd next for the FLUE benchmark, all you would have to do would be to use your own library by making an nlp.load_dataset() (for example nlp.load_dataset('xnli') which is already present in your library) for each of the datasets of the benchmark tasks and to keep only the 'fr' data.\r\n\r\n\r\n\r\nAlso @mariamabarham , did you get any feedback for the parsing task dataset request?\r\nIn case of refusal from the authors, there are other datasets in French to perform this task and in this case, I would open a new topic\r\n",
"Hi @lbourdois ,\r\nPAWS-X is also present in the lib, it's part of `xtreme` dataset, so it can be loaded by `nlp.load_dataset('xtreme', 'PAWS-X.fr')` for the french version.\r\nI think the parsing and the Word Sense Disambiguation task datasets are the only missing in the lib now. \r\nI did not get a feedback yet for the parsing dataset.\r\n",
"By the way, @TheophileBlard I commented some days ago in your fork. It would be great if you can maybe open a new PR with your code or if you have a better way to make it available to others for review.",
"> By the way, @TheophileBlard I commented some days ago in your fork. It would be great if you can maybe open a new PR with your code or if you have a better way to make it available to others for review.\r\n\r\nYea sorry, missed that! I think @lbourdois has a point, it helps no one to have the same dataset in multiple places. I will try to find some time to adapt the code of my fork and open PRs for `Webis-CLS-10` and `nsd`/`vsd`. Maybe we should group `nsd`/`vsd` together ?",
"Shall we close this PR then ? @mariamabarham @TheophileBlard @lbourdois "
] | 2020-06-08T17:11:09 | 2023-09-24T09:46:03 | 2020-07-16T07:50:59 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/253",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/253"
} | This PR add the Flue dataset as requested in this issue #223 . @lbourdois made a detailed description in that issue.
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/253/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/252/comments | https://api.github.com/repos/huggingface/datasets/issues/252/events | https://github.com/huggingface/datasets/issues/252 | 634,563,239 | MDU6SXNzdWU2MzQ1NjMyMzk= | 252 | NonMatchingSplitsSizesError error when reading the IMDB dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4",
"events_url": "https://api.github.com/users/antmarakis/events{/privacy}",
"followers_url": "https://api.github.com/users/antmarakis/followers",
"following_url": "https://api.github.com/users/antmarakis/following{/other_user}",
"gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/antmarakis",
"id": 17463361,
"login": "antmarakis",
"node_id": "MDQ6VXNlcjE3NDYzMzYx",
"organizations_url": "https://api.github.com/users/antmarakis/orgs",
"received_events_url": "https://api.github.com/users/antmarakis/received_events",
"repos_url": "https://api.github.com/users/antmarakis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/antmarakis"
} | [] | closed | false | null | [] | null | [
"I just tried on my side and I didn't encounter your problem.\r\nApparently the script doesn't generate all the examples on your side.\r\n\r\nCan you provide the version of `nlp` you're using ?\r\nCan you try to clear your cache and re-run the code ?",
"I updated it, that was it, thanks!",
"Hello, I am facing the same problem... how do you clear the huggingface cache?",
"Hi ! The cache is at ~/.cache/huggingface\r\nYou can just delete this folder if needed :)"
] | 2020-06-08T12:26:24 | 2021-08-27T15:20:58 | 2020-06-08T14:01:26 | NONE | null | null | null | Hi!
I am trying to load the `imdb` dataset with this line:
`dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')`
but I am getting the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/load.py", line 517, in load_dataset
save_infos=save_infos,
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 363, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 421, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=5929447, num_examples=4537, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
```
Am I overlooking something? Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/252/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/251 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/251/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/251/comments | https://api.github.com/repos/huggingface/datasets/issues/251/events | https://github.com/huggingface/datasets/pull/251 | 634,544,977 | MDExOlB1bGxSZXF1ZXN0NDMxMDgwMDkw | 251 | Better access to all dataset information | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | [] | 2020-06-08T11:56:50 | 2020-06-12T08:13:00 | 2020-06-12T08:12:58 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/251.diff",
"html_url": "https://github.com/huggingface/datasets/pull/251",
"merged_at": "2020-06-12T08:12:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/251.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/251"
} | Moves all the dataset info down one level from `dataset.info.XXX` to `dataset.XXX`
This way it's easier to access `dataset.feature['label']` for instance
Also, add the original split instructions used to create the dataset in `dataset.split`
Ex:
```
from nlp import load_dataset
stsb = load_dataset('glue', name='stsb', split='train')
stsb.split
>>> NamedSplit('train')
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/251/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/251/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/250/comments | https://api.github.com/repos/huggingface/datasets/issues/250/events | https://github.com/huggingface/datasets/pull/250 | 634,416,751 | MDExOlB1bGxSZXF1ZXN0NDMwOTcyMzg4 | 250 | Remove checksum download in c4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Commenting again in case [previous thread](https://github.com/huggingface/nlp/pull/233) was inactive.\r\n\r\n@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n```\r\n\r\nHere's the complete stack trace.\r\n\r\n```\r\nDownloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/devops/.cache/huggingface/datasets/c4/en/2.3.0/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7...\r\n\r\n---------------------------------------------------------------------------\r\nIsADirectoryError Traceback (most recent call last)\r\n<ipython-input-11-f622e6705e03> in <module>\r\n----> 1 dataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 964 pipeline = beam_utils.BeamPipeline(runner=beam_runner, options=beam_options,)\r\n 965 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 966 dl_manager, verify_infos=False, pipeline=pipeline,\r\n 967 ) # TODO handle verify_infos in beam datasets\r\n 968 # Run pipeline\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 516 split_dict = SplitDict(dataset_name=self.name)\r\n 517 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 518 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/datasets/c4/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7/c4.py in _split_generators(self, dl_manager, pipeline)\r\n 187 if self.config.realnewslike:\r\n 188 files_to_download[\"realnews_domains\"] = _REALNEWS_DOMAINS_URL\r\n--> 189 file_paths = dl_manager.download_and_extract(files_to_download)\r\n 190 \r\n 191 if self.config.webtextlike:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 218 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 219 \"\"\"\r\n--> 220 return self.extract(self.download(url_or_urls))\r\n 221 \r\n 222 def get_recorded_sizes_checksums(self):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download(self, 
url_or_urls)\r\n 156 lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n 157 )\r\n--> 158 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n 159 return downloaded_path_or_paths\r\n 160 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in _record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)\r\n 106 flattened_downloaded_path_or_paths = flatten_nested(downloaded_path_or_paths)\r\n 107 for url, path in zip(flattened_urls_or_urls, flattened_downloaded_path_or_paths):\r\n--> 108 self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n 109 \r\n 110 def download_custom(self, url_or_urls, custom_download):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/info_utils.py in get_size_checksum_dict(path)\r\n 77 \"\"\"Compute the file size and the sha256 checksum of a file\"\"\"\r\n 78 m = sha256()\r\n---> 79 with open(path, \"rb\") as f:\r\n 80 for chunk in iter(lambda: f.read(1 << 20), b\"\"):\r\n 81 m.update(chunk)\r\n\r\nIsADirectoryError: [Errno 21] Is a directory: '/'\r\n\r\n```\r\n\r\nCan anyone please try to see what I am doing wrong or is this a bug?"
] | 2020-06-08T09:13:00 | 2020-08-25T07:04:56 | 2020-06-08T09:16:59 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/250.diff",
"html_url": "https://github.com/huggingface/datasets/pull/250",
"merged_at": "2020-06-08T09:16:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/250.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/250"
} | There was a line from the original tfds script that was still there and causing issues when loading the c4 script. This one should fix #233 and allow anyone to load the c4 script to generate the dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/250/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/250/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/249/comments | https://api.github.com/repos/huggingface/datasets/issues/249/events | https://github.com/huggingface/datasets/issues/249 | 633,393,443 | MDU6SXNzdWU2MzMzOTM0NDM= | 249 | [Dataset created] some critical small issues when I was creating a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Thanks for noticing all these :) They should be easy to fix indeed",
"Alright I think I fixed all the problems you mentioned. Thanks again, that will be useful for many people.\r\nThere is still more work needed for point 7. but we plan to have some nice docs soon."
] | 2020-06-07T12:58:54 | 2020-06-12T08:28:51 | 2020-06-12T08:28:51 | CONTRIBUTOR | null | null | null | Hi, I successfully created a dataset and has made a pr #248.
But I have encountered several problems when I was creating it, and those should be easy to fix.
1. Not found dataset_info.json
should be fixed by #241 , eager to wait it be merged.
2. Forced to install `apach_beam`
If we should install it, then it might be better to include it in the pakcage dependency or specified in `CONTRIBUTING.md`
```
Traceback (most recent call last):
File "nlp-cli", line 10, in <module>
from nlp.commands.run_beam import RunBeamCommand
File "/home/yisiang/nlp/src/nlp/commands/run_beam.py", line 6, in <module>
import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
```
3. `cached_dir` is `None`
```
File "/home/yisiang/nlp/src/nlp/datasets/bookscorpus/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c/bookscorpus.py", line 88, in _split_generators
downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive)
File "/home/yisiang/nlp/src/nlp/utils/download_manager.py", line 128, in download_custom
downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls)
File "/home/yisiang/nlp/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/yisiang/nlp/src/nlp/utils/download_manager.py", line 126, in url_to_downloaded_path
return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url))
File "/home/yisiang/miniconda3/envs/nlppr/lib/python3.7/posixpath.py", line 80, in join
a = os.fspath(a)
```
This is because this line
https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/src/nlp/commands/test.py#L30-L32
And I add `--cache_dir="...."` to `python nlp-cli test datasets/<your-dataset-folder> --save_infos --all_configs` in the doc, finally I could pass this error.
But it seems to ignore my arg and use `/home/yisiang/.cache/huggingface/datasets/bookscorpus/plain_text/1.0.0` as cahe_dir
4. There is no `pytest`
So maybe in the doc we should specify a step to install pytest
5. Not enough capacity in my `/tmp`
When run test for dummy data, I don't know why it ask me for 5.6g to download something,
```
def download_and_prepare
...
if not utils.has_sufficient_disk_space(self.info.size_in_bytes or 0, directory=self._cache_dir_root):
raise IOError(
"Not enough disk space. Needed: {} (download: {}, generated: {})".format(
utils.size_str(self.info.size_in_bytes or 0),
utils.size_str(self.info.download_size or 0),
> utils.size_str(self.info.dataset_size or 0),
)
)
E OSError: Not enough disk space. Needed: 5.62 GiB (download: 1.10 GiB, generated: 4.52 GiB)
```
I add a `processed_temp_dir="some/dir"; raw_temp_dir="another/dir"` to 71, and the test passed
https://github.com/huggingface/nlp/blob/a67a6c422dece904b65d18af65f0e024e839dbe8/tests/test_dataset_common.py#L70-L72
I suggest we can create tmp dir under the `/home/user/tmp` but not `/tmp`, because take our lab server for example, everyone use `/tmp` thus it has not much capacity. Or at least we can improve error message, so the user know is what directory has no space and how many has it lefted. Or we could do both.
6. name of datasets
I was surprised by the dataset name `books_corpus`, and didn't know it is from `class BooksCorpus(nlp.GeneratorBasedBuilder)` . I change it to `Bookscorpus` afterwards. I think this point shold be also on the doc.
7. More thorough doc to how to create `dataset.py`
I believe there will be.
**Feel free to close this issue** if you think these are solved. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/249/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/248/comments | https://api.github.com/repos/huggingface/datasets/issues/248/events | https://github.com/huggingface/datasets/pull/248 | 633,390,427 | MDExOlB1bGxSZXF1ZXN0NDMwMDQ0MzU0 | 248 | add Toronto BooksCorpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | null | [] | null | [
"Thanks for adding this one !\r\n\r\nAbout the three points you mentioned:\r\n1. I think the `toronto_books_corpus` branch can be removed @mariamabarham ? \r\n2. You can use the download manager to download from google drive. For you case you can just do something like \r\n```python\r\nURL = \"https://drive.google.com/uc?export=download&id=16KCjV9z_FHm8LgZw05RSuk4EsAWPOP_z\"\r\n...\r\narch_path = dl_manager.download_and_extract(URL)\r\n```\r\nAlso this is is an unofficial host of the dataset, we should probably host it ourselves if we can.\r\n3. Not sure about the copyright here, but I maybe @thomwolf has better insights about it. ",
"Yes it can be removed",
"I just downloaded the file and put it on gs. The public url is\r\nhttps://storage.googleapis.com/huggingface-nlp/datasets/toronto_books_corpus/bookcorpus.tar.bz2\r\n\r\nCould you try to change the url to this one and heck that everything is ok ?",
"In `books.py`\r\n```\r\nURL = \"https://storage.googleapis.com/huggingface-nlp/datasets/toronto_books_corpus/bookcorpus.tar.bz2\"\r\n```\r\n```\r\nPython 3.7.6 (default, Jan 8 2020, 19:59:22) \r\n[GCC 7.3.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from nlp import load_dataset\r\n>>> book = load_dataset(\"nlp/datasets/bookscorpus/books.py\", cache_dir='~/tmp')\r\nDownloading and preparing dataset bookscorpus/plain_text (download: 1.10 GiB, generated: 4.52 GiB, total: 5.62 GiB) to /home/yisiang/tmp/bookscorpus/plain_text/1.0.0...\r\nDownloading: 100%|███████████████████████████████████████████████████████████| 1.18G/1.18G [00:39<00:00, 30.0MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/yisiang/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n save_infos=save_infos,\r\n File \"/home/yisiang/nlp/src/nlp/builder.py\", line 420, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/yisiang/nlp/src/nlp/builder.py\", line 460, in _download_and_prepare\r\n verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())\r\n File \"/home/yisiang/nlp/src/nlp/utils/info_utils.py\", line 31, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\nnlp.utils.info_utils.ExpectedMoreDownloadedFiles: {'16KCjV9z_FHm8LgZw05RSuk4EsAWPOP_z'}\r\n>>>\r\n```\r\n\r\nBTW, I notice the path `huggingface-nlp/datasets/toronto_books_corpus`, does it mean I have to change folder name \"bookscorpus\" to \"toronto_books_corpus\"",
"> In `books.py`\r\n> \r\n> ```\r\n> URL = \"https://storage.googleapis.com/huggingface-nlp/datasets/toronto_books_corpus/bookcorpus.tar.bz2\"\r\n> ```\r\n> \r\n> ```\r\n> Python 3.7.6 (default, Jan 8 2020, 19:59:22) \r\n> [GCC 7.3.0] :: Anaconda, Inc. on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n> >>> from nlp import load_dataset\r\n> >>> book = load_dataset(\"nlp/datasets/bookscorpus/books.py\", cache_dir='~/tmp')\r\n> Downloading and preparing dataset bookscorpus/plain_text (download: 1.10 GiB, generated: 4.52 GiB, total: 5.62 GiB) to /home/yisiang/tmp/bookscorpus/plain_text/1.0.0...\r\n> Downloading: 100%|███████████████████████████████████████████████████████████| 1.18G/1.18G [00:39<00:00, 30.0MB/s]\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yisiang/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n> save_infos=save_infos,\r\n> File \"/home/yisiang/nlp/src/nlp/builder.py\", line 420, in download_and_prepare\r\n> dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n> File \"/home/yisiang/nlp/src/nlp/builder.py\", line 460, in _download_and_prepare\r\n> verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())\r\n> File \"/home/yisiang/nlp/src/nlp/utils/info_utils.py\", line 31, in verify_checksums\r\n> raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\n> nlp.utils.info_utils.ExpectedMoreDownloadedFiles: {'16KCjV9z_FHm8LgZw05RSuk4EsAWPOP_z'}\r\n> >>>\r\n> ```\r\n> \r\n> BTW, I notice the path `huggingface-nlp/datasets/toronto_books_corpus`, does it mean I have to change folder name \"bookscorpus\" to \"toronto_books_corpus\"\r\n\r\nLet me change the url to match \"bookscorpus\", so that you don't have to change anything. Good catch.\r\n\r\nAbout the error you're getting: you just have to remove the `dataset_infos.json` and regenerate it",
"The new url is https://storage.googleapis.com/huggingface-nlp/datasets/bookscorpus/bookcorpus.tar.bz2",
"Hi, I found I made a mistake. I found the ELECTRA paper refer it as \"BooksCorpus\", but actually it is caleld \"BookCorpus\", according to the original paper. Sorry, I should have checked the original paper .\r\n\r\nCan you do me a favor and change the url path to ` https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2` ?",
"Yep I'm doing it right now. Could you please rename all the references to `bookscorpus` and `BooksCorpus` to `book_corpus` and `BookCorpus` (with the right casing) ?",
"Thank you @lhoestq ,\r\nJust to confirm it fits your naming convention\r\n* make the file path `book_corpus/book_corpus.py` ?\r\n* make `class Bookscorpus(nlp.GeneratorBasedBuilder)` -> `BookCorpus` (which make cache folder name `book_corpus` and user use `load_dataset('book_corpus')`) ?\r\n(Cuz I found \"HellaSwag\" dataset is named \"nlp/datasets/hellaswag\" and `class Hellaswag` )",
"Oh yea you're right about the Hellaswag example. We should keep the \"_\" symbol to replace spaces. As there are no space in BookCorpus, what we should do here is use:\r\n- class name: 'Bookcorpus'\r\n- script name: `bookcorpus/bookcorpus.py`\r\n- use url https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2\r\nAnd therefore the dataset name will be `bookcorpus`\r\n\r\nDon't forget to regenerate the `dataset_infos.json` and we'll be good :D ",
"Awesome thanks :)"
] | 2020-06-07T12:54:56 | 2020-06-12T08:45:03 | 2020-06-12T08:45:02 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/248.diff",
"html_url": "https://github.com/huggingface/datasets/pull/248",
"merged_at": "2020-06-12T08:45:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/248.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/248"
} | 1. I knew there is a branch `toronto_books_corpus`
- After I downloaded it, I found it is all non-english, and only have one row.
- It seems that it cites the wrong paper
- according to papar using it, it is called `BooksCorpus` but not `TornotoBooksCorpus`
2. It use a text mirror in google drive
- `bookscorpus.py` include a function `download_file_from_google_drive` , maybe you will want to put it elsewhere.
- text mirror is found in this [comment on the issue](https://github.com/soskek/bookcorpus/issues/24#issuecomment-556024973), and it said to have the same statistics as the one in the paper.
- You may want to download it and put it on your gs in case of it disappears someday.
3. Copyright ?
The paper has said
> **The BookCorpus Dataset.** In order to train our sentence similarity model we collected a corpus of 11,038 books ***from the web***. These are __**free books written by yet unpublished authors**__. We only included books that had more than 20K words in order to filter out perhaps noisier shorter stories. The dataset has books in 16 different genres, e.g., Romance (2,865 books), Fantasy (1,479), Science fiction (786), Teen (430), etc. Table 2 highlights the summary statistics of our book corpus.
and we have changed the form (not books), so I don't think it should have that problems. Or we can state that use it at your own risk or only for academic use. I know @thomwolf should know these things more.
This should solved #131 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/248/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/248/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/247/comments | https://api.github.com/repos/huggingface/datasets/issues/247/events | https://github.com/huggingface/datasets/pull/247 | 632,380,078 | MDExOlB1bGxSZXF1ZXN0NDI5MTMwMzQ2 | 247 | Make all dataset downloads deterministic by applying `sorted` to glob and os.listdir | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [
"That's great!\r\n\r\nI think it would be nice to test \"deterministic-ness\" of datasets in CI if we can do it (should be left for future PR of course)\r\n\r\nHere is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning/tracability), we could incorporate a hash of the final Arrow Dataset to the `dataset.json` file and have a test on it as well as CI on a diversity of platform to test the hash (Win/Mac/Linux + various python/env).\r\nWhat do you think @lhoestq @patrickvonplaten?",
"> That's great!\r\n> \r\n> I think it would be nice to test \"deterministic-ness\" of datasets in CI if we can do it (should be left for future PR of course)\r\n> \r\n> Here is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning/tracability), we could incorporate a hash of the final Arrow Dataset to the `dataset.json` file and have a test on it as well as CI on a diversity of platform to test the hash (Win/Mac/Linux + various python/env).\r\n> What do you think @lhoestq @patrickvonplaten?\r\n\r\nI think that's a great idea! The test should be a `RUN_SLOW` test, since it takes a considerable amount of time to download the dataset and generate the examples, but I think we should add some kind of hash test for each dataset.",
"Really nice!!"
] | 2020-06-06T11:02:10 | 2020-06-08T09:18:16 | 2020-06-08T09:18:14 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/247",
"merged_at": "2020-06-08T09:18:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/247"
} | This PR makes all datasets loading deterministic by applying `sorted()` to all `glob.glob` and `os.listdir` statements.
Are there other "non-deterministic" functions apart from `glob.glob()` and `os.listdir()` that you can think of @thomwolf @lhoestq @mariamabarham @jplu ?
**Important**
It does break backward compatibility for these datasets because
1. When loading the complete dataset the order in which the examples are saved is different now
2. When loading only part of a split, the examples themselves might be different.
@patrickvonplaten - the nlp / longformer notebook has to be updated since the examples might now be different | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/247/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/246/comments | https://api.github.com/repos/huggingface/datasets/issues/246/events | https://github.com/huggingface/datasets/issues/246 | 632,380,054 | MDU6SXNzdWU2MzIzODAwNTQ= | 246 | What is the best way to cache a dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/112599?v=4",
"events_url": "https://api.github.com/users/Mistobaan/events{/privacy}",
"followers_url": "https://api.github.com/users/Mistobaan/followers",
"following_url": "https://api.github.com/users/Mistobaan/following{/other_user}",
"gists_url": "https://api.github.com/users/Mistobaan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mistobaan",
"id": 112599,
"login": "Mistobaan",
"node_id": "MDQ6VXNlcjExMjU5OQ==",
"organizations_url": "https://api.github.com/users/Mistobaan/orgs",
"received_events_url": "https://api.github.com/users/Mistobaan/received_events",
"repos_url": "https://api.github.com/users/Mistobaan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mistobaan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mistobaan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mistobaan"
} | [] | closed | false | null | [] | null | [
"Everything is already cached by default in 🤗nlp (in particular dataset\nloading and all the “map()” operations) so I don’t think you need to do any\nspecific caching in streamlit.\n\nTell us if you feel like it’s not the case.\n\nOn Sat, 6 Jun 2020 at 13:02, Fabrizio Milo <notifications@github.com> wrote:\n\n> For example if I want to use streamlit with a nlp dataset:\n>\n> @st.cache\n> def load_data():\n> return nlp.load_dataset('squad')\n>\n> This code raises the error \"uncachable object\"\n>\n> Right now I just fixed with a constant for my specific case:\n>\n> @st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})\n>\n> But I was curious to know what is the best way in general\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/nlp/issues/246>, or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHKAKO7CWGX2QY55UXLRVIO3ZANCNFSM4NV333RQ>\n> .\n>\n",
"Closing this one. Feel free to re-open if you have other questions !"
] | 2020-06-06T11:02:07 | 2020-07-09T09:15:07 | 2020-07-09T09:15:07 | NONE | null | null | null | For example if I want to use streamlit with a nlp dataset:
```
@st.cache
def load_data():
return nlp.load_dataset('squad')
```
This code raises the error "uncachable object"
Right now I just fixed with a constant for my specific case:
```
@st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})
```
But I was curious to know what is the best way in general
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/246/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/246/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/245/comments | https://api.github.com/repos/huggingface/datasets/issues/245/events | https://github.com/huggingface/datasets/issues/245 | 631,985,108 | MDU6SXNzdWU2MzE5ODUxMDg= | 245 | SST-2 test labels are all -1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | [
"this also happened to me with `nlp.load_dataset('glue', 'mnli')`",
"Yes, this is because the test sets for glue are hidden so the labels are\nnot publicly available. You can read the glue paper for more details.\n\nOn Sat, 6 Jun 2020 at 18:16, Jack Morris <notifications@github.com> wrote:\n\n> this also happened to me with nlp.load_datasets('glue', 'mnli')\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/nlp/issues/245#issuecomment-640083980>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHMVQD2EDX2HTZUXG5DRVJTWRANCNFSM4NVG3AKQ>\n> .\n>\n",
"Thanks @thomwolf!",
"@thomwolf shouldn't this be visible in the .info and/or in the .features?",
"It should be in the datasets card (the README.md and on the hub) in my opinion. What do you think @yjernite?",
"I checked both before I got to looking at issues, so that would be fine as well.\r\n\r\nSome additional thoughts on this: Is there a specific reason why the \"test\" split even has a \"label\" column if it isn't tagged. Shouldn't there just not be any. Seems more transparent",
"I'm a little confused with the data size.\r\n`sst2` dataset is referenced to `Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank` and the link of the dataset in the paper is https://nlp.stanford.edu/sentiment/index.html which is often shown in GLUE/SST2 reference.\r\nFrom the original data, the standard train/dev/test splits split is 6920/872/1821 for binary classification. \r\nWhy in GLUE/SST2 the train/dev/test split is 67,349/872/1,821 ? \r\n\r\n",
"> I'm a little confused with the data size.\r\n> `sst2` dataset is referenced to `Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank` and the link of the dataset in the paper is https://nlp.stanford.edu/sentiment/index.html which is often shown in GLUE/SST2 reference.\r\n> From the original data, the standard train/dev/test splits split is 6920/872/1821 for binary classification.\r\n> Why in GLUE/SST2 the train/dev/test split is 67,349/872/1,821 ?\r\n\r\nHave you figured out this problem? AFAIK, the original sst-2 dataset is totally different from the GLUE/sst-2. Do you think so?",
"@yc1999 Sorry, I didn't solve this conflict. In the end, I just use a local data file provided by the previous work I followed(for consistent comparison), not use `datasets` package.\r\n\r\nRelated information: https://github.com/thunlp/OpenAttack/issues/146#issuecomment-766323571",
"@yc1999 I find that the original SST-2 dataset (6,920/872/1,821) can be loaded from https://huggingface.co/datasets/gpt3mix/sst2 or built with SST data and the scripts in https://github.com/prrao87/fine-grained-sentiment/tree/master/data/sst.\r\nThe GLUE/SST-2 dataset (67,349/872/1,821) should be a completely different version.\r\n"
] | 2020-06-05T21:41:42 | 2021-12-08T00:47:32 | 2020-06-06T16:56:41 | CONTRIBUTOR | null | null | null | I'm trying to test a model on the SST-2 task, but all the labels I see in the test set are -1.
```
>>> import nlp
>>> glue = nlp.load_dataset('glue', 'sst2')
>>> glue
{'train': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 67349), 'validation': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 872), 'test': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 1821)}
>>> list(l['label'] for l in glue['test'])
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/245/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/245/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/244/comments | https://api.github.com/repos/huggingface/datasets/issues/244/events | https://github.com/huggingface/datasets/pull/244 | 631,869,155 | MDExOlB1bGxSZXF1ZXN0NDI4NjgxMTcx | 244 | Add Allociné Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/37028092?v=4",
"events_url": "https://api.github.com/users/TheophileBlard/events{/privacy}",
"followers_url": "https://api.github.com/users/TheophileBlard/followers",
"following_url": "https://api.github.com/users/TheophileBlard/following{/other_user}",
"gists_url": "https://api.github.com/users/TheophileBlard/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TheophileBlard",
"id": 37028092,
"login": "TheophileBlard",
"node_id": "MDQ6VXNlcjM3MDI4MDky",
"organizations_url": "https://api.github.com/users/TheophileBlard/orgs",
"received_events_url": "https://api.github.com/users/TheophileBlard/received_events",
"repos_url": "https://api.github.com/users/TheophileBlard/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TheophileBlard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheophileBlard/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TheophileBlard"
} | [] | closed | false | null | [] | null | [
"great work @TheophileBlard ",
"LGTM, thanks a lot for adding dummy data tests :-) Was it difficult to create the correct dummy data folder? ",
"It was pretty easy actually. Documentation is on point !"
] | 2020-06-05T19:19:26 | 2020-06-11T07:47:26 | 2020-06-11T07:47:26 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/244",
"merged_at": "2020-06-11T07:47:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/244"
} | This is a french binary sentiment classification dataset, which was used to train this model: https://huggingface.co/tblard/tf-allocine.
Basically, it's a french "IMDB" dataset, with more reviews.
More info on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/244/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/244/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/243/comments | https://api.github.com/repos/huggingface/datasets/issues/243/events | https://github.com/huggingface/datasets/pull/243 | 631,735,848 | MDExOlB1bGxSZXF1ZXN0NDI4NTY2MTEy | 243 | Specify utf-8 encoding for GLUE | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | [
"Thanks for fixing the encoding :)"
] | 2020-06-05T16:33:00 | 2020-06-17T21:16:06 | 2020-06-08T08:42:01 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/243.diff",
"html_url": "https://github.com/huggingface/datasets/pull/243",
"merged_at": "2020-06-08T08:42:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/243.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/243"
} | #242
This makes the GLUE-MNLI dataset readable on my machine, not sure if it's a Windows-only bug. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/243/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/243/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/242/comments | https://api.github.com/repos/huggingface/datasets/issues/242/events | https://github.com/huggingface/datasets/issues/242 | 631,733,683 | MDU6SXNzdWU2MzE3MzM2ODM= | 242 | UnicodeDecodeError when downloading GLUE-MNLI | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | [
"It should be good now, thanks for noticing and fixing it ! I would say that it was because you are on windows but not 100% sure",
"On Windows Python supports Unicode almost everywhere, but one of the notable exceptions is open() where it uses the locale encoding schema. So platform independent python scripts would always set the encoding='utf-8' in calls to open explicitly. \r\nIn the meantime: since Python 3.7 Windows users can set the default encoding for everything including open() to Unicode by setting this environment variable: set PYTHONUTF8=1 (details can be found in [PEP 540](https://www.python.org/dev/peps/pep-0540/))\r\n\r\nFor me this fixed the problem described by the OP."
] | 2020-06-05T16:30:01 | 2020-06-09T16:06:47 | 2020-06-08T08:45:03 | CONTRIBUTOR | null | null | null | When I run
```python
dataset = nlp.load_dataset('glue', 'mnli')
```
I get an encoding error (could it be because I'm using Windows?) :
```python
# Lots of error log lines later...
~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\5256cc2368cf84497abef1f1a5f66648522d5854b225162148cb8fc78a5a91cc\glue.py in _generate_examples(self, data_file, split, mrpc_files)
529
--> 530 for n, row in enumerate(reader):
531 if is_cola_non_test:
~\Miniconda3\envs\nlp\lib\csv.py in __next__(self)
110 self.fieldnames
--> 111 row = next(self.reader)
112 self.line_num = self.reader.line_num
~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final)
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
24
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 6744: character maps to <undefined>
```
Anyway this can be solved by specifying UTF-8 encoding when reading the CSV file. I am proposing a PR if that's okay. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/242/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/242/timeline | null | completed | false |
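The encoding problem reported in #242 (and fixed in #243) comes down to `open()` falling back to the Windows locale codec (cp1252) instead of UTF-8. The sketch below only illustrates that kind of fix and is not the actual patch from the PR; the file name and the tab-separated layout are assumptions based on the traceback above.

```python
import csv

# Hypothetical MNLI file name, used only for illustration.
tsv_path = "dev_matched.tsv"

# Passing encoding="utf-8" explicitly avoids the platform-dependent default
# (cp1252 on Windows), which is what raised the UnicodeDecodeError above.
with open(tsv_path, encoding="utf-8", newline="") as f:
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for row in reader:
        pass  # each row now decodes the same way on every platform
```

Alternatively, as noted in the comments on #242, Python 3.7+ users on Windows can set `PYTHONUTF8=1` to make UTF-8 the default for `open()` everywhere.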
https://api.github.com/repos/huggingface/datasets/issues/241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/241/comments | https://api.github.com/repos/huggingface/datasets/issues/241/events | https://github.com/huggingface/datasets/pull/241 | 631,703,079 | MDExOlB1bGxSZXF1ZXN0NDI4NTQwMDM0 | 241 | Fix empty cache dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it shoud not be a big problem, I think",
"> Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it shoud not be a big problem, I think\r\n\r\nNo it shouldn't force to redownload"
] | 2020-06-05T15:45:22 | 2020-06-08T08:35:33 | 2020-06-08T08:35:31 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/241.diff",
"html_url": "https://github.com/huggingface/datasets/pull/241",
"merged_at": "2020-06-08T08:35:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/241.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/241"
} | If the cache dir of a dataset is empty, the dataset fails to load and throws a FileNotFoundError. We could end up with an empty cache dir because there was a line in the code that created the cache dir without using a temp dir. Using a temp dir is useful as it gets renamed to the real cache dir only if the full process is successful.
So I removed this bad line, and I also reordered things a bit to make sure that we always use a temp dir. I also added a warning if we still end up with empty cache dirs in the future.
This should fix #239
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/241/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/241/timeline | null | null | true |
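PR #241 above relies on the build-in-a-temp-dir-then-rename pattern so that an interrupted run can never leave an empty cache directory behind. The following is a generic sketch of that pattern in plain Python, not the code from the PR; the function and directory names are invented for the example.

```python
import os
import shutil
import tempfile

def prepare_cache(final_cache_dir, build_fn):
    # Build into a temporary directory created next to the real cache dir ...
    parent = os.path.dirname(final_cache_dir) or "."
    os.makedirs(parent, exist_ok=True)
    tmp_dir = tempfile.mkdtemp(
        dir=parent, prefix=os.path.basename(final_cache_dir) + ".incomplete."
    )
    try:
        build_fn(tmp_dir)  # may raise, leaving final_cache_dir untouched
        # ... and rename it only once the full process has succeeded.
        os.rename(tmp_dir, final_cache_dir)
    except BaseException:
        shutil.rmtree(tmp_dir, ignore_errors=True)
        raise
```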
https://api.github.com/repos/huggingface/datasets/issues/240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/240/comments | https://api.github.com/repos/huggingface/datasets/issues/240/events | https://github.com/huggingface/datasets/issues/240 | 631,434,677 | MDU6SXNzdWU2MzE0MzQ2Nzc= | 240 | Deterministic dataset loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [
"Yes good point !",
"I think using `sorted(glob.glob())` would actually solve this problem. Can you think of other reasons why dataset loading might not be deterministic? @mariamabarham @yjernite @lhoestq @thomwolf . \r\n\r\nI can do a sweep through the dataset scripts and fix the glob.glob() if you guys are ok with it",
"I'm pretty sure it would solve the problem too.\r\n\r\nThe only other dataset that is not deterministic right now is `blog_authorship_corpus` (see #215) but this is a problem related to string encodings.",
"I think we should do the same also for `os.list_dir`"
] | 2020-06-05T09:03:26 | 2020-06-08T09:18:14 | 2020-06-08T09:18:14 | CONTRIBUTOR | null | null | null | When calling:
```python
import nlp
dataset = nlp.load_dataset("trivia_qa", split="validation[:1%]")
```
the resulting dataset is not deterministic over different google colabs.
After talking to @thomwolf, I suspect the reason to be the use of `glob.glob` in line:
https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/datasets/trivia_qa/trivia_qa.py#L180
which seems to return an ordering of files that depends on the filesystem:
https://stackoverflow.com/questions/6773584/how-is-pythons-glob-glob-ordered
I think we should go through all the dataset scripts and make sure to have deterministic behavior.
A simple solution for `glob.glob()` would be to just replace it with `sorted(glob.glob())` to have everything sorted by name.
What do you think @lhoestq? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/240/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/240/timeline | null | completed | false |
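The `sorted(glob.glob())` fix discussed in #240 is illustrated below. The glob pattern is a made-up placeholder; the only point is that sorting removes the filesystem-dependent ordering that made split contents differ across machines.

```python
import glob

# glob.glob() returns matches in an arbitrary, filesystem-dependent order ...
files = glob.glob("evidence/wikipedia/*.json")

# ... so sorting by name makes the file order (and everything derived from it,
# such as which examples end up in "validation[:1%]") reproducible everywhere.
files = sorted(glob.glob("evidence/wikipedia/*.json"))
```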
https://api.github.com/repos/huggingface/datasets/issues/239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/239/comments | https://api.github.com/repos/huggingface/datasets/issues/239/events | https://github.com/huggingface/datasets/issues/239 | 631,340,440 | MDU6SXNzdWU2MzEzNDA0NDA= | 239 | [Creating new dataset] Not found dataset_info.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"I think you can just `rm` this directory and it should be good :)",
"@lhoestq - this seems to happen quite often (already the 2nd issue). Can we maybe delete this automatically?",
"Yes I have an idea of what's going on. I'm sure I can fix that",
"Hi, I rebase my local copy to `fix-empty-cache-dir`, and try to run again `python nlp-cli test datasets/bookcorpus --save_infos --all_configs`.\r\n\r\nI got this, \r\n```\r\nTraceback (most recent call last):\r\n File \"nlp-cli\", line 10, in <module>\r\n from nlp.commands.run_beam import RunBeamCommand\r\n File \"/home/yisiang/nlp/src/nlp/commands/run_beam.py\", line 6, in <module>\r\n import apache_beam as beam\r\nModuleNotFoundError: No module named 'apache_beam'\r\n```\r\nAnd after I installed it. I got this\r\n```\r\nFile \"/home/yisiang/nlp/src/nlp/datasets/bookcorpus/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c/bookcorpus.py\", line 88, in _split_generators\r\n downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive)\r\n File \"/home/yisiang/nlp/src/nlp/utils/download_manager.py\", line 128, in download_custom\r\n downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls)\r\n File \"/home/yisiang/nlp/src/nlp/utils/py_utils.py\", line 172, in map_nested\r\n return function(data_struct)\r\n File \"/home/yisiang/nlp/src/nlp/utils/download_manager.py\", line 126, in url_to_downloaded_path\r\n return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url))\r\n File \"/home/yisiang/miniconda3/envs/nlppr/lib/python3.7/posixpath.py\", line 80, in join\r\n a = os.fspath(a)\r\n```\r\nThe problem is when I print `self._download_config.cache_dir` using pdb, it is `None`.\r\n\r\nDid I miss something ? Or can you provide a workaround first so I can keep testing my script ?",
"I'll close this issue because I brings more reports in another issue #249 ."
] | 2020-06-05T06:15:04 | 2020-06-07T13:01:04 | 2020-06-07T13:01:04 | CONTRIBUTOR | null | null | null | Hi, I am trying to create Toronto Book Corpus. #131
I ran
`~/nlp % python nlp-cli test datasets/bookcorpus --save_infos --all_configs`
but this doesn't create `dataset_info.json`, yet the command then tries to use it:
```
INFO:nlp.load:Checking datasets/bookcorpus/bookcorpus.py for additional imports.
INFO:filelock:Lock 139795325778640 acquired on datasets/bookcorpus/bookcorpus.py.lock
INFO:nlp.load:Found main folder for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus
INFO:nlp.load:Found specific version folder for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9
INFO:nlp.load:Found script file from datasets/bookcorpus/bookcorpus.py to /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9/bookcorpus.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/bookcorpus/dataset_infos.json
INFO:nlp.load:Found metadata file for dataset datasets/bookcorpus/bookcorpus.py at /home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/datasets/bookcorpus/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9/bookcorpus.json
INFO:filelock:Lock 139795325778640 released on datasets/bookcorpus/bookcorpus.py.lock
INFO:nlp.builder:Overwrite dataset info from restored data version.
INFO:nlp.info:Loading Dataset info from /home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/commands/test.py", line 78, in run
builders.append(builder_cls(name=config.name, data_dir=self._data_dir))
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/builder.py", line 610, in __init__
super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/builder.py", line 152, in __init__
self.info = DatasetInfo.from_directory(self._cache_dir)
File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/nlp/info.py", line 157, in from_directory
with open(os.path.join(dataset_info_dir, DATASET_INFO_FILENAME), "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0/dataset_info.json'
```
btw, `ls /home/yisiang/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0/` shows me that nothing is in the directory.
I have also pushed the script to my fork [bookcorpus.py](https://github.com/richardyy1188/nlp/blob/bookcorpusdev/datasets/bookcorpus/bookcorpus.py).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/239/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/239/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/238/comments | https://api.github.com/repos/huggingface/datasets/issues/238/events | https://github.com/huggingface/datasets/issues/238 | 631,260,143 | MDU6SXNzdWU2MzEyNjAxNDM= | 238 | [Metric] Bertscore : Warning : Empty candidate sentence; Setting recall to be 0. | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | null | [] | null | [
"This print statement comes from the official implementation of bert_score (see [here](https://github.com/Tiiiger/bert_score/blob/master/bert_score/utils.py#L343)). The warning shows up only if the attention mask outputs no candidate.\r\nRight now we want to only use official code for metrics to have fair evaluations, so I'm not sure we can do anything about it. Maybe you can try to create an issue on their [repo](https://github.com/Tiiiger/bert_score) ?"
] | 2020-06-05T02:14:47 | 2020-06-29T17:10:19 | 2020-06-29T17:10:19 | NONE | null | null | null | When running BERT-Score, I'm meeting this warning :
> Warning: Empty candidate sentence; Setting recall to be 0.
Code :
```
import nlp
metric = nlp.load_metric("bertscore")
scores = metric.compute(["swag", "swags"], ["swags", "totally something different"], lang="en", device=0)
```
---
**What am I doing wrong / How can I hide this warning ?** | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/238/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/238/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/237/comments | https://api.github.com/repos/huggingface/datasets/issues/237/events | https://github.com/huggingface/datasets/issues/237 | 631,199,940 | MDU6SXNzdWU2MzExOTk5NDA= | 237 | Can't download MultiNLI | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [] | closed | false | null | [] | null | [
"You should use `load_dataset('glue', 'mnli')`",
"Thanks! I thought I had to use the same code displayed in the live viewer:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('multi_nli', 'plain_text')\r\n```\r\nYour suggestion works, even if then I got a different issue (#242). ",
"Glad it helps !\nThough I am not one of hf team, but maybe you should close this issue first."
] | 2020-06-04T23:05:21 | 2020-06-06T10:51:34 | 2020-06-06T10:51:34 | CONTRIBUTOR | null | null | null | When I try to download MultiNLI with
```python
dataset = load_dataset('multi_nli')
```
I get this long error:
```python
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-13-3b11f6be4cb9> in <module>
1 # Load a dataset and print the first examples in the training set
2 # nli_dataset = nlp.load_dataset('multi_nli')
----> 3 dataset = load_dataset('multi_nli')
4 # nli_dataset = nlp.load_dataset('multi_nli', split='validation_matched[:10%]')
5 # print(nli_dataset['train'][0])
~\Miniconda3\envs\nlp\lib\site-packages\nlp\load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
514
515 # Download and prepare data
--> 516 builder_instance.download_and_prepare(
517 download_config=download_config,
518 download_mode=download_mode,
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
417 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir):
418 verify_infos = not save_infos and not ignore_verifications
--> 419 self._download_and_prepare(
420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
421 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
455 split_dict = SplitDict(dataset_name=self.name)
456 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 457 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
458 # Checksums verification
459 if verify_infos:
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\multi_nli\60774175381b9f3f1e6ae1028229e3cdb270d50379f45b9f2c01008f50f09e6b\multi_nli.py in _split_generators(self, dl_manager)
99 def _split_generators(self, dl_manager):
100
--> 101 downloaded_dir = dl_manager.download_and_extract(
102 "http://storage.googleapis.com/tfds-data/downloads/multi_nli/multinli_1.0.zip"
103 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in download_and_extract(self, url_or_urls)
214 extracted_path(s): `str`, extracted paths of given URL(s).
215 """
--> 216 return self.extract(self.download(url_or_urls))
217
218 def get_recorded_sizes_checksums(self):
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in extract(self, path_or_paths)
194 path_or_paths.
195 """
--> 196 return map_nested(
197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
198 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
168 return tuple(mapped)
169 # Singleton
--> 170 return function(data_struct)
171
172
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in <lambda>(path)
195 """
196 return map_nested(
--> 197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
198 )
199
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
231 if is_zipfile(output_path):
232 with ZipFile(output_path, "r") as zip_file:
--> 233 zip_file.extractall(output_path_extracted)
234 zip_file.close()
235 elif tarfile.is_tarfile(output_path):
~\Miniconda3\envs\nlp\lib\zipfile.py in extractall(self, path, members, pwd)
1644
1645 for zipinfo in members:
-> 1646 self._extract_member(zipinfo, path, pwd)
1647
1648 @classmethod
~\Miniconda3\envs\nlp\lib\zipfile.py in _extract_member(self, member, targetpath, pwd)
1698
1699 with self.open(member, pwd=pwd) as source, \
-> 1700 open(targetpath, "wb") as target:
1701 shutil.copyfileobj(source, target)
1702
OSError: [Errno 22] Invalid argument: 'C:\\Users\\Python\\.cache\\huggingface\\datasets\\3e12413b8ec69f22dfcfd54a79d1ba9e7aac2e18e334bbb6b81cca64fd16bffc\\multinli_1.0\\Icon\r'
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/237/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/237/timeline | null | completed | false |
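Following the first comment on #237, MNLI is exposed as a configuration of the `glue` dataset rather than through the `multi_nli` snippet from the viewer. A minimal sketch; the split names below follow the usual MNLI convention of matched/mismatched validation sets and are an assumption about this particular loader.

```python
import nlp

# MNLI lives under the "glue" dataset as the "mnli" configuration.
mnli = nlp.load_dataset("glue", "mnli")

# MNLI ships two validation sets instead of a single "validation" split.
print(mnli["train"][0])
print(mnli["validation_matched"][0])
print(mnli["validation_mismatched"][0])
```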
https://api.github.com/repos/huggingface/datasets/issues/236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/236/comments | https://api.github.com/repos/huggingface/datasets/issues/236/events | https://github.com/huggingface/datasets/pull/236 | 631,099,875 | MDExOlB1bGxSZXF1ZXN0NDI4MDUwNzI4 | 236 | CompGuessWhat?! dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aleSuglia",
"id": 1479733,
"login": "aleSuglia",
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aleSuglia"
} | [] | closed | false | null | [] | null | [
"Hi @aleSuglia, thanks for this great PR. Indeed you can have both datasets in one file. You need to add a config class which will allows you to specify the different subdataset names and then you will be able to load them as follow.\r\nnlp.load_dataset(\"compguesswhat\", \"compguesswhat-gameplay\") \r\nnlp.load_dataset(\"compguesswhat\", \"compguesswhat-zs-gameplay\").\r\n\r\nMaybe you can refer to this file https://github.com/huggingface/nlp/blob/master/datasets/discofuse/discofuse.py",
"@mariamabarham Thanks for your suggestions. I've followed your advice and integrated the additional dataset using another `DatasetConfig` class. It looks like all tests passed. What do you think?",
"great @aleSuglia. I requested an additional review from @thomwolf @lhoestq and @patrickvonplaten @jplu . You can merge it after an approval from one of them",
"Looks great! Thanks for adding the dummy data :-) ",
"Not sure whether it's the most appropriate place but I'll ask another design question. For Vision+Language dataset, is very common to have visual features associated with each example. At the moment, for instance, I'm only integrating the image identifier so that people can later on lookup the image features during training. Do you recommend this approach or do you think it should be done in a different way?\r\n\r\nThank you for your answer!",
"Hi @aleSuglia your remark on the visual features is a good point.\r\n\r\nWe haven't started to dive deeply into how CV datasets are usually structured (cc @sgugger)\r\n\r\nDo you have a pointer to how visual features are currently loaded and accessed by people using GuessCompWhat? ",
"@thomwolf As far as I know, people using Language+Vision tasks they typically have their reference dataset (either in JSON or JSONL format) and for each example in it they have an identifier that specifies the reference image. Currently, images are represented by either pooling-based visual features (average pooling of ResNet or VGGNet features, see [DeVries et.al, 2017](https://arxiv.org/abs/1611.08481), [Shekhar et.al, 2019](https://www.aclweb.org/anthology/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et.al, 2015](https://arxiv.org/abs/1502.03044)). A more common and recent option, especially with large-scale multi-modal transformers [Li et. al, 2019](https://arxiv.org/abs/1908.03557), is to use FastRCNN features. \r\n\r\nFor all these types of features, people use either HD5F or NumPy compressed representations. In my personal projects, I've ditched altogether HD5F because it doesn't have out-of-the-box support for multi-processing (unless you have an ad-hoc installation of it). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it (see [numpy.savez](https://numpy.org/doc/stable/reference/generated/numpy.savez.html)). However, I believe that Apache Arrow would be a really good fit for this type of features. \r\n\r\nLooking forward to hearing your thoughts about it!",
"Awesome work on this one thanks :)",
"@thomwolf I was thinking that I should create an issue regarding the visual features so that we can keep track of it for future work. I think it would be great to have it in NLP and I'll be happy to contribute. Let me know what you think :) "
] | 2020-06-04T19:45:50 | 2020-06-11T09:43:42 | 2020-06-11T07:45:21 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/236.diff",
"html_url": "https://github.com/huggingface/datasets/pull/236",
"merged_at": "2020-06-11T07:45:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/236.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/236"
} | Hello,
Thanks for the amazing library that you put together. I'm Alessandro Suglia, the first author of CompGuessWhat?!, a recently released dataset for grounded language learning accepted to ACL 2020 ([https://compguesswhat.github.io](https://compguesswhat.github.io)).
This pull-request adds the CompGuessWhat?! splits that have been extracted from the original dataset. This is only part of our evaluation framework because there is also an additional split of the dataset that has a completely different set of games. I didn't integrate it yet because I didn't know what would be the best practice in this case. Let me clarify the scenario.
In our paper, we have a main dataset (let's call it `compguesswhat-gameplay`) and a zero-shot dataset (let's call it `compguesswhat-zs-gameplay`). In the current code of the pull-request, I have only integrated `compguesswhat-gameplay`. I was thinking that it would be nice to have the `compguesswhat-zs-gameplay` in the same dataset class by simply specifying some particular option to the `nlp.load_dataset()` factory. For instance:
```python
cgw = nlp.load_dataset("compguesswhat")
cgw_zs = nlp.load_dataset("compguesswhat", zero_shot=True)
```
The other option would be to have a separate dataset class. Any preferences? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/236/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/236/timeline | null | null | true |
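The config-class approach suggested in the review of #236 can be sketched roughly as below. The class names, version number, and the `zero_shot` attribute are illustrative assumptions, not the code that was actually merged; a real script also needs `_info`, `_split_generators`, and `_generate_examples`.

```python
import nlp

class CompguesswhatConfig(nlp.BuilderConfig):
    """Hypothetical config carrying a gameplay / zero-shot gameplay flag."""

    def __init__(self, zero_shot=False, **kwargs):
        super().__init__(**kwargs)
        self.zero_shot = zero_shot

class Compguesswhat(nlp.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        CompguesswhatConfig(
            name="compguesswhat-gameplay", version=nlp.Version("1.0.0"), zero_shot=False
        ),
        CompguesswhatConfig(
            name="compguesswhat-zs-gameplay", version=nlp.Version("1.0.0"), zero_shot=True
        ),
    ]
    # _info / _split_generators / _generate_examples would branch on
    # self.config.zero_shot (or self.config.name) to pick the right files.
```

Users would then pick a subdataset with `nlp.load_dataset("compguesswhat", "compguesswhat-gameplay")` or `nlp.load_dataset("compguesswhat", "compguesswhat-zs-gameplay")`, matching the suggestion in the first review comment.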
https://api.github.com/repos/huggingface/datasets/issues/235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/235/comments | https://api.github.com/repos/huggingface/datasets/issues/235/events | https://github.com/huggingface/datasets/pull/235 | 630,952,297 | MDExOlB1bGxSZXF1ZXN0NDI3OTM1MjQ0 | 235 | Add experimental datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [
"I think it would be nicer to not create a new folder `datasets_experimental` , but just put your datasets also into the folder `datasets` for the following reasons:\r\n\r\n- From my point of view, the datasets are not very different from the other datasets (assuming that we soon have C4, and the beam datasets) so I don't see why we require a new dataset folder\r\n\r\n- I'm not a big fan of adding a boolean flag to the `load_dataset()` function that basically switches between folder names on S3. The user has to know whether a dataset script is experimental or not. User installing nlp with pip won't see that there are folders called `datasets` and `datasets_experimental`\r\n\r\n- If we do this just to bypass the test, I think a good solution could be: For all tests that are too complicated to be currently tested with the testing framework, we can add a class variable called `do_test = False` to the dataset builder class and a default `do_test = True` to the abstract dataset class and skip all tests that have that variable in the dataset test framework similar to what is done to beam datasets: https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/tests/test_dataset_common.py#L79 \r\nWe can also print a warning for all dataset tests having `do_test = False`. This variable would only concern testing and we would not have a problem removing it at a later stage IMO.\r\n\r\n- This way the datascripts are backward compatible and can be used with earlier versions of `nlp` (not that this matters too much atm) \r\n\r\nWhat is your opinion on this @lhoestq @thomwolf ?",
"Very cool to have add those datasets :)\r\nI understand that making the dummy data for this case is not fun. I'm sure we'll be able to add them soon. For now it's still interesting to have them in the library, even if we can't test all the code with dummy data.\r\n\r\nI like the idea of the `do_tests=False` class variable. \r\nHowever it would be cool to test at least that we can load the module and instantiate the builder (only ignore the dummy data test for now). In that case a better name could be `test_dummy_data=False` or something like that.\r\n\r\nIf we want to be picky we can also add a warning in `_download_and_prepare` to tell the user that datasets with `test_dummy_data=False` are still experimental.",
"Yeah I really like the idea of a partial test.\r\n\r\nMy main concern with the class variable is visibility, but having a warning would help with that. Maybe even get the user to agree > \"are you sure you want to go ahead?\"",
"> Very cool to have add those datasets :)\r\n> I understand that making the dummy data for this case is not fun. I'm sure we'll be able to add them soon. For now it's still interesting to have them in the library, even if we can't test all the code with dummy data.\r\n> \r\n> I like the idea of the `do_tests=False` class variable.\r\n> However it would be cool to test at least that we can load the module and instantiate the builder (only ignore the dummy data test for now). In that case a better name could be `test_dummy_data=False` or something like that.\r\n> \r\n> If we want to be picky we can also add a warning in `_download_and_prepare` to tell the user that datasets with `test_dummy_data=False` are still experimental.\r\n\r\n`test_dummy_data=False` sounds good to me!",
"There we go: added a `test_dummy_data` class variable that is `False` by default for the `BeamBasedBuilder` and `True` for everyone else (except the new `explainlikeimfive` and `wiki_snippets`)\r\n\r\nNote that `wiki_snippets` should become obsolete as soon as @lhoestq adds in the `IndexedDataset` class",
"Great! LGTM!"
] | 2020-06-04T15:54:56 | 2020-06-12T15:38:55 | 2020-06-12T15:38:55 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/235.diff",
"html_url": "https://github.com/huggingface/datasets/pull/235",
"merged_at": "2020-06-12T15:38:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/235.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/235"
} | ## Adding an *experimental datasets* folder
After using the 🤗nlp library for some time, I find that while it makes it super easy to create new memory-mapped datasets with lots of cool utilities, a lot of what I want to do doesn't work well with the current `MockDownloader` based testing paradigm, making it hard to share my work with the community.
My suggestion would be to add a **datasets\_experimental** folder so we can start making these new datasets public without having to completely re-think testing for every single one. We would allow contributors to submit dataset PRs in this folder, but require an explanation for why the current testing suite doesn't work for them. We can then aggregate the feedback and periodically see what's missing from the current tests.
I have added a **datasets\_experimental** folder to the repository and S3 bucket with two initial datasets: ELI5 (explainlikeimfive) and a Wikipedia Snippets dataset to support indexing (wiki\_snippets)
### ELI5
#### Dataset description
This allows people to download the [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190) dataset, along with two variants based on the r/askscience and r/AskHistorians. Full Reddit dumps for each month are downloaded from [pushshift](https://files.pushshift.io/reddit/), filtered for submissions and comments from the desired subreddits, then deleted one at a time to save space. The resulting dataset is split into a training, validation, and test dataset for r/explainlikeimfive, r/askscience, and r/AskHistorians respectively, where each item is a question along with all of its high scoring answers.
#### Issues with the current testing
1. the list of files to be downloaded is not pre-defined, but rather determined by parsing an index web page at run time. This is necessary as the name and compression type of the dump files changes from month to month as the pushshift website is maintained. Currently, the dummy folder requires the user to know which files will be downloaded.
2. to save time, the script works on the compressed files using the corresponding python packages rather than first running `download\_and\_extract` then filtering the extracted files.
### Wikipedia Snippets
#### Dataset description
This script creates a *snippets* version of a source Wikipedia dataset: each article is split into passages of fixed length which can then be indexed using ElasticSearch or a dense indexer. The script currently handles all **wikipedia** and **wiki40b** source datasets, and allows the user to choose the passage length and how much overlap they want across passages. In addition to the passage text, each snippet also has the article title, list of titles of sections covered by the text, and information to map the passage back to the initial dataset at the paragraph and character level.
#### Issues with the current testing
1. The DatasetBuilder needs to call `nlp.load_dataset()`. Currently, testing is not recursive (the test doesn't know where to find the dummy data for the source dataset)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/235/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/235/timeline | null | null | true |
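A rough illustration of the `test_dummy_data` opt-out agreed on in #235. The attribute name comes from the discussion above, but the builder class and the skipping helper are simplified assumptions about how the test suite might use it, not the real testing code.

```python
import nlp

class MyExperimentalDataset(nlp.GeneratorBasedBuilder):
    # Opt out of the dummy-data test until proper dummy data can be provided.
    test_dummy_data = False

def maybe_run_dummy_data_test(builder_cls):
    # Assumed shape of the test-side check: skip (with a warning) any builder
    # that explicitly sets test_dummy_data = False.
    if not getattr(builder_cls, "test_dummy_data", True):
        print(f"Skipping dummy data test for {builder_cls.__name__} (still experimental).")
        return
    # ... otherwise run the usual MockDownloadManager-based dummy data test ...
```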
https://api.github.com/repos/huggingface/datasets/issues/234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/234/comments | https://api.github.com/repos/huggingface/datasets/issues/234/events | https://github.com/huggingface/datasets/issues/234 | 630,534,427 | MDU6SXNzdWU2MzA1MzQ0Mjc= | 234 | Huggingface NLP, Uploading custom dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42269506?v=4",
"events_url": "https://api.github.com/users/Nouman97/events{/privacy}",
"followers_url": "https://api.github.com/users/Nouman97/followers",
"following_url": "https://api.github.com/users/Nouman97/following{/other_user}",
"gists_url": "https://api.github.com/users/Nouman97/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nouman97",
"id": 42269506,
"login": "Nouman97",
"node_id": "MDQ6VXNlcjQyMjY5NTA2",
"organizations_url": "https://api.github.com/users/Nouman97/orgs",
"received_events_url": "https://api.github.com/users/Nouman97/received_events",
"repos_url": "https://api.github.com/users/Nouman97/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nouman97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nouman97/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nouman97"
} | [] | closed | false | null | [] | null | [
"What do you mean 'custom' ? You may want to elaborate on it when ask a question.\r\n\r\nAnyway, there are two things you may interested\r\n`nlp.Dataset.from_file` and `load_dataset(..., cache_dir=)`",
"To load a dataset you need to have a script that defines the format of the examples, the splits and the way to generate examples. As your dataset has the same format of squad, you can just copy the squad script (see the [datasets](https://github.com/huggingface/nlp/tree/master/datasets) forlder) and just replace the url to load the data to your local or remote path.\r\n\r\nThen what you can do is `load_dataset(<path/to/your/script>)`",
"Also if you want to upload your script, you should be able to use the `nlp-cli`.\r\n\r\nUnfortunately the upload feature was not shipped in the latest version 0.2.0. so right now you can either clone the repo to use it or wait for the next release. We will add some docs to explain how to upload datasets.\r\n",
"Since the latest release 0.2.1 you can use \r\n```bash\r\nnlp-cli upload_dataset <path/to/dataset>\r\n```\r\nwhere `<path/to/dataset>` is a path to a folder containing your script (ex: `squad.py`).\r\nThis will upload the script under your namespace on our S3.\r\n\r\nOptionally the folder can also contain `dataset_infos.json` generated using\r\n```bash\r\nnlp-cli test <path/to/dataset> --all_configs --save_infos\r\n```\r\n\r\nThen you should be able to do\r\n```python\r\nnlp.load_dataset(\"my_namespace/dataset_name\")\r\n```"
] | 2020-06-04T05:59:06 | 2020-07-06T09:33:26 | 2020-07-06T09:33:26 | NONE | null | null | null | Hello,
Does anyone know how we can load our custom dataset using the nlp.load command? Let's say that I have a dataset based on the same format as squad-v1.1; how am I supposed to load it using huggingface nlp?
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/234/timeline | null | completed | false |
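The workflow described in the comments on #234 (copy an existing script such as `squad.py`, point it at your own SQuAD-formatted files, then load it by path) might look like the sketch below; the path is a placeholder.

```python
import nlp

# "./my_squad_like_dataset" holds a copy of squad.py whose download URLs were
# replaced with local or remote paths to your own SQuAD-formatted files.
dataset = nlp.load_dataset("./my_squad_like_dataset")

print(dataset["train"][0])
```

Uploading the same folder with `nlp-cli upload_dataset <path/to/dataset>` then makes it loadable as `nlp.load_dataset("my_namespace/dataset_name")`, as described in the last comment.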
https://api.github.com/repos/huggingface/datasets/issues/233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/233/comments | https://api.github.com/repos/huggingface/datasets/issues/233/events | https://github.com/huggingface/datasets/issues/233 | 630,432,132 | MDU6SXNzdWU2MzA0MzIxMzI= | 233 | Fail to download c4 english corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4",
"events_url": "https://api.github.com/users/donggyukimc/events{/privacy}",
"followers_url": "https://api.github.com/users/donggyukimc/followers",
"following_url": "https://api.github.com/users/donggyukimc/following{/other_user}",
"gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/donggyukimc",
"id": 16605764,
"login": "donggyukimc",
"node_id": "MDQ6VXNlcjE2NjA1NzY0",
"organizations_url": "https://api.github.com/users/donggyukimc/orgs",
"received_events_url": "https://api.github.com/users/donggyukimc/received_events",
"repos_url": "https://api.github.com/users/donggyukimc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/donggyukimc"
} | [] | closed | false | null | [] | null | [
"Hello ! Thanks for noticing this bug, let me fix that.\r\n\r\nAlso for information, as specified in the changelog of the latest release, C4 currently needs to have a runtime for apache beam to work on. Apache beam is used to process this very big dataset and it can work on dataflow, spark, flink, apex, etc. You can find more info on beam datasets [here](https://github.com/huggingface/nlp/blob/master/docs/beam_dataset.md).\r\n\r\nOur goal in the future is to make available an already-processed version of C4 (as we do for wikipedia for example) so that users without apache beam runtimes can load it.",
"@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n```\r\n\r\nHere's the complete stack trace.\r\n\r\n```\r\nDownloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/devops/.cache/huggingface/datasets/c4/en/2.3.0/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7...\r\n\r\n---------------------------------------------------------------------------\r\nIsADirectoryError Traceback (most recent call last)\r\n<ipython-input-11-f622e6705e03> in <module>\r\n----> 1 dataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 964 pipeline = beam_utils.BeamPipeline(runner=beam_runner, options=beam_options,)\r\n 965 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 966 dl_manager, verify_infos=False, pipeline=pipeline,\r\n 967 ) # TODO handle verify_infos in beam datasets\r\n 968 # Run pipeline\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 516 split_dict = SplitDict(dataset_name=self.name)\r\n 517 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 518 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/datasets/c4/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7/c4.py in _split_generators(self, dl_manager, pipeline)\r\n 187 if self.config.realnewslike:\r\n 188 files_to_download[\"realnews_domains\"] = _REALNEWS_DOMAINS_URL\r\n--> 189 file_paths = dl_manager.download_and_extract(files_to_download)\r\n 190 \r\n 191 if self.config.webtextlike:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 218 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 219 \"\"\"\r\n--> 220 return self.extract(self.download(url_or_urls))\r\n 221 \r\n 222 def get_recorded_sizes_checksums(self):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download(self, url_or_urls)\r\n 156 lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n 157 
)\r\n--> 158 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n 159 return downloaded_path_or_paths\r\n 160 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in _record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)\r\n 106 flattened_downloaded_path_or_paths = flatten_nested(downloaded_path_or_paths)\r\n 107 for url, path in zip(flattened_urls_or_urls, flattened_downloaded_path_or_paths):\r\n--> 108 self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n 109 \r\n 110 def download_custom(self, url_or_urls, custom_download):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/info_utils.py in get_size_checksum_dict(path)\r\n 77 \"\"\"Compute the file size and the sha256 checksum of a file\"\"\"\r\n 78 m = sha256()\r\n---> 79 with open(path, \"rb\") as f:\r\n 80 for chunk in iter(lambda: f.read(1 << 20), b\"\"):\r\n 81 m.update(chunk)\r\n\r\nIsADirectoryError: [Errno 21] Is a directory: '/'\r\n\r\n```\r\n\r\nCan anyone please try to see what I am doing wrong or is this a bug?",
"I have the same problem as @prashant-kikani",
"Looks like a bug in the dataset script, can you open an issue ?",
"I see the same issue as @prashant-kikani. I'm using `datasets` version 1.2.0 to download C4."
] | 2020-06-04T01:06:38 | 2021-01-08T07:17:32 | 2020-06-08T09:16:59 | NONE | null | null | null | I run the following code to download the C4 English corpus.
```
dataset = nlp.load_dataset('c4', 'en', beam_runner='DirectRunner'
, data_dir='/mypath')
```
and I met the following failure:
```
Downloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/adam/.cache/huggingface/datasets/c4/en/2.3.0...
Traceback (most recent call last):
File "download_corpus.py", line 38, in <module>
, data_dir='/home/adam/data/corpus/en/c4')
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset
save_infos=save_infos,
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 420, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 816, in _download_and_prepare
dl_manager, verify_infos=False, pipeline=pipeline,
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 457, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/datasets/c4/f545de9f63300d8d02a6795e2eb34e140c47e62a803f572ac5599e170ee66ecc/c4.py", line 175, in _split_generators
dl_manager.download_checksums(_CHECKSUMS_URL)
AttributeError: 'DownloadManager' object has no attribute 'download_checksums
```
can i get any advice? | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/233/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/232/comments | https://api.github.com/repos/huggingface/datasets/issues/232/events | https://github.com/huggingface/datasets/pull/232 | 630,029,568 | MDExOlB1bGxSZXF1ZXN0NDI3MjI5NDcy | 232 | Nlp cli fix endpoints | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"LGTM 👍 "
] | 2020-06-03T14:10:39 | 2020-06-08T09:02:58 | 2020-06-08T09:02:57 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/232.diff",
"html_url": "https://github.com/huggingface/datasets/pull/232",
"merged_at": "2020-06-08T09:02:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/232.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/232"
} | With this PR users will be able to upload their own datasets and metrics.
As mentioned in #181, I had to use the new endpoints and revert the use of dataclasses (just in case we have changes in the API in the future).
We now distinguish commands for datasets and commands for metrics:
```bash
nlp-cli upload_dataset <path/to/dataset>
nlp-cli upload_metric <path/to/metric>
nlp-cli s3_datasets {rm, ls}
nlp-cli s3_metrics {rm, ls}
```
Does it sound good to you @julien-c @thomwolf ? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/232/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/232/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/231/comments | https://api.github.com/repos/huggingface/datasets/issues/231/events | https://github.com/huggingface/datasets/pull/231 | 629,988,694 | MDExOlB1bGxSZXF1ZXN0NDI3MTk3MTcz | 231 | Add .download to MockDownloadManager | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-06-03T13:20:00 | 2020-06-03T14:25:56 | 2020-06-03T14:25:55 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/231",
"merged_at": "2020-06-03T14:25:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/231"
} | One method from the DownloadManager was missing and some users couldn't run the tests because of that.
@yjernite | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/231/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/230/comments | https://api.github.com/repos/huggingface/datasets/issues/230/events | https://github.com/huggingface/datasets/pull/230 | 629,983,684 | MDExOlB1bGxSZXF1ZXN0NDI3MTkzMTQ0 | 230 | Don't force to install apache beam for wikipedia dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-06-03T13:13:07 | 2020-06-03T14:34:09 | 2020-06-03T14:34:07 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/230.diff",
"html_url": "https://github.com/huggingface/datasets/pull/230",
"merged_at": "2020-06-03T14:34:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/230.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/230"
} | As pointed out in #227, we shouldn't force users to install apache beam if the processed dataset can be downloaded. I moved the imports of some datasets to avoid this problem | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/230/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/230/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/229/comments | https://api.github.com/repos/huggingface/datasets/issues/229/events | https://github.com/huggingface/datasets/pull/229 | 629,956,490 | MDExOlB1bGxSZXF1ZXN0NDI3MTcxMzc5 | 229 | Rename dataset_infos.json to dataset_info.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4",
"events_url": "https://api.github.com/users/aswin-giridhar/events{/privacy}",
"followers_url": "https://api.github.com/users/aswin-giridhar/followers",
"following_url": "https://api.github.com/users/aswin-giridhar/following{/other_user}",
"gists_url": "https://api.github.com/users/aswin-giridhar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aswin-giridhar",
"id": 11817160,
"login": "aswin-giridhar",
"node_id": "MDQ6VXNlcjExODE3MTYw",
"organizations_url": "https://api.github.com/users/aswin-giridhar/orgs",
"received_events_url": "https://api.github.com/users/aswin-giridhar/received_events",
"repos_url": "https://api.github.com/users/aswin-giridhar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aswin-giridhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aswin-giridhar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aswin-giridhar"
} | [] | closed | false | null | [] | null | [
"\r\nThis was actually the right name. `dataset_infos.json` is used to have the infos of all the dataset configurations.\r\n\r\nOn the other hand `dataset_info.json` (without 's') is a cache file with the info of one specific configuration.\r\n\r\nTo fix #228, we probably just have to clear and reload the nlp-viewer cache."
] | 2020-06-03T12:31:44 | 2020-06-03T12:52:54 | 2020-06-03T12:48:33 | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/229.diff",
"html_url": "https://github.com/huggingface/datasets/pull/229",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/229.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/229"
} | As the file required for the viewing in the live nlp viewer is named as dataset_info.json | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/229/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/229/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/228/comments | https://api.github.com/repos/huggingface/datasets/issues/228/events | https://github.com/huggingface/datasets/issues/228 | 629,952,402 | MDU6SXNzdWU2Mjk5NTI0MDI= | 228 | Not able to access the XNLI dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4",
"events_url": "https://api.github.com/users/aswin-giridhar/events{/privacy}",
"followers_url": "https://api.github.com/users/aswin-giridhar/followers",
"following_url": "https://api.github.com/users/aswin-giridhar/following{/other_user}",
"gists_url": "https://api.github.com/users/aswin-giridhar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aswin-giridhar",
"id": 11817160,
"login": "aswin-giridhar",
"node_id": "MDQ6VXNlcjExODE3MTYw",
"organizations_url": "https://api.github.com/users/aswin-giridhar/orgs",
"received_events_url": "https://api.github.com/users/aswin-giridhar/received_events",
"repos_url": "https://api.github.com/users/aswin-giridhar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aswin-giridhar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aswin-giridhar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aswin-giridhar"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srush",
"id": 35882,
"login": "srush",
"node_id": "MDQ6VXNlcjM1ODgy",
"organizations_url": "https://api.github.com/users/srush/orgs",
"received_events_url": "https://api.github.com/users/srush/received_events",
"repos_url": "https://api.github.com/users/srush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srush"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4",
"events_url": "https://api.github.com/users/srush/events{/privacy}",
"followers_url": "https://api.github.com/users/srush/followers",
"following_url": "https://api.github.com/users/srush/following{/other_user}",
"gists_url": "https://api.github.com/users/srush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srush",
"id": 35882,
"login": "srush",
"node_id": "MDQ6VXNlcjM1ODgy",
"organizations_url": "https://api.github.com/users/srush/orgs",
"received_events_url": "https://api.github.com/users/srush/received_events",
"repos_url": "https://api.github.com/users/srush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srush"
}
] | null | [
"Added pull request to change the name of the file from dataset_infos.json to dataset_info.json",
"Thanks for reporting this bug !\r\nAs it seems to be just a cache problem, I closed your PR.\r\nI think we might just need to clear and reload the `xnli` cache @srush ? ",
"Update: The dataset_info.json error is gone, but we have a new one instead:\r\n```\r\nConnectionError: Couldn't reach https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip\r\n```\r\nI am not able to reproduce on my side unfortunately. Any idea @srush ?",
"xnli is now properly shown in the viewer.\r\nClosing this one."
] | 2020-06-03T12:25:14 | 2020-07-17T17:44:22 | 2020-07-17T17:44:22 | NONE | null | null | null | When I try to access the XNLI dataset, I get the following error. The option of plain_text get selected automatically and then I get the following error.
```
FileNotFoundError: [Errno 2] No such file or directory: '/home/sasha/.cache/huggingface/datasets/xnli/plain_text/1.0.0/dataset_info.json'
Traceback:
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp_viewer/run.py", line 86, in <module>
dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp_viewer/run.py", line 72, in get
builder_instance = builder_cls(name=conf)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/builder.py", line 610, in __init__
super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/builder.py", line 152, in __init__
self.info = DatasetInfo.from_directory(self._cache_dir)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/info.py", line 157, in from_directory
with open(os.path.join(dataset_info_dir, DATASET_INFO_FILENAME), "r") as f:
```
Is it possible to see if the dataset_info.json is correctly placed? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/228/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/228/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/227/comments | https://api.github.com/repos/huggingface/datasets/issues/227/events | https://github.com/huggingface/datasets/issues/227 | 629,845,704 | MDU6SXNzdWU2Mjk4NDU3MDQ= | 227 | Should we still have to force to install apache_beam to download wikipedia ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/richarddwang",
"id": 17963619,
"login": "richarddwang",
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/richarddwang"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Thanks for your message 😊 \r\nIndeed users shouldn't have to install those dependencies",
"Got it, feel free to close this issue when you think it’s resolved.",
"It should be good now :)"
] | 2020-06-03T09:33:20 | 2020-06-03T15:25:41 | 2020-06-03T15:25:41 | CONTRIBUTOR | null | null | null | Hi, first thanks to @lhoestq 's revolutionary work, I successfully downloaded processed wikipedia according to the doc. 😍😍😍
But at the first try, it tell me to install `apache_beam` and `mwparserfromhell`, which I thought wouldn't be used according to #204 , it was kind of confusing me at that time.
Maybe we should not force users to install these ? Or we just add them to`nlp`'s dependency ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/227/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/226/comments | https://api.github.com/repos/huggingface/datasets/issues/226/events | https://github.com/huggingface/datasets/pull/226 | 628,344,520 | MDExOlB1bGxSZXF1ZXN0NDI1OTA0MjEz | 226 | add BlendedSkillTalk dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariamabarham",
"id": 38249783,
"login": "mariamabarham",
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariamabarham"
} | [] | closed | false | null | [] | null | [
"Awesome :D"
] | 2020-06-01T10:54:45 | 2020-06-03T14:37:23 | 2020-06-03T14:37:22 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/226.diff",
"html_url": "https://github.com/huggingface/datasets/pull/226",
"merged_at": "2020-06-03T14:37:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/226.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/226"
} | This PR add the BlendedSkillTalk dataset, which is used to fine tune the blenderbot. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/226/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/225/comments | https://api.github.com/repos/huggingface/datasets/issues/225/events | https://github.com/huggingface/datasets/issues/225 | 628,083,366 | MDU6SXNzdWU2MjgwODMzNjY= | 225 | [ROUGE] Different scores with `files2rouge` | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [
{
"color": "d722e8",
"default": false,
"description": "Discussions on the metrics",
"id": 2067400959,
"name": "Metric discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwOTU5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
}
] | null | [
"@Colanim unfortunately there are different implementations of the ROUGE metric floating around online which yield different results, and we had to chose one for the package :) We ended up including the one from the google-research repository, which does minimal post-processing before computing the P/R/F scores. If I recall correctly, files2rouge relies on the Perl, script, which among other things normalizes all numbers to a special token: in the case you presented, this should account for a good chunk of the difference.\r\n\r\nWe may end up adding in more versions of the metric, but probably not for a while (@lhoestq correct me if I'm wrong). However, feel free to take a stab at adding it in yourself and submitting a PR if you're interested!",
"Thank you for your kind answer.\r\n\r\nAs a side question : Isn't it better to have a package that normalize more ?\r\n\r\nI understand to idea of having a package that does minimal post-processing for transparency.\r\n\r\nBut it means that people using different architecture (with different tokenizers for example) will have difference in ROUGE scores even if their predictions are actually similar. \r\nThe goal of `nlp` is to have _one package to rule them all_, right ?\r\n\r\nI will look into it but I'm not sure I have the required skill for this ^^ ",
"You're right, there's a pretty interesting trade-off here between robustness and sensitivity :) The flip side of your argument is that we also still want the metric to be sensitive to model mistakes. How we think about number normalization for example has evolved a fair bit since the Perl script was written: at the time, ROUGE was used mostly to evaluate short-medium text summarization systems, where there were only a few numbers in the input and it was assumed that the most popular methods in use at the time would get those right. However, as your example showcases, that assumption does not hold any more, and we do want to be able to penalize a model that generates a wrong numerical value.\r\n\r\nAlso, we think that abstracting away tokenization differences is the role of the model/tokenizer: if you use the 🤗Tokenizers library for example, it will handle that for you ;)\r\n\r\nFinally, there is a lot of active research on developing model-powered metrics that are both more sensitive and more robust than ROUGE. Check out for example BERTscore, which is implemented in this library!"
] | 2020-06-01T00:50:36 | 2020-06-03T15:27:18 | 2020-06-03T15:27:18 | NONE | null | null | null | It seems that the ROUGE score of `nlp` is lower than the one of `files2rouge`.
Here is a self-contained notebook to reproduce both scores : https://colab.research.google.com/drive/14EyAXValB6UzKY9x4rs_T3pyL7alpw_F?usp=sharing
---
`nlp` : (Only mid F-scores)
>rouge1 0.33508031962733364
rouge2 0.14574333776191592
rougeL 0.2321187823256159
`files2rouge` :
>Running ROUGE...
===========================
1 ROUGE-1 Average_R: 0.48873 (95%-conf.int. 0.41192 - 0.56339)
1 ROUGE-1 Average_P: 0.29010 (95%-conf.int. 0.23605 - 0.34445)
1 ROUGE-1 Average_F: 0.34761 (95%-conf.int. 0.29479 - 0.39871)
===========================
1 ROUGE-2 Average_R: 0.20280 (95%-conf.int. 0.14969 - 0.26244)
1 ROUGE-2 Average_P: 0.12772 (95%-conf.int. 0.08603 - 0.17752)
1 ROUGE-2 Average_F: 0.14798 (95%-conf.int. 0.10517 - 0.19240)
===========================
1 ROUGE-L Average_R: 0.32960 (95%-conf.int. 0.26501 - 0.39676)
1 ROUGE-L Average_P: 0.19880 (95%-conf.int. 0.15257 - 0.25136)
1 ROUGE-L Average_F: 0.23619 (95%-conf.int. 0.19073 - 0.28663)
---
When using longer predictions/gold, the difference is bigger.
**How can I reproduce same score as `files2rouge` ?**
@lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/225/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/225/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/224/comments | https://api.github.com/repos/huggingface/datasets/issues/224/events | https://github.com/huggingface/datasets/issues/224 | 627,791,693 | MDU6SXNzdWU2Mjc3OTE2OTM= | 224 | [Feature Request/Help] BLEURT model -> PyTorch | {
"avatar_url": "https://avatars.githubusercontent.com/u/6889910?v=4",
"events_url": "https://api.github.com/users/adamwlev/events{/privacy}",
"followers_url": "https://api.github.com/users/adamwlev/followers",
"following_url": "https://api.github.com/users/adamwlev/following{/other_user}",
"gists_url": "https://api.github.com/users/adamwlev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adamwlev",
"id": 6889910,
"login": "adamwlev",
"node_id": "MDQ6VXNlcjY4ODk5MTA=",
"organizations_url": "https://api.github.com/users/adamwlev/orgs",
"received_events_url": "https://api.github.com/users/adamwlev/received_events",
"repos_url": "https://api.github.com/users/adamwlev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adamwlev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamwlev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adamwlev"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
}
] | null | [
"Is there any update on this? \r\n\r\nThanks!",
"Hitting this error when using bleurt with PyTorch ...\r\n\r\n```\r\nUnrecognizedFlagError: Unknown command line flag 'f'\r\n```\r\n... and I'm assuming because it was built for TF specifically. Is there a way to use this metric in PyTorch?",
"We currently provide a wrapper on the TensorFlow implementation: https://huggingface.co/metrics/bleurt\r\n\r\nWe have long term plans to better handle model-based metrics, but they probably won't be implemented right away\r\n\r\n@adamwlev it would still be cool to add the BLEURT checkpoints to the transformers repo if you're interested, but that would best be discussed there :) \r\n\r\nclosing for now",
"Hi there. We ran into the same problem this year (converting BLEURT to PyTorch) and thanks to @adamwlev found his colab notebook which didn't work but served as a good starting point. Finally, we **made it work** by doing just two simple conceptual fixes: \r\n\r\n1. Transposing 'kernel' layers instead of 'dense' ones when copying params from the original model;\r\n2. Taking pooler_output as a cls_state in forward function of the BleurtModel class.\r\n\r\nPlus few minor syntactical fixes for the outdated parts. The result is still not exactly the same, but is very close to the expected one (1.0483 vs 1.0474).\r\n\r\nFind the fixed version here (fixes are commented): https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing \r\n",
"I created a new model based on `transformers` that can load every BLEURT checkpoints released so far. https://github.com/lucadiliello/bleurt-pytorch",
"@LoraIpsum Thanks for sharing your work here. However, I'm unable to reproduce the results. That's strange because you are. FYI, I am trying to convert a finetuned BLEURT to PyTorch. Any suggestions on how I can reproduce results?"
] | 2020-05-30T18:30:40 | 2023-08-26T17:38:48 | 2021-01-04T09:53:32 | NONE | null | null | null | Hi, I am interested in porting google research's new BLEURT learned metric to PyTorch (because I wish to do something experimental with language generation and backpropping through BLEURT). I noticed that you guys don't have it yet so I am partly just asking if you plan to add it (@thomwolf said you want to do so on Twitter).
I had a go of just like manually using the checkpoint that they publish which includes the weights. It seems like the architecture is exactly aligned with the out-of-the-box BertModel in transformers just with a single linear layer on top of the CLS embedding. I loaded all the weights to the PyTorch model but I am not able to get the same numbers as the BLEURT package's python api. Here is my colab notebook where I tried https://colab.research.google.com/drive/1Bfced531EvQP_CpFvxwxNl25Pj6ptylY?usp=sharing . If you have any pointers on what might be going wrong that would be much appreciated!
Thank you muchly! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/224/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/224/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/223/comments | https://api.github.com/repos/huggingface/datasets/issues/223/events | https://github.com/huggingface/datasets/issues/223 | 627,683,386 | MDU6SXNzdWU2Mjc2ODMzODY= | 223 | [Feature request] Add FLUE dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/58078086?v=4",
"events_url": "https://api.github.com/users/lbourdois/events{/privacy}",
"followers_url": "https://api.github.com/users/lbourdois/followers",
"following_url": "https://api.github.com/users/lbourdois/following{/other_user}",
"gists_url": "https://api.github.com/users/lbourdois/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lbourdois",
"id": 58078086,
"login": "lbourdois",
"node_id": "MDQ6VXNlcjU4MDc4MDg2",
"organizations_url": "https://api.github.com/users/lbourdois/orgs",
"received_events_url": "https://api.github.com/users/lbourdois/received_events",
"repos_url": "https://api.github.com/users/lbourdois/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lbourdois/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lbourdois/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lbourdois"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"Hi @lbourdois, yes please share it with us",
"@mariamabarham \r\nI put all the datasets on this drive: https://1drv.ms/u/s!Ao2Rcpiny7RFinDypq7w-LbXcsx9?e=iVsEDh\r\n\r\n\r\nSome information : \r\n• For FLUE, the quote used is\r\n\r\n> @misc{le2019flaubert,\r\n> title={FlauBERT: Unsupervised Language Model Pre-training for French},\r\n> author={Hang Le and Loïc Vial and Jibril Frej and Vincent Segonne and Maximin Coavoux and Benjamin Lecouteux and Alexandre Allauzen and Benoît Crabbé and Laurent Besacier and Didier Schwab},\r\n> year={2019},\r\n> eprint={1912.05372},\r\n> archivePrefix={arXiv},\r\n> primaryClass={cs.CL}\r\n> }\r\n\r\n• The Github repo of FLUE is avaible here : https://github.com/getalp/Flaubert/tree/master/flue\r\n\r\n\r\n\r\nInformation related to the different tasks of FLUE : \r\n\r\n**1. Classification**\r\nThree dataframes are available: \r\n- Book\r\n- DVD\r\n- Music\r\nFor each of these dataframes is available a set of training and test data, and a third one containing unlabelled data.\r\n\r\nCitation : \r\n>@dataset{prettenhofer_peter_2010_3251672,\r\n author = {Prettenhofer, Peter and\r\n Stein, Benno},\r\n title = {{Webis Cross-Lingual Sentiment Dataset 2010 (Webis- \r\n CLS-10)}},\r\n month = jul,\r\n year = 2010,\r\n publisher = {Zenodo},\r\n doi = {10.5281/zenodo.3251672},\r\n url = {https://doi.org/10.5281/zenodo.3251672}\r\n}\r\n\r\n\r\n**2. Paraphrasing** \r\nFrench part of the PAWS-X dataset (https://github.com/google-research-datasets/paws).\r\nThree dataframes are available: \r\n- train\r\n- dev\r\n- test \r\n\r\nCitation : \r\n> @InProceedings{pawsx2019emnlp,\r\n> title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},\r\n> author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},\r\n> booktitle = {Proc. of EMNLP},\r\n> year = {2019}\r\n> }\r\n\r\n\r\n\r\n**3. Natural Language Inference**\r\nFrench part of the XNLI dataset (https://github.com/facebookresearch/XNLI).\r\nThree dataframes are available: \r\n- train\r\n- dev\r\n- test \r\n\r\nFor the dev and test datasets, extra columns compared to the train dataset were available so I left them in the dataframe (I didn't know if these columns could be useful for other tasks or not). \r\nIn the context of the FLUE benchmark, only the columns gold_label, sentence1 and sentence2 are useful.\r\n\r\n\r\nCitation : \r\n\r\n> @InProceedings{conneau2018xnli,\r\n> author = \"Conneau, Alexis\r\n> and Rinott, Ruty\r\n> and Lample, Guillaume\r\n> and Williams, Adina\r\n> and Bowman, Samuel R.\r\n> and Schwenk, Holger\r\n> and Stoyanov, Veselin\",\r\n> title = \"XNLI: Evaluating Cross-lingual Sentence Representations\",\r\n> booktitle = \"Proceedings of the 2018 Conference on Empirical Methods\r\n> in Natural Language Processing\",\r\n> year = \"2018\",\r\n> publisher = \"Association for Computational Linguistics\",\r\n> location = \"Brussels, Belgium\",\r\n\r\n\r\n**4. Parsing**\r\nThe dataset used by the FLUE authors for this task is not freely available.\r\nUsers of your library will therefore not be able to access it.\r\nNevertheless, I think maybe it is useful to add a link to the site where to request this dataframe: http://ftb.linguist.univ-paris-diderot.fr/telecharger.php?langue=en \r\n(personally it was sent to me less than 48 hours after I requested it).\r\n\r\n\r\n**5. 
Word Sense Disambiguation Tasks**\r\n5.1 Verb Sense Disambiguation\r\n\r\nTwo dataframes are available: train and test\r\nFor both dataframes, 4 columns are available: document, sentence, lemma and word.\r\nI created the document column thinking that there were several documents in the dataset but afterwards it turns out that there were not: several sentences but only one document. It's up to you to keep it or not when importing these two dataframes.\r\n\r\nThe sentence column is used to determine to which sentence the word in the word column belongs. It is in the form of a dictionary {'id': 'd000.s001', 'idx': '1'}. I thought for a while to keep only the idx because the id doesn't matter any more information. Nevertheless for the test dataset, the dictionary has an extra value indicating the source of the sentence. I don't know if it's useful or not, that's why I left the dictionary just in case. The user is free to do what he wants with it.\r\n\r\nCitation : \r\n\r\n> Segonne, V., Candito, M., and Crabb ́e, B. (2019). Usingwiktionary as a resource for wsd: the case of frenchverbs. InProceedings of the 13th International Confer-ence on Computational Semantics-Long Papers, pages259–270\r\n\r\n5.2 Noun Sense Disambiguation\r\nTwo dataframes are available: 2 train and 1 test\r\n\r\nI confess I didn't fully understand the procedure for this task.\r\n\r\nCitation : \r\n\r\n> @dataset{loic_vial_2019_3549806,\r\n> author = {Loïc Vial},\r\n> title = {{French Word Sense Disambiguation with Princeton \r\n> WordNet Identifiers}},\r\n> month = nov,\r\n> year = 2019,\r\n> publisher = {Zenodo},\r\n> version = {1.0},\r\n> doi = {10.5281/zenodo.3549806},\r\n> url = {https://doi.org/10.5281/zenodo.3549806}\r\n> }\r\n\r\nFinally, additional information about FLUE is available in the FlauBERT publication : \r\nhttps://arxiv.org/abs/1912.05372 (p. 4).\r\n\r\n\r\nHoping to have provided you with everything you need to add this benchmark :) \r\n",
"https://github.com/huggingface/datasets/pull/943"
] | 2020-05-30T08:52:15 | 2020-12-03T13:39:33 | 2020-12-03T13:39:33 | NONE | null | null | null | Hi,
I think it would be interesting to add the FLUE dataset for francophones or anyone wishing to work on French.
In other requests, I read that you are already working on some datasets, and I was wondering if FLUE was planned.
If it is not the case, I can provide each of the cleaned FLUE datasets (in the form of a directly exploitable dataset rather than in the original xml formats which require additional processing, with the French part for cases where the dataset is based on a multilingual dataframe, etc.). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/223/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/222/comments | https://api.github.com/repos/huggingface/datasets/issues/222/events | https://github.com/huggingface/datasets/issues/222 | 627,586,690 | MDU6SXNzdWU2Mjc1ODY2OTA= | 222 | Colab Notebook breaks when downloading the squad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/338917?v=4",
"events_url": "https://api.github.com/users/carlos-aguayo/events{/privacy}",
"followers_url": "https://api.github.com/users/carlos-aguayo/followers",
"following_url": "https://api.github.com/users/carlos-aguayo/following{/other_user}",
"gists_url": "https://api.github.com/users/carlos-aguayo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/carlos-aguayo",
"id": 338917,
"login": "carlos-aguayo",
"node_id": "MDQ6VXNlcjMzODkxNw==",
"organizations_url": "https://api.github.com/users/carlos-aguayo/orgs",
"received_events_url": "https://api.github.com/users/carlos-aguayo/received_events",
"repos_url": "https://api.github.com/users/carlos-aguayo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/carlos-aguayo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/carlos-aguayo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/carlos-aguayo"
} | [] | closed | false | null | [] | null | [
"The notebook forces version 0.1.0. If I use the latest, things work, I'll run the whole notebook and create a PR.\r\n\r\nBut in the meantime, this issue gets fixed by changing:\r\n`!pip install nlp==0.1.0`\r\nto\r\n`!pip install nlp`",
"It still breaks very near the end\r\n\r\n![image](https://user-images.githubusercontent.com/338917/83312264-aa96a600-a1df-11ea-987f-2f4a0474247e.png)\r\n",
"When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.\r\nIf you don't restart, then it breaks like in your first message ",
"Thanks for reporting the second one ! We'll update the notebook to fix this one :)",
"This trick from @thomwolf seems to be the most reliable solution to fix this colab notebook issue:\r\n\r\n```python\r\n# install nlp\r\n!pip install -qq nlp==0.2.0\r\n\r\n# Make sure that we have a recent version of pyarrow in the session before we continue - otherwise reboot Colab to activate it\r\nimport pyarrow\r\nif int(pyarrow.__version__.split('.')[1]) < 16:\r\n import os\r\n os.kill(os.getpid(), 9)\r\n```",
"The second part got fixed here: 2cbc656d6fc4b18ce57eac070baec05b31180d39\r\n\r\nThanks! I'm then closing this issue."
] | 2020-05-29T22:55:59 | 2020-06-04T00:21:05 | 2020-06-04T00:21:05 | NONE | null | null | null | When I run the notebook in Colab
https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb
breaks when running this cell:
![image](https://user-images.githubusercontent.com/338917/83311709-ffd1b800-a1dd-11ea-8394-3a87df0d7f8b.png)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/222/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/222/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/221/comments | https://api.github.com/repos/huggingface/datasets/issues/221/events | https://github.com/huggingface/datasets/pull/221 | 627,300,648 | MDExOlB1bGxSZXF1ZXN0NDI1MTI5OTc0 | 221 | Fix tests/test_dataset_common.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/13635495?v=4",
"events_url": "https://api.github.com/users/tayciryahmed/events{/privacy}",
"followers_url": "https://api.github.com/users/tayciryahmed/followers",
"following_url": "https://api.github.com/users/tayciryahmed/following{/other_user}",
"gists_url": "https://api.github.com/users/tayciryahmed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tayciryahmed",
"id": 13635495,
"login": "tayciryahmed",
"node_id": "MDQ6VXNlcjEzNjM1NDk1",
"organizations_url": "https://api.github.com/users/tayciryahmed/orgs",
"received_events_url": "https://api.github.com/users/tayciryahmed/received_events",
"repos_url": "https://api.github.com/users/tayciryahmed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tayciryahmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tayciryahmed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tayciryahmed"
} | [] | closed | false | null | [] | null | [
"Thanks ! Good catch :)\r\n\r\nTo fix the CI you can do:\r\n1 - rebase from master\r\n2 - then run `make style` as specified in [CONTRIBUTING.md](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md) ?"
] | 2020-05-29T14:12:15 | 2020-06-01T12:20:42 | 2020-05-29T15:02:23 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/221",
"merged_at": "2020-05-29T15:02:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/221"
} | When I run the command `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_arcd` while working on #220. I get the error ` unexpected keyword argument "'download_and_prepare_kwargs'"` at the level of `load_dataset`. Indeed, this [function](https://github.com/huggingface/nlp/blob/master/src/nlp/load.py#L441) no longer has the argument `download_and_prepare_kwargs` but rather `download_config`. So here I change the tests accordingly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/221/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/220/comments | https://api.github.com/repos/huggingface/datasets/issues/220/events | https://github.com/huggingface/datasets/pull/220 | 627,280,683 | MDExOlB1bGxSZXF1ZXN0NDI1MTEzMzEy | 220 | dataset_arcd | {
"avatar_url": "https://avatars.githubusercontent.com/u/13635495?v=4",
"events_url": "https://api.github.com/users/tayciryahmed/events{/privacy}",
"followers_url": "https://api.github.com/users/tayciryahmed/followers",
"following_url": "https://api.github.com/users/tayciryahmed/following{/other_user}",
"gists_url": "https://api.github.com/users/tayciryahmed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tayciryahmed",
"id": 13635495,
"login": "tayciryahmed",
"node_id": "MDQ6VXNlcjEzNjM1NDk1",
"organizations_url": "https://api.github.com/users/tayciryahmed/orgs",
"received_events_url": "https://api.github.com/users/tayciryahmed/received_events",
"repos_url": "https://api.github.com/users/tayciryahmed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tayciryahmed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tayciryahmed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tayciryahmed"
} | [] | closed | false | null | [] | null | [
"you can rebase from master to fix the CI error :)",
"Awesome !"
] | 2020-05-29T13:46:50 | 2020-05-29T14:58:40 | 2020-05-29T14:57:21 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/220.diff",
"html_url": "https://github.com/huggingface/datasets/pull/220",
"merged_at": "2020-05-29T14:57:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/220.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/220"
} | Added Arabic Reading Comprehension Dataset (ARCD): https://arxiv.org/abs/1906.05394 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/220/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/219/comments | https://api.github.com/repos/huggingface/datasets/issues/219/events | https://github.com/huggingface/datasets/pull/219 | 627,235,893 | MDExOlB1bGxSZXF1ZXN0NDI1MDc2NjQx | 219 | force mwparserfromhell as third party | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-05-29T12:33:17 | 2020-05-29T13:30:13 | 2020-05-29T13:30:12 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/219.diff",
"html_url": "https://github.com/huggingface/datasets/pull/219",
"merged_at": "2020-05-29T13:30:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/219.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/219"
} | This should fix your env: you had `mwparserfromhell` set as a first-party package for `isort` @patrickvonplaten | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/219/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/218/comments | https://api.github.com/repos/huggingface/datasets/issues/218/events | https://github.com/huggingface/datasets/pull/218 | 627,173,407 | MDExOlB1bGxSZXF1ZXN0NDI1MDI2NzEz | 218 | Add Natural Questions and C4 scripts | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-05-29T10:40:30 | 2020-05-29T12:31:01 | 2020-05-29T12:31:00 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/218.diff",
"html_url": "https://github.com/huggingface/datasets/pull/218",
"merged_at": "2020-05-29T12:31:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/218.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/218"
} | Scripts are ready!
However, they are not yet processed nor directly available from GCP. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/218/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/218/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/217/comments | https://api.github.com/repos/huggingface/datasets/issues/217/events | https://github.com/huggingface/datasets/issues/217 | 627,128,403 | MDU6SXNzdWU2MjcxMjg0MDM= | 217 | Multi-task dataset mixing | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | [] | null | [
"I like this feature! I think the first question we should decide on is how to convert all datasets into the same format. In T5, the authors decided to format every dataset into a text-to-text format. If the dataset had \"multiple\" inputs like MNLI, the inputs were concatenated. So in MNLI the input:\r\n\r\n> - **Hypothesis**: The St. Louis Cardinals have always won.\r\n> \r\n> - **Premise**: yeah well losing is i mean i’m i’m originally from Saint Louis and Saint Louis Cardinals when they were there were uh a mostly a losing team but \r\n\r\nwas flattened to a single input:\r\n\r\n> mnli hypothesis: The St. Louis Cardinals have always won. premise:\r\n> yeah well losing is i mean i’m i’m originally from Saint Louis and Saint Louis Cardinals\r\n> when they were there were uh a mostly a losing team but.\r\n\r\nThis flattening is actually a very simple operation in `nlp` already. You would just need to do the following:\r\n\r\n```python \r\ndef flatten_inputs(example):\r\n return {\"input\": \"mnli hypothesis: \" + example['hypothesis'] + \" premise: \" + example['premise']}\r\n\r\nt5_ready_mnli_ds = mnli_ds.map(flatten_inputs, remove_columns=[<all columns except output>])\r\n```\r\n\r\nSo I guess converting the datasets into the same format can be left to the user for now. \r\nThen the question is how we can merge the datasets. I would probably be in favor of a simple \r\n\r\n```python \r\ndataset.add()\r\n```\r\n\r\nfunction that checks if the dataset is of the same format and if yes merges the two datasets. Finally, how should the sampling be implemented? **Examples-proportional mixing** corresponds to just merging the datasets and shuffling. For the other two sampling approaches we would need some higher-level features, maybe even a `dataset.sample()` function for merged datasets. \r\n\r\nWhat are your thoughts on this @thomwolf @lhoestq @ghomasHudson @enzoampil ?",
"I agree that we should leave the flattening of the dataset to the user for now. Especially because although the T5 framing seems obvious, there are slight variations on how the T5 authors do it in comparison to other approaches such as gpt-3 and decaNLP.\r\n\r\nIn terms of sampling, Examples-proportional mixing does seem the simplest to implement so would probably be a good starting point.\r\n\r\nTemperature-scaled mixing would probably most useful, offering flexibility as it can simulate the other 2 methods by setting the temperature parameter. There is a [relevant part of the T5 repo](https://github.com/google-research/text-to-text-transfer-transformer/blob/03c94165a7d52e4f7230e5944a0541d8c5710788/t5/data/utils.py#L889-L1118) which should help with implementation.\r\n\r\nAccording to the T5 authors, equal-mixing performs worst. Among the other two methods, tuning the K value (the artificial dataset size limit) has a large impact.\r\n",
"I agree with going with temperature-scaled mixing for its flexibility!\r\n\r\nFor the function that combines the datasets, I also find `dataset.add()` okay while also considering that users may want it to be easy to combine a list of say 10 data sources in one go.\r\n\r\n`dataset.sample()` should also be good. By the looks of it, we're planning to have as main parameters: `temperature`, and `K`.\r\n\r\nOn converting the datasets to the same format, I agree that we can leave these to the users for now. But, I do imagine it'd be an awesome feature for the future to have this automatically handled, based on a chosen *approach* to formatting :smile: \r\n\r\nE.g. T5, GPT-3, decaNLP, original raw formatting, or a contributed way of formatting in text-to-text. ",
"This is an interesting discussion indeed and it would be nice to make multi-task easier.\r\n\r\nProbably the best would be to have a new type of dataset especially designed for that in order to easily combine and sample from the multiple datasets.\r\n\r\nThis way we could probably handle the combination of datasets with differing schemas as well (unlike T5).",
"@thomwolf Are you suggesting making a wrapper class which can take existing datasets as arguments and do all the required sampling/combining, to present the same interface as a normal dataset?\r\n\r\nThat doesn't seem too complicated to implement.\r\n",
"I guess we're looking at the end user writing something like:\r\n``` python\r\nds = nlp.load_dataset('multitask-t5',datasets=[\"squad\",\"cnn_dm\",...], k=1000, t=2.0)\r\n```\r\nUsing the t5 method of combining here (or this could be a function passed in as an arg) \r\n\r\nPassing kwargs to each 'sub-dataset' might become tricky.",
"From thinking upon @thomwolf 's suggestion, I've started experimenting:\r\n```python\r\nclass MultitaskDataset(DatasetBuilder):\r\n def __init__(self, *args, **kwargs):\r\n super(MultitaskDataset, self).__init__(*args, **kwargs)\r\n self._datasets = kwargs.get(\"datasets\")\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=nlp.Features({\r\n \"source\": nlp.Value(\"string\"),\r\n \"target\": nlp.Sequence(nlp.Value(\"string\"))\r\n })\r\n )\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self._datasets'''\r\n min_set = None\r\n for dataset in self._datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n....\r\n\r\n# Maybe this?:\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\nmultitask_dataset = nlp.load_dataset(\r\n 'multitask_dataset',\r\n datasets=[squad,cnn_dailymail], \r\n k=1000, \r\n t=2.0\r\n)\r\n\r\n```\r\n\r\nDoes anyone know what methods of `MultitaskDataset` I would need to implement? Maybe `as_dataset` and `download_and_prepare`? Most of these should be just calling the methods of the sub-datasets. \r\n\r\nI'm assuming DatasetBuilder is better than the more specific `GeneratorBasedBuilder`, `BeamBasedBuilder`, etc....\r\n\r\nOne of the other problems is that the dataset size is unknown till you construct it (as you can pick the sub-datasets). Am hoping not to need to make changes to `nlp.load_dataset` just for this class.\r\n\r\nI'd appreciate it if anyone more familiar with nlp's internal workings could tell me if I'm on the right track!",
"I think I would probably go for a `MultiDataset` wrapper around a list of `Dataset`.\r\n\r\nI'm not sure we need to give it `k` and `t` parameters at creation, it can maybe be something along the lines of:\r\n```python\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\n\r\nmultitask_dataset = nlp.MultiDataset(squad, cnn_dm)\r\n\r\nbatch = multitask_dataset.sample(10, temperature=2.0, k=1000)\r\n```\r\n\r\nThe first proof-of-concept for multi-task datasets could definitely require that the provided datasets have the same name/type for columns (if needed you easily rename/cast a column prior to instantiating the `MultiDataset`).\r\n\r\nIt's good to think about it for some time though and don't overfit too much on the T5 examples (in particular for the ways/kwargs for sampling among datasets).",
"The problem with changing `k` and `t` per sampling is that you'd have to somehow remember which examples you'd already returned while re-weighting the remaining examples based on the new `k` and `t`values. It seems possible but complicated (I can't really see a reason why you'd want to change the weighting of datasets after you constructed the multidataset).\r\n\r\nWouldn't it be convenient if it implemented the dataset interface? Then if someone has code using a single nlp dataset, they can replace it with a multitask combination of more datasets without having to change other code. We would at least need to be able to pass it into a `DataLoader`.\r\n\r\n",
"A very janky (but working) implementation of `multitask_dataset.sample()` could be something like this:\r\n```python\r\nimport nlp\r\nimport torch\r\n\r\nclass MultiDataset():\r\n def __init__(self, *args, temperature=2.0, k=1000, maximum=None, scale=1):\r\n self.datasets = args\r\n self._dataloaders = {}\r\n for split in self._get_common_splits():\r\n split_datasets = [ds[split] for ds in self.datasets]\r\n mixing_rates = self._calc_mixing_rates(split_datasets,temperature, k, maximum, scale)\r\n weights = []\r\n for i in range(len(self.datasets)):\r\n weights += [mixing_rates[i]]*len(self.datasets[i][split])\r\n self._dataloaders[split] = torch.utils.data.DataLoader(torch.utils.data.ConcatDataset(split_datasets),\r\n sampler=torch.utils.data.sampler.WeightedRandomSampler(\r\n num_samples=len(weights),\r\n weights = weights,\r\n replacement=True),\r\n shuffle=False)\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in self.datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n\r\n def _calc_mixing_rates(self,datasets, temperature=2.0, k=1000, maximum=None, scale=1):\r\n '''Work out the weighting of each dataset based on t and k'''\r\n mixing_rates = []\r\n for dataset in datasets:\r\n rate = len(dataset)\r\n rate *= scale\r\n if maximum:\r\n rate = min(rate, maximum)\r\n if temperature != 1.0:\r\n rate = rate ** (1.0/temperature)\r\n mixing_rates.append(rate)\r\n return mixing_rates\r\n\r\n def sample(self,n,split):\r\n batch = []\r\n for example in self._dataloaders[split]:\r\n batch.append(example)\r\n n -= 1\r\n if n == 0:\r\n return batch\r\n\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\nmultitask_dataset = MultiDataset(squad, cnn_dm)\r\nbatch = multitask_dataset.sample(100,\"train\")\r\n```\r\n\r\nThere's definitely a more sensible way than embedding `DataLoader`s inside. ",
"There is an interesting related investigation by @zphang here https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb",
"Good spot! Here are my thoughts:\r\n\r\n- Aside: Adding `MultitaskModel` to transformers might be a thing to raise - even though having task-specific heads has become unfashionable in recent times in favour of text-to-text type models.\r\n- Adding the task name as an extra field also seems useful for these kind of models which have task-specific heads\r\n- There is some validation of our approach that the user should be expected to `map` datasets into a common form.\r\n- The size-proportional sampling (also called \"Examples-proportional mixing\") used here doesn't perform too badly in the T5 paper (it's comparable to temperature-scaled mixing in many cases but less flexible. This is only reasonable with a `K` maximum size parameter to prevent very large datasets dominating). This might be good for a first prototype using:\r\n ```python\r\n def __iter__(self):\r\n \"\"\"\r\n For each batch, sample a task, and yield a batch from the respective\r\n task Dataloader.\r\n\r\n We use size-proportional sampling, but you could easily modify this\r\n to sample from some-other distribution.\r\n \"\"\"\r\n task_choice_list = []\r\n for i, task_name in enumerate(self.task_name_list):\r\n task_choice_list += [i] * self.num_batches_dict[task_name]\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n dataloader_iter_dict = {\r\n task_name: iter(dataloader) \r\n for task_name, dataloader in self.dataloader_dict.items()\r\n }\r\n for task_choice in task_choice_list:\r\n task_name = self.task_name_list[task_choice]\r\n yield next(dataloader_iter_dict[task_name]) \r\n ```\r\n We'd just need to pull samples from the raw datasets and not from `DataLoader`s for each task. We can assume the user has done `dataset.shuffle()` if they want to.\r\n\r\n Other sampling methods can later be implemented by changing how the `task_choice_list` is generated. This should allow more flexibility and not tie us to specific methods for sampling among datasets.\r\n",
"Another thought: Multitasking over benchmarks (represented as Meta-datasets in nlp) is probably a common use case. Would be nice to pass an entire benchmark to our `MultiDataset` wrapper rather than having to pass individual components.",
"Here's a fully working implementation based on the `__iter__` function of @zphang.\r\n\r\n- I've generated the task choice list in the constructor as it allows us to index into the MultiDataset just like a normal dataset. I'm changing `task_choice_list` into a list of `(dataset_idx, example_idx)` so each entry references a unique dataset example. The shuffling has to be done before this as we don't want to shuffle within each task (we assume this is done by the user if this is what they intend).\r\n- I'm slightly concerned this list could become very large if many large datasets were used. Can't see a way round it at the moment though.\r\n- I've used `task.info.builder_name` as the dataset name. Not sure if this is correct.\r\n- I'd love to add some of the other `Dataset` methods (map, slicing by column, etc...). Would be great to implement the whole interface so a single dataset can be simply replaced by this.\r\n- This does everything on the individual example-level. If some application required batches all from a single task in turn we can't really do that.\r\n\r\n```python\r\nimport nlp\r\nimport numpy as np\r\n\r\nclass MultiDataset:\r\n def __init__(self,tasks):\r\n self.tasks = tasks\r\n\r\n # Create random order of tasks\r\n # Using size-proportional sampling\r\n task_choice_list = []\r\n for i, task in enumerate(self.tasks):\r\n task_choice_list += [i] * len(task)\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n # Add index into each dataset\r\n # - We don't want to shuffle within each task\r\n counters = {}\r\n self.task_choice_list = []\r\n for i in range(len(task_choice_list)):\r\n idx = counters.get(task_choice_list[i],0)\r\n self.task_choice_list.append((task_choice_list[i],idx))\r\n counters[task_choice_list[i]] = idx + 1\r\n\r\n\r\n def __len__(self):\r\n return np.sum([len(t) for t in self.tasks])\r\n\r\n def __repr__(self):\r\n task_str = \", \".join([str(t) for t in self.tasks])\r\n return f\"MultiDataset(tasks: {task_str})\"\r\n\r\n def __getitem__(self,key):\r\n if isinstance(key, int):\r\n task_idx, example_idx = self.task_choice_list[key]\r\n task = self.tasks[task_idx]\r\n example = task[example_idx]\r\n example[\"task_name\"] = task.info.builder_name\r\n return example\r\n elif isinstance(key, slice):\r\n raise NotImplementedError()\r\n\r\n def __iter__(self):\r\n for i in range(len(self)):\r\n yield self[i]\r\n\r\n\r\ndef load_multitask(*datasets):\r\n '''Create multitask datasets per split'''\r\n\r\n def _get_common_splits(datasets):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n common_splits = _get_common_splits(datasets)\r\n out = {}\r\n for split in common_splits:\r\n out[split] = MultiDataset([d[split] for d in datasets])\r\n return out\r\n\r\n\r\n##########################################\r\n# Dataset Flattening\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\r\n \"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef 
flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\n#############################################\r\n\r\nmtds = load_multitask(squad,cnn_dm)\r\n\r\nfor example in mtds[\"train\"]:\r\n print(example[\"task_name\"],example[\"target\"])\r\n```\r\nLet me know if you have any thoughts. I've started using this in some of my projects and it seems to work. If people are happy with the general approach for a first version, I can make a pull request.",
"Hey! Happy to jump into the discussion here. I'm still getting familiar with bits of this code, but the reasons I sampled over data loaders rather than datasets is 1) ensuring that each sampled batch corresponds to only 1 task (in case of different inputs formats/downstream models) and 2) potentially having different batch sizes per task (e.g. some tasks have very long/short inputs). How are you currently dealing with these in your PR?",
"The short answer is - I'm not! Everything is currently on a per-example basis. It would be fairly simple to add a `batch_size` argument which would ensure that every `batch_size` examples come from the same task. That should suit most use-cases (unless you wanted to ensure batches all came from the same task and apply something like `SortishSampler` on each task first)\r\n\r\nYour notebook was really inspiring by the way - thanks!",
"@zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.",
"mt-dnn's [batcher.py](https://github.com/namisan/mt-dnn/blob/master/mt_dnn/batcher.py) might be worth looking at.",
"> @zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.\r\n\r\nI think having different batch sizes per task is particularly helpful in some scenarios where each task has different amount of data. For example, the problem I'm currently facing is one task has tens of thousands of samples while one task has a couple hundreds. I think in this case different batch size could help. But if using the same batch size is a lot simpler to implement, I guess it makes sense to go with that.",
"I think that instead of proportional to size sampling you should specify weights or probabilities for drawing a batch from each dataset. We should also ensure that the smaller datasets are repeated so that the encoder layer doesn't overtrain on the largest dataset.",
"Are there any references for people doing different batch sizes per task in the literature? I've only seen constant batch sizes with differing numbers of batches for each task which seems sufficient to prevent the impact of large datasets (Read 3.5.3 of the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) for example).\r\n\r\n",
"Hi,\r\nregarding building T5 dataset , I think we can use datasets https://github.com/huggingface/datasets and then need something similar to tf.data.experimental.sample_from_datasets, do you know if similar functionality exist in pytorch? Which can sample multiple datasets with the given rates. thanks. ",
"Is this feature part of a `datasets` release yet? ",
"> Here's a fully working implementation based on the `__iter__` function of @zphang.\r\n> \r\n> * I've generated the task choice list in the constructor as it allows us to index into the MultiDataset just like a normal dataset. I'm changing `task_choice_list` into a list of `(dataset_idx, example_idx)` so each entry references a unique dataset example. The shuffling has to be done before this as we don't want to shuffle within each task (we assume this is done by the user if this is what they intend).\r\n> * I'm slightly concerned this list could become very large if many large datasets were used. Can't see a way round it at the moment though.\r\n> * I've used `task.info.builder_name` as the dataset name. Not sure if this is correct.\r\n> * I'd love to add some of the other `Dataset` methods (map, slicing by column, etc...). Would be great to implement the whole interface so a single dataset can be simply replaced by this.\r\n> * This does everything on the individual example-level. If some application required batches all from a single task in turn we can't really do that.\r\n> \r\n> ```python\r\n> import nlp\r\n> import numpy as np\r\n> \r\n> class MultiDataset:\r\n> def __init__(self,tasks):\r\n> self.tasks = tasks\r\n> \r\n> # Create random order of tasks\r\n> # Using size-proportional sampling\r\n> task_choice_list = []\r\n> for i, task in enumerate(self.tasks):\r\n> task_choice_list += [i] * len(task)\r\n> task_choice_list = np.array(task_choice_list)\r\n> np.random.shuffle(task_choice_list)\r\n> \r\n> # Add index into each dataset\r\n> # - We don't want to shuffle within each task\r\n> counters = {}\r\n> self.task_choice_list = []\r\n> for i in range(len(task_choice_list)):\r\n> idx = counters.get(task_choice_list[i],0)\r\n> self.task_choice_list.append((task_choice_list[i],idx))\r\n> counters[task_choice_list[i]] = idx + 1\r\n> \r\n> \r\n> def __len__(self):\r\n> return np.sum([len(t) for t in self.tasks])\r\n> \r\n> def __repr__(self):\r\n> task_str = \", \".join([str(t) for t in self.tasks])\r\n> return f\"MultiDataset(tasks: {task_str})\"\r\n> \r\n> def __getitem__(self,key):\r\n> if isinstance(key, int):\r\n> task_idx, example_idx = self.task_choice_list[key]\r\n> task = self.tasks[task_idx]\r\n> example = task[example_idx]\r\n> example[\"task_name\"] = task.info.builder_name\r\n> return example\r\n> elif isinstance(key, slice):\r\n> raise NotImplementedError()\r\n> \r\n> def __iter__(self):\r\n> for i in range(len(self)):\r\n> yield self[i]\r\n> \r\n> \r\n> def load_multitask(*datasets):\r\n> '''Create multitask datasets per split'''\r\n> \r\n> def _get_common_splits(datasets):\r\n> '''Finds the common splits present in all self.datasets'''\r\n> min_set = None\r\n> for dataset in datasets:\r\n> if min_set != None:\r\n> min_set.intersection(set(dataset.keys()))\r\n> else:\r\n> min_set = set(dataset.keys())\r\n> return min_set\r\n> \r\n> common_splits = _get_common_splits(datasets)\r\n> out = {}\r\n> for split in common_splits:\r\n> out[split] = MultiDataset([d[split] for d in datasets])\r\n> return out\r\n> \r\n> \r\n> ##########################################\r\n> # Dataset Flattening\r\n> \r\n> def flatten(dataset,flatten_fn):\r\n> for k in dataset.keys():\r\n> if isinstance(dataset[k],nlp.Dataset):\r\n> dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n> \r\n> # Squad\r\n> def flatten_squad(example):\r\n> return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\r\n> 
\"target\":example[\"answers\"][\"text\"]}\r\n> squad = nlp.load_dataset(\"squad\")\r\n> flatten(squad,flatten_squad)\r\n> \r\n> # CNN_DM\r\n> def flatten_cnn_dm(example):\r\n> return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\n> cnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\n> flatten(cnn_dm,flatten_cnn_dm)\r\n> \r\n> #############################################\r\n> \r\n> mtds = load_multitask(squad,cnn_dm)\r\n> \r\n> for example in mtds[\"train\"]:\r\n> print(example[\"task_name\"],example[\"target\"])\r\n> ```\r\n> \r\n> Let me know if you have any thoughts. I've started using this in some of my projects and it seems to work. If people are happy with the general approach for a first version, I can make a pull request.\r\n\r\nNot sure if this is what I'm looking for, but I implemented a version of Examples-Proportional mixing supporting only the basic feature [here](https://stackoverflow.com/a/74070116/10732321), seems to work in my project. ",
"You can use `interleave_datasets` to mix several datasets together. By default it alternates between all the datasets, but you can also provide sampling probabilities if you want to oversample from one of the datasets\r\n\r\n```python\r\nfrom datasets import load_dataset, interleave_datasets\r\n\r\nsquad = load_dataset(\"squad\", split=\"train\")\r\ncnn_dm = load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"train\")\r\nds = interleave_datasets([squad, cnn_dm])\r\n\r\nprint(ds[0])\r\n# {'id': '5733be284776f41900661182',\r\n# 'title': 'University_of_Notre_Dame',\r\n# 'context': 'Architecturally, the school has a Catholic character...',\r\n# 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',\r\n# 'answers': {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]},\r\n# 'article': None,\r\n# 'highlights': None}\r\nprint(ds[1])\r\n# {'id': '42c027e4ff9730fbb3de84c1af0d2c506e41c3e4',\r\n# 'title': None,\r\n# 'context': None,\r\n# 'question': None,\r\n# 'answers': None,\r\n# 'article': 'LONDON, England (Reuters) -- Harry Potter star Daniel Radcliffe...',\r\n# 'highlights': \"Harry Potter star Daniel Radcliffe...\"}\r\n```\r\n\r\nsee docs at https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.interleave_datasets",
"I also have this implementation of multi-task sampler here which I used it to tune T5: https://github.com/rabeehk/hyperformer/blob/main/hyperformer/data/multitask_sampler.py "
] | 2020-05-29T09:22:26 | 2022-10-22T00:45:50 | null | CONTRIBUTOR | null | null | null | It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks).
The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning:
- **Examples-proportional mixing** - sample from tasks proportionally to their dataset size
- **Equal mixing** - sample uniformly from each task
- **Temperature-scaled mixing** - The generalized approach used by multilingual BERT, which uses a temperature T: the mixing rate of each task is raised to the power 1/T and renormalized. When T=1 this is equivalent to examples-proportional mixing, and it becomes closer to equal mixing as T increases.
Following this discussion https://github.com/huggingface/transformers/issues/4340 in [transformers](https://github.com/huggingface/transformers), @enzoampil suggested that the `nlp` library might be a better place for this functionality.
Some method for combining datasets could be implemented, e.g.
```
dataset = nlp.load_multitask(['squad','imdb','cnn_dm'], temperature=2.0, ...)
```
We would need a few additions:
- Method of identifying the tasks - how can we support adding a string to each task as an identifier: e.g. 'summarisation: '?
- Method of combining the metrics - a standard approach is to use the specific metric for each task and add them together for a combined score.
It would be great to support common use cases such as pretraining on the GLUE benchmark before fine-tuning on each GLUE task in turn.
I'm willing to write bits/most of this; I just need some guidance on the interface and other library details so I can integrate it properly.
| {
"+1": 12,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 12,
"url": "https://api.github.com/repos/huggingface/datasets/issues/217/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/217/timeline | null | null | false |
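
A quick illustration of the temperature-scaled mixing discussed in the issue above. This is only a sketch: the helper name `temperature_mixing_rates` and the toy dataset sizes are invented for the example, and the rate computation (cap each size at K, raise to the power 1/T, renormalize) follows the description in the T5 paper rather than any API in this library.

```python
import numpy as np

def temperature_mixing_rates(dataset_sizes, temperature=2.0, k=1000):
    # Cap each dataset's contribution at k examples, apply the 1/T power,
    # then renormalize so the rates sum to 1.
    rates = np.array([min(size, k) for size in dataset_sizes], dtype=float)
    rates = rates ** (1.0 / temperature)
    return rates / rates.sum()

# Toy example: three tasks of very different sizes.
print(temperature_mixing_rates([500_000, 50_000, 3_000], temperature=2.0, k=10_000))
```

With T=1 this reduces to examples-proportional mixing over the capped sizes; larger T pushes the rates toward a uniform distribution.
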
https://api.github.com/repos/huggingface/datasets/issues/216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/216/comments | https://api.github.com/repos/huggingface/datasets/issues/216/events | https://github.com/huggingface/datasets/issues/216 | 626,896,890 | MDU6SXNzdWU2MjY4OTY4OTA= | 216 | ❓ How to get ROUGE-2 with the ROUGE metric ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [] | closed | false | null | [] | null | [
"ROUGE-1 and ROUGE-L shouldn't return the same thing. This is weird",
"For the rouge2 metric you can do\r\n\r\n```python\r\nrouge = nlp.load_metric('rouge')\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n rouge.add(lp, lg)\r\nscore = rouge.compute(rouge_types=[\"rouge2\"])\r\n```\r\n\r\nNote that I just did a PR to have both `.add` and `.add_batch` for metrics, that's why now this is `rouge.add(lp, lg)` and not `rouge.add([lp], [lg])`",
"Well I just tested with the official script and both rouge1 and rougeL return exactly the same thing for the input you gave, so this is actually fine ^^\r\n\r\nI hope it helped :)"
] | 2020-05-28T23:47:32 | 2020-06-01T00:04:35 | 2020-06-01T00:04:35 | NONE | null | null | null | I'm trying to use the ROUGE metric, but I don't know how to get the ROUGE-2 score.
---
I compute scores with:
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("pred.txt") as p, open("ref.txt") as g:
    for lp, lg in zip(p, g):
        rouge.add([lp], [lg])
score = rouge.compute()
```
then: _(printing only the F-score for readability)_
```python
for k, s in score.items():
    print(k, s.mid.fmeasure)
```
It gives:
>rouge1 0.7915168355671788
rougeL 0.7915168355671788
---
**How can I get the ROUGE-2 score?**
Also, it seems weird that the ROUGE-1 and ROUGE-L scores are the same. Did I make a mistake?
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/216/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/216/timeline | null | completed | false |
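
Putting the original snippet together with the fix from the comments, ROUGE-2 can be obtained as sketched below. This follows the legacy `nlp` API used in the thread (single prediction/reference pairs via `.add`, and `rouge_types=["rouge2"]` passed to `.compute`); the file names are just the placeholders from the issue.

```python
import nlp

rouge = nlp.load_metric('rouge')
with open("pred.txt") as p, open("ref.txt") as g:
    for lp, lg in zip(p, g):
        rouge.add(lp, lg)  # one prediction/reference pair at a time

# Only compute the ROUGE-2 aggregate instead of the default set of types.
score = rouge.compute(rouge_types=["rouge2"])
print(score["rouge2"].mid.fmeasure)
```
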
https://api.github.com/repos/huggingface/datasets/issues/215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/215/comments | https://api.github.com/repos/huggingface/datasets/issues/215/events | https://github.com/huggingface/datasets/issues/215 | 626,867,879 | MDU6SXNzdWU2MjY4Njc4Nzk= | 215 | NonMatchingSplitsSizesError when loading blog_authorship_corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/52105365?v=4",
"events_url": "https://api.github.com/users/cedricconol/events{/privacy}",
"followers_url": "https://api.github.com/users/cedricconol/followers",
"following_url": "https://api.github.com/users/cedricconol/following{/other_user}",
"gists_url": "https://api.github.com/users/cedricconol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cedricconol",
"id": 52105365,
"login": "cedricconol",
"node_id": "MDQ6VXNlcjUyMTA1MzY1",
"organizations_url": "https://api.github.com/users/cedricconol/orgs",
"received_events_url": "https://api.github.com/users/cedricconol/received_events",
"repos_url": "https://api.github.com/users/cedricconol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cedricconol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cedricconol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cedricconol"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | [
"I just ran it on colab and got this\r\n```\r\n[{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train',\r\nnum_bytes=611607465, num_examples=533285, dataset_name='blog_authorship_corpus')},\r\n{'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation',\r\nnum_bytes=35652716, num_examples=30804, dataset_name='blog_authorship_corpus')}]\r\n```\r\nwhich is different from the `dataset_infos.json` and also different from yours.\r\n\r\nIt looks like the script for generating examples is not consistent",
"The files provided by the authors are corrupted and the script seems to ignore the xml files that can't be decoded (it does `try:... except UnicodeDecodeError`). Maybe depending of the environment some files can be opened and some others don't but not sure why",
"Feel free to do `ignore_verifications=True` for now... The verifications only include a check on the checksums of the downloaded files, and a check on the number of examples in each splits.",
"I'm getting this same issue when loading the `imdb` corpus via `dataset = load_dataset(\"imdb\")`. When I try `ignore_verifications=True`, no examples are read into the `train` portion of the dataset. ",
"> I'm getting this same issue when loading the `imdb` corpus via `dataset = load_dataset(\"imdb\")`. When I try `ignore_verifications=True`, no examples are read into the `train` portion of the dataset.\r\n\r\nWhen the checksums don't match, it may mean that the file you downloaded is corrupted. In this case you can try to load the dataset again `load_dataset(\"imdb\", download_mode=\"force_redownload\")`\r\n\r\nAlso I just checked on my side and it worked fine:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imdb\")\r\nprint(len(dataset[\"train\"]))\r\n# 25000\r\n```\r\n\r\nLet me know if redownloading fixes your issue @EmilyAlsentzer .\r\nIf not, feel free to open a separate issue.",
"It doesn't seem to fix the problem. I'll open a separate issue. Thanks. ",
"I wasn't aware of the \"force_redownload\" option and manually removed the '/home/me/.cache/huggingface/datasets/' dir, this worked for me (dataset 'cnn_dailymail')",
"Yes I think this might not be documented well enough. Let’s add it to the doc @lhoestq @SBrandeis.\r\nAnd everything on how to control the cache behavior better (removing, overriding, changing the path, etc)",
"Already fixed:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"blog_authorship_corpus\")\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'date', 'gender', 'age', 'horoscope', 'job'],\r\n num_rows: 689793\r\n })\r\n validation: Dataset({\r\n features: ['text', 'date', 'gender', 'age', 'horoscope', 'job'],\r\n num_rows: 37919\r\n })\r\n})\r\n",
"In my case, I had to remove the cache datasets directory completely as @putssander suggested, the download_mode='forced_redownload' was insufficient.\r\n\r\nI had a private repository with data files that I loaded with a loading script. It was working fine until I pushed a new version of the data files and then the NonMatchingSplitsSizesError was raised.\r\n"
] | 2020-05-28T22:55:19 | 2023-03-30T15:16:44 | 2022-02-10T13:05:45 | NONE | null | null | null | Getting this error when I run `nlp.load_dataset('blog_authorship_corpus')`.
```
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train',
num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'),
'recorded': SplitInfo(name='train', num_bytes=616473500, num_examples=536323,
dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation',
num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'),
'recorded': SplitInfo(name='validation', num_bytes=30786661, num_examples=27766,
dataset_name='blog_authorship_corpus')}]
```
Upon checking, it seems like there is a disparity between the information in `datasets/blog_authorship_corpus/dataset_infos.json` and what was downloaded. Although I can get away with this by passing `ignore_verifications=True` in `load_dataset`, I'm thinking doing so might cause problems later on. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/215/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/215/timeline | null | completed | false |
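
The two workarounds mentioned in the thread above can be sketched roughly as follows. The `datasets` import matches the later comments (the original report used the `nlp` package); treat this as an illustration rather than a recommendation.

```python
from datasets import load_dataset

# Option 1: skip checksum and split-size verification (use with care,
# since it hides genuinely corrupted downloads).
ds = load_dataset("blog_authorship_corpus", ignore_verifications=True)

# Option 2: discard the possibly corrupted cache and download again.
ds = load_dataset("blog_authorship_corpus", download_mode="force_redownload")
```
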
https://api.github.com/repos/huggingface/datasets/issues/214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/214/comments | https://api.github.com/repos/huggingface/datasets/issues/214/events | https://github.com/huggingface/datasets/pull/214 | 626,641,549 | MDExOlB1bGxSZXF1ZXN0NDI0NTk1NjIx | 214 | [arrow_dataset.py] add new filter function | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [
"I agree that a `.filter` method would be VERY useful and appreciated. I'm not a big fan of using `flatten_nested` as it completely breaks down the structure of the example and it may create bugs. Right now I think it may not work for nested structures. Maybe there's a simpler way that we've not figured out yet.",
"Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n```python\r\nfor i in range(num_examples):\r\n example = map_nested(lambda x: x[i], batch)\r\n # ... then test to keep it or not\r\n```",
"> Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n> \r\n> ```python\r\n> for i in range(num_examples):\r\n> example = map_nested(lambda x: x[i], batch)\r\n> # ... then test to keep it or not\r\n> ```\r\n\r\nAwesome I'll check it out :-) ",
"> Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n> \r\n> ```python\r\n> for i in range(num_examples):\r\n> example = map_nested(lambda x: x[i], batch)\r\n> # ... then test to keep it or not\r\n> ```\r\n\r\nAwesome this function is definitely much nicer!",
"Actually I just realized that `map_nested` might not work either as it applies the function at the very last list of the structure. However we can imagine that a single example has also a list in its structure:\r\n```python\r\none_example = {\r\n \"title\": \"blabla\",\r\n \"paragraphs\": [\r\n \"p1\", \"p2\", ...\r\n ]\r\n}\r\n```",
"We'll probably have to take into account the `dset._data.schema` to extract the examples from the batch.",
"> Actually I just realized that `map_nested` might not work either as it applies the function at the very last list of the structure. However we can imagine that a single example has also a list in its structure:\r\n> \r\n> ```python\r\n> one_example = {\r\n> \"title\": \"blabla\",\r\n> \"paragraphs\": [\r\n> \"p1\", \"p2\", ...\r\n> ]\r\n> }\r\n> ```\r\n\r\nThey both work. I'm using it on trivia_qa which is pretty nested. If you use the option `dict_only=True` I think it's fine.",
"> We'll probably have to take into account the `dset._data.schema` to extract the examples from the batch.\r\n\r\nWhy? ",
"Actually it's fine. I guess this is going to be yet another thing to be unit-tested just to make sure ^^",
"Yes, I will need to add tests and documentation! \r\n@thomwolf - would a function like this be ok? It abstracts `.map()` a bit which might be hard to understand. ",
"I tried on some datasets with nested structure and it works fine ! Great work :D \r\n",
"Awesome :-), I will add documentation and some simple unittests",
"Ok merging!"
] | 2020-05-28T16:21:40 | 2020-05-29T11:43:29 | 2020-05-29T11:32:20 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/214.diff",
"html_url": "https://github.com/huggingface/datasets/pull/214",
"merged_at": "2020-05-29T11:32:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/214.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/214"
} | The `.map()` function is super useful, but it can IMO be a bit tedious when filtering out certain examples.
I think filtering out examples is also a very common operation people would like to perform on datasets.
This PR is a proposal to add a `.filter()` function in the same spirit as the `.map()` function.
Here is some sample code you can play around with:
```python
import nlp

ds = nlp.load_dataset("squad", split="validation[:10%]")

def remove_under_idx_5(example, idx):
    # keep only the first five examples
    return idx < 5

def only_keep_examples_with_is_in_context(example):
    # keep only examples whose context contains "is"
    return "is" in example["context"]

result_keep_only_first_5 = ds.filter(remove_under_idx_5, with_indices=True, load_from_cache_file=False)
result_keep_examples_with_is_in_context = ds.filter(only_keep_examples_with_is_in_context, load_from_cache_file=False)

print("Original number of examples: {}".format(len(ds)))
print("First five examples number of examples: {}".format(len(result_keep_only_first_5)))
print("Is in context examples number of examples: {}".format(len(result_keep_examples_with_is_in_context)))
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/214/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/213/comments | https://api.github.com/repos/huggingface/datasets/issues/213/events | https://github.com/huggingface/datasets/pull/213 | 626,587,995 | MDExOlB1bGxSZXF1ZXN0NDI0NTUxODE3 | 213 | better message if missing beam options | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-05-28T15:06:57 | 2020-05-29T09:51:17 | 2020-05-29T09:51:16 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/213",
"merged_at": "2020-05-29T09:51:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/213"
} | WDYT @yjernite?
For example:
```python
dataset = nlp.load_dataset('wikipedia', '20200501.aa')
```
Raises:
```
MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.aa', beam_runner='DirectRunner')`
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/213/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/212/comments | https://api.github.com/repos/huggingface/datasets/issues/212/events | https://github.com/huggingface/datasets/pull/212 | 626,580,198 | MDExOlB1bGxSZXF1ZXN0NDI0NTQ1NjAy | 212 | have 'add' and 'add_batch' for metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-05-28T14:56:47 | 2020-05-29T10:41:05 | 2020-05-29T10:41:04 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/212.diff",
"html_url": "https://github.com/huggingface/datasets/pull/212",
"merged_at": "2020-05-29T10:41:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/212.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/212"
} | This should fix #116
Previously the `.add` method of metrics expected a batch of examples.
Now `.add` expects one prediction/reference and `.add_batch` expects a batch.
I think it is more consistent with the way the ArrowWriter works. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/212/timeline | null | null | true |
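
A short sketch of the split between `.add` and `.add_batch` introduced by this PR, using the ROUGE metric as in the issues above. The example strings are made up, and the keyword form of `add_batch` is an assumption based on later versions of the library; treat it as an illustration rather than a definitive API reference.

```python
import nlp

rouge = nlp.load_metric("rouge")

# Per-example: one prediction/reference pair per call.
rouge.add("the cat sat on the mat", "a cat was sitting on the mat")

# Batched: lists of predictions and references in a single call
# (keyword names assumed for illustration).
rouge.add_batch(
    predictions=["hello there", "general kenobi"],
    references=["hello there", "general kenobi"],
)

print(rouge.compute())
```
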
https://api.github.com/repos/huggingface/datasets/issues/211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/211/comments | https://api.github.com/repos/huggingface/datasets/issues/211/events | https://github.com/huggingface/datasets/issues/211 | 626,565,994 | MDU6SXNzdWU2MjY1NjU5OTQ= | 211 | [Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
},
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Here the full error trace:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-1-7aaf3f011358> in <module>\r\n 1 import nlp\r\n 2 ds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's cached afterwards...\r\n----> 3 ds.map(lambda x: x, load_from_cache_file=False)\r\n\r\n~/python_bin/nlp/arrow_dataset.py in map(self, function, with_indices, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, arrow_schema, disable_nullable)\r\n 549\r\n 550 if update_data:\r\n--> 551 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n 552\r\n 553 # Create new Dataset from buffer or file\r\n\r\n~/python_bin/nlp/arrow_writer.py in finalize(self, close_stream)\r\n 182 def finalize(self, close_stream=True):\r\n 183 if self.pa_writer is not None:\r\n--> 184 self.write_on_file()\r\n 185 self.pa_writer.close()\r\n 186 if close_stream:\r\n\r\n~/python_bin/nlp/arrow_writer.py in write_on_file(self)\r\n 104 \"\"\"\r\n 105 if self.current_rows:\r\n--> 106 pa_array = pa.array(self.current_rows, type=self._type)\r\n 107 first_example = pa.array(self.current_rows[0:1], type=self._type)[0]\r\n 108 # Sanity check\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n~/hugging_face/venv_3.7/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Could not convert TagMe with type str: converting to null type\r\n```",
"Actually thinking a bit more about it, it's probably a data sample that is not correct in `trivia_qa`. But I'm a bit surprised though that we managed to write it in .arrow format and now cannot write it anymore after an \"identity\" mapping.",
"I don't have this error :x",
"Interesting, maybe I have a very old cache of trivia_qa...thanks for checking",
"I'm running it right now on colab to double check",
"Actually, I know what the problem is...I'm quite sure it's a bug. Here we take some test inputs: https://github.com/huggingface/nlp/blob/0e0ef12c14d2175e0b0bd7d8aa814b09e2cd7e1f/src/nlp/arrow_dataset.py#L472\r\n\r\nIt might be that in the test inputs, a `Sequence` type value is an emtpy list. So in my case I have `ds[0][\"entity_pages'][\"wiki_context\"] = []`. => this leads to an `arrow_schema` equal to `null` for `[\"entity_pages'][\"wiki_context\"]` => see line: https://github.com/huggingface/nlp/blob/0e0ef12c14d2175e0b0bd7d8aa814b09e2cd7e1f/src/nlp/arrow_dataset.py#L501 instead of list of string which it should for other examples. \r\n\r\nGuess it's an edge case, but it can happen.",
"Good point, I think the schema should be infered at the writing stage where we have a `writer_batch_size` number of examples (typically 10k) so it's even less likely to run into this scenario."
] | 2020-05-28T14:38:14 | 2020-07-23T10:15:16 | 2020-07-23T10:15:16 | CONTRIBUTOR | null | null | null | Running the following code
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, load_from_cache_file=False)
```
triggers an `ArrowInvalid: Could not convert TagMe with type str: converting to null type` error.
On the other hand if we remove a certain column of `trivia_qa` which seems responsible for the bug, it works:
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, remove_columns=["entity_pages"], load_from_cache_file=False)
```
Seems quite hard to debug what's going on here... @lhoestq @thomwolf - do you have a good first guess at what the problem could be?
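For reference, pyarrow's schema inference on an empty list can be checked in isolation (a minimal sketch, not necessarily the cause here):
```python
import pyarrow as pa

# an empty list is inferred as a null-typed list...
print(pa.array([[]]).type)  # list<item: null>

# ...and real string values cannot be converted to a null type afterwards
pa.array(["TagMe"], type=pa.null())  # raises ArrowInvalid
```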
**Note** BTW: I think this could be a good test to check that the datasets work correctly: Take a tiny portion of the dataset and check that it can be written correctly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/211/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/211/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/210/comments | https://api.github.com/repos/huggingface/datasets/issues/210/events | https://github.com/huggingface/datasets/pull/210 | 626,504,243 | MDExOlB1bGxSZXF1ZXN0NDI0NDgyNDgz | 210 | fix xnli metric kwargs description | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-05-28T13:21:44 | 2020-05-28T13:22:11 | 2020-05-28T13:22:10 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/210.diff",
"html_url": "https://github.com/huggingface/datasets/pull/210",
"merged_at": "2020-05-28T13:22:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/210.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/210"
} | The text was wrong as noticed in #202 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/210/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/210/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/209/comments | https://api.github.com/repos/huggingface/datasets/issues/209/events | https://github.com/huggingface/datasets/pull/209 | 626,405,849 | MDExOlB1bGxSZXF1ZXN0NDI0NDAwOTc4 | 209 | Add a Google Drive exception for small files | {
"avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4",
"events_url": "https://api.github.com/users/airKlizz/events{/privacy}",
"followers_url": "https://api.github.com/users/airKlizz/followers",
"following_url": "https://api.github.com/users/airKlizz/following{/other_user}",
"gists_url": "https://api.github.com/users/airKlizz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/airKlizz",
"id": 25703835,
"login": "airKlizz",
"node_id": "MDQ6VXNlcjI1NzAzODM1",
"organizations_url": "https://api.github.com/users/airKlizz/orgs",
"received_events_url": "https://api.github.com/users/airKlizz/received_events",
"repos_url": "https://api.github.com/users/airKlizz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/airKlizz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/airKlizz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/airKlizz"
} | [] | closed | false | null | [] | null | [
"Can you run the style formatting tools to pass the code quality test?\r\n\r\nYou can find all the details in CONTRIBUTING.md: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp",
"Nice ! ",
"``make style`` done! Thanks for the approvals."
] | 2020-05-28T10:40:17 | 2020-05-28T15:15:04 | 2020-05-28T15:15:04 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/209.diff",
"html_url": "https://github.com/huggingface/datasets/pull/209",
"merged_at": "2020-05-28T15:15:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/209.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/209"
} | I tried to use the ``nlp`` library to load personal datasets. I mainly copy-pasted the code of the ``multi-news`` dataset because my files are stored on Google Drive.
One of my datasets is small (< 25 MB), so it can be verified by Drive without asking the user for authorization. This makes the download start directly.
Currently ``nlp`` raises an error: ``ConnectionError: Couldn't reach https://drive.google.com/uc?export=download&id=1DGnbUY9zwiThTdgUvVTSAvSVHoloCgun`` even though the url is working. So I just added a new exception, as you already did for ``firebasestorage.googleapis.com``:
```
elif (response.status_code == 400 and "firebasestorage.googleapis.com" in url) or (response.status_code == 405 and "drive.google.com" in url):
```
I made an example of the error that you can run on [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ae_JJ9uvUt-9GBh0uGZhjbF5aXkl-BPv?usp=sharing)
I avoided the error by adding an exception, but maybe there is a more proper way to do it.
Many thanks :hugs:
Best, | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/209/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/209/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/208/comments | https://api.github.com/repos/huggingface/datasets/issues/208/events | https://github.com/huggingface/datasets/pull/208 | 626,398,519 | MDExOlB1bGxSZXF1ZXN0NDI0Mzk0ODIx | 208 | [Dummy data] insert config name instead of config | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [] | 2020-05-28T10:28:19 | 2020-05-28T12:48:01 | 2020-05-28T12:48:00 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/208.diff",
"html_url": "https://github.com/huggingface/datasets/pull/208",
"merged_at": "2020-05-28T12:48:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/208.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/208"
} | Thanks @yjernite for letting me know. In the dummy data command, the config name should be passed to the dataset builder and not the config itself.
Also, @lhoestq - fixed a small import bug introduced by the beam command, I think. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/208/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/208/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/207/comments | https://api.github.com/repos/huggingface/datasets/issues/207/events | https://github.com/huggingface/datasets/issues/207 | 625,932,200 | MDU6SXNzdWU2MjU5MzIyMDA= | 207 | Remove test set from NLP viewer | {
"avatar_url": "https://avatars.githubusercontent.com/u/748399?v=4",
"events_url": "https://api.github.com/users/chrisdonahue/events{/privacy}",
"followers_url": "https://api.github.com/users/chrisdonahue/followers",
"following_url": "https://api.github.com/users/chrisdonahue/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisdonahue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chrisdonahue",
"id": 748399,
"login": "chrisdonahue",
"node_id": "MDQ6VXNlcjc0ODM5OQ==",
"organizations_url": "https://api.github.com/users/chrisdonahue/orgs",
"received_events_url": "https://api.github.com/users/chrisdonahue/received_events",
"repos_url": "https://api.github.com/users/chrisdonahue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chrisdonahue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisdonahue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chrisdonahue"
} | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | [] | null | [
"~is the viewer also open source?~\r\n[is a streamlit app!](https://docs.streamlit.io/en/latest/getting_started.html)",
"Appears that [two thirds of those polled on Twitter](https://twitter.com/srush_nlp/status/1265734497632477185) are in favor of _some_ mechanism for averting eyeballs from the test data.",
"We do no longer use datasets-viewer."
] | 2020-05-27T18:32:07 | 2022-02-10T13:17:45 | 2022-02-10T13:17:45 | NONE | null | null | null | While the new [NLP viewer](https://huggingface.co/nlp/viewer/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. Newcomers to the field might not be aware of best practices, and small things like this can help increase awareness. | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/207/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/207/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/206/comments | https://api.github.com/repos/huggingface/datasets/issues/206/events | https://github.com/huggingface/datasets/issues/206 | 625,842,989 | MDU6SXNzdWU2MjU4NDI5ODk= | 206 | [Question] Combine 2 datasets which have the same columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4",
"events_url": "https://api.github.com/users/airKlizz/events{/privacy}",
"followers_url": "https://api.github.com/users/airKlizz/followers",
"following_url": "https://api.github.com/users/airKlizz/following{/other_user}",
"gists_url": "https://api.github.com/users/airKlizz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/airKlizz",
"id": 25703835,
"login": "airKlizz",
"node_id": "MDQ6VXNlcjI1NzAzODM1",
"organizations_url": "https://api.github.com/users/airKlizz/orgs",
"received_events_url": "https://api.github.com/users/airKlizz/received_events",
"repos_url": "https://api.github.com/users/airKlizz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/airKlizz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/airKlizz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/airKlizz"
} | [] | closed | false | null | [] | null | [
"We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.",
"Ok great! I will look at it. Thanks"
] | 2020-05-27T16:25:52 | 2020-06-10T09:11:14 | 2020-06-10T09:11:14 | CONTRIBUTOR | null | null | null | Hi,
I am using ``nlp`` to load personal datasets. I created summarization datasets in multiple languages based on wikinews. I have one dataset for English and one for German (French is getting ready as well). I want to keep these datasets independent because they need different pre-processing (different task-specific prefixes for T5: *summarize:* for English and *zusammenfassen:* for German).
My issue is that I want to train T5 on the combined English and German datasets to see if it improves results. So I would like to combine the 2 datasets (which have the same columns) into one and train T5 on it. Is there a proper way to do it? I assume it can be done by concatenating all the examples of each dataset, but maybe you have a better solution.
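Something like the following is what I have in mind (a sketch: the dataset names and the `document` column are placeholders, and the `concatenate_datasets` helper is an assumption on my side):
```python
import nlp

# hypothetical personal datasets sharing the same columns
en = nlp.load_dataset("wikinews_summary_en", split="train")
de = nlp.load_dataset("wikinews_summary_de", split="train")

# language-specific pre-processing before merging
en = en.map(lambda x: {"document": "summarize: " + x["document"]})
de = de.map(lambda x: {"document": "zusammenfassen: " + x["document"]})

combined = nlp.concatenate_datasets([en, de])
```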
Hoping this is clear enough,
Thanks a lot 😊
Best | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/206/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/205/comments | https://api.github.com/repos/huggingface/datasets/issues/205/events | https://github.com/huggingface/datasets/pull/205 | 625,839,335 | MDExOlB1bGxSZXF1ZXN0NDIzOTY2ODE1 | 205 | Better arrow dataset iter | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-05-27T16:20:21 | 2020-05-27T16:39:58 | 2020-05-27T16:39:56 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/205.diff",
"html_url": "https://github.com/huggingface/datasets/pull/205",
"merged_at": "2020-05-27T16:39:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/205.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/205"
} | I tried to play around with `tf.data.Dataset.from_generator` and I found out that the `__iter__` that we have for `nlp.arrow_dataset.Dataset` ignores the format that has been set (torch or tensorflow).
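The kind of usage I am after looks roughly like this (a sketch; the dataset, columns and output types are just for illustration):
```python
import tensorflow as tf
import nlp

ds = nlp.load_dataset("glue", "cola", split="train")
ds.set_format("tensorflow", columns=["sentence", "label"])

# lazily pull formatted examples through a generator instead of materializing everything
tf_ds = tf.data.Dataset.from_generator(
    lambda: iter(ds),
    output_types={"sentence": tf.string, "label": tf.int64},
)
for batch in tf_ds.batch(8).take(1):
    print(batch["label"])
```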
With these changes I should be able to come up with a `tf.data.Dataset` that uses lazy loading, as asked in #193. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/205/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/205/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/204/comments | https://api.github.com/repos/huggingface/datasets/issues/204/events | https://github.com/huggingface/datasets/pull/204 | 625,655,849 | MDExOlB1bGxSZXF1ZXN0NDIzODE5MTQw | 204 | Add Dataflow support + Wikipedia + Wiki40b | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-05-27T12:32:49 | 2020-05-28T08:10:35 | 2020-05-28T08:10:34 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/204.diff",
"html_url": "https://github.com/huggingface/datasets/pull/204",
"merged_at": "2020-05-28T08:10:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/204.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/204"
} | # Add Dataflow support + Wikipedia + Wiki40b
## Support datasets processing with Apache Beam
Some datasets are too big to be processed on a single machine, for example wikipedia, wiki40b, etc. Apache Beam allows processing datasets on many execution engines like Dataflow, Spark, Flink, etc.
To process such datasets with Beam, I added a command to run beam pipelines: `nlp-cli run_beam path/to/dataset/script`. Then I used it to process the English and French wikipedia, and the English part of wiki40b.
The processed arrow files are on GCS and are the result of a Dataflow job.
I added a markdown documentation file in `docs` that explains how to use it properly.
## Load already processed datasets
Now that we have those datasets already processed, I made it possible to load them directly. You can do `load_dataset('wikipedia', '20200501.en')` and it will download the processed files from the Hugging Face GCS directly into the user's cache, ready to use!
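For example (a sketch; the `title` field is assumed from the wikipedia config):
```python
import nlp

# downloads the already-processed arrow files from the Hugging Face GCS bucket,
# no local Beam/Dataflow run needed
wiki = nlp.load_dataset("wikipedia", "20200501.en", split="train")
print(len(wiki))
print(wiki[0]["title"])
```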
The Wikipedia dataset was already requested in #187, and this PR should soon make it possible to add Natural Questions as asked in #129.
## Other changes in the code
To make things work, I had to do a few adjustments:
- add a `ship_files_with_pipeline` method to the `DownloadManager`. This is because beam pipelines can be run in the cloud and therefore need to have access to your downloaded data. I used it in the wikipedia script:
```python
if not pipeline.is_local():
downloaded_files = dl_manager.ship_files_with_pipeline(downloaded_files, pipeline)
```
- add parquet to arrow conversion. This is because the outputs of beam pipelines are parquet files, so we need to convert them to arrow and have the arrow files on GCS
- add a test script with a dummy beam dataset
- minor adjustments to allow read/write operations on remote files using `apache_beam.io.filesystems.FileSystems` if we want (it can be connected to gcp, s3, hdfs, etc...) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/204/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/204/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/203/comments | https://api.github.com/repos/huggingface/datasets/issues/203/events | https://github.com/huggingface/datasets/pull/203 | 625,515,488 | MDExOlB1bGxSZXF1ZXN0NDIzNzEyMTQ3 | 203 | Raise an error if no config name for datasets like glue | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2020-05-27T09:03:58 | 2020-05-27T16:40:39 | 2020-05-27T16:40:38 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/203.diff",
"html_url": "https://github.com/huggingface/datasets/pull/203",
"merged_at": "2020-05-27T16:40:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/203.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/203"
} | Some datasets like glue (see #130) and scientific_papers (see #197) have many configs.
For example, glue has cola, sst2, mrpc, etc.
Currently, if a user does `load_dataset('glue')`, then cola is loaded by default, which can be confusing. Instead, we should raise an error to let the user know that they have to pick one of the available configs (as proposed in #152). For glue, for example, the message looks like:
```
ValueError: Config name is missing.
Please pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']
Example of usage:
`load_dataset('glue', 'cola')`
```
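The check itself is roughly along these lines (a simplified sketch, not the exact code; `builder_cls`, `name` and `path` are assumed variable names):
```python
configs = builder_cls.BUILDER_CONFIGS
if name is None and len(configs) >= 2:
    raise ValueError(
        "Config name is missing.\n"
        "Please pick one among the available configs: {}\n"
        "Example of usage:\n\t`load_dataset('{}', '{}')`".format(
            [c.name for c in configs], path, configs[0].name
        )
    )
```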
The error is raised if the config name is missing and if there are >=2 possible configs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/203/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/203/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/202/comments | https://api.github.com/repos/huggingface/datasets/issues/202/events | https://github.com/huggingface/datasets/issues/202 | 625,493,983 | MDU6SXNzdWU2MjU0OTM5ODM= | 202 | Mistaken `_KWARGS_DESCRIPTION` for XNLI metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/33572125?v=4",
"events_url": "https://api.github.com/users/phiyodr/events{/privacy}",
"followers_url": "https://api.github.com/users/phiyodr/followers",
"following_url": "https://api.github.com/users/phiyodr/following{/other_user}",
"gists_url": "https://api.github.com/users/phiyodr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/phiyodr",
"id": 33572125,
"login": "phiyodr",
"node_id": "MDQ6VXNlcjMzNTcyMTI1",
"organizations_url": "https://api.github.com/users/phiyodr/orgs",
"received_events_url": "https://api.github.com/users/phiyodr/received_events",
"repos_url": "https://api.github.com/users/phiyodr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/phiyodr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phiyodr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/phiyodr"
} | [] | closed | false | null | [] | null | [
"Indeed, good catch ! thanks\r\nFixing it right now"
] | 2020-05-27T08:34:42 | 2020-05-28T13:22:36 | 2020-05-28T13:22:36 | NONE | null | null | null | Hi!
The [`_KWARGS_DESCRIPTION`](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/xnli/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/bleu/bleu.py#L58) metric:
```
_KWARGS_DESCRIPTION = """
Computes XNLI score which is just simple accuracy.
Args:
predictions: list of translations to score.
Each translation should be tokenized into a list of tokens.
references: list of lists of references for each translation.
Each reference should be tokenized into a list of tokens.
max_order: Maximum n-gram order to use when computing BLEU score.
smooth: Whether or not to apply Lin et al. 2004 smoothing.
Returns:
'bleu': bleu score,
'precisions': geometric mean of n-gram precisions,
'brevity_penalty': brevity penalty,
'length_ratio': ratio of lengths,
'translation_length': translation_length,
'reference_length': reference_length
"""
```
But it should be something like:
```
_KWARGS_DESCRIPTION = """
Computes XNLI score which is just simple accuracy.
Args:
predictions: Predicted labels.
references: Ground truth labels.
Returns:
    'accuracy': accuracy
"""
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/202/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/201/comments | https://api.github.com/repos/huggingface/datasets/issues/201/events | https://github.com/huggingface/datasets/pull/201 | 625,235,430 | MDExOlB1bGxSZXF1ZXN0NDIzNDkzNTMw | 201 | Fix typo in README | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
} | [] | closed | false | null | [] | null | [
"Amazing, @LysandreJik!",
"Really did my best!"
] | 2020-05-26T22:18:21 | 2020-05-26T23:40:31 | 2020-05-26T23:00:56 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/201",
"merged_at": "2020-05-26T23:00:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/201"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/201/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/200/comments | https://api.github.com/repos/huggingface/datasets/issues/200/events | https://github.com/huggingface/datasets/pull/200 | 625,226,638 | MDExOlB1bGxSZXF1ZXN0NDIzNDg2NTM0 | 200 | [ArrowWriter] Set schema at first write example | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Good point!\r\n\r\nI guess we could add this to `write_batch` as well (before using `self._schema` in the first line of this method)?"
] | 2020-05-26T21:59:48 | 2020-05-27T09:07:54 | 2020-05-27T09:07:53 | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/200.diff",
"html_url": "https://github.com/huggingface/datasets/pull/200",
"merged_at": "2020-05-27T09:07:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/200.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/200"
} | Right now if the schema was not specified when instantiating `ArrowWriter`, then it could be set with the first `write_table` for example (it calls `self._build_writer()` to do so).
I noticed that it was not done if the first example is added via `.write`, so I added it for coherence. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/200/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/200/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/199/comments | https://api.github.com/repos/huggingface/datasets/issues/199/events | https://github.com/huggingface/datasets/pull/199 | 625,217,440 | MDExOlB1bGxSZXF1ZXN0NDIzNDc4ODIx | 199 | Fix GermEval 2014 dataset infos | {
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stefan-it",
"id": 20651387,
"login": "stefan-it",
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stefan-it"
} | [] | closed | false | null | [] | null | [
"Hopefully. this also fixes the dataset view on https://huggingface.co/nlp/viewer/ :)",
"Oh good catch ! This should fix it indeed"
] | 2020-05-26T21:41:44 | 2020-05-26T21:50:24 | 2020-05-26T21:50:24 | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/199",
"merged_at": "2020-05-26T21:50:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/199"
} | Hi,
this PR just removes the `dataset_info.json` file and adds a newly generated `dataset_infos.json` file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/199/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/198/comments | https://api.github.com/repos/huggingface/datasets/issues/198/events | https://github.com/huggingface/datasets/issues/198 | 625,200,627 | MDU6SXNzdWU2MjUyMDA2Mjc= | 198 | Index outside of table length | {
"avatar_url": "https://avatars.githubusercontent.com/u/305717?v=4",
"events_url": "https://api.github.com/users/casajarm/events{/privacy}",
"followers_url": "https://api.github.com/users/casajarm/followers",
"following_url": "https://api.github.com/users/casajarm/following{/other_user}",
"gists_url": "https://api.github.com/users/casajarm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/casajarm",
"id": 305717,
"login": "casajarm",
"node_id": "MDQ6VXNlcjMwNTcxNw==",
"organizations_url": "https://api.github.com/users/casajarm/orgs",
"received_events_url": "https://api.github.com/users/casajarm/received_events",
"repos_url": "https://api.github.com/users/casajarm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/casajarm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casajarm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/casajarm"
} | [] | closed | false | null | [] | null | [
"Sounds like something related to the nlp viewer @srush ",
"Fixed. "
] | 2020-05-26T21:09:40 | 2020-05-26T22:43:49 | 2020-05-26T22:43:49 | NONE | null | null | null | The offset input box warns of numbers larger than a limit (like 2000) but then the errors start at a smaller value than that limit (like 1955).
> ValueError: Index (2000) outside of table length (2000).
> Traceback:
> File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
> exec(code, module.__dict__)
> File "/home/sasha/nlp_viewer/run.py", line 116, in <module>
> v = d[item][k]
> File "/home/sasha/.local/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__
> output_all_columns=self._output_all_columns,
> File "/home/sasha/.local/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 290, in _getitem
> raise ValueError(f"Index ({key}) outside of table length ({self._data.num_rows}).") | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/198/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/198/timeline | null | completed | false |