url stringlengths 58-61 | repository_url stringclasses 1 value | labels_url stringlengths 72-75 | comments_url stringlengths 67-70 | events_url stringlengths 65-68 | html_url stringlengths 46-51 | id int64 599M-1.11B | node_id stringlengths 18-32 | number int64 1-3.59k | title stringlengths 1-276 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments sequence | created_at int64 1,587B-1,642B | updated_at int64 1,587B-1,642B | closed_at int64 1,587B-1,642B ⌀ | author_association stringclasses 3 values | active_lock_reason null | body stringlengths 0-228k ⌀ | reactions dict | timeline_url stringlengths 67-70 | performed_via_github_app null | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/448/comments | https://api.github.com/repos/huggingface/datasets/issues/448/events | https://github.com/huggingface/datasets/pull/448 | 666,893,443 | MDExOlB1bGxSZXF1ZXN0NDU3NjYwMDU2 | 448 | add aws load metric test | {
"login": "idoh",
"id": 5303103,
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idoh",
"html_url": "https://github.com/idoh",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"repos_url": "https://api.github.com/users/idoh/repos",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Could you run `make style` to fix the code_quality fail ?\r\nYou'll need `black` and `isort` that you can install by doing `pip install -e .[quality]`",
"Thanks @lhoestq\r\nI fixed the styling",
"Thank you :)"
] | 1,595,926,222,000 | 1,595,948,547,000 | 1,595,948,547,000 | CONTRIBUTOR | null | Following issue #445
Added a test to recognize import errors of all metrics | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/448/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/448",
"html_url": "https://github.com/huggingface/datasets/pull/448",
"diff_url": "https://github.com/huggingface/datasets/pull/448.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/448.patch",
"merged_at": 1595948546000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/447/comments | https://api.github.com/repos/huggingface/datasets/issues/447/events | https://github.com/huggingface/datasets/pull/447 | 666,842,115 | MDExOlB1bGxSZXF1ZXN0NDU3NjE2NDA0 | 447 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | {
"login": "idoh",
"id": 5303103,
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idoh",
"html_url": "https://github.com/idoh",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"repos_url": "https://api.github.com/users/idoh/repos",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,922,070,000 | 1,595,941,081,000 | 1,595,940,725,000 | CONTRIBUTOR | null | Fixed the path to `DEFAULT_TOKENIZER`
#445 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/447/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/447",
"html_url": "https://github.com/huggingface/datasets/pull/447",
"diff_url": "https://github.com/huggingface/datasets/pull/447.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/447.patch",
"merged_at": 1595940725000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/446/comments | https://api.github.com/repos/huggingface/datasets/issues/446/events | https://github.com/huggingface/datasets/pull/446 | 666,837,351 | MDExOlB1bGxSZXF1ZXN0NDU3NjEyNTg5 | 446 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | {
"login": "idoh",
"id": 5303103,
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idoh",
"html_url": "https://github.com/idoh",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"repos_url": "https://api.github.com/users/idoh/repos",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,921,567,000 | 1,595,921,686,000 | 1,595,921,639,000 | CONTRIBUTOR | null | Fixed the path to `DEFAULT_TOKENIZER`
#445 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/446/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/446",
"html_url": "https://github.com/huggingface/datasets/pull/446",
"diff_url": "https://github.com/huggingface/datasets/pull/446.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/446.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/445/comments | https://api.github.com/repos/huggingface/datasets/issues/445/events | https://github.com/huggingface/datasets/issues/445 | 666,836,658 | MDU6SXNzdWU2NjY4MzY2NTg= | 445 | DEFAULT_TOKENIZER import error in sacrebleu | {
"login": "idoh",
"id": 5303103,
"node_id": "MDQ6VXNlcjUzMDMxMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/idoh",
"html_url": "https://github.com/idoh",
"followers_url": "https://api.github.com/users/idoh/followers",
"following_url": "https://api.github.com/users/idoh/following{/other_user}",
"gists_url": "https://api.github.com/users/idoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/idoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/idoh/subscriptions",
"organizations_url": "https://api.github.com/users/idoh/orgs",
"repos_url": "https://api.github.com/users/idoh/repos",
"events_url": "https://api.github.com/users/idoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/idoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This issue was resolved by #447 "
] | 1,595,921,490,000 | 1,595,941,136,000 | 1,595,941,136,000 | CONTRIBUTOR | null | Latest Version 0.3.0
When loading the metric "sacrebleu" there is an import error due to the wrong path
![image](https://user-images.githubusercontent.com/5303103/88633063-2c5e5f00-d0bd-11ea-8ca8-4704dc975433.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/445/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/444 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/444/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/444/comments | https://api.github.com/repos/huggingface/datasets/issues/444/events | https://github.com/huggingface/datasets/issues/444 | 666,280,842 | MDU6SXNzdWU2NjYyODA4NDI= | 444 | Keep loading old file even I specify a new file in load_dataset | {
"login": "joshhu",
"id": 10594453,
"node_id": "MDQ6VXNlcjEwNTk0NDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/10594453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshhu",
"html_url": "https://github.com/joshhu",
"followers_url": "https://api.github.com/users/joshhu/followers",
"following_url": "https://api.github.com/users/joshhu/following{/other_user}",
"gists_url": "https://api.github.com/users/joshhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshhu/subscriptions",
"organizations_url": "https://api.github.com/users/joshhu/orgs",
"repos_url": "https://api.github.com/users/joshhu/repos",
"events_url": "https://api.github.com/users/joshhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshhu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Same here !",
"This is the only fix I could come up with without touching the repo's code.\r\n```python\r\nfrom nlp.builder import FORCE_REDOWNLOAD\r\ndataset = load_dataset('csv', data_file='./a.csv', download_mode=FORCE_REDOWNLOAD, version='0.0.1')\r\n```\r\nYou'll have to change the version each time you want to load a different csv file.\r\nIf you're willing to add a ```print```, you can go to ```nlp.load``` and add ```print(builder_instance.cache_dir)``` right before the ```return ds``` in the ```load_dataset``` method. It'll print the cache folder, and you'll just have to erase it (and then you won't need the change here above)."
] | 1,595,855,286,000 | 1,596,031,042,000 | 1,596,031,042,000 | NONE | null | I used load a file called 'a.csv' by
```
dataset = load_dataset('csv', data_file='./a.csv')
```
And after a while, I tried to load another csv called 'b.csv'
```
dataset = load_dataset('csv', data_file='./b.csv')
```
However, the new dataset seems to remain the old 'a.csv' and not loading new csv file.
Even worse, after I load a.csv, the load_dataset function keeps loading the 'a.csv' afterward.
Is this a cache problem?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/444/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/444/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/443/comments | https://api.github.com/repos/huggingface/datasets/issues/443/events | https://github.com/huggingface/datasets/issues/443 | 666,246,716 | MDU6SXNzdWU2NjYyNDY3MTY= | 443 | Cannot unpickle saved .pt dataset with torch.save()/load() | {
"login": "vegarab",
"id": 24683907,
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegarab",
"html_url": "https://github.com/vegarab",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"repos_url": "https://api.github.com/users/vegarab/repos",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This seems to be fixed in a non-released version. \r\n\r\nInstalling nlp from source\r\n```\r\ngit clone https://github.com/huggingface/nlp\r\ncd nlp\r\npip install .\r\n```\r\nsolves the issue. "
] | 1,595,852,017,000 | 1,595,855,111,000 | 1,595,855,111,000 | CONTRIBUTOR | null | Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling:
```python
>>> import torch
>>> import nlp
>>> squad = nlp.load_dataset("squad.py", split="train")
>>> squad
Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype='string', id=None)}, num_rows: 87599)
>>> squad = squad.map(create_features, batched=True)
>>> squad.set_format(type="torch", columns=["source_ids", "target_ids", "attention_mask"])
>>> torch.save(squad, "squad.pt")
>>> squad_pt = torch.load("squad.pt")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 773, in _legacy_load
result = unpickler.load()
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/splits.py", line 493, in __setitem__
raise ValueError("Cannot add elem. Use .add() instead.")
ValueError: Cannot add elem. Use .add() instead.
```
where `create_features` is a function that tokenizes the data using `batch_encode_plus` and returns a Dict with `input_ids`, `target_ids` and `attention_mask`.
```python
def create_features(batch):
source_text_encoding = tokenizer.batch_encode_plus(
batch["source_text"],
max_length=max_source_length,
pad_to_max_length=True,
truncation=True)
target_text_encoding = tokenizer.batch_encode_plus(
batch["target_text"],
max_length=max_target_length,
pad_to_max_length=True,
truncation=True)
features = {
"source_ids": source_text_encoding["input_ids"],
"target_ids": target_text_encoding["input_ids"],
"attention_mask": source_text_encoding["attention_mask"]
}
return features
```
I found a similar issue in [issue 5267 in the huggingface/transformers repo](https://github.com/huggingface/transformers/issues/5267) which was solved by downgrading to `nlp==0.2.0`. That did not solve this problem, however. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/443/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/442/comments | https://api.github.com/repos/huggingface/datasets/issues/442/events | https://github.com/huggingface/datasets/issues/442 | 666,201,810 | MDU6SXNzdWU2NjYyMDE4MTA= | 442 | [Suggestion] Glue Diagnostic Data with Labels | {
"login": "ggbetz",
"id": 3662782,
"node_id": "MDQ6VXNlcjM2NjI3ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3662782?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggbetz",
"html_url": "https://github.com/ggbetz",
"followers_url": "https://api.github.com/users/ggbetz/followers",
"following_url": "https://api.github.com/users/ggbetz/following{/other_user}",
"gists_url": "https://api.github.com/users/ggbetz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ggbetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ggbetz/subscriptions",
"organizations_url": "https://api.github.com/users/ggbetz/orgs",
"repos_url": "https://api.github.com/users/ggbetz/repos",
"events_url": "https://api.github.com/users/ggbetz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ggbetz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | open | false | null | [] | null | [] | 1,595,847,598,000 | 1,598,282,000,000 | null | NONE | null | Hello! First of all, thanks for setting up this useful project!
I've just realised you provide the the [Glue Diagnostics Data](https://huggingface.co/nlp/viewer/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you've only a test set.
Yet, the data with labels is available, too (see also [here](https://gluebenchmark.com/diagnostics#introduction)):
https://www.dropbox.com/s/ju7d95ifb072q9f/diagnostic-full.tsv?dl=1
Have you considered incorporating it? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/442/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/442/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/441/comments | https://api.github.com/repos/huggingface/datasets/issues/441/events | https://github.com/huggingface/datasets/pull/441 | 666,148,413 | MDExOlB1bGxSZXF1ZXN0NDU3MDQyMjY3 | 441 | Add features parameter in load dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This one is ready for review now",
"I changed to using features only, instead of info.\r\nLet mw know if it sounds good to you now @thomwolf "
] | 1,595,843,401,000 | 1,596,113,477,000 | 1,596,113,476,000 | MEMBER | null | Added `features` argument in `nlp.load_dataset`.
If they don't match the data type, it raises a `ValueError`.
It's a draft PR because #440 needs to be merged first. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/441/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/441",
"html_url": "https://github.com/huggingface/datasets/pull/441",
"diff_url": "https://github.com/huggingface/datasets/pull/441.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/441.patch",
"merged_at": 1596113476000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/440/comments | https://api.github.com/repos/huggingface/datasets/issues/440/events | https://github.com/huggingface/datasets/pull/440 | 666,116,823 | MDExOlB1bGxSZXF1ZXN0NDU3MDE2MjQy | 440 | Fix user specified features in map | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,840,666,000 | 1,595,928,323,000 | 1,595,928,322,000 | MEMBER | null | `.map` didn't keep the user specified features because of an issue in the writer.
The writer used to overwrite the user specified features with inferred features.
I also added tests to make sure it doesn't happen again. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/440/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/440",
"html_url": "https://github.com/huggingface/datasets/pull/440",
"diff_url": "https://github.com/huggingface/datasets/pull/440.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/440.patch",
"merged_at": 1595928322000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/439/comments | https://api.github.com/repos/huggingface/datasets/issues/439/events | https://github.com/huggingface/datasets/issues/439 | 665,964,673 | MDU6SXNzdWU2NjU5NjQ2NzM= | 439 | Issues: Adding a FAISS or Elastic Search index to a Dataset | {
"login": "nsankar",
"id": 431890,
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsankar",
"html_url": "https://github.com/nsankar",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"repos_url": "https://api.github.com/users/nsankar/repos",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`DPRContextEncoder` and `DPRContextEncoderTokenizer` will be available in the next release of `transformers`.\r\n\r\nRight now you can experiment with it by installing `transformers` from the master branch.\r\nYou can also check the docs of DPR [here](https://huggingface.co/transformers/master/model_doc/dpr.html).\r\n\r\nMoreover all the indexing features will also be available in the next release of `nlp`.",
"@lhoestq Thanks for the info ",
"@lhoestq I tried installing transformer from the master branch. Python imports for DPR again didnt' work. Anyways, Looking forward to trying it in the next release of nlp ",
"@nsankar have you tried with the latest version of the library?",
"@yjernite it worked. Thanks"
] | 1,595,823,917,000 | 1,603,849,584,000 | 1,603,849,584,000 | NONE | null | It seems the DPRContextEncoder, DPRContextEncoderTokenizer cited[ in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) is not implemented ? It didnot work with the standard nlp installation . Also, I couldn't find or use it with the latest nlp install from github in Colab. Is there any dependency on the latest PyArrow 1.0.0 ? Is it yet to be made generally available ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/439/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/438/comments | https://api.github.com/repos/huggingface/datasets/issues/438/events | https://github.com/huggingface/datasets/issues/438 | 665,865,490 | MDU6SXNzdWU2NjU4NjU0OTA= | 438 | New Datasets: IWSLT15+, ITTB | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Thanks Sam, we now have a very detailed tutorial and template on how to add a new dataset to the library. It typically take 1-2 hours to add one. Do you want to give it a try ?\r\nThe tutorial on writing a new dataset loading script is here: https://huggingface.co/nlp/add_dataset.html\r\nAnd the part on how to share a new dataset is here: https://huggingface.co/nlp/share_dataset.html",
"Hi @sshleifer, I'm trying to add IWSLT using the link you provided but the download urls are not working. Only `[en, de]` pair is working. For others language pairs it throws a `404` error.\r\n\r\n"
] | 1,595,799,784,000 | 1,598,281,935,000 | null | CONTRIBUTOR | null | **Links:**
[iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html)
Don't know if that link is up to date.
[ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/)
**Motivation**: replicate mbart finetuning results (table below)
![image](https://user-images.githubusercontent.com/6045025/88490093-0c1c8c00-cf67-11ea-960d-8dcaad2aa8eb.png)
For future readers, we already have the following language pairs in the wmt namespaces:
```
wmt14: ['cs-en', 'de-en', 'fr-en', 'hi-en', 'ru-en']
wmt15: ['cs-en', 'de-en', 'fi-en', 'fr-en', 'ru-en']
wmt16: ['cs-en', 'de-en', 'fi-en', 'ro-en', 'ru-en', 'tr-en']
wmt17: ['cs-en', 'de-en', 'fi-en', 'lv-en', 'ru-en', 'tr-en', 'zh-en']
wmt18: ['cs-en', 'de-en', 'et-en', 'fi-en', 'kk-en', 'ru-en', 'tr-en', 'zh-en']
wmt19: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/438/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/437/comments | https://api.github.com/repos/huggingface/datasets/issues/437/events | https://github.com/huggingface/datasets/pull/437 | 665,597,176 | MDExOlB1bGxSZXF1ZXN0NDU2NjIzNjc3 | 437 | Fix XTREME PAN-X loading | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"There is an interesting design question here (cc @lhoestq).\r\n\r\nI guess the labels form a closed set so we could also use a [nlp.ClassLabel](https://huggingface.co/nlp/package_reference/main_classes.html#nlp.ClassLabel) instead of a string. The differences will be mainly that:\r\n- the labels are stored as integers and thus ready for training a model\r\n- the string to int conversion methods are handled by the `nlp.ClassLabel` feature (see the [doc](https://huggingface.co/nlp/package_reference/main_classes.html#nlp.ClassLabel) and [here](https://huggingface.co/nlp/features.html) and [here](https://huggingface.co/nlp/quicktour.html#fine-tuning-a-deep-learning-model)).\r\n\r\nIn my opinion, storing the labels as integers instead of strings makes it:\r\n- slightly less readable when accessing a dataset example (e.g. with `dataset[0]`)\r\n- force you with a specific mapping from string to integers\r\n- more clear that there is a fixed and predefined list of labels\r\n- easier to list all the labels (directly visible in the features).\r\n\r\n=> overall I'm pretty neutral about using one or the other option (`nlp.string` or `nlp.ClassLabel`).\r\n\r\nNote that we can now rather easily convert from one to the other with the map function and something like:\r\n```python\r\ndataset = dataset.map(lambda x: x, features=nlp.Features({'labels': nlp.ClassLabel(MY_LABELS_NAMES)}))\r\ndataset = dataset.map(lambda x: {'labels': dataset.features['labels'].int2str(x['labels'])}, features=nlp.Features({'labels': nlp.Value('string')}))\r\n```\r\n^^ this could probably be made even simpler (in particular for the second case)",
"I see. This is an interesting question.\r\nMaybe as the dataset doesn't provide the mapping we shouldn't force an arbitrary one, and keep them as strings ?\r\nMoreover for NER the labels are often different from a dataset to the other so it's probably good to keep strings (there is no conventional mapping).\r\nAlso as the column is called \"ner_tags\" (or \"langs\"), you can already assume that there is a fixed and predefined list of labels.",
"Yes sounds good to me.\r\nThis make me wonder if we don’t want to have a default identity function in `map` so this method could also be used to easily cast features. What do you think?",
"Yes sounds good. I also noticed that people use map with identity to write a dataset into a specified cache file."
] | 1,595,688,297,000 | 1,596,097,695,000 | 1,596,097,695,000 | CONTRIBUTOR | null | Hi 🤗
In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. This is also in agreement with the [NER example](https://github.com/huggingface/transformers/tree/master/examples/token-classification) in the transformers repo.
With the fix the output of the dataset should look as follows:
```python
>>> dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
>>> dataset['train'][0]
{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],
'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],
'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/437/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/437",
"html_url": "https://github.com/huggingface/datasets/pull/437",
"diff_url": "https://github.com/huggingface/datasets/pull/437.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/437.patch",
"merged_at": 1596097695000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/436/comments | https://api.github.com/repos/huggingface/datasets/issues/436/events | https://github.com/huggingface/datasets/issues/436 | 665,582,167 | MDU6SXNzdWU2NjU1ODIxNjc= | 436 | Google Colab - load_dataset - PyArrow exception | {
"login": "nsankar",
"id": 431890,
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsankar",
"html_url": "https://github.com/nsankar",
"followers_url": "https://api.github.com/users/nsankar/followers",
"following_url": "https://api.github.com/users/nsankar/following{/other_user}",
"gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsankar/subscriptions",
"organizations_url": "https://api.github.com/users/nsankar/orgs",
"repos_url": "https://api.github.com/users/nsankar/repos",
"events_url": "https://api.github.com/users/nsankar/events{/privacy}",
"received_events_url": "https://api.github.com/users/nsankar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Indeed, we’ll make a new PyPi release next week to solve this. Cc @lhoestq ",
"+1! this is the reason our tests are failing at [TextAttack](https://github.com/QData/TextAttack) \r\n\r\n(Though it's worth noting if we fixed the version number of pyarrow to 0.16.0 that would fix our problem too. But in this case we'll just wait for you all to update)",
"Came to raise this issue, great to see other already have and it's being fixed so soon!\r\n\r\nAs an aside, since no one wrote this already, it seems like the version check only looks at the second part of the version number making sure it is >16, but pyarrow newest version is 1.0.0 so the second past is 0!",
"> Indeed, we’ll make a new PyPi release next week to solve this. Cc @lhoestq\r\n\r\nYes definitely",
"please fix this on pypi! @lhoestq ",
"Is this issue fixed ?",
"We’ll release the new version later today. Apologies for the delay.",
"I just pushed the new version on pypi :)",
"Thanks for the update."
] | 1,595,682,320,000 | 1,597,910,898,000 | 1,597,910,898,000 | NONE | null | With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue
ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`.
The error goes only when I install version 0.16.0
i.e. !pip install pyarrow==0.16.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/436/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/436/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/435/comments | https://api.github.com/repos/huggingface/datasets/issues/435/events | https://github.com/huggingface/datasets/issues/435 | 665,507,141 | MDU6SXNzdWU2NjU1MDcxNDE= | 435 | ImportWarning for pyarrow 1.0.0 | {
"login": "HanGuo97",
"id": 18187806,
"node_id": "MDQ6VXNlcjE4MTg3ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HanGuo97",
"html_url": "https://github.com/HanGuo97",
"followers_url": "https://api.github.com/users/HanGuo97/followers",
"following_url": "https://api.github.com/users/HanGuo97/following{/other_user}",
"gists_url": "https://api.github.com/users/HanGuo97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HanGuo97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HanGuo97/subscriptions",
"organizations_url": "https://api.github.com/users/HanGuo97/orgs",
"repos_url": "https://api.github.com/users/HanGuo97/repos",
"events_url": "https://api.github.com/users/HanGuo97/events{/privacy}",
"received_events_url": "https://api.github.com/users/HanGuo97/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This was fixed in #434 \r\nWe'll do a release later this week to include this fix.\r\nThanks for reporting",
"I dont know if the fix was made but the problem is still present : \r\nInstaled with pip : NLP 0.3.0 // pyarrow 1.0.0 \r\nOS : archlinux with kernel zen 5.8.5",
"Yes it was fixed in `nlp>=0.4.0`\r\nYou can update with pip",
"Sorry, I didn't got the updated version, all is now working perfectly thanks"
] | 1,595,648,679,000 | 1,599,587,835,000 | 1,596,472,652,000 | NONE | null | The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/435/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/435/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/434/comments | https://api.github.com/repos/huggingface/datasets/issues/434/events | https://github.com/huggingface/datasets/pull/434 | 665,477,638 | MDExOlB1bGxSZXF1ZXN0NDU2NTM3Njgz | 434 | Fixed check for pyarrow | {
"login": "nadahlberg",
"id": 58701810,
"node_id": "MDQ6VXNlcjU4NzAxODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/58701810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nadahlberg",
"html_url": "https://github.com/nadahlberg",
"followers_url": "https://api.github.com/users/nadahlberg/followers",
"following_url": "https://api.github.com/users/nadahlberg/following{/other_user}",
"gists_url": "https://api.github.com/users/nadahlberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nadahlberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nadahlberg/subscriptions",
"organizations_url": "https://api.github.com/users/nadahlberg/orgs",
"repos_url": "https://api.github.com/users/nadahlberg/repos",
"events_url": "https://api.github.com/users/nadahlberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/nadahlberg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Great, thanks!"
] | 1,595,636,213,000 | 1,595,658,994,000 | 1,595,658,994,000 | CONTRIBUTOR | null | Fix check for pyarrow in __init__.py. Previously would raise an error for pyarrow >= 1.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/434/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/434",
"html_url": "https://github.com/huggingface/datasets/pull/434",
"diff_url": "https://github.com/huggingface/datasets/pull/434.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/434.patch",
"merged_at": 1595658994000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/433/comments | https://api.github.com/repos/huggingface/datasets/issues/433/events | https://github.com/huggingface/datasets/issues/433 | 665,311,025 | MDU6SXNzdWU2NjUzMTEwMjU= | 433 | How to reuse functionality of a (generic) dataset? | {
"login": "ArneBinder",
"id": 3375489,
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArneBinder",
"html_url": "https://github.com/ArneBinder",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @ArneBinder, we have a few \"generic\" datasets which are intended to load data files with a predefined format:\r\n- csv: https://github.com/huggingface/nlp/tree/master/datasets/csv\r\n- json: https://github.com/huggingface/nlp/tree/master/datasets/json\r\n- text: https://github.com/huggingface/nlp/tree/master/datasets/text\r\n\r\nYou can find more details about this way to load datasets here in the documentation: https://huggingface.co/nlp/loading_datasets.html#from-local-files\r\n\r\nMaybe your brat loading script could be shared in a similar fashion?",
"> Maybe your brat loading script could be shared in a similar fashion?\r\n\r\n@thomwolf that was also my first idea and I think I will tackle that in the next days. I separated the code and created a real abstract class `AbstractBrat` to allow to inherit from that (I've just seen that the dataset_loader loads the first non abstract class), now `Brat` is very similar in its functionality to https://github.com/huggingface/nlp/tree/master/datasets/text but inherits from `AbstractBrat`.\r\n\r\nHowever, it is still not clear to me how to add a specific dataset (as explained in https://huggingface.co/nlp/add_dataset.html) to your repo that uses this format/abstract class, i.e. re-using the `features` entry of the `DatasetInfo` object and `_generate_examples()`. Again, by doing so, the only remaining entries/functions to define would be `_DESCRIPTION`, `_CITATION`, `homepage` and `_URL` (which is all copy-paste stuff) and `_split_generators()`.\r\n \r\nIn a lack of better ideas, I tried sth like below, but of course it does not work outside `nlp` (`AbstractBrat` is currently defined in [datasets/brat.py](https://github.com/ArneBinder/nlp/blob/5e81fb8710546ee7be3353a7f02a3045e9a8351e/datasets/brat/brat.py)):\r\n```python\r\nfrom __future__ import absolute_import, division, print_function\r\n\r\nimport os\r\n\r\nimport nlp\r\n\r\nfrom datasets.brat.brat import AbstractBrat\r\n\r\n_CITATION = \"\"\"\r\n@inproceedings{lauscher2018b,\r\n title = {An argument-annotated corpus of scientific publications},\r\n booktitle = {Proceedings of the 5th Workshop on Mining Argumentation},\r\n publisher = {Association for Computational Linguistics},\r\n author = {Lauscher, Anne and Glava\\v{s}, Goran and Ponzetto, Simone Paolo},\r\n address = {Brussels, Belgium},\r\n year = {2018},\r\n pages = {40–46}\r\n}\r\n\"\"\"\r\n\r\n_DESCRIPTION = \"\"\"\\\r\nThis dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing \r\nfine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific \r\npublications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of \r\nscientific writing.\r\n\"\"\"\r\n\r\n_URL = \"http://data.dws.informatik.uni-mannheim.de/sci-arg/compiled_corpus.zip\"\r\n\r\n\r\nclass Sciarg(AbstractBrat):\r\n\r\n VERSION = nlp.Version(\"1.0.0\")\r\n\r\n def _info(self):\r\n\r\n brat_features = super()._info().features\r\n return nlp.DatasetInfo(\r\n # This is the description that will appear on the datasets page.\r\n description=_DESCRIPTION,\r\n # nlp.features.FeatureConnectors\r\n features=brat_features,\r\n # If there's a common (input, target) tuple from the features,\r\n # specify them here. 
They'll be used if as_supervised=True in\r\n # builder.as_dataset.\r\n #supervised_keys=None,\r\n # Homepage of the dataset for documentation\r\n homepage=\"https://github.com/anlausch/ArguminSci\",\r\n citation=_CITATION,\r\n )\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n # TODO: Downloads the data and defines the splits\r\n # dl_manager is a nlp.download.DownloadManager that can be used to\r\n # download and extract URLs\r\n dl_dir = dl_manager.download_and_extract(_URL)\r\n data_dir = os.path.join(dl_dir, \"compiled_corpus\")\r\n print(f'data_dir: {data_dir}')\r\n return [\r\n nlp.SplitGenerator(\r\n name=nlp.Split.TRAIN,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"directory\": data_dir,\r\n },\r\n ),\r\n ]\r\n``` \r\n\r\nNevertheless, many thanks for tackling the dataset accessibility problem with this great library!",
"As temporary fix I've created [ArneBinder/nlp-formats](https://github.com/ArneBinder/nlp-formats) (contributions welcome)."
] | 1,595,611,657,000 | 1,596,190,997,000 | null | NONE | null | I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to reuse formats and loading functionality for datasets with a common format?
In my case, it took a bit of time to create the Brat dataset and I think others would appreciate to not have to think about that again. Also, I assume there are other formats (e.g. conll) that are widely used, so having this would really ease dataset onboarding and adoption of the library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/433/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/432/comments | https://api.github.com/repos/huggingface/datasets/issues/432/events | https://github.com/huggingface/datasets/pull/432 | 665,234,340 | MDExOlB1bGxSZXF1ZXN0NDU2MzQxNDk3 | 432 | Fix handling of config files while loading datasets from multiple processes | {
"login": "orsharir",
"id": 99543,
"node_id": "MDQ6VXNlcjk5NTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orsharir",
"html_url": "https://github.com/orsharir",
"followers_url": "https://api.github.com/users/orsharir/followers",
"following_url": "https://api.github.com/users/orsharir/following{/other_user}",
"gists_url": "https://api.github.com/users/orsharir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orsharir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orsharir/subscriptions",
"organizations_url": "https://api.github.com/users/orsharir/orgs",
"repos_url": "https://api.github.com/users/orsharir/repos",
"events_url": "https://api.github.com/users/orsharir/events{/privacy}",
"received_events_url": "https://api.github.com/users/orsharir/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Ok for this but I think we may want to use the general `filelock` method we are using at other places in the library instead of filecmp (in particular `filelock` take care of being an atomic operation which is safer for concurrent processes)",
"Ok I see.\r\nWhy not use filelock in this case then ?",
"I think we should 🙂",
"Thanks for approving my patch.\n\nI agree that if copying is needed then some locking mechanism should be put in place. But, I don't think a file should be needlessly copied without a check. So I guess the flow should be, lock => copy if needed => unlock, and add locks wherever else that file is being accessed.\n\nI'll also add that my personal experience with filelock on a different project hasn't been that great, and on some occasions a process somehow got through the lock -- I've never gotten to the bottom of that but it tainted my view of that module. Perhaps it's been fixed (or I just miss used it), but thought you should know to take steps to test it."
] | 1,595,603,457,000 | 1,596,301,902,000 | 1,596,097,528,000 | CONTRIBUTOR | null | When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written.
This pull requests partially address this by comparing if the files are already identical before copying over the downloaded copy to the cached destination. There's still a race condition, but now it's less likely to occur if some basic precautions are taken by the library user, e.g., download all datasets to cache before spawning multiple processes. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/432/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/432",
"html_url": "https://github.com/huggingface/datasets/pull/432",
"diff_url": "https://github.com/huggingface/datasets/pull/432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/432.patch",
"merged_at": 1596097528000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/431/comments | https://api.github.com/repos/huggingface/datasets/issues/431/events | https://github.com/huggingface/datasets/pull/431 | 665,044,416 | MDExOlB1bGxSZXF1ZXN0NDU2MTgyNDE2 | 431 | Specify split post processing + Add post processing resources downloading | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I was using a hack in `wiki_dpr` to download the index from GCS even for the configurations without the embeddings.\r\nHowever as GCS is something internal, I changed the logic to add a download step for indexes directly in the dataset script, using the `DownloadManager`.\r\n\r\nThis change was directly linked to the changes I did to take into account the split name in the post processing, so I included this change in this PR too.\r\n\r\nTo summarize:\r\n\r\nDataset builders can now implement\r\n- `_post_processing_resources(split)`: return a dict `resource_name -> resource_file_name`. It defines the additional resources such as indexes or arrow files that you need in post processing\r\n- `_download_post_processing_resources(split, resource_name, dl_manager))`: if some resources can be downloaded, you can use the download_manager to download them\r\n- `_post_process(dataset, resources_path)`: (main function for post processing) given a dataset, you can apply dataset transforms or add indexes. For resources that have been downloaded, you can load them. For the others, you can generate and save them. The paths to load/save resources are in `resources_path` which is a dictionary `resource_name -> resource_path`\r\n\r\nAbout the CI:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr\r\n```\r\nIt fails because I changed the input of post processing functions (to include the split name)",
"I started to add metadata in the DatasetInfo.\r\nNote that because there are new fields, **ALL the dataset_info[s].json generated after these changes won't be loadable from older versions of the lib**\r\n\r\nRight now it looks like this:\r\n```json\r\n \"post_processing_resources_checksums\": {\r\n \"train\": {\r\n \"embeddings_index\": {\r\n \"num_bytes\": 30720045,\r\n \"checksum\": \"b04fb4f4f3ab83b9d1b9f6f9eb236f1c04a9fd61bef7cee16b12df8ac911766a\"\r\n }\r\n }\r\n },\r\n \"post_processing_size\": 30720045,\r\n```",
"Good point. Should we anticipate already that we may add other fields in the future and change the code to support the addition of new fields without breaking backward compatibility in the future?",
"I added:\r\n- post processing features (inside a PostProcessedInfo object)\r\n- backward compatibility for dataset info\r\n- post processing tests (as_dataset and download_and_prepare) for map (change features), select (change number of elements) and add_faiss_index (add indexes)\r\nAnd I fixed a bug in `map` that I found thanks to the new tests\r\n\r\nNow I just have to move `post_processing_resources_checksums` to PostProcessedInfo as well and everything should be good :)\r\nEdit: done"
] | 1,595,582,959,000 | 1,596,186,304,000 | 1,596,186,303,000 | MEMBER | null | Previously if you tried to do
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True)
```
Then you'd get an error `Index size should match Dataset size...`
This was because it was trying to use the full index (21M elements).
To fix that I made it so post processing resources can be named according to the split.
I'm going to add tests on post processing too.
Note that the CI will fail because I added a new argument to `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error saying that it is not synced (it'll be synced once this PR is merged):
```
=========================== short test summary info ============================
FAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr
FAILED tests/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr
```
EDIT: I made a change to ignore the script hash when locating the arrow files on GCS, so I removed the sync test. It was there only because of the hash logic for files on GCS. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/431/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/431",
"html_url": "https://github.com/huggingface/datasets/pull/431",
"diff_url": "https://github.com/huggingface/datasets/pull/431.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/431.patch",
"merged_at": 1596186303000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/430/comments | https://api.github.com/repos/huggingface/datasets/issues/430/events | https://github.com/huggingface/datasets/pull/430 | 664,583,837 | MDExOlB1bGxSZXF1ZXN0NDU1ODAxOTI2 | 430 | add DatasetDict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I did the changes in the docstrings and I added a type check in each `DatasetDict` method to make sure all values are of type `Dataset`",
"Awesome, do you mind adding these in the doc as well?",
"I added it to the docs (processing + main classes)",
"I'm trying to follow along with the following about datasets from the docs:\r\n\r\nhttps://huggingface.co/nlp/loading_datasets.html\r\nhttps://huggingface.co/nlp/processing.html\r\n\r\nHowever the train_test_split method no longer works as it is expecting a dataset, rather than a datsetdict. How would I got about splitting a CSV into a train and test set? \r\n\r\nI'm trying to utilize the Trainer() class, but am having trouble converting my data from a csv into dataset objects to pass in."
] | 1,595,519,029,000 | 1,596,502,913,000 | 1,596,013,582,000 | MEMBER | null | ## Add DatasetDict
### Overview
When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example).
If you wanted to apply dataset transforms you had to iterate over each split and apply the transform.
Instead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split.
Before:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = {
split_name: dataset.map(my_func) for split_name, dataset in squad.items()
}
print(squad.keys())
# dict_keys(['train', 'validation'])
```
Now:
```python
from nlp import load_dataset
squad = load_dataset("squad")
print(squad.keys())
# dict_keys(['train', 'validation'])
squad = squad.map(my_func)
print(squad.keys())
# dict_keys(['train', 'validation'])
```
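To make this concrete, here is a small usage sketch (illustrative only; the `context` field and the lambda are just an example, not code from this PR):
```python
# each split is still a regular nlp.Dataset
train_set = squad["train"]

# transforms are applied to every split and a DatasetDict is returned
short_squad = squad.filter(lambda x: len(x["context"]) < 1000)
print(short_squad.keys())
# dict_keys(['train', 'validation'])
```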
### Dataset transforms
`nlp.DatasetDict` implements the following dataset transforms:
- map
- filter
- sort
- shuffle
### Arguments
The arguments of the methods are the same except for split-specific arguments like `cache_file_name`.
For such arguments, the expected input is a dictionary `{split_name: argument_value}`
It concerns:
- `cache_file_name` in map, filter, sort, shuffle
- `seed` and `generator` in shuffle | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/430/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/430",
"html_url": "https://github.com/huggingface/datasets/pull/430",
"diff_url": "https://github.com/huggingface/datasets/pull/430.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/430.patch",
"merged_at": 1596013582000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/429/comments | https://api.github.com/repos/huggingface/datasets/issues/429/events | https://github.com/huggingface/datasets/pull/429 | 664,412,137 | MDExOlB1bGxSZXF1ZXN0NDU1NjU2MDk5 | 429 | mlsum | {
"login": "RachelKer",
"id": 36986299,
"node_id": "MDQ6VXNlcjM2OTg2Mjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RachelKer",
"html_url": "https://github.com/RachelKer",
"followers_url": "https://api.github.com/users/RachelKer/followers",
"following_url": "https://api.github.com/users/RachelKer/following{/other_user}",
"gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions",
"organizations_url": "https://api.github.com/users/RachelKer/orgs",
"repos_url": "https://api.github.com/users/RachelKer/repos",
"events_url": "https://api.github.com/users/RachelKer/events{/privacy}",
"received_events_url": "https://api.github.com/users/RachelKer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks @RachelKer for this PR.\r\n\r\nI think the dummy_data structure does not also match. In the `_split_generator` you have something like `os.path.join(downloaded_files[\"validation\"], lang+'_val.jsonl')` but in you dummy_data you have `os.path.join(downloaded_files[\"validation\"], lang+\"_val.zip\", lang+'_val.jsonl')`. I think ` jsonl` files should be directly in the `dummy_data` folder without the sub-folder \r\n\r\n@lhoestq ",
"Hi @RachelKer :)\r\nThanks for adding MLSUM !\r\n\r\nTo fix the CI I think you just have to rebase from master",
"Great, I think it is working now. Thanks :)",
"It looks like your PR does tons of changes in other datasets. \r\nMaybe this is because of the merge from master ?",
"Hmm, I see, sorry I messed up somewhere. Maybe it's easier if we close the pull request and I do another one ?",
"Yea if it's easier for you feel free to re-open a PR"
] | 1,595,505,159,000 | 1,596,195,980,000 | 1,596,195,980,000 | CONTRIBUTOR | null | Hello,
The tests for load_real_data fail: since there is no default language subset to download, they look for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/429/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/429",
"html_url": "https://github.com/huggingface/datasets/pull/429",
"diff_url": "https://github.com/huggingface/datasets/pull/429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/429.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/428/comments | https://api.github.com/repos/huggingface/datasets/issues/428/events | https://github.com/huggingface/datasets/pull/428 | 664,367,086 | MDExOlB1bGxSZXF1ZXN0NDU1NjE3Nzcy | 428 | fix concatenate_datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,500,259,000 | 1,595,500,500,000 | 1,595,500,498,000 | MEMBER | null | `concatenate_datasets` used to test that the `nlp.Dataset.schema` of the different datasets match, but this attribute was removed in #423 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/428/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/428",
"html_url": "https://github.com/huggingface/datasets/pull/428",
"diff_url": "https://github.com/huggingface/datasets/pull/428.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/428.patch",
"merged_at": 1595500498000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/427/comments | https://api.github.com/repos/huggingface/datasets/issues/427/events | https://github.com/huggingface/datasets/pull/427 | 664,341,623 | MDExOlB1bGxSZXF1ZXN0NDU1NTk1Nzc3 | 427 | Allow sequence features for beam + add processed Natural Questions | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,497,961,000 | 1,595,509,770,000 | 1,595,509,769,000 | MEMBER | null | ## Allow Sequence features for Beam Datasets + add Natural Questions
### The issue
The steps of beam datasets processing is the following:
- download the source files and send them in a remote storage (gcs)
- process the files using a beam runner (dataflow)
- save output in remote storage (gcs)
- convert output to arrow in remote storage (gcs)
However, it wasn't possible to process `natural_questions` because Apache Beam's processing outputs parquet files, and it's not yet possible to read parquet files with list features.
### The proposed solution
To allow sequence features for beam I added a workaround that serializes the values using `json.dumps`, so that we end up with strings instead of the original features. Then when the arrow file is created, the serialized objects are transformed back to normal with `json.loads`. Not sure if there's a better way to do it.
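A minimal sketch of what this workaround amounts to (the field names below are purely illustrative, not the actual implementation):
```python
import json

# illustrative example with a sequence feature
example = {"id": "1", "long_answer_candidates": [{"start": 0, "end": 12}]}
sequence_features = {"long_answer_candidates"}

# before writing the beam output to parquet: serialize sequence features to strings
serialized = {k: json.dumps(v) if k in sequence_features else v for k, v in example.items()}

# when creating the arrow file: transform the serialized objects back to normal
restored = {k: json.loads(v) if k in sequence_features else v for k, v in serialized.items()}
assert restored == example
```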
### Natural Questions
I was able to process NQ with it, and so I added the json infos file in this PR too.
The processed arrow files are also stored in gcs.
It allows you to load NQ with
```python
from nlp import load_dataset
nq = load_dataset("natural_questions") # download the 90GB arrow files from gcs and return the dataset
```
### Tests
I added a test case to make sure it works as expected.
Note that the CI will fail because I am updating `natural_questions.py`: it's not synced with the script on S3. It will be synced as soon as this PR is merged.
```
=========================== short test summary info ============================
FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_natural_questions/default
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/427/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/427/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/427",
"html_url": "https://github.com/huggingface/datasets/pull/427",
"diff_url": "https://github.com/huggingface/datasets/pull/427.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/427.patch",
"merged_at": 1595509769000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/426/comments | https://api.github.com/repos/huggingface/datasets/issues/426/events | https://github.com/huggingface/datasets/issues/426 | 664,203,897 | MDU6SXNzdWU2NjQyMDM4OTc= | 426 | [FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Yes that's definitely something we plan to add ^^",
"Yes, that would be nice. We could take a look at what tensorflow `tf.data` does under the hood for instance.",
"So `tf.data.Dataset.map()` returns a `ParallelMapDataset` if `num_parallel_calls is not None` [link](https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/data/ops/dataset_ops.py#L1623).\r\n\r\nThere, `num_parallel_calls` is turned into a tensor and and fed to `gen_dataset_ops.parallel_map_dataset` where it looks like tensorflow takes over.\r\n\r\nWe could start with something simple like a thread or process pool that `imap`s over some shards.\r\n ",
"Multiprocessing was added in #552 . You can set the number of processes with `.map(..., num_proc=...)`. It also works for `filter`\r\n\r\nClosing this one, but feel free to reo-open if you have other questions",
"@lhoestq Great feature implemented! Do you have plans to add it to official tutorials [Processing data in a Dataset](https://huggingface.co/docs/datasets/processing.html?highlight=save#augmenting-the-dataset)? It took me sometime to find this parallel processing api.",
"Thanks for the heads up !\r\n\r\nI just added a paragraph about multiprocessing:\r\nhttps://huggingface.co/docs/datasets/master/processing.html#multiprocessing"
] | 1,595,480,441,000 | 1,615,541,652,000 | 1,599,490,084,000 | NONE | null | It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_datasets()` function to join them all together? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/426/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/426/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/425/comments | https://api.github.com/repos/huggingface/datasets/issues/425/events | https://github.com/huggingface/datasets/issues/425 | 664,029,848 | MDU6SXNzdWU2NjQwMjk4NDg= | 425 | Correct data structure for PAN-X task in XTREME dataset? | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for noticing ! This looks more reasonable indeed.\r\nFeel free to open a PR",
"Hi @lhoestq \r\nI made the proposed changes to the `xtreme.py` script. I noticed that I also need to change the schema in the `dataset_infos.json` file. More specifically the `\"features\"` part of the PAN-X.LANG dataset:\r\n\r\n```json\r\n\"features\":{\r\n \"word\":{\r\n \"dtype\":\"string\",\r\n \"id\":null,\r\n \"_type\":\"Value\"\r\n },\r\n \"ner_tag\":{\r\n \"dtype\":\"string\",\r\n \"id\":null,\r\n \"_type\":\"Value\"\r\n },\r\n \"lang\":{\r\n \"dtype\":\"string\",\r\n \"id\":null,\r\n \"_type\":\"Value\"\r\n }\r\n}\r\n```\r\nTo fit the code above the fields `\"word\"`, `\"ner_tag\"`, and `\"lang\"` would become `\"words\"`, `ner_tags\"` and `\"langs\"`. In addition the `dtype` should be changed from `\"string\"` to `\"list\"`.\r\n\r\n I made this changes but when trying to test this locally with `dataset = load_dataset(\"xtreme\", \"PAN-X.en\", data_dir='./data')` I face the issue that the `dataset_info.json` file is always overwritten by a downloaded version with the old settings, which then throws an error because the schema does not match. This makes it hard to test the changes locally. Do you have any suggestions on how to deal with that?\r\n",
"Hi !\r\n\r\nYou have to point to your local script.\r\nFirst clone the repo and then:\r\n\r\n```python\r\ndataset = load_dataset(\"./datasets/xtreme\", \"PAN-X.en\")\r\n```\r\nThe \"xtreme\" directory contains \"xtreme.py\".\r\n\r\nYou also have to change the features definition in the `_info` method. You could use:\r\n\r\n```python\r\nfeatures = nlp.Features({\r\n \"words\": [nlp.Value(\"string\")],\r\n \"ner_tags\": [nlp.Value(\"string\")],\r\n \"langs\": [nlp.Value(\"string\")],\r\n})\r\n```\r\n\r\nHope this helps !\r\nLet me know if you have other questions.",
"Thanks, I am making progress. I got a new error `NonMatchingSplitsSizesError ` (see traceback below), which I suspect is due to the fact that number of rows in the dataset changed (one row per word --> one row per sentence) as well as the number of bytes due to the slightly updated data structure. \r\n\r\n```python\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=1756492, num_examples=80536, dataset_name='xtreme'), 'recorded': SplitInfo(name='validation', num_bytes=1837109, num_examples=10000, dataset_name='xtreme')}, {'expected': SplitInfo(name='test', num_bytes=1752572, num_examples=80326, dataset_name='xtreme'), 'recorded': SplitInfo(name='test', num_bytes=1833214, num_examples=10000, dataset_name='xtreme')}, {'expected': SplitInfo(name='train', num_bytes=3496832, num_examples=160394, dataset_name='xtreme'), 'recorded': SplitInfo(name='train', num_bytes=3658428, num_examples=20000, dataset_name='xtreme')}]\r\n```\r\nI can fix the error by replacing the values in the `datasets_infos.json` file, which I tested for English. However, to update this for all 40 datasets manually is slightly painful. Is there a better way to update the expected values for all datasets?",
"You can update the json file by calling\r\n```\r\nnlp-cli test ./datasets/xtreme --save_infos --all_configs\r\n```",
"One more thing about features. I mentioned\r\n\r\n```python\r\nfeatures = nlp.Features({\r\n \"words\": [nlp.Value(\"string\")],\r\n \"ner_tags\": [nlp.Value(\"string\")],\r\n \"langs\": [nlp.Value(\"string\")],\r\n})\r\n```\r\n\r\nbut it's actually not consistent with the way we write datasets. Something like this is simpler to read and more consistent with the way we define datasets:\r\n\r\n```python\r\nfeatures = nlp.Features({\r\n \"words\": nlp.Sequence(nlp.Value(\"string\")),\r\n \"ner_tags\": nlp.Sequence(nlp.Value(\"string\")),\r\n \"langs\": nlp.Sequence(nlp.Value(\"string\")),\r\n})\r\n```\r\n\r\nSorry about that",
"Closing this since PR #437 fixed the problem and has been merged to `master`. "
] | 1,595,449,760,000 | 1,596,375,034,000 | 1,596,375,034,000 | MEMBER | null | Hi 🤗 team!
## Description of the problem
Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows:
```python
from nlp import load_dataset
# AmazonPhotos.zip is located in data/
dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data')
dataset_train = dataset['train']
```
However, I am not sure that `load_dataset()` is returning the correct data structure for NER.
Currently, every row in `dataset_train` is of the form
```python
{'word': str, 'ner_tag': str, 'lang': str}
```
but I think we actually want something like
```python
{'words': List[str], 'ner_tags': List[str], 'langs': List[str]}
```
so that each row corresponds to a _sequence_ of words associated with each example. With the current data structure I do not think it is possible to transform `dataset_train` into a form suitable for training because we do not know the boundaries between examples.
Indeed, [this line](https://github.com/google-research/xtreme/blob/522434d1aece34131d997a97ce7e9242a51a688a/third_party/utils_tag.py#L58) in the XTREME repo, processes the texts as lists of sentences, tags, and languages.
## Proposed solution
Replace
```python
with open(filepath) as f:
data = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
for id_, row in enumerate(data):
if row:
lang, word = row[0].split(":")[0], row[0].split(":")[1]
tag = row[1]
yield id_, {"word": word, "ner_tag": tag, "lang": lang}
```
from [these lines](https://github.com/huggingface/nlp/blob/ce7d3a1d630b78fe27188d1706f3ea980e8eec43/datasets/xtreme/xtreme.py#L881-L887) of the `_generate_examples()` function with something like
```python
guid_index = 1
with open(filepath, encoding="utf-8") as f:
words = []
ner_tags = []
langs = []
for line in f:
if line.startswith("-DOCSTART-") or line == "" or line == "\n":
if words:
yield guid_index, {"words": words, "ner_tags": ner_tags, "langs": langs}
guid_index += 1
words = []
ner_tags = []
langs = []
else:
# pan-x data is tab separated
splits = line.split("\t")
# strip out en: prefix
langs.append(splits[0][:2])
words.append(splits[0][3:])
if len(splits) > 1:
ner_tags.append(splits[-1].replace("\n", ""))
else:
# examples have no label in test set
labels.append("O")
```
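With this change, each generated example would look roughly like this (the values are made up for illustration):
```python
{
    "words": ["John", "lives", "in", "Berlin"],
    "ner_tags": ["B-PER", "O", "O", "B-LOC"],
    "langs": ["en", "en", "en", "en"],
}
```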
If you agree, @lvwerra or I would be happy to implement this and create a PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/425/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/425/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/424/comments | https://api.github.com/repos/huggingface/datasets/issues/424/events | https://github.com/huggingface/datasets/pull/424 | 663,858,552 | MDExOlB1bGxSZXF1ZXN0NDU1MTk4MTY0 | 424 | Web of science | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,432,311,000 | 1,595,514,478,000 | 1,595,514,476,000 | CONTRIBUTOR | null | This PR adds the Web of Science dataset
#353 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/424/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/424",
"html_url": "https://github.com/huggingface/datasets/pull/424",
"diff_url": "https://github.com/huggingface/datasets/pull/424.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/424.patch",
"merged_at": 1595514476000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/423/comments | https://api.github.com/repos/huggingface/datasets/issues/423/events | https://github.com/huggingface/datasets/pull/423 | 663,079,359 | MDExOlB1bGxSZXF1ZXN0NDU0NTU4OTA0 | 423 | Change features vs schema logic | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I had to make `SplitDict` serializable to be able to copy `DatasetInfo` objects properly.\r\nSerialization was also asked in #389 ",
"One thing I forgot to say here, is that we also want to use the features arguments of `load_dataset` (which goes in the builder’s config) to override the default features of a dataset script."
] | 1,595,343,167,000 | 1,595,668,114,000 | 1,595,499,317,000 | MEMBER | null | ## New logic for `nlp.Features` in datasets
Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`.
However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files.
Changes:
- Remove `schema` field in `nlp.Dataset`
- Make `features` the source of truth to read/write examples
- `features` can no longer be `None` in `nlp.Dataset`
- Update `features` after each dataset transform such as `nlp.Dataset.map`
Todo: change the tests to take these changes into account | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/423/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/423/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/423",
"html_url": "https://github.com/huggingface/datasets/pull/423",
"diff_url": "https://github.com/huggingface/datasets/pull/423.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/423.patch",
"merged_at": 1595499316000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/422/comments | https://api.github.com/repos/huggingface/datasets/issues/422/events | https://github.com/huggingface/datasets/pull/422 | 663,028,497 | MDExOlB1bGxSZXF1ZXN0NDU0NTE3MDU2 | 422 | - Corrected encoding for IMDB. | {
"login": "ghazi-f",
"id": 25091538,
"node_id": "MDQ6VXNlcjI1MDkxNTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/25091538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghazi-f",
"html_url": "https://github.com/ghazi-f",
"followers_url": "https://api.github.com/users/ghazi-f/followers",
"following_url": "https://api.github.com/users/ghazi-f/following{/other_user}",
"gists_url": "https://api.github.com/users/ghazi-f/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghazi-f/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghazi-f/subscriptions",
"organizations_url": "https://api.github.com/users/ghazi-f/orgs",
"repos_url": "https://api.github.com/users/ghazi-f/repos",
"events_url": "https://api.github.com/users/ghazi-f/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghazi-f/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,339,219,000 | 1,595,433,773,000 | 1,595,433,773,000 | CONTRIBUTOR | null | The preparation phase (after the download phase) crashed on Windows because the charmap encoding could not decode certain characters. This change, suggested in Issue #347, fixes it for the IMDB dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/422",
"html_url": "https://github.com/huggingface/datasets/pull/422",
"diff_url": "https://github.com/huggingface/datasets/pull/422.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/422.patch",
"merged_at": 1595433773000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/421/comments | https://api.github.com/repos/huggingface/datasets/issues/421/events | https://github.com/huggingface/datasets/pull/421 | 662,213,864 | MDExOlB1bGxSZXF1ZXN0NDUzNzkzMzQ1 | 421 | Style change | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"What about the other PR #419 ?",
"Oh this is the PR where I ran make quality and make style and some previous files from master were changed",
"Oh right ! Let me fix the style myself if you don't mind"
] | 1,595,275,709,000 | 1,595,434,120,000 | 1,595,434,119,000 | CONTRIBUTOR | null | Ran `make quality` and `make style` on the scripts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/421/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/421",
"html_url": "https://github.com/huggingface/datasets/pull/421",
"diff_url": "https://github.com/huggingface/datasets/pull/421.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/421.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/420/comments | https://api.github.com/repos/huggingface/datasets/issues/420/events | https://github.com/huggingface/datasets/pull/420 | 662,029,782 | MDExOlB1bGxSZXF1ZXN0NDUzNjI5OTk2 | 420 | Better handle nested features | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,263,453,000 | 1,595,319,649,000 | 1,595,318,992,000 | MEMBER | null | Changes:
- added arrow schema to features conversion (it's going to be useful to fix #342 )
- make flatten handle deep features (useful for tfrecords conversion in #339 )
- add tests for flatten and features conversions
- the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/420/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/420",
"html_url": "https://github.com/huggingface/datasets/pull/420",
"diff_url": "https://github.com/huggingface/datasets/pull/420.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/420.patch",
"merged_at": 1595318991000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/419/comments | https://api.github.com/repos/huggingface/datasets/issues/419/events | https://github.com/huggingface/datasets/pull/419 | 661,974,747 | MDExOlB1bGxSZXF1ZXN0NDUzNTgxNzQz | 419 | EmoContext dataset add | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,260,125,000 | 1,595,578,921,000 | 1,595,578,920,000 | CONTRIBUTOR | null | EmoContext Dataset add
Signed-off-by: lordtt13 <thakurtanmay72@yahoo.com> | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/419/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/419",
"html_url": "https://github.com/huggingface/datasets/pull/419",
"diff_url": "https://github.com/huggingface/datasets/pull/419.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/419.patch",
"merged_at": 1595578920000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/418/comments | https://api.github.com/repos/huggingface/datasets/issues/418/events | https://github.com/huggingface/datasets/issues/418 | 661,914,873 | MDU6SXNzdWU2NjE5MTQ4NzM= | 418 | Addition of google drive links to dl_manager | {
"login": "lordtt13",
"id": 35500534,
"node_id": "MDQ6VXNlcjM1NTAwNTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lordtt13",
"html_url": "https://github.com/lordtt13",
"followers_url": "https://api.github.com/users/lordtt13/followers",
"following_url": "https://api.github.com/users/lordtt13/following{/other_user}",
"gists_url": "https://api.github.com/users/lordtt13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lordtt13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordtt13/subscriptions",
"organizations_url": "https://api.github.com/users/lordtt13/orgs",
"repos_url": "https://api.github.com/users/lordtt13/repos",
"events_url": "https://api.github.com/users/lordtt13/events{/privacy}",
"received_events_url": "https://api.github.com/users/lordtt13/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the problem is the way you wrote your urls. Try the following structure to see `https://drive.google.com/uc?export=download&id=your_file_id` . \r\n\r\n@lhoestq ",
"Oh sorry, I think `_get_drive_url` is doing that. \r\n\r\nHave you tried to use `dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL)`? it should work with google drive links.\r\n",
"Yes it worked, thank you!"
] | 1,595,256,722,000 | 1,595,259,572,000 | 1,595,259,572,000 | CONTRIBUTOR | null | Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to bypass the dl_manager, because it downloaded nothing from the drive links, and use gdown instead.
This is the script for me:
```python
class EmoConfig(nlp.BuilderConfig):
"""BuilderConfig for SQUAD."""
def __init__(self, **kwargs):
"""BuilderConfig for EmoContext.
Args:
**kwargs: keyword arguments forwarded to super.
"""
super(EmoConfig, self).__init__(**kwargs)
_TEST_URL = "https://drive.google.com/file/d/1Hn5ytHSSoGOC4sjm3wYy0Dh0oY_oXBbb/view?usp=sharing"
_TRAIN_URL = "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing"
class EmoDataset(nlp.GeneratorBasedBuilder):
""" SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text. Version 1.0.0 """
VERSION = nlp.Version("1.0.0")
force = False
def _info(self):
return nlp.DatasetInfo(
description=_DESCRIPTION,
features=nlp.Features(
{
"text": nlp.Value("string"),
"label": nlp.features.ClassLabel(names=["others", "happy", "sad", "angry"]),
}
),
supervised_keys=None,
homepage="https://www.aclweb.org/anthology/S19-2005/",
citation=_CITATION,
)
def _get_drive_url(self, url):
base_url = 'https://drive.google.com/uc?id='
split_url = url.split('/')
return base_url + split_url[5]
def _split_generators(self, dl_manager):
"""Returns SplitGenerators."""
if(not os.path.exists("emo-train.json") or self.force):
gdown.download(self._get_drive_url(_TRAIN_URL), "emo-train.json", quiet = True)
if(not os.path.exists("emo-test.json") or self.force):
gdown.download(self._get_drive_url(_TEST_URL), "emo-test.json", quiet = True)
return [
nlp.SplitGenerator(
name=nlp.Split.TRAIN,
gen_kwargs={
"filepath": "emo-train.json",
"split": "train",
},
),
nlp.SplitGenerator(
name=nlp.Split.TEST,
gen_kwargs={"filepath": "emo-test.json", "split": "test"},
),
]
def _generate_examples(self, filepath, split):
""" Yields examples. """
with open(filepath, 'rb') as f:
data = json.load(f)
for id_, text, label in zip(data["text"].keys(), data["text"].values(), data["Label"].values()):
yield id_, {
"text": text,
"label": label,
}
```
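For reference, this is roughly what I was hoping would work with the built-in manager, using the direct-download form of the Drive links (just a sketch, untested on my side):
```python
    def _split_generators(self, dl_manager):
        # sketch: use the default download manager with the direct-download form of the Drive URLs
        train_path = dl_manager.download_and_extract(self._get_drive_url(_TRAIN_URL))
        test_path = dl_manager.download_and_extract(self._get_drive_url(_TEST_URL))
        return [
            nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": train_path, "split": "train"}),
            nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={"filepath": test_path, "split": "test"}),
        ]
```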
Can someone help me add support for Google Drive links in the default dl_manager, or add gdown as another dl_manager? I'd like to add this dataset to nlp's official collection of datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/418/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/417/comments | https://api.github.com/repos/huggingface/datasets/issues/417/events | https://github.com/huggingface/datasets/pull/417 | 661,804,054 | MDExOlB1bGxSZXF1ZXN0NDUzNDMyODE5 | 417 | Fix docstrings multiple metrics instances | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,250,539,000 | 1,595,411,460,000 | 1,595,411,459,000 | MEMBER | null | We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. However we had issues when instantiating multiple metrics (docstrings were duplicated).
This should fix #304 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/417/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/417",
"html_url": "https://github.com/huggingface/datasets/pull/417",
"diff_url": "https://github.com/huggingface/datasets/pull/417.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/417.patch",
"merged_at": 1595411458000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/416/comments | https://api.github.com/repos/huggingface/datasets/issues/416/events | https://github.com/huggingface/datasets/pull/416 | 661,635,393 | MDExOlB1bGxSZXF1ZXN0NDUzMjg1NTM4 | 416 | Fix xtreme panx directory | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"great, I think I did not download the data the way you do, but yours is more reasonable."
] | 1,595,239,757,000 | 1,595,319,346,000 | 1,595,319,344,000 | MEMBER | null | Fix #412 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/416/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/416",
"html_url": "https://github.com/huggingface/datasets/pull/416",
"diff_url": "https://github.com/huggingface/datasets/pull/416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/416.patch",
"merged_at": 1595319344000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/415/comments | https://api.github.com/repos/huggingface/datasets/issues/415/events | https://github.com/huggingface/datasets/issues/415 | 660,687,076 | MDU6SXNzdWU2NjA2ODcwNzY= | 415 | Something is wrong with WMT 19 kk-en dataset | {
"login": "ChenghaoMou",
"id": 32014649,
"node_id": "MDQ6VXNlcjMyMDE0NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32014649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChenghaoMou",
"html_url": "https://github.com/ChenghaoMou",
"followers_url": "https://api.github.com/users/ChenghaoMou/followers",
"following_url": "https://api.github.com/users/ChenghaoMou/following{/other_user}",
"gists_url": "https://api.github.com/users/ChenghaoMou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChenghaoMou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChenghaoMou/subscriptions",
"organizations_url": "https://api.github.com/users/ChenghaoMou/orgs",
"repos_url": "https://api.github.com/users/ChenghaoMou/repos",
"events_url": "https://api.github.com/users/ChenghaoMou/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChenghaoMou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [] | 1,595,146,731,000 | 1,595,238,866,000 | null | NONE | null | The translation in the `train` set does not look right:
```
>>>import nlp
>>>from nlp import load_dataset
>>>dataset = load_dataset('wmt19', 'kk-en')
>>>dataset["train"]["translation"][0]
{'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'}
>>>dataset["validation"]["translation"][0]
{'kk': 'Ақша-несие саясатының сценарийін қайта жазсақ', 'en': 'Rewriting the Monetary-Policy Script'}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/415/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/415/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/414/comments | https://api.github.com/repos/huggingface/datasets/issues/414/events | https://github.com/huggingface/datasets/issues/414 | 660,654,013 | MDU6SXNzdWU2NjA2NTQwMTM= | 414 | from_dict delete? | {
"login": "hackerxiaobai",
"id": 22817243,
"node_id": "MDQ6VXNlcjIyODE3MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/22817243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackerxiaobai",
"html_url": "https://github.com/hackerxiaobai",
"followers_url": "https://api.github.com/users/hackerxiaobai/followers",
"following_url": "https://api.github.com/users/hackerxiaobai/following{/other_user}",
"gists_url": "https://api.github.com/users/hackerxiaobai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackerxiaobai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackerxiaobai/subscriptions",
"organizations_url": "https://api.github.com/users/hackerxiaobai/orgs",
"repos_url": "https://api.github.com/users/hackerxiaobai/repos",
"events_url": "https://api.github.com/users/hackerxiaobai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackerxiaobai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though.\r\nRight now if you want to use `from_dict` you have to install the package from the master branch\r\n```\r\npip install git+https://github.com/huggingface/nlp.git\r\n```",
"> `from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though.\r\n> Right now if you want to use `from_dict` you have to install the package from the master branch\r\n> \r\n> ```\r\n> pip install git+https://github.com/huggingface/nlp.git\r\n> ```\r\nOK, thank you.\r\n"
] | 1,595,142,516,000 | 1,595,298,077,000 | 1,595,298,077,000 | NONE | null | AttributeError: type object 'Dataset' has no attribute 'from_dict' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/414/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/413/comments | https://api.github.com/repos/huggingface/datasets/issues/413/events | https://github.com/huggingface/datasets/issues/413 | 660,063,655 | MDU6SXNzdWU2NjAwNjM2NTU= | 413 | Is there a way to download only NQ dev? | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Unfortunately it's not possible to download only the dev set of NQ.\r\n\r\nI think we could add a way to download only the test set by adding a custom configuration to the processing script though.",
"Ok, got it. I think this could be a valuable feature - especially for large datasets like NQ, but potentially also others. \r\nFor us, it will in this case make the difference of using the library or keeping the old downloads of the raw dev datasets. \r\nHowever, I don't know if that fits into your plans with the library and can also understand if you don't want to support this.",
"I don't think we could force this behavior generally since the dataset script authors are free to organize the file download as they want (sometimes the mapping between split and files can be very much nontrivial) but we can add an additional configuration for Natural Question indeed as @lhoestq indicate."
] | 1,595,068,103,000 | 1,596,027,980,000 | null | NONE | null | Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)?
As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data.
I tried
```
dataset = nlp.load_dataset('natural_questions', split="validation", beam_runner="DirectRunner")
```
But this still triggered a big download of presumably the whole dataset. Is there any way of doing this or are splits / slicing options only available after downloading?
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/413/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/412/comments | https://api.github.com/repos/huggingface/datasets/issues/412/events | https://github.com/huggingface/datasets/issues/412 | 660,047,139 | MDU6SXNzdWU2NjAwNDcxMzk= | 412 | Unable to load XTREME dataset from disk | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, you have to provide the full path to the downloaded file for example `/home/lewtum/..`",
"I was able to repro. Opening a PR to fix that.\r\nThanks for reporting this issue !",
"Thanks for the rapid fix @lhoestq!"
] | 1,595,066,100,000 | 1,595,319,344,000 | 1,595,319,344,000 | MEMBER | null | Hi 🤗 team!
## Description of the problem
Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark.
I have manually downloaded the `AmazonPhotos.zip` file from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1) and am running into a `FileNotFoundError` when I point to the location of the dataset.
As far as I can tell, the problem is that `AmazonPhotos.zip` decompresses to `panx_dataset` and `load_dataset()` is not looking in the correct path:
```
# path where load_dataset is looking for fr.tar.gz
/root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/
# path where it actually exists
/root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/panx_dataset/
```
## Steps to reproduce the problem
1. Manually download the XTREME benchmark from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1)
2. Run the following code snippet
```python
from nlp import load_dataset
# AmazonPhotos.zip is in the root of the folder
dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./')
```
3. Here is the stack trace
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-4-26786bb5fa93> in <module>
----> 1 dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
464 split_dict = SplitDict(dataset_name=self.name)
465 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 466 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
467 # Checksums verification
468 if verify_infos:
/usr/local/lib/python3.6/dist-packages/nlp/datasets/xtreme/b8c2ed3583a7a7ac60b503576dfed3271ac86757628897e945bd329c43b8a746/xtreme.py in _split_generators(self, dl_manager)
725 panx_dl_dir = dl_manager.extract(panx_path)
726 lang = self.config.name.split(".")[1]
--> 727 lang_folder = dl_manager.extract(os.path.join(panx_dl_dir, lang + ".tar.gz"))
728 return [
729 nlp.SplitGenerator(
/usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in extract(self, path_or_paths)
196 """
197 return map_nested(
--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
199 )
200
/usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
170 return tuple(mapped)
171 # Singleton
--> 172 return function(data_struct)
173
174
/usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in <lambda>(path)
196 """
197 return map_nested(
--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
199 )
200
/usr/local/lib/python3.6/dist-packages/nlp/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
203 elif urlparse(url_or_filename).scheme == "":
204 # File, but it doesn't exist.
--> 205 raise FileNotFoundError("Local file {} doesn't exist".format(url_or_filename))
206 else:
207 # Something unknown
FileNotFoundError: Local file /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/fr.tar.gz doesn't exist
```
## OS and hardware
```
- `nlp` version: 0.3.0
- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/412/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/411/comments | https://api.github.com/repos/huggingface/datasets/issues/411/events | https://github.com/huggingface/datasets/pull/411 | 659,393,398 | MDExOlB1bGxSZXF1ZXN0NDUxMjQxOTQy | 411 | Sbf | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,595,002,785,000 | 1,595,322,826,000 | 1,595,322,825,000 | CONTRIBUTOR | null | This PR adds the Social Bias Frames Dataset (ACL 2020).
dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/411/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/411",
"html_url": "https://github.com/huggingface/datasets/pull/411",
"diff_url": "https://github.com/huggingface/datasets/pull/411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/411.patch",
"merged_at": 1595322825000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/410/comments | https://api.github.com/repos/huggingface/datasets/issues/410/events | https://github.com/huggingface/datasets/pull/410 | 659,242,871 | MDExOlB1bGxSZXF1ZXN0NDUxMTEzMTI3 | 410 | 20newsgroup | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,991,277,000 | 1,595,228,729,000 | 1,595,228,728,000 | CONTRIBUTOR | null | Add 20Newsgroup dataset.
#353 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/410/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/410",
"html_url": "https://github.com/huggingface/datasets/pull/410",
"diff_url": "https://github.com/huggingface/datasets/pull/410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/410.patch",
"merged_at": 1595228728000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/409/comments | https://api.github.com/repos/huggingface/datasets/issues/409/events | https://github.com/huggingface/datasets/issues/409 | 659,128,611 | MDU6SXNzdWU2NTkxMjg2MTE= | 409 | train_test_split error: 'dict' object has no attribute 'deepcopy' | {
"login": "morganmcg1",
"id": 20516801,
"node_id": "MDQ6VXNlcjIwNTE2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/20516801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morganmcg1",
"html_url": "https://github.com/morganmcg1",
"followers_url": "https://api.github.com/users/morganmcg1/followers",
"following_url": "https://api.github.com/users/morganmcg1/following{/other_user}",
"gists_url": "https://api.github.com/users/morganmcg1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morganmcg1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morganmcg1/subscriptions",
"organizations_url": "https://api.github.com/users/morganmcg1/orgs",
"repos_url": "https://api.github.com/users/morganmcg1/repos",
"events_url": "https://api.github.com/users/morganmcg1/events{/privacy}",
"received_events_url": "https://api.github.com/users/morganmcg1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It was fixed in 2ddd18d139d3047c9c3abe96e1e7d05bb360132c.\r\nCould you pull the latest changes from master @morganmcg1 ?",
"Thanks @lhoestq, works fine now!"
] | 1,594,982,188,000 | 1,595,342,092,000 | 1,595,342,092,000 | NONE | null | `train_test_split` is giving me an error when I try and call it:
`'dict' object has no attribute 'deepcopy'`
## To reproduce
```
dataset = load_dataset('glue', 'mrpc', split='train')
dataset = dataset.train_test_split(test_size=0.2)
```
## Full Stacktrace
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-feb740dbec9a> in <module>
1 dataset = load_dataset('glue', 'mrpc', split='train')
----> 2 dataset = dataset.train_test_split(test_size=0.2)
~/anaconda3/envs/fastai2_me/lib/python3.7/site-packages/nlp/arrow_dataset.py in train_test_split(self, test_size, train_size, shuffle, seed, generator, keep_in_memory, load_from_cache_file, train_cache_file_name, test_cache_file_name, writer_batch_size)
1032 "writer_batch_size": writer_batch_size,
1033 }
-> 1034 train_kwargs = cache_kwargs.deepcopy()
1035 train_kwargs["split"] = "train"
1036 test_kwargs = cache_kwargs.deepcopy()
AttributeError: 'dict' object has no attribute 'deepcopy'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/409/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/408/comments | https://api.github.com/repos/huggingface/datasets/issues/408/events | https://github.com/huggingface/datasets/pull/408 | 659,064,144 | MDExOlB1bGxSZXF1ZXN0NDUwOTU1MTE0 | 408 | Add tests datasets gcp | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,977,807,000 | 1,594,978,017,000 | 1,594,978,016,000 | MEMBER | null | Some datasets are available on our google cloud storage in arrow format, so that the users don't need to process the data.
These tests make sure that they're always available. They also make sure that their scripts are in sync between S3 and the repo.
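For illustration, a rough sketch of what such an availability check could look like (the bucket URL, file layout and dataset list below are assumptions made for the example, not the code added in this PR):
```python
import requests

# Hypothetical list of preprocessed datasets and an assumed GCS layout.
PREPROCESSED_DATASETS = ["wikipedia/20200501.en", "wiki40b/en"]
BASE_URL = "https://storage.googleapis.com/huggingface-nlp/cache/datasets"  # assumed base URL

def test_preprocessed_datasets_are_reachable():
    for name in PREPROCESSED_DATASETS:
        url = f"{BASE_URL}/{name}/dataset_info.json"  # assumed file name
        response = requests.head(url)
        assert response.status_code == 200, f"{name} is not available on GCS"
```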
This should avoid future issues like #407 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/408/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/408",
"html_url": "https://github.com/huggingface/datasets/pull/408",
"diff_url": "https://github.com/huggingface/datasets/pull/408.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/408.patch",
"merged_at": 1594978016000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/407/comments | https://api.github.com/repos/huggingface/datasets/issues/407/events | https://github.com/huggingface/datasets/issues/407 | 658,672,736 | MDU6SXNzdWU2NTg2NzI3MzY= | 407 | MissingBeamOptions for Wikipedia 20200501.en | {
"login": "mitchellgordon95",
"id": 7490438,
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitchellgordon95",
"html_url": "https://github.com/mitchellgordon95",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Fixed. Could you try again @mitchellgordon95 ?\r\nIt was due a file not being updated on S3.\r\n\r\nWe need to make sure all the datasets scripts get updated properly @julien-c ",
"Works for me! Thanks.",
"I found the same issue with almost any language other than English. (For English, it works). Will someone need to update the file on S3 again?",
"This is because only some languages are already preprocessed (en, de, fr, it) and stored on our google storage.\r\nWe plan to have a systematic way to preprocess more wikipedia languages in the future.\r\n\r\nFor the other languages you have to process them on your side using apache beam. That's why the lib asks for a Beam runner."
] | 1,594,943,283,000 | 1,610,451,676,000 | 1,594,995,868,000 | CONTRIBUTOR | null | There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available):
```
nlp.load_dataset('wikipedia', "20200501.en", split='train')
```
And now, having pulled master, I get:
```
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, total: 34.06 GiB) to /home/hltcoe/mgordon/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/76b0b2747b679bb0ee7a1621e50e5a6378477add0c662668a324a5bc07d516dd...
Traceback (most recent call last):
File "scripts/download.py", line 11, in <module>
fire.Fire(download_pretrain)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 138, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire
target=component.__name__)
File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "scripts/download.py", line 6, in download_pretrain
nlp.load_dataset('wikipedia', "20200501.en", split='train')
File "/exp/mgordon/nlp/src/nlp/load.py", line 534, in load_dataset
save_infos=save_infos,
File "/exp/mgordon/nlp/src/nlp/builder.py", line 460, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/exp/mgordon/nlp/src/nlp/builder.py", line 870, in _download_and_prepare
"\n\t`{}`".format(usage_example)
nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.en', beam_runner='DirectRunner')`
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/407/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/406/comments | https://api.github.com/repos/huggingface/datasets/issues/406/events | https://github.com/huggingface/datasets/issues/406 | 658,581,764 | MDU6SXNzdWU2NTg1ODE3NjQ= | 406 | Faster Shuffling? | {
"login": "mitchellgordon95",
"id": 7490438,
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitchellgordon95",
"html_url": "https://github.com/mitchellgordon95",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the slowness here probably come from the fact that we are copying from and to python.\r\n\r\n@lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?",
"> @lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?\r\n\r\nI just tried with `writer.write_table` with tables of 1000 elements and it's slower that the solution in #405 \r\n\r\nOn my side (select 10 000 examples):\r\n- Original implementation: 12s\r\n- Batched solution: 100ms\r\n- solution using arrow tables: 350ms\r\n\r\nI'll try with arrays and record batches to see if we can make it work.",
"I tried using `.take` from pyarrow recordbatches but it doesn't improve the speed that much:\r\n```python\r\nimport nlp\r\nimport numpy as np\r\n\r\ndset = nlp.Dataset.from_file(\"dummy_test_select.arrow\") # dummy dataset with 100000 examples like {\"a\": \"h\"*512}\r\nindices = np.random.randint(0, 100_000, 1000_000)\r\n```\r\n\r\n```python\r\n%%time\r\nbatch_size = 10_000\r\nwriter = ArrowWriter(schema=dset.schema, path=\"dummy_path\",\r\n writer_batch_size=1000, disable_nullable=False)\r\nfor i in tqdm(range(0, len(indices), batch_size)):\r\n table = pa.concat_tables(dset._data.slice(int(i), 1) for i in indices[i : min(len(indices), i + batch_size)])\r\n batch = table.to_pydict()\r\n writer.write_batch(batch)\r\nwriter.finalize()\r\n# 9.12s\r\n```\r\n\r\n\r\n```python\r\n%%time\r\nbatch_size = 10_000\r\nwriter = ArrowWriter(schema=dset.schema, path=\"dummy_path\", \r\n writer_batch_size=1000, disable_nullable=False)\r\nfor i in tqdm(range(0, len(indices), batch_size)):\r\n batch_indices = indices[i : min(len(indices), i + batch_size)]\r\n # First, extract only the indices that we need with a mask\r\n mask = [False] * len(dset)\r\n for k in batch_indices:\r\n mask[k] = True\r\n t_batch = dset._data.filter(pa.array(mask))\r\n # Second, build the list of indices for the filtered table, and taking care of duplicates\r\n rev_positions = {}\r\n duplicates = 0\r\n for i, j in enumerate(sorted(batch_indices)):\r\n if j in rev_positions:\r\n duplicates += 1\r\n else:\r\n rev_positions[j] = i - duplicates\r\n rev_map = [rev_positions[j] for j in batch_indices]\r\n # Third, use `.take` from the combined recordbatch\r\n t_combined = t_batch.combine_chunks() # load in memory\r\n recordbatch = t_combined.to_batches()[0]\r\n table = pa.Table.from_arrays(\r\n [recordbatch[c].take(pa.array(rev_map)) for c in range(len(dset._data.column_names))],\r\n schema=writer.schema\r\n )\r\n writer.write_table(table)\r\nwriter.finalize()\r\n# 3.2s\r\n```\r\n",
"Shuffling is now significantly faster thanks to #513 \r\nFeel free to play with it now :)\r\n\r\nClosing this one, but feel free to re-open if you have other questions"
] | 1,594,934,513,000 | 1,599,489,926,000 | 1,599,489,925,000 | CONTRIBUTOR | null | Consider shuffling bookcorpus:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
dataset.shuffle()
```
According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`.
But I can also just write the lines to a text file:
```
batch_size = 100000
with open('tmp.txt', 'w+') as out_f:
for i in tqdm(range(0, len(dataset), batch_size)):
batch = dataset[i:i+batch_size]['text']
print("\n".join(batch), file=out_f)
```
Which completes in a couple minutes, followed by `shuf tmp.txt > tmp2.txt` which completes in under a minute. And finally,
```
dataset = nlp.load_dataset('text', data_files='tmp2.txt')
```
Which completes in under 10 minutes. I read up on Apache Arrow this morning, and it seems like the columnar data format is not especially well-suited to shuffling rows, since moving items around requires a lot of book-keeping.
Is shuffle inherently slow, or am I just using it wrong? And if it is slow, would it make sense to try converting the data to a row-based format on disk and then shuffling? (Instead of calling select with a random permutation, as is currently done.) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/406/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/405/comments | https://api.github.com/repos/huggingface/datasets/issues/405/events | https://github.com/huggingface/datasets/pull/405 | 658,580,192 | MDExOlB1bGxSZXF1ZXN0NDUwNTI1MTc3 | 405 | Make select() faster by batching reads | {
"login": "mitchellgordon95",
"id": 7490438,
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitchellgordon95",
"html_url": "https://github.com/mitchellgordon95",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,934,385,000 | 1,595,005,544,000 | 1,595,004,686,000 | CONTRIBUTOR | null | Here's a benchmark:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
start = time.time()
dataset.select(np.arange(1000), reader_batch_size=1, load_from_cache_file=False)
end = time.time()
print(f'{end - start}')
start = time.time()
dataset.select(np.arange(1000), reader_batch_size=1000, load_from_cache_file=False)
end = time.time()
print(f'{end - start}')
```
Without batching, select takes around 1.27 seconds. With batching, it takes around 0.01 seconds. The slowness was upsetting me because dataset.shuffle() was supposed to take ~27 hours for bookcorpus. Now with the fix it takes ~2.5 hours (which still is pretty slow, but I'll open a separate issue for that). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/405/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/405",
"html_url": "https://github.com/huggingface/datasets/pull/405",
"diff_url": "https://github.com/huggingface/datasets/pull/405.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/405.patch",
"merged_at": 1595004686000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/404/comments | https://api.github.com/repos/huggingface/datasets/issues/404/events | https://github.com/huggingface/datasets/pull/404 | 658,400,987 | MDExOlB1bGxSZXF1ZXN0NDUwMzY4Mjg4 | 404 | Add seed in metrics | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,920,425,000 | 1,595,239,955,000 | 1,595,239,954,000 | MEMBER | null | With #361 we noticed that some metrics were not deterministic.
In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`.
The seed is set only when `compute` is called, and reset afterwards.
Moreover, when calling `compute` with the same metric instance (i.e. the same experiment_id), the metric will always return the same results given the same inputs. This is the case even if the seed was not specified by the user, as the previous seed is going to be reused.
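A minimal usage sketch of this behaviour (the exact `seed` keyword, the metric and the inputs below are assumptions for illustration, not taken from the diff):
```python
import nlp

predictions, references = [0, 1, 1, 0], [0, 1, 0, 0]
metric = nlp.load_metric("glue", "mrpc", seed=42)  # assumed: the seed is stored here and only applied inside compute()
first = metric.compute(predictions=predictions, references=references)
second = metric.compute(predictions=predictions, references=references)
assert first == second  # same instance + same inputs -> identical results
```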
However, instantiating a metric twice (two different experiments) without specifying a seed can create different results. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/404/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/404",
"html_url": "https://github.com/huggingface/datasets/pull/404",
"diff_url": "https://github.com/huggingface/datasets/pull/404.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/404.patch",
"merged_at": 1595239954000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/403/comments | https://api.github.com/repos/huggingface/datasets/issues/403/events | https://github.com/huggingface/datasets/pull/403 | 658,325,756 | MDExOlB1bGxSZXF1ZXN0NDUwMzAzNjI2 | 403 | return python objects instead of arrays by default | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,914,712,000 | 1,594,985,821,000 | 1,594,985,820,000 | MEMBER | null | We were using to_pandas() to convert from arrow types; however, it returns numpy arrays instead of python lists.
I fixed it by using to_pydict/to_pylist instead.
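A small standalone illustration of the difference (plain pyarrow, not the library's internal code):
```python
import pyarrow as pa

table = pa.Table.from_pydict({"label": [0, 1, 1]})

# Going through pandas gives numpy-backed values
print(type(table.to_pandas()["label"].values))  # <class 'numpy.ndarray'>

# to_pydict returns plain python objects, which is the behaviour this PR switches to
print(table.to_pydict()["label"])  # [0, 1, 1]
```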
Fix #387
It was mentioned in https://github.com/huggingface/transformers/issues/5729
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/403/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/403",
"html_url": "https://github.com/huggingface/datasets/pull/403",
"diff_url": "https://github.com/huggingface/datasets/pull/403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/403.patch",
"merged_at": 1594985820000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/402/comments | https://api.github.com/repos/huggingface/datasets/issues/402/events | https://github.com/huggingface/datasets/pull/402 | 658,001,288 | MDExOlB1bGxSZXF1ZXN0NDUwMDI2NTE0 | 402 | Search qa | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,890,010,000 | 1,594,909,620,000 | 1,594,909,619,000 | CONTRIBUTOR | null | add SearchQA dataset
#336 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/402/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/402",
"html_url": "https://github.com/huggingface/datasets/pull/402",
"diff_url": "https://github.com/huggingface/datasets/pull/402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/402.patch",
"merged_at": 1594909619000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/401/comments | https://api.github.com/repos/huggingface/datasets/issues/401/events | https://github.com/huggingface/datasets/pull/401 | 657,996,252 | MDExOlB1bGxSZXF1ZXN0NDUwMDIyNTc0 | 401 | add web_questions | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"What does the `nlp-cli dummy_data` command returns ?",
"`test.json` -> `test` \r\nand \r\n`train.json` -> `train`\r\n\r\nas shown by the `nlp-cli dummy_data` command ;-)",
"LGTM for merge @lhoestq - I let you merge if you want to."
] | 1,594,889,699,000 | 1,596,694,580,000 | 1,596,694,579,000 | CONTRIBUTOR | null | add Web Question dataset
#336
@patrickvonplaten, maybe you can help with the dummy_data structure? It's still broken. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/401/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/401",
"html_url": "https://github.com/huggingface/datasets/pull/401",
"diff_url": "https://github.com/huggingface/datasets/pull/401.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/401.patch",
"merged_at": 1596694579000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/400/comments | https://api.github.com/repos/huggingface/datasets/issues/400/events | https://github.com/huggingface/datasets/pull/400 | 657,975,600 | MDExOlB1bGxSZXF1ZXN0NDUwMDA1MDU5 | 400 | Web questions | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,888,109,000 | 1,594,889,451,000 | 1,594,888,974,000 | CONTRIBUTOR | null | add the WebQuestion dataset
#336 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/400/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/400",
"html_url": "https://github.com/huggingface/datasets/pull/400",
"diff_url": "https://github.com/huggingface/datasets/pull/400.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/400.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/399/comments | https://api.github.com/repos/huggingface/datasets/issues/399/events | https://github.com/huggingface/datasets/pull/399 | 657,841,433 | MDExOlB1bGxSZXF1ZXN0NDQ5ODkxNTEy | 399 | Spelling mistake | {
"login": "BlancRay",
"id": 9410067,
"node_id": "MDQ6VXNlcjk0MTAwNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9410067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BlancRay",
"html_url": "https://github.com/BlancRay",
"followers_url": "https://api.github.com/users/BlancRay/followers",
"following_url": "https://api.github.com/users/BlancRay/following{/other_user}",
"gists_url": "https://api.github.com/users/BlancRay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BlancRay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BlancRay/subscriptions",
"organizations_url": "https://api.github.com/users/BlancRay/orgs",
"repos_url": "https://api.github.com/users/BlancRay/repos",
"events_url": "https://api.github.com/users/BlancRay/events{/privacy}",
"received_events_url": "https://api.github.com/users/BlancRay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks!"
] | 1,594,874,278,000 | 1,594,882,188,000 | 1,594,882,177,000 | CONTRIBUTOR | null | In the "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications..."; the word "other" is wrongly spelled as "toehr". | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/399/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/399",
"html_url": "https://github.com/huggingface/datasets/pull/399",
"diff_url": "https://github.com/huggingface/datasets/pull/399.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/399.patch",
"merged_at": 1594882177000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/398/comments | https://api.github.com/repos/huggingface/datasets/issues/398/events | https://github.com/huggingface/datasets/pull/398 | 657,511,962 | MDExOlB1bGxSZXF1ZXN0NDQ5NjE1OTk1 | 398 | Add inline links | {
"login": "Bharat123rox",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bharat123rox",
"html_url": "https://github.com/Bharat123rox",
"followers_url": "https://api.github.com/users/Bharat123rox/followers",
"following_url": "https://api.github.com/users/Bharat123rox/following{/other_user}",
"gists_url": "https://api.github.com/users/Bharat123rox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bharat123rox/subscriptions",
"organizations_url": "https://api.github.com/users/Bharat123rox/orgs",
"repos_url": "https://api.github.com/users/Bharat123rox/repos",
"events_url": "https://api.github.com/users/Bharat123rox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bharat123rox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Do you mind adding a link to the much more extended pages on adding and sharing a dataset in the new documentation?",
"Sure, I will do that too"
] | 1,594,832,644,000 | 1,595,412,862,000 | 1,595,412,862,000 | CONTRIBUTOR | null | Add inline links to `Contributing.md` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/398/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/398",
"html_url": "https://github.com/huggingface/datasets/pull/398",
"diff_url": "https://github.com/huggingface/datasets/pull/398.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/398.patch",
"merged_at": 1595412862000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/397/comments | https://api.github.com/repos/huggingface/datasets/issues/397/events | https://github.com/huggingface/datasets/pull/397 | 657,510,856 | MDExOlB1bGxSZXF1ZXN0NDQ5NjE1MDA4 | 397 | Add contiguous sharding | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,832,578,000 | 1,595,005,171,000 | 1,595,005,171,000 | CONTRIBUTOR | null | This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing.
Usage:
```
nlp.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)])
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/397/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/397",
"html_url": "https://github.com/huggingface/datasets/pull/397",
"diff_url": "https://github.com/huggingface/datasets/pull/397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/397.patch",
"merged_at": 1595005170000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/396/comments | https://api.github.com/repos/huggingface/datasets/issues/396/events | https://github.com/huggingface/datasets/pull/396 | 657,477,952 | MDExOlB1bGxSZXF1ZXN0NDQ5NTg3MDQ4 | 396 | Fix memory issue when doing select | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,829,704,000 | 1,594,886,852,000 | 1,594,886,851,000 | MEMBER | null | We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name.
Fix #395 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/396/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/396",
"html_url": "https://github.com/huggingface/datasets/pull/396",
"diff_url": "https://github.com/huggingface/datasets/pull/396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/396.patch",
"merged_at": 1594886850000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/395/comments | https://api.github.com/repos/huggingface/datasets/issues/395/events | https://github.com/huggingface/datasets/issues/395 | 657,454,983 | MDU6SXNzdWU2NTc0NTQ5ODM= | 395 | Memory issue when doing select | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,594,827,818,000 | 1,594,886,851,000 | 1,594,886,851,000 | MEMBER | null | As noticed in #389, the following code loads the entire wikipedia in memory.
```python
import nlp
w = nlp.load_dataset("wikipedia", "20200501.en", split="train")
w.select([0])
```
This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626) for some reason, that tries to serialize the function with all the wikipedia data with it.
It's not the case with `.map` or `.filter`.
However functions that are based on `.select` like `.shuffle`, `.shard`, `.train_test_split`, `.sort` are affected.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/395/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/394/comments | https://api.github.com/repos/huggingface/datasets/issues/394/events | https://github.com/huggingface/datasets/pull/394 | 657,425,548 | MDExOlB1bGxSZXF1ZXN0NDQ5NTQzNTE0 | 394 | Remove remaining nested dict | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,825,552,000 | 1,594,885,192,000 | 1,594,885,191,000 | CONTRIBUTOR | null | This PR deletes the remaining unnecessary nested dict
#378 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/394/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/394",
"html_url": "https://github.com/huggingface/datasets/pull/394",
"diff_url": "https://github.com/huggingface/datasets/pull/394.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/394.patch",
"merged_at": 1594885191000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/393/comments | https://api.github.com/repos/huggingface/datasets/issues/393/events | https://github.com/huggingface/datasets/pull/393 | 657,330,911 | MDExOlB1bGxSZXF1ZXN0NDQ5NDY1MTAz | 393 | Fix extracted files directory for the DownloadManager | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,817,995,000 | 1,595,005,336,000 | 1,595,005,334,000 | MEMBER | null | The cache dir was often cluttered by extracted files because of the download manager.
For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to cache_dir/downloads/extracted. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/393/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/393",
"html_url": "https://github.com/huggingface/datasets/pull/393",
"diff_url": "https://github.com/huggingface/datasets/pull/393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/393.patch",
"merged_at": 1595005334000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/392/comments | https://api.github.com/repos/huggingface/datasets/issues/392/events | https://github.com/huggingface/datasets/pull/392 | 657,313,738 | MDExOlB1bGxSZXF1ZXN0NDQ5NDUwOTkx | 392 | Style change detection | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,816,334,000 | 1,595,337,516,000 | 1,595,006,003,000 | CONTRIBUTOR | null | Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents.
- There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels are a subset of PAN20's). These would probably make more sense as separate datasets (like wmt is now)
- I've converted the integer 0,1 values to a boolean
- Using manually downloaded data again. This might be changed at some point following the discussion in https://github.com/huggingface/nlp/pull/349. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/392/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/392",
"html_url": "https://github.com/huggingface/datasets/pull/392",
"diff_url": "https://github.com/huggingface/datasets/pull/392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/392.patch",
"merged_at": 1595006003000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/391/comments | https://api.github.com/repos/huggingface/datasets/issues/391/events | https://github.com/huggingface/datasets/issues/391 | 656,991,432 | MDU6SXNzdWU2NTY5OTE0MzI= | 391 | 🌟 [Metric Request] WOOD score | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2459308248,
"node_id": "MDU6TGFiZWwyNDU5MzA4MjQ4",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20request",
"name": "metric request",
"color": "d4c5f9",
"default": false,
"description": "Requesting to add a new metric"
}
] | open | false | null | [] | null | [] | 1,594,775,797,000 | 1,603,813,408,000 | null | NONE | null | WOOD score paper : https://arxiv.org/pdf/2007.06898.pdf
Abstract :
>Models that surpass human performance on several popular benchmarks display significant degradation in performance on exposure to Out of Distribution (OOD) data. Recent research has shown that models overfit to spurious biases and ‘hack’ datasets, in lieu of learning generalizable features like humans. In order to stop the inflation in model performance – and thus overestimation in AI systems’ capabilities – we propose a simple and novel evaluation metric, WOOD Score, that encourages generalization during evaluation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/391/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/390/comments | https://api.github.com/repos/huggingface/datasets/issues/390/events | https://github.com/huggingface/datasets/pull/390 | 656,956,384 | MDExOlB1bGxSZXF1ZXN0NDQ5MTYxMzY3 | 390 | Concatenate datasets | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks cool :)\r\n\r\nI feel like \r\n```python\r\nconcatenated_dataset = dataset1.concatenate(dataset2)\r\n```\r\ncould be more natural. What do you think ?\r\n\r\nAlso could you also concatenate the `nlp.Dataset._data_files` ?\r\n```python\r\nreturn cls(table, info=info, split=split, data_files=self._data_files + other_dataset._data_files)\r\n```",
"I feel like \"WikiBooks\" would be a multi task dataset that could fit in the #217 discussion.\r\nNot sure concatenate should be the solution for a multi task dataset.",
"Thanks for the suggestion! `dset1.concatenate(dset2)` does feel more natural. Although this seems to be a different \"class\" of transformation function than map() or filter(), acting on two datasets rather than on one. I would prefer the function signature treat both datasets symmetrically.\r\n\r\nPython lists have `list1 + list2` or `list1.extend(list2)`.\r\nNumPy has `np.concatenate((arr1, arr2))`.\r\nPandas has `pd.join((df1, df2))`.\r\nPyTorch has `ConcatDataset((dset1, dset2))`.\r\n\r\nGiven the symmetrical treatment and clear communication that this creates a new object, rather than a simple chaining on the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work?",
"The multi-task discussion is interesting, thanks for pointing me to that! I'll be focusing on T5 in a few weeks, so I'm sure I'll have many opinions then :). For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text.",
"> Given the symmetrical treatment and clear communication that this creates a new object, rather than a simple chaining on the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work?\r\n\r\nYep I like this idea. Maybe `nlp.concatenate_datasets()` ?\r\n\r\n> For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text.\r\n\r\nI agree :)",
"Great, just updated!"
] | 1,594,769,077,000 | 1,595,411,398,000 | 1,595,411,398,000 | CONTRIBUTOR | null | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/390/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/390",
"html_url": "https://github.com/huggingface/datasets/pull/390",
"diff_url": "https://github.com/huggingface/datasets/pull/390.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/390.patch",
"merged_at": 1595411398000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/389/comments | https://api.github.com/repos/huggingface/datasets/issues/389/events | https://github.com/huggingface/datasets/pull/389 | 656,921,768 | MDExOlB1bGxSZXF1ZXN0NDQ5MTMyOTU5 | 389 | Fix pickling of SplitDict | {
"login": "mitchellgordon95",
"id": 7490438,
"node_id": "MDQ6VXNlcjc0OTA0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitchellgordon95",
"html_url": "https://github.com/mitchellgordon95",
"followers_url": "https://api.github.com/users/mitchellgordon95/followers",
"following_url": "https://api.github.com/users/mitchellgordon95/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchellgordon95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitchellgordon95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchellgordon95/subscriptions",
"organizations_url": "https://api.github.com/users/mitchellgordon95/orgs",
"repos_url": "https://api.github.com/users/mitchellgordon95/repos",
"events_url": "https://api.github.com/users/mitchellgordon95/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitchellgordon95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"By the way, the reason this is an issue for me is because I want to be able to \"save\" changes made to a dataset by writing something to disk. In this case, I would like to pre-process my dataset once, and then train multiple models on the dataset later without having to re-process the data. \r\n\r\nIs pickling/unpickling the Dataset object the \"sanctioned\" way of doing this? Or is there a better way that I'm missing?",
"I've had success with saving datasets to disk via:\r\n\r\n```python\r\ncache_file = \"/my/dset.cache\"\r\ndset = dset.map(whatever, cache_file_name=cache_file)\r\n# then, later\r\ndset = nlp.Dataset.from_file(cache_file)\r\n```\r\n\r\nThis restores the dataset with all the attributes I need.",
"Thanks @jarednielsen, that makes sense. I'm a little wary of messing with the cache files, since I still don't really understand what's going on under the hood with Apache Arrow. \r\n\r\nRelated question: I'd like to do parallel pre-processing of the dataset. I know how to break the dataset up via sharding, but is there any way to combine the shards back together again once the processing is done? Right now I'm probably just going to iterate over each shard, write the contexts to a txt file, and then cat the txt files, but it feels like there ought to be a nicer way to concatenate datasets.",
"Haha, opened a PR for that functionality about an hour ago: https://github.com/huggingface/nlp/pull/390. Glad we're on the same page :)",
"Datasets are not supposed to be pickled as pickle tries to put all the dataset in memory if I'm not wrong (and write all the data on disk).\r\nThe concatenate method however is a very cool feature, looking forward to having it merged :)",
"Ah, yes, you are correct. The pickle file contains the whole dataset, not just the cache names, which is not quite what I expected.\r\n\r\nI tried adding a warning when pickling a Dataset, to prevent others like me from trying it. Interestingly, however, the warning is raised whenever any function on the dataset is called (select, shard, etc.). \r\n\r\n```\r\nimport nlp\r\nwiki = nlp.load_dataset('wikipedia', split='train')\r\nwiki = wiki.shard(16, 0) # Triggers pickling of dataset\r\n```\r\n\r\nI believe this is because [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which gets the function signature, is actually pickling the whole dataset (and thereby serializing all the data to text). I checked by printing that string, and sure enough it was full of Wikipedia articles.\r\n\r\nI don't think the whole pickling thing is worth the effort, so I'll close the PR. But I did want to mention this serialization behavior in case it's not intended.",
"Thanks for reporting. Indeed this line shouldn't serialize the data but only the function itself.\r\n",
"Keeping this open because I would like to keep brainstorming a bit on this.\r\n\r\nOne note on this is that we should have a clean serialization workflow, probably one that could serialize to a few formats (arrow, parquet and tfrecords come to mind).",
"This PR could be useful. My specific use case is `multiprocessing.Pool` for parallel preprocessing (because of the Python tokenization bottleneck at https://github.com/huggingface/transformers/issues/5729). I shard a large dataset, run map on each shard within a multiprocessing pool, and then concatenate them back together. This is only possible if a dataset can be pickled, otherwise the logic is much more complex. There's no reason to make it un-picklable, even if it's not the recommended usage.\r\n\r\n```python\r\nimport nlp\r\nimport multiprocessing\r\n\r\ndef func(ex):\r\n return {\"text\": \"Prefix: \" + ex[\"text\"]}\r\n\r\ndef map_helper(dset):\r\n return dset.map(func)\r\n\r\nn_shards = 16\r\ndset = nlp.load_dataset(\"wikitext-2-raw-v1\", split=\"train\")\r\nwith multiprocessing.Pool(processes=n_shards) as pool:\r\n shards = pool.map(map_helper, [dset.shard(n_shards, i, contiguous=True) for i in range(n_shards)])\r\ndset = nlp.concatenate_datasets(shards)\r\n```\r\n",
"Yes I agree.\r\n#423 just got merged and should allow serialization of `SplitDict`. Could you try it and see if it'ok on your side now ?",
"Closing this, assuming it was fixed in #423."
] | 1,594,763,619,000 | 1,596,551,890,000 | 1,596,551,890,000 | CONTRIBUTOR | null | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
On line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). This is because SplitDict subclasses dict, and pickle treats [dicts specially](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). Pickle expects access to `dict.__setitem__`, but this is disallowed by the class.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/389/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/389",
"html_url": "https://github.com/huggingface/datasets/pull/389",
"diff_url": "https://github.com/huggingface/datasets/pull/389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/389.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/388/comments | https://api.github.com/repos/huggingface/datasets/issues/388/events | https://github.com/huggingface/datasets/issues/388 | 656,707,497 | MDU6SXNzdWU2NTY3MDc0OTc= | 388 | 🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17 | {
"login": "SamuelCahyawijaya",
"id": 2826602,
"node_id": "MDQ6VXNlcjI4MjY2MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelCahyawijaya",
"html_url": "https://github.com/SamuelCahyawijaya",
"followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers",
"following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs",
"repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos",
"events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"similar slow download speed here for nlp.load_dataset('wmt14', 'fr-en')\r\n`\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 658M/658M [1:00:42<00:00, 181kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 918M/918M [1:39:38<00:00, 154kB/s]\r\nDownloading: 2%|▉ | 40.9M/2.37G [04:48<5:03:06, 128kB/s]\r\n`\r\nCould we just download a specific subdataset in 'wmt14', such as 'newstest14'? ",
"> The code runs but the download speed is extremely slow, the same behaviour is not observed on wmt16 and wmt18\r\n\r\nThe original source for the files may provide slow download speeds.\r\nWe can probably host these files ourselves.\r\n\r\n> When trying to download wmt17 zh-en, I got the following error:\r\n> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz\r\n\r\nLooks like the file`UNv1.0.en-zh.tar.gz` is missing, or the url changed. We need to fix that\r\n\r\n> Could we just download a specific subdataset in 'wmt14', such as 'newstest14'?\r\n\r\nRight now I don't think it's possible. Maybe @patrickvonplaten knows more about it\r\n",
"Yeah, the download speed is sadly always extremely slow :-/. \r\nI will try to check out the `wmt17 zh-en` bug :-) ",
"Maybe this can be used - https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 "
] | 1,594,741,001,000 | 1,596,639,392,000 | null | NONE | null | 1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code:
```
nlp.load_dataset('wmt14','de-en')
nlp.load_dataset('wmt15','de-en')
nlp.load_dataset('wmt17','de-en')
nlp.load_dataset('wmt19','de-en')
```
The code runs but the download speed is **extremely slow**, the same behaviour is not observed on `wmt16` and `wmt18`
2. When trying to download `wmt17 zh-en`, I got the following error:
> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/388/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/387/comments | https://api.github.com/repos/huggingface/datasets/issues/387/events | https://github.com/huggingface/datasets/issues/387 | 656,361,357 | MDU6SXNzdWU2NTYzNjEzNTc= | 387 | Conversion through to_pandas output numpy arrays for lists instead of python objects | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"To convert from arrow type we have three options: to_numpy, to_pandas and to_pydict/to_pylist.\r\n\r\n- to_numpy and to_pandas return numpy arrays instead of lists but are very fast.\r\n- to_pydict/to_pylist can be 100x slower and become the bottleneck for reading data, but at least they return lists.\r\n\r\nMaybe we can have to_pydict/to_pylist as the default and use to_numpy or to_pandas when the format (set by `set_format`) is 'numpy' or 'pandas'"
] | 1,594,707,841,000 | 1,594,985,820,000 | 1,594,985,820,000 | MEMBER | null | In a related question, the conversion through to_pandas output numpy arrays for the lists instead of python objects.
Here is an example:
```python
>>> dataset._data.slice(key, 1).to_pandas().to_dict("list")
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [array([ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292,
1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938,
4267, 12223, 21811, 1117, 2554, 119, 102])], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0])], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1])]}
>>> type(dataset._data.slice(key, 1).to_pandas().to_dict("list")['input_ids'][0])
<class 'numpy.ndarray'>
>>> dataset._data.slice(key, 1).to_pydict()
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [[101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/387/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/387/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/386/comments | https://api.github.com/repos/huggingface/datasets/issues/386/events | https://github.com/huggingface/datasets/pull/386 | 655,839,067 | MDExOlB1bGxSZXF1ZXN0NDQ4MjQ1NDI4 | 386 | Update dataset loading and features - Add TREC dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just copied the files that are on google storage to follow the new `_relative_data_dir ` format. It should be good to merge now :)\r\n\r\nWell actually it seems there are some merge conflicts to fix first"
] | 1,594,645,818,000 | 1,594,887,478,000 | 1,594,887,478,000 | MEMBER | null | This PR:
- add a template for a new dataset script
- update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way when you update a loading script the data will be automatically updated instead of falling back to the previous version (which is usually a outdated). This makes it in particular easier to iterate when writing a new dataset loading script.
- fix a bug in the `ClassLabel` feature and make it more flexible so that its methods `str2int` and `int2str` can also accept list, numpy arrays and PyTorch/TensorFlow tensors.
- add the TREC-6 dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/386/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/386",
"html_url": "https://github.com/huggingface/datasets/pull/386",
"diff_url": "https://github.com/huggingface/datasets/pull/386.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/386.patch",
"merged_at": 1594887478000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/385/comments | https://api.github.com/repos/huggingface/datasets/issues/385/events | https://github.com/huggingface/datasets/pull/385 | 655,663,997 | MDExOlB1bGxSZXF1ZXN0NDQ4MTAzMjY5 | 385 | Remove unnecessary nested dict | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"We can probably scan the dataset scripts with a regexpr to try to identify this pattern cc @patrickvonplaten maybe",
"@mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.\r\n\r\n```python\r\n#!/usr/bin/env python3\r\n\r\nfrom nlp import prepare_module, DownloadConfig, import_main_class, hf_api\r\nimport tempfile\r\n\r\n\r\ndef scan_for_nested_unnecessary_dict(dataset_name):\r\n\r\n def load_builder_class(dataset_name):\r\n module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))\r\n return import_main_class(module_path)\r\n\r\n def load_configs(dataset_name):\r\n builder_cls = load_builder_class(dataset_name)\r\n if len(builder_cls.BUILDER_CONFIGS) == 0:\r\n return [None]\r\n return builder_cls.BUILDER_CONFIGS\r\n\r\n def scan_features_for_nested_dict(features):\r\n is_sequence = False\r\n if hasattr(features, \"_type\"):\r\n if features._type != 'Sequence':\r\n return False\r\n else:\r\n is_sequence = True\r\n features = features.feature\r\n\r\n if isinstance(features, list):\r\n for value in features:\r\n if scan_features_for_nested_dict(value):\r\n return True\r\n return False\r\n\r\n elif isinstance(features, dict):\r\n for key, value in features.items():\r\n if is_sequence and len(features.keys()) == 1 and hasattr(features[key], \"_type\") and features[key]._type != \"Sequence\":\r\n return True\r\n if scan_features_for_nested_dict(value):\r\n return True\r\n return False\r\n elif hasattr(features, \"_type\"):\r\n return False\r\n else:\r\n raise ValueError(f\"{features} should be either a list, a dict or a feature\")\r\n\r\n configs = load_configs(dataset_name)\r\n\r\n for config in configs:\r\n with tempfile.TemporaryDirectory() as processed_temp_dir:\r\n # create config and dataset\r\n dataset_builder_cls = load_builder_class(dataset_name)\r\n name = config.name if config is not None else None\r\n dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)\r\n\r\n is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)\r\n if is_nested_dict_in_dataset:\r\n print(f\"{dataset_name} with {name} needs refactoring\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n scan_for_nested_unnecessary_dict(\"race\") # prints True\r\n scan_for_nested_unnecessary_dict(\"mlqa\") # prints True\r\n scan_for_nested_unnecessary_dict(\"squad\") # prints Nothing\r\n\r\n # ran the following lines for 1min and seems to work -> didn't check for all datasets though\r\n# api = hf_api.HfApi()\r\n# all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]\r\n# for dataset in all_datasets:\r\n# scan_for_nested_unnecessary_dict(dataset)\r\n```",
"> @mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.\r\n> \r\n> ```python\r\n> #!/usr/bin/env python3\r\n> \r\n> from nlp import prepare_module, DownloadConfig, import_main_class, hf_api\r\n> import tempfile\r\n> \r\n> \r\n> def scan_for_nested_unnecessary_dict(dataset_name):\r\n> \r\n> def load_builder_class(dataset_name):\r\n> module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))\r\n> return import_main_class(module_path)\r\n> \r\n> def load_configs(dataset_name):\r\n> builder_cls = load_builder_class(dataset_name)\r\n> if len(builder_cls.BUILDER_CONFIGS) == 0:\r\n> return [None]\r\n> return builder_cls.BUILDER_CONFIGS\r\n> \r\n> def scan_features_for_nested_dict(features):\r\n> is_sequence = False\r\n> if hasattr(features, \"_type\"):\r\n> if features._type != 'Sequence':\r\n> return False\r\n> else:\r\n> is_sequence = True\r\n> features = features.feature\r\n> \r\n> if isinstance(features, list):\r\n> for value in features:\r\n> if scan_features_for_nested_dict(value):\r\n> return True\r\n> return False\r\n> \r\n> elif isinstance(features, dict):\r\n> for key, value in features.items():\r\n> if is_sequence and len(features.keys()) == 1 and hasattr(features[key], \"_type\") and features[key]._type != \"Sequence\":\r\n> return True\r\n> if scan_features_for_nested_dict(value):\r\n> return True\r\n> return False\r\n> else:\r\n> raise ValueError(f\"{features} should be either a list of a dict\")\r\n> \r\n> configs = load_configs(dataset_name)\r\n> \r\n> for config in configs:\r\n> with tempfile.TemporaryDirectory() as processed_temp_dir:\r\n> # create config and dataset\r\n> dataset_builder_cls = load_builder_class(dataset_name)\r\n> name = config.name if config is not None else None\r\n> dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)\r\n> \r\n> is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)\r\n> if is_nested_dict_in_dataset:\r\n> print(f\"{dataset_name} with {name} needs refactoring\")\r\n> \r\n> \r\n> if __name__ == \"__main__\":\r\n> scan_for_nested_unnecessary_dict(\"race\") # prints True\r\n> scan_for_nested_unnecessary_dict(\"mlqa\") # prints True\r\n> scan_for_nested_unnecessary_dict(\"squad\") # prints Nothing\r\n> \r\n> # ran the following lines for 1min and seems to work -> didn't check for all datasets though\r\n> # api = hf_api.HfApi()\r\n> # all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]\r\n> # for dataset in all_datasets:\r\n> # scan_for_nested_unnecessary_dict(dataset)\r\n> ```\r\n\r\nGreat, I will try it",
"I'm not sure the work on this PR was finished @lhoestq cc @mariamabarham @patrickvonplaten ",
"Sorry for that, apparently there are other datasets that could have unnecessary nested dicts.\r\nWe can have another PR to scan and fix the other datasets.\r\n"
] | 1,594,629,983,000 | 1,594,812,458,000 | 1,594,807,433,000 | CONTRIBUTOR | null | This PR is removing unnecessary nested dictionary used in some datasets. For now the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/385/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/385",
"html_url": "https://github.com/huggingface/datasets/pull/385",
"diff_url": "https://github.com/huggingface/datasets/pull/385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/385.patch",
"merged_at": 1594807433000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/383/comments | https://api.github.com/repos/huggingface/datasets/issues/383/events | https://github.com/huggingface/datasets/pull/383 | 655,291,201 | MDExOlB1bGxSZXF1ZXN0NDQ3ODI0OTky | 383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | {
"login": "gaguilar",
"id": 5833357,
"node_id": "MDQ6VXNlcjU4MzMzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaguilar",
"html_url": "https://github.com/gaguilar",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions",
"organizations_url": "https://api.github.com/users/gaguilar/orgs",
"repos_url": "https://api.github.com/users/gaguilar/repos",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaguilar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I am checking the details of the CI log for the failed test, but I don't see how the error relates to the code I added; the error is coming from a config builder different than the `LinceConfig`, and it crashes when `self.config.data_files` because is self.config is None. I would appreciate if someone could help me find out where I could have messed things up :)\r\n\r\nAlso, the real and dummy data tests passed before committing and pushing my changes.\r\n\r\nThanks a lot in advance!\r\n\r\n```\r\n=================================== FAILURES ===================================\r\n____________________ AWSDatasetTest.test_load_dataset_text _____________________\r\n\r\nself = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_text>\r\ndataset_name = 'text'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs)\r\n\r\ntests/test_dataset_common.py:243: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_dataset_common.py:137: in check_load_dataset\r\n try_from_hf_gcs=False,\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:432: in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n../.local/lib/python3.6/site-packages/nlp/builder.py:466: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <nlp.datasets.text.bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b.text.Text object at 0x7efa744ffb70>\r\ndl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7efb304c52b0>\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" The `datafiles` kwarg in load_dataset() can be a str, List[str], Dict[str,str], or Dict[str,List[str]].\r\n \r\n If str or List[str], then the dataset returns only the 'train' split.\r\n If dict, then keys should be from the `nlp.Split` enum.\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n # Handle case with only one split\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"files\": files})]\r\n else:\r\n # Handle case with several splits and a dict mapping\r\n splits = []\r\n for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:\r\n> if split_name in self.config.data_files:\r\nE TypeError: argument of type 'NoneType' is not iterable\r\n\r\n../.local/lib/python3.6/site-packages/nlp/datasets/text/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b/text.py:24: TypeError\r\n=============================== warnings summary ===============================\r\n... \r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_text\r\n====== 1 failed, 963 passed, 532 skipped, 5 warnings in 166.33s (0:02:46) ======\r\n\r\nExited with code exit status 1\r\n```",
"@lhoestq Hi Quentin, I was wondering if you could give some feedback on this error from the `run_dataset_script_tests` script. It seems that's coming from a different config builder than the one I added, so I am not sure why this error would occur. Thanks in advance!",
"Awesome! Thank you for all your comments! 👌 I will update the PR in a bit with all the required changes 🙂 \r\n\r\nLet me just provide a bit of context for my changes:\r\n\r\nI was referring to the GLUE, XTREME and WNUT_17 dataset scripts to build mine (not sure if the new documentation was available last week). This is where I took the naming convention for the citation and description variables. Also, these scripts didn't have the `BUILDER_CONFIG_CLASS = LinceConfig` line so I commented this out thinking I didn't need that; I tried this line in my attempts to make the real and dummy data tests pass but it was not helping. \r\n\r\nThe problem I was facing was that the tests were passing a default `BuilderConfig` (i.e., `self.config.name` property was set to `'default'` and my custom properties were not available). This means, for example, that within the `def _info(...)` method, I was not able to access the specific fields of my `LinceConfig` class (which is why I have now a global variable `_LINCE_CITATIONS`, to detach the individual citations from the corresponding LinceConfig objects, as well as I am constructing manually the feature infos). This default `BuilderConfig` is why I added the `if not isinstance(self.config, LinceConfig): return []` statement. Otherwise, accessing custom properties like `self.config.colnames` was failing the test because such properties did not exist in the default config (i.e., it was not a `LinceConfig`).\r\n\r\nI will update the PR and see if these problems happen in the CI tests.\r\n\r\nThanks again for the follow-up! @lhoestq ",
"Ok I see !\r\n\r\nTo give you more details: the line `BUILDER_CONFIG_CLASS = LinceConfig` tells the tests how to instantiate a config for this dataset. Therefore if you have this line you should have all the fields of your config available.\r\n\r\nTo fix the errors you get you'll have to, first, have the `BUILDER_CONFIG_CLASS = LinceConfig` line, and second, add default values for the parameters of your config (or the tests functions will be unable to instantiate it by calling `LinceConfig()`.\r\n\r\nAn example of dataset with a custom config with additional filed like this one is [biomrc](https://github.com/huggingface/nlp/blob/master/datasets/biomrc/biomrc.py).\r\nFeel free to give a look at it if you want.",
"Thanks for the reference!\r\n\r\nI just updated the PR with the suggested changes. It seems the CI failed on the same test you said we could ignore, so I guess it's okay :) \r\n\r\nPlease let me know if there is something else I may need to change."
] | 1,594,506,920,000 | 1,594,916,386,000 | 1,594,916,386,000 | CONTRIBUTOR | null | Hi,
First of all, this library is really cool! Thanks for putting all of this together!
This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ):
> 1. Why do we need LinCE?
>LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details).
>Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark.
The data comes from social media and here's the summary table of tasks per language pair:
| Language Pairs | LID | POS | NER | SA |
|----------------------------------------|-----|-----|-----|----|
| Spanish-English | ✅ | ✅ | ✅ | ✅ |
| Hindi-English | ✅ | ✅ | ✅ | |
| Modern Standard Arabic-Egyptian Arabic | ✅ | | ✅ | |
| Nepali-English | ✅ | | | |
The tasks are as follows:
* LID: token-level language identification
* POS: part-of-speech tagging
* NER: named entity recognition
* SA: sentiment analysis
With the exception of MSA-EA, the rest of the datasets contain token-level LID labels.
## Usage
For Spanish-English LID, we can load the data as follows:
```
import nlp
data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng')
for split in data:
print(data[split])
```
Here's the output:
```
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332)
Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289)
```
Here's the list of shortcut names for every dataset available in LinCE:
* `lid_spaeng`
* `lid_hineng`
* `lid_nepeng`
* `lid_msaea`
* `pos_spaeng`
* `pos_hineng`
* `ner_spaeng`
* `ner_hineng`
* `ner_msaea`
* `sa_spaeng`
All the numbers match with Table 3 in the LinCE [paper](https://www.aclweb.org/anthology/2020.lrec-1.223.pdf). Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script.
## Features
Here is how the features look in the case of language identification (LID) tasks:
| LID Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
For part-of-speech (POS) tagging:
| POS Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `pos` | `list<str>` | List of POS tags (string) of a sentence |
For named entity recognition (NER):
| NER Feature | Type | Description |
|----------------------|---------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `ner` | `list<str>` | List of NER labels (string) of a sentence |
**NOTE**: the MSA-EA NER dataset does not contain the `lid` feature.
For sentiment analysis (SA):
| SA Feature | Type | Description |
|---------------------|-------------|-------------------------------------------|
| `idx` | `int` | Dataset index of current sentence |
| `tokens` | `list<str>` | List of tokens (string) of a sentence |
| `lid` | `list<str>` | List of LID labels (string) of a sentence |
| `sa` | `str` | Sentiment label (string) of a sentence |
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/383/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/383",
"html_url": "https://github.com/huggingface/datasets/pull/383",
"diff_url": "https://github.com/huggingface/datasets/pull/383.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/383.patch",
"merged_at": 1594916386000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/382/comments | https://api.github.com/repos/huggingface/datasets/issues/382/events | https://github.com/huggingface/datasets/issues/382 | 655,290,482 | MDU6SXNzdWU2NTUyOTA0ODI= | 382 | 1080 | {
"login": "saq194",
"id": 60942503,
"node_id": "MDQ6VXNlcjYwOTQyNTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/60942503?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saq194",
"html_url": "https://github.com/saq194",
"followers_url": "https://api.github.com/users/saq194/followers",
"following_url": "https://api.github.com/users/saq194/following{/other_user}",
"gists_url": "https://api.github.com/users/saq194/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saq194/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saq194/subscriptions",
"organizations_url": "https://api.github.com/users/saq194/orgs",
"repos_url": "https://api.github.com/users/saq194/repos",
"events_url": "https://api.github.com/users/saq194/events{/privacy}",
"received_events_url": "https://api.github.com/users/saq194/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,506,547,000 | 1,594,507,778,000 | 1,594,507,778,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/382/timeline | null | null | null | false |
|
https://api.github.com/repos/huggingface/datasets/issues/381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/381/comments | https://api.github.com/repos/huggingface/datasets/issues/381/events | https://github.com/huggingface/datasets/issues/381 | 655,277,119 | MDU6SXNzdWU2NTUyNzcxMTk= | 381 | NLp | {
"login": "Spartanthor",
"id": 68147610,
"node_id": "MDQ6VXNlcjY4MTQ3NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/68147610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Spartanthor",
"html_url": "https://github.com/Spartanthor",
"followers_url": "https://api.github.com/users/Spartanthor/followers",
"following_url": "https://api.github.com/users/Spartanthor/following{/other_user}",
"gists_url": "https://api.github.com/users/Spartanthor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Spartanthor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Spartanthor/subscriptions",
"organizations_url": "https://api.github.com/users/Spartanthor/orgs",
"repos_url": "https://api.github.com/users/Spartanthor/repos",
"events_url": "https://api.github.com/users/Spartanthor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Spartanthor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,500,614,000 | 1,594,500,639,000 | 1,594,500,639,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/381/timeline | null | null | null | false |
|
https://api.github.com/repos/huggingface/datasets/issues/378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/378/comments | https://api.github.com/repos/huggingface/datasets/issues/378/events | https://github.com/huggingface/datasets/issues/378 | 655,226,316 | MDU6SXNzdWU2NTUyMjYzMTY= | 378 | [dataset] Structure of MLQA seems unecessary nested | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Same for the RACE dataset: https://github.com/huggingface/nlp/blob/master/datasets/race/race.py\r\n\r\nShould we scan all the datasets to remove this pattern of un-necessary nesting?",
"You're right, I think we don't need to use the nested dictionary. \r\n"
] | 1,594,480,568,000 | 1,594,829,840,000 | 1,594,829,840,000 | MEMBER | null | The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97
Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds?
```python
features=nlp.Features(
{
"context": nlp.Value("string"),
"questions": nlp.features.Sequence({"question": nlp.Value("string")}),
"answers": nlp.features.Sequence(
{"text": nlp.Value("string"), "answer_start": nlp.Value("int32"),}
),
"ids": nlp.features.Sequence({"idx": nlp.Value("string")})
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/378/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/377/comments | https://api.github.com/repos/huggingface/datasets/issues/377/events | https://github.com/huggingface/datasets/issues/377 | 655,215,790 | MDU6SXNzdWU2NTUyMTU3OTA= | 377 | Iyy!!! | {
"login": "ajinomoh",
"id": 68154535,
"node_id": "MDQ6VXNlcjY4MTU0NTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/68154535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ajinomoh",
"html_url": "https://github.com/ajinomoh",
"followers_url": "https://api.github.com/users/ajinomoh/followers",
"following_url": "https://api.github.com/users/ajinomoh/following{/other_user}",
"gists_url": "https://api.github.com/users/ajinomoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ajinomoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ajinomoh/subscriptions",
"organizations_url": "https://api.github.com/users/ajinomoh/orgs",
"repos_url": "https://api.github.com/users/ajinomoh/repos",
"events_url": "https://api.github.com/users/ajinomoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ajinomoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,476,667,000 | 1,594,477,851,000 | 1,594,477,851,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/377/timeline | null | null | null | false |
|
https://api.github.com/repos/huggingface/datasets/issues/376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/376/comments | https://api.github.com/repos/huggingface/datasets/issues/376/events | https://github.com/huggingface/datasets/issues/376 | 655,047,826 | MDU6SXNzdWU2NTUwNDc4MjY= | 376 | to_pandas conversion doesn't always work | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"**Edit**: other topic previously in this message moved to a new issue: https://github.com/huggingface/nlp/issues/387",
"Could you try to update pyarrow to >=0.17.0 ? It should fix the `to_pandas` bug\r\n\r\nAlso I'm not sure that structures like list<struct> are fully supported in the lib (none of the datasets use that).\r\nIt can cause issues when using dataset transforms like `filter` for example"
] | 1,594,416,811,000 | 1,595,239,845,000 | null | MEMBER | null | For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible.
Here is an example using the official SQUAD v2 JSON file.
This example was found while investigating #373.
```python
>>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.json"]}, download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD, version="1.0.0", field='data')
>>> squad['train']
Dataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 442)
>>> squad['train'][0]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 589, in __getitem__
format_kwargs=self._format_kwargs,
File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 529, in _getitem
outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict("list"))
File "pyarrow/array.pxi", line 559, in pyarrow.lib._PandasConvertible.to_pandas
File "pyarrow/table.pxi", line 1367, in pyarrow.lib.Table._to_pandas
File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 766, in table_to_blockmanager
blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)
File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 1101, in _table_to_blocks
list(extension_columns.keys()))
File "pyarrow/table.pxi", line 881, in pyarrow.lib.table_to_blocks
File "pyarrow/error.pxi", line 105, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>
```
cc @lhoestq would we have a way to detect this from the schema maybe?
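One possible way to do that — a minimal sketch assuming only pyarrow's public type-inspection helpers (`pa.types.is_list`, `pa.types.is_struct`); the function name is illustrative, and it walks the schema shown below to flag list-of-struct columns before calling `to_pandas`:
```python
import pyarrow as pa

def has_list_of_struct(schema: pa.Schema) -> bool:
    """Return True if any column contains a list whose items are structs."""
    def check(t: pa.DataType) -> bool:
        if pa.types.is_list(t):
            return pa.types.is_struct(t.value_type) or check(t.value_type)
        if pa.types.is_struct(t):
            return any(check(t[i].type) for i in range(t.num_fields))
        return False
    return any(check(field.type) for field in schema)
```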
Here is the schema for this pretty complex JSON:
```python
>>> squad['train'].schema
title: string
paragraphs: list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>
child 0, item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>
child 0, qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>
child 0, item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>
child 0, question: string
child 1, id: string
child 2, answers: list<item: struct<text: string, answer_start: int64>>
child 0, item: struct<text: string, answer_start: int64>
child 0, text: string
child 1, answer_start: int64
child 3, is_impossible: bool
child 4, plausible_answers: list<item: struct<text: string, answer_start: int64>>
child 0, item: struct<text: string, answer_start: int64>
child 0, text: string
child 1, answer_start: int64
child 1, context: string
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/376/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/375/comments | https://api.github.com/repos/huggingface/datasets/issues/375/events | https://github.com/huggingface/datasets/issues/375 | 655,023,307 | MDU6SXNzdWU2NTUwMjMzMDc= | 375 | TypeError when computing bertscore | {
"login": "willywsm1013",
"id": 13269577,
"node_id": "MDQ6VXNlcjEzMjY5NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/13269577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/willywsm1013",
"html_url": "https://github.com/willywsm1013",
"followers_url": "https://api.github.com/users/willywsm1013/followers",
"following_url": "https://api.github.com/users/willywsm1013/following{/other_user}",
"gists_url": "https://api.github.com/users/willywsm1013/gists{/gist_id}",
"starred_url": "https://api.github.com/users/willywsm1013/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willywsm1013/subscriptions",
"organizations_url": "https://api.github.com/users/willywsm1013/orgs",
"repos_url": "https://api.github.com/users/willywsm1013/repos",
"events_url": "https://api.github.com/users/willywsm1013/events{/privacy}",
"received_events_url": "https://api.github.com/users/willywsm1013/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I am not able to reproduce this issue on my side.\r\nCould you give us more details about the inputs you used ?\r\n\r\nI do get another error though:\r\n```\r\n~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/bert_score/utils.py in bert_cos_score_idf(model, refs, hyps, tokenizer, idf_dict, verbose, batch_size, device, all_layers)\r\n 371 return sorted(list(set(l)), key=lambda x: len(x.split(\" \")))\r\n 372 \r\n--> 373 sentences = dedup_and_sort(refs + hyps)\r\n 374 embs = []\r\n 375 iter_range = range(0, len(sentences), batch_size)\r\n\r\nValueError: operands could not be broadcast together with shapes (0,) (2,)\r\n```\r\nThat's because it gets numpy arrays as input and not lists. See #387 ",
"The other issue was fixed by #403 \r\n\r\nDo you still get this issue @willywsm1013 ?\r\n"
] | 1,594,413,464,000 | 1,599,490,212,000 | null | NONE | null | Hi,
I installed nlp 0.3.0 via pip, and my python version is 3.7.
When I tried to compute bertscore with the code:
```
import nlp
bertscore = nlp.load_metric('bertscore')
# load hyps and refs
...
print (bertscore.compute(hyps, refs, lang='en'))
```
I got the following error.
```
Traceback (most recent call last):
File "bert_score_evaluate.py", line 16, in <module>
print (bertscore.compute(hyps, refs, lang='en'))
File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metric.py", line 200, in compute
output = self._compute(predictions=predictions, references=references, **metrics_kwargs)
File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metrics/bertscore/fb176889831bf0ce995ed197edc94b2e9a83f647a869bb8c9477dbb2d04d0f08/bertscore.py", line 105, in _compute
hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline)
TypeError: get_hash() takes 3 positional arguments but 4 were given
```
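For context, this kind of `TypeError` usually comes from a version skew between the metric script and the installed `bert_score` package (newer versions of `get_hash()` take an extra `rescale_with_baseline` argument). A quick, hedged way to inspect what is actually installed — assuming `bert_score` exposes `__version__`:
```python
import inspect
import bert_score

print(bert_score.__version__)
# Compare the installed signature with the 4 arguments the metric script passes
print(inspect.signature(bert_score.utils.get_hash))
```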
It seems like there is something wrong with the `get_hash()` function? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/375/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/374/comments | https://api.github.com/repos/huggingface/datasets/issues/374/events | https://github.com/huggingface/datasets/pull/374 | 654,895,066 | MDExOlB1bGxSZXF1ZXN0NDQ3NTMxMzUy | 374 | Add dataset post processing for faiss indexes | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I changed the `wiki_dpr` script to ignore the last 24 examples for now. Hopefully we'll have the full version soon.\r\nThe datasets_infos.json and the data on GCS are updated.\r\n\r\nAnd I also added a check to make sure we don't have post processing resources in sub-directories.",
"I added a dummy config that can be loaded with:\r\n```python\r\nwiki = load_dataset(\"wiki_dpr\", \"dummy_psgs_w100_no_embeddings\", with_index=True, split=\"train\")\r\n```\r\nIt's only 6MB of arrow files and 30MB of index"
] | 1,594,398,359,000 | 1,594,647,843,000 | 1,594,647,841,000 | MEMBER | null | # Post processing of datasets for faiss indexes
Now that we can have datasets with embeddings (see `wiki_pr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbors queries.
## Implementation proposition
- Faiss indexes have to be added to the `nlp.Dataset` object, and therefore this happens in a different scope than what the `_split_generators` and `_generate_examples` methods of `nlp.DatasetBuilder` do. Therefore I added a new method for post processing of the `nlp.Dataset` object, called `_post_process` (name could change)
- The role of `_post_process` is to apply dataset transforms (filter/map etc.) or indexing functions (add_faiss_index) to modify/enrich the `nlp.Dataset` object. It is not part of the `download_and_prepare` process (which is focused on arrow file creation), so the post processing is run inside the `as_dataset` method.
- `_post_process` can generate new files (cached files from dataset transforms or serialized faiss indexes) and their names are defined by `_post_processing_resources`
- since we know what the post processing resources are, we can download them automatically from Google Storage instead of computing them, whenever they're available (as we do for arrow files)
I'd be happy to discuss these choices!
## The `wiki_dpr` index
It takes 1h20 and ~7GB of memory to compute. The final index is 1.42GB and takes ~1.5GB of memory.
This is pretty cool given that a naive flat index would take 170GB of memory to store the 21M vectors of dim 768.
I couldn't use the Faiss `index_factory` directly, as I needed to set the metric to inner product (a sketch follows).
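For illustration, a minimal sketch of constructing an inner-product index directly instead of going through `index_factory` — the HNSW parameters, the stand-in embeddings and the file name are placeholders, not the ones used for `wiki_dpr`:
```python
import faiss
import numpy as np

d = 768  # DPR embedding dimension
embeddings = np.random.rand(1000, d).astype("float32")  # stand-in for the real passage vectors

index = faiss.IndexHNSWFlat(d, 128, faiss.METRIC_INNER_PRODUCT)
index.add(embeddings)
faiss.write_index(index, "psgs_w100.faiss")  # serialize so it can be shipped alongside the dataset
```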
## Example of usage
```python
import nlp
dset = nlp.load_dataset(
"wiki_dpr",
"psgs_w100_with_nq_embeddings",
split="train",
with_index=True
)
print(len(dset), dset.list_indexes()) # (21015300, ['embeddings'])
```
(it also works with the dataset configuration without the embeddings because I added the index file in google storage for this one too)
## Demo
You can also check out a demo on Google Colab that shows how to use it with the DPRQuestionEncoder from transformers:
https://colab.research.google.com/drive/1FakNU8W5EPMcWff7iP1H6REg3XSS0YLp?usp=sharing
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/374/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/374",
"html_url": "https://github.com/huggingface/datasets/pull/374",
"diff_url": "https://github.com/huggingface/datasets/pull/374.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/374.patch",
"merged_at": 1594647841000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/373/comments | https://api.github.com/repos/huggingface/datasets/issues/373/events | https://github.com/huggingface/datasets/issues/373 | 654,845,133 | MDU6SXNzdWU2NTQ4NDUxMzM= | 373 | Segmentation fault when loading local JSON dataset as of #372 | {
"login": "vegarab",
"id": 24683907,
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegarab",
"html_url": "https://github.com/vegarab",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"repos_url": "https://api.github.com/users/vegarab/repos",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. I've replaced it with this, which skips over the data feature population step:\r\n\r\n\r\n```python\r\nimport os\r\n\r\nimport pyarrow.json as paj\r\n\r\nimport nlp as hf_nlp\r\n\r\nfrom nlp import DatasetInfo, BuilderConfig, SplitGenerator, Split, utils\r\nfrom nlp.arrow_writer import ArrowWriter\r\n\r\n\r\nclass JSONDatasetBuilder(hf_nlp.ArrowBasedBuilder):\r\n BUILDER_CONFIG_CLASS = BuilderConfig\r\n\r\n def _info(self):\r\n return DatasetInfo()\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [SplitGenerator(name=Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [Split.TRAIN, Split.VALIDATION, Split.TEST]:\r\n if split_name in self.config.data_files:\r\n files = self.config.data_files[split_name]\r\n if isinstance(files, str):\r\n files = [files]\r\n splits.append(SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n return splits\r\n\r\n def _prepare_split(self, split_generator):\r\n fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n fpath = os.path.join(self._cache_dir, fname)\r\n\r\n writer = ArrowWriter(path=fpath)\r\n\r\n generator = self._generate_tables(**split_generator.gen_kwargs)\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n writer.write_table(table)\r\n num_examples, num_bytes = writer.finalize()\r\n\r\n split_generator.split_info.num_examples = num_examples\r\n split_generator.split_info.num_bytes = num_bytes\r\n\r\n def _generate_tables(self, files):\r\n for i, file in enumerate(files):\r\n pa_table = paj.read_json(\r\n file\r\n )\r\n yield i, pa_table\r\n\r\n```",
"Yes, deleting the directory solves the error whenever I try to rerun.\r\n\r\nBy replacing the json-loader, you mean the cached file in my `site-packages` directory? e.g. `/home/XXX/.cache/lib/python3.7/site-packages/nlp/datasets/json/(...)/json.py` \r\n\r\nWhen I was testing this out before the #372 PR was merged I had issues installing it properly locally. Since the `json.py` script was downloaded instead of actually using the one provided in the local install. Manually updating that file seemed to solve it, but it didn't seem like a proper solution. Especially when having to run this on a remote compute cluster with no access to that directory.",
"I see, diving in the JSON file for SQuAD it's a pretty complex structure.\r\n\r\nThe best solution for you, if you have a dataset really similar to SQuAD would be to copy and modify the SQuAD data processing script. We will probably add soon an option to be able to specify file path to use instead of the automatic URL encoded in the script but in the meantime you can:\r\n- copy the [squad script](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) in a new script for your dataset\r\n- in the new script replace [these `urls_to_download `](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py#L99-L102) by `urls_to_download=self.config.data_files`\r\n- load the dataset with `dataset = load_dataset('path/to/your/new/script', data_files={nlp.Split.TRAIN: \"./datasets/train-v2.0.json\"})`\r\n\r\nThis way you can reuse all the processing logic of the SQuAD loading script.",
"This seems like a more sensible solution! Thanks, @thomwolf. It's been a little daunting to understand what these scripts actually do, due to the level of abstraction and central documentation.\r\n\r\nAm I correct in assuming that the `_generate_examples()` function is the actual procedure for how the data is loaded from file? Meaning that essentially with a file containing another format, that is the only function that requires re-implementation? I'm working with a lot of datasets that, due to licensing and privacy, cannot be published. As this library is so neatly integrated with the transformers library and gives easy access to public sets such as SQUAD and increased performance, it is very neat to be able to load my private sets as well. As of now, I have just been working on scripts for translating all my data into the SQUAD-format before using the json script, but I see that it might not be necessary after all. ",
"Yes `_generate_examples()` is the main entry point. If you change the shape of the returned dictionary you also need to update the `features` in the `_info`.\r\n\r\nI'm currently writing the doc so it should be easier soon to use the library and know how to add your datasets.\r\n",
"Could you try to update pyarrow to >=0.17.0 @vegarab ?\r\nI don't have any segmentation fault with my version of pyarrow (0.17.1)\r\n\r\nI tested with\r\n```python\r\nimport nlp\r\ns = nlp.load_dataset(\"json\", data_files=\"train-v2.0.json\", field=\"data\", split=\"train\")\r\ns[0]\r\n# {'title': 'Normans', 'paragraphs': [{'qas': [{'question': 'In what country is Normandy located?', 'id':...\r\n```",
"Also if you want to have your own dataset script, we now have a new documentation !\r\nSee here:\r\nhttps://huggingface.co/nlp/add_dataset.html",
"@lhoestq \r\nFor some reason, I am not able to reproduce the segmentation fault, on pyarrow==0.16.0. Using the exact same environment and file.\r\n\r\nAnyhow, I discovered that pyarrow>=0.17.0 is required to read in a JSON file where the pandas structs contain lists. Otherwise, pyarrow complains when attempting to cast the struct:\r\n```py\r\nimport nlp\r\n>>> s = nlp.load_dataset(\"json\", data_files=\"datasets/train-v2.0.json\", field=\"data\", split=\"train\")\r\nUsing custom data configuration default\r\n>>> s[0]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 558, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 498, in _getitem\r\n outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict(\"list\"))\r\n File \"pyarrow/array.pxi\", line 559, in pyarrow.lib._PandasConvertible.to_pandas\r\n File \"pyarrow/table.pxi\", line 1367, in pyarrow.lib.Table._to_pandas\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/pyarrow/pandas_compat.py\", line 766, in table_to_blockmanager\r\n blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/pyarrow/pandas_compat.py\", line 1101, in _table_to_blocks\r\n list(extension_columns.keys()))\r\n File \"pyarrow/table.pxi\", line 881, in pyarrow.lib.table_to_blocks\r\n File \"pyarrow/error.pxi\", line 105, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>\r\n>>> s\r\nDataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 35)\r\n```\r\n\r\nUpgrading to >=0.17.0 provides the same dataset structure, but accessing the records is possible without the same exception. \r\n\r\n",
"Very happy to see some extended documentation! ",
"#376 seems to be reporting the same issue as mentioned above. ",
"This issue helped me a lot, thanks.\r\nHope this issue will be fixed soon."
] | 1,594,393,465,000 | 1,608,017,240,000 | null | CONTRIBUTOR | null | The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault.
```
dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
```
causes
```
Using custom data configuration default
Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/XXX/.cache/huggingface/datasets/json/default/0.0.0...
0 tables [00:00, ? tables/s]Segmentation fault (core dumped)
```
where `./datasets/train-v2.0.json` is downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/.
This is consistent with other SQuAD-formatted JSON files.
When attempting to load the dataset again, I get the following:
```
Using custom data configuration default
Traceback (most recent call last):
File "dataloader.py", line 6, in <module>
'json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset
save_infos=save_infos,
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 382, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/XXX/.conda/envs/torch/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 368, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/XXX/.conda/envs/torch/lib/python3.7/os.py", line 223, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete'
```
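As noted in the comments above, a hedged workaround for the `FileExistsError` on re-runs is simply to remove the stale `.incomplete` cache directory before retrying (the path is the one from the traceback):
```python
import shutil

shutil.rmtree(
    "/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete",
    ignore_errors=True,
)
```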
(Not sure if you wanted this in the previous issue #369 or not as it was closed.) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/373/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/372/comments | https://api.github.com/repos/huggingface/datasets/issues/372/events | https://github.com/huggingface/datasets/pull/372 | 654,774,420 | MDExOlB1bGxSZXF1ZXN0NDQ3NDMzNTA4 | 372 | Make the json script more flexible | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,386,915,000 | 1,594,392,727,000 | 1,594,392,726,000 | MEMBER | null | Fix https://github.com/huggingface/nlp/issues/359
Fix https://github.com/huggingface/nlp/issues/369
The JSON script can now accept JSON files containing a single dict with the records as a list in one attribute of the dict (previously it only accepted JSON files containing records as rows of dicts in the file).
In this case, you should use `field=XXX` to indicate the name of the field in the JSON structure which contains the records you want to load. The records can be a dict of lists or a list of dicts.
E.g. to load the SQuAD dataset JSON (without using the `squad` specific dataset loading script), in which the data rows are in the `data` field of the JSON dict, you can do:
```python
from nlp import load_dataset
dataset = load_dataset('json', data_files='/PATH/TO/JSON', field='data')
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/372/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/372/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/372",
"html_url": "https://github.com/huggingface/datasets/pull/372",
"diff_url": "https://github.com/huggingface/datasets/pull/372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/372.patch",
"merged_at": 1594392725000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/371/comments | https://api.github.com/repos/huggingface/datasets/issues/371/events | https://github.com/huggingface/datasets/pull/371 | 654,668,242 | MDExOlB1bGxSZXF1ZXN0NDQ3MzQ4NDgw | 371 | Fix cached file path for metrics with different config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for the fast fix!"
] | 1,594,375,344,000 | 1,594,388,722,000 | 1,594,388,720,000 | MEMBER | null | The config name was not taken into account to build the cached file path.
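A minimal sketch of the idea behind the fix — the function name and the exact filename pattern are illustrative, not the code merged in this PR:
```python
import os

def metric_cache_file(cache_dir: str, metric_name: str, config_name: str, experiment_id: str) -> str:
    # Include the config name so that e.g. glue/mrpc and glue/sst2 no longer collide
    filename = f"{experiment_id}-{metric_name}-{config_name}-0.arrow"
    return os.path.join(cache_dir, filename)
```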
It should fix #368 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/371/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/371/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/371",
"html_url": "https://github.com/huggingface/datasets/pull/371",
"diff_url": "https://github.com/huggingface/datasets/pull/371.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/371.patch",
"merged_at": 1594388720000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/370/comments | https://api.github.com/repos/huggingface/datasets/issues/370/events | https://github.com/huggingface/datasets/pull/370 | 654,304,193 | MDExOlB1bGxSZXF1ZXN0NDQ3MDU3NTIw | 370 | Allow indexing Dataset via np.ndarray | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like a flaky CI, failed download from S3."
] | 1,594,323,795,000 | 1,594,389,944,000 | 1,594,389,943,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/370/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/370",
"html_url": "https://github.com/huggingface/datasets/pull/370",
"diff_url": "https://github.com/huggingface/datasets/pull/370.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/370.patch",
"merged_at": 1594389943000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/369/comments | https://api.github.com/repos/huggingface/datasets/issues/369/events | https://github.com/huggingface/datasets/issues/369 | 654,186,890 | MDU6SXNzdWU2NTQxODY4OTA= | 369 | can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries | {
"login": "vegarab",
"id": 24683907,
"node_id": "MDQ6VXNlcjI0NjgzOTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vegarab",
"html_url": "https://github.com/vegarab",
"followers_url": "https://api.github.com/users/vegarab/followers",
"following_url": "https://api.github.com/users/vegarab/following{/other_user}",
"gists_url": "https://api.github.com/users/vegarab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vegarab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vegarab/subscriptions",
"organizations_url": "https://api.github.com/users/vegarab/orgs",
"repos_url": "https://api.github.com/users/vegarab/repos",
"events_url": "https://api.github.com/users/vegarab/events{/privacy}",
"received_events_url": "https://api.github.com/users/vegarab/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"I am able to reproduce this with the official SQuAD `train-v2.0.json` file downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/",
"I am facing this issue in transformers library 3.0.2 while reading a csv using datasets.\r\nIs this fixed in latest version? \r\nI updated the latest version 4.0.1 but still getting this error. What could cause this error?"
] | 1,594,311,413,000 | 1,608,073,642,000 | 1,594,392,726,000 | CONTRIBUTOR | null | Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB):
```
dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]})
```
causes
```
Traceback (most recent call last):
File "dataloader.py", line 9, in <module>
["./path/to/file.json"]})
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset
save_infos=save_infos,
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 719, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False):
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/datasets/json/88c1bc5c68489f7eda549ed05a5a738527c613b3e7a4ee3524d9d233353a949b/json.py", line 53, in _generate_tables
file, read_options=self.config.pa_read_options, parse_options=self.config.pa_parse_options,
File "pyarrow/_json.pyx", line 191, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
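For reference, one way to work around this particular pyarrow error is to raise the JSON reader's block size, since the message suggests a single JSON object is larger than one parsing block — a hedged sketch, with an arbitrary block size:
```python
import pyarrow.json as paj

read_options = paj.ReadOptions(block_size=1 << 27)  # 128 MiB blocks instead of the default
table = paj.read_json("./path/to/file.json", read_options=read_options)
```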
I haven't been able to find any reports of this specific pyarrow error here or elsewhere. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/369/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/368/comments | https://api.github.com/repos/huggingface/datasets/issues/368/events | https://github.com/huggingface/datasets/issues/368 | 654,087,251 | MDU6SXNzdWU2NTQwODcyNTE= | 368 | load_metric can't acquire lock anymore | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I found that, in the same process (or the same interactive session), if I do\r\n\r\nimport nlp\r\n\r\nm1 = nlp.load_metric('glue', 'mrpc')\r\nm2 = nlp.load_metric('glue', 'sst2')\r\n\r\nI will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id'`."
] | 1,594,303,449,000 | 1,594,388,720,000 | 1,594,388,720,000 | NONE | null | I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this?
```
Traceback (most recent call last):
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 101, in __init__
self.filelock.acquire(timeout=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/filelock.py", line 278, in acquire
raise Timeout(self._lock_file)
filelock.Timeout: The file lock '/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock' could not be acquired.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "examples_huggingface_nlp.py", line 268, in <module>
main()
File "examples_huggingface_nlp.py", line 242, in main
dataset, metric = get_dataset_metric(glue_task)
File "examples_huggingface_nlp.py", line 77, in get_dataset_metric
metric = nlp.load_metric('glue', glue_config, experiment_id=1)
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/load.py", line 440, in load_metric
**metric_init_kwargs,
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 104, in __init__
"Cannot acquire lock, caching file might be used by another process, "
ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run.
I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on /home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock
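A minimal sketch of the workaround the error message itself suggests — giving each run its own `experiment_id` so the metric cache and lock files don't collide. The `uuid`-based id is just an illustration, not from the original report:
```python
import uuid
import nlp

# Use a fresh experiment_id per run/process so the cached .arrow/.lock files don't clash.
metric = nlp.load_metric("glue", "mrpc", experiment_id=str(uuid.uuid4()))
```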
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/368/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/367/comments | https://api.github.com/repos/huggingface/datasets/issues/367/events | https://github.com/huggingface/datasets/pull/367 | 654,012,984 | MDExOlB1bGxSZXF1ZXN0NDQ2ODIxNTAz | 367 | Update Xtreme to add PAWS-X es | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,296,877,000 | 1,594,298,231,000 | 1,594,298,230,000 | CONTRIBUTOR | null | This PR adds the `PAWS-X.es` in the Xtreme dataset #362 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/367/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/367",
"html_url": "https://github.com/huggingface/datasets/pull/367",
"diff_url": "https://github.com/huggingface/datasets/pull/367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/367.patch",
"merged_at": 1594298230000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/366/comments | https://api.github.com/repos/huggingface/datasets/issues/366/events | https://github.com/huggingface/datasets/pull/366 | 653,954,896 | MDExOlB1bGxSZXF1ZXN0NDQ2NzcyODE2 | 366 | Add quora dataset | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Tests seem to be failing because of pandas",
"Kaggle needs authentification to download datasets. We don't have a way to handle that in the lib for now"
] | 1,594,290,862,000 | 1,594,661,721,000 | 1,594,661,721,000 | CONTRIBUTOR | null | Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs).
Implementation Notes:
- I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test split but I can't find an easy way to download it.
- I've made the questions into a list:
```python
{
"questions": [
{"id":0, "text": "Is this an example question?"},
{"id":1, "text": "Is this a sample question?"},
],
...
}
```
rather than:
```python
{
"question1": "Is this an example question?",
"question2": "Is this a sample question?"
"qid0": 0
"qid1": 1
...
}
```
Not sure if this was the right call.
- Can't find a good citation for this dataset | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/366/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/366",
"html_url": "https://github.com/huggingface/datasets/pull/366",
"diff_url": "https://github.com/huggingface/datasets/pull/366.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/366.patch",
"merged_at": 1594661721000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/365/comments | https://api.github.com/repos/huggingface/datasets/issues/365/events | https://github.com/huggingface/datasets/issues/365 | 653,845,964 | MDU6SXNzdWU2NTM4NDU5NjQ= | 365 | How to augment data ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Using batched map is probably the easiest way at the moment.\r\nWhat kind of augmentation would you like to do ?",
"Some samples in the dataset are too long, I want to divide them in several samples.",
"Using batched map is the way to go then.\r\nWe'll make it clearer in the docs that map could be used for augmentation.\r\n\r\nLet me know if you think there should be another way to do it. Or feel free to close the issue otherwise.",
"It just feels awkward to use map to augment data. Also it means it's not possible to augment data in a non-batched way.\r\n\r\nBut to be honest I have no idea of a good API...",
"Or for non-batched samples, how about returning a tuple ?\r\n\r\n```python\r\ndef aug(sample):\r\n # Simply copy the existing data to have x2 amount of data\r\n return sample, sample\r\n\r\ndataset = dataset.map(aug)\r\n```\r\n\r\nIt feels really natural and easy, but :\r\n\r\n* it means the behavior with batched data is different\r\n* I don't know how doable it is backend-wise\r\n\r\n@lhoestq ",
"As we're working with arrow's columnar format we prefer to play with batches that are dictionaries instead of tuples.\r\nIf we have tuple it implies to re-format the data each time we want to write to arrow, which can lower the speed of map for example.\r\n\r\nIt's also a matter of coherence, as we don't want users to be confused whether they have to return dictionaries for some functions and tuples for others when they're doing batches."
] | 1,594,281,157,000 | 1,594,372,327,000 | 1,594,369,335,000 | NONE | null | Is there any clean way to augment data ?
For now my workaround is to use a batched map, like this:
```python
def aug(samples):
# Simply copy the existing data to have x2 amount of data
for k, v in samples.items():
samples[k].extend(v)
return samples
dataset = dataset.map(aug, batched=True)
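
# --- Illustrative addition (not part of the original snippet): the same batched map
# --- can also return a *different* number of rows, e.g. splitting overly long samples.
# --- The "text" column name and the 512-character threshold are hypothetical.
# def split_long(samples):
#     out = {k: [] for k in samples}
#     for i, text in enumerate(samples["text"]):
#         pieces = [text[: len(text) // 2], text[len(text) // 2 :]] if len(text) > 512 else [text]
#         for piece in pieces:
#             for k in samples:
#                 out[k].append(piece if k == "text" else samples[k][i])
#     return out
# dataset = dataset.map(split_long, batched=True, remove_columns=dataset.column_names)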
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/365/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/364/comments | https://api.github.com/repos/huggingface/datasets/issues/364/events | https://github.com/huggingface/datasets/pull/364 | 653,821,597 | MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NzM5 | 364 | add MS MARCO dataset | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The dummy data for v2.1 is missing as far as I can see. I think running the dummy data command should work correctly here. ",
"Also, it might be that the structure of the dummy data is wrong - looking at `generate_examples` the structure does not look too easy.",
"The fact that the dummy data for v2.1 is missing shouldn't make the test fails I think. But as you mention the dummy data structure of v1.1 is wrong. I tried to rename files but it does not solve the issue.",
"Is MS mARCO added to nlp library?I am not able to view it?",
"> Is MS mARCO added to nlp library?I am not able to view it?\r\n\r\nHi @parthplc ,the PR is not merged yet. The dummy data structure is still failing. Maybe @patrickvonplaten can help with it.",
"Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever!",
"> Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever!\r\n\r\nthanks"
] | 1,594,278,679,000 | 1,596,694,549,000 | 1,596,694,548,000 | CONTRIBUTOR | null | This PR adds the MS MARCO dataset as requested in issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which were released with the original paper: https://arxiv.org/pdf/1611.09268.pdf
Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten , @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/364/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/364",
"html_url": "https://github.com/huggingface/datasets/pull/364",
"diff_url": "https://github.com/huggingface/datasets/pull/364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/364.patch",
"merged_at": 1596694548000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/363/comments | https://api.github.com/repos/huggingface/datasets/issues/363/events | https://github.com/huggingface/datasets/pull/363 | 653,821,172 | MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NDIy | 363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | {
"login": "eltoto1219",
"id": 14030663,
"node_id": "MDQ6VXNlcjE0MDMwNjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/14030663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eltoto1219",
"html_url": "https://github.com/eltoto1219",
"followers_url": "https://api.github.com/users/eltoto1219/followers",
"following_url": "https://api.github.com/users/eltoto1219/following{/other_user}",
"gists_url": "https://api.github.com/users/eltoto1219/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eltoto1219/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eltoto1219/subscriptions",
"organizations_url": "https://api.github.com/users/eltoto1219/orgs",
"repos_url": "https://api.github.com/users/eltoto1219/repos",
"events_url": "https://api.github.com/users/eltoto1219/events{/privacy}",
"received_events_url": "https://api.github.com/users/eltoto1219/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you! I just marked this as a draft PR. It probably would be better to create specific Array2D and Array3D classes as needed instead of a generic MultiArray for now, it should simplify the code a lot too so, I'll update it as such. Also i was meaning to reply earlier, but I wanted to thank you for the testing script you sent me earlier since it ended up being tremendously helpful. ",
"Okay, I just converted the MultiArray class to Array2D, and got rid of all those \"globals()\"! \r\n\r\nThe main issues I had were that when including a \"pa.ExtensionType\" as a column, the ordinary methods to batch the data would not work and it would throw me some mysterious error, so I first cleaned up my code to order the row to match the schema (because when including extension types the row is disordered ) and then made each row a pa.Table and then concatenated all the tables. Also each n-dimensional vector class we implement will be size invariant which is some good news. ",
"Okay awesome! I just added your suggestions and changed up my recursive functions. \r\n\r\nHere is the traceback for the when I use the original code in the write_on_file method:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 33, in <module>\r\n File \"/home/eltoto/nlp/src/nlp/arrow_writer.py\", line 214, in finalize\r\n self.write_on_file()\r\n File \"/home/eltoto/nlp/src/nlp/arrow_writer.py\", line 134, in write_on_file\r\n pa_array = pa.array(self.current_rows, type=self._type)\r\n File \"pyarrow/array.pxi\", line 269, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 38, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 106, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>\r\n\r\nshell returned 1\r\n```\r\n\r\nI think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround. \r\n\r\nIn the case that this new method causes bad compression/worse performance, we can explicitly set the batch size in the pa.Table.to_batches(***batch_size***) method, which will return a list of batches. Perhaps, we can check that the batch size is not too large converting the table to batches after X many rows are appended to it by following the batch_size check below.",
"> I think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround.\r\n\r\nIndeed that's weird.\r\n\r\n> In the case that this new method causes bad compression/worse performance, we can explicitly set the batch size in the pa.Table.to_batches(batch_size) method, which will return a list of batches. Perhaps, we can check that the batch size is not too large converting the table to batches after X many rows are appended to it by following the batch_size check below.\r\n\r\nThe argument of `pa.Table.to_batches` is not `batch_size` but `max_chunksize`, which means that right now it would have no effects (each chunk is of length 1).\r\n\r\nWe can fix that just by doing `entries.combine_chunks().to_batches(batch_size)`. In that case it would write by chunk of 1000 which is what we want. I don't think it will slow down the writing by much, but we may have to do a benchmark just to make sure. If speed is ok we could even replace the original code to always write chunks this way.\r\n\r\nDo you still have errors that need to be fixed ?",
"@lhoestq Nope all should be good! \r\n\r\nWould you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?",
"> @lhoestq Nope all should be good!\r\n\r\nAwesome :)\r\n\r\nI think it would be good to start to add some tests then.\r\nYou already have `test_multi_array.py` which is a good start, maybe you can place it in /tests and make it a `unittest.TestCase` ?\r\n\r\n> Would you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?\r\n\r\nThat would be interesting. We don't want reading/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:\r\n- write speed + read speed a dataset with `nlp.Array2D` features\r\n- write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value(\"float32\")))` features\r\n- write speed + read speed a dataset with `nlp.Sequence(nlp.Value(\"float32\"))` features (same data but flatten)\r\nIt will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.\r\n\r\nWhat do you think ?",
"Well actually it looks like we're still having the `print(dataset[0])` error no ?",
"I just tested your code to try to understand better.\r\n\r\n\r\n- First thing you must know is that we've switched from `dataset._data.to_pandas` to `dataset._data.to_pydict` by default when we call `dataset[0]` in #423 . Right now it raises an error but it can be fixed by adding this method to `ExtensionArray2D`:\r\n\r\n```python\r\n def to_pylist(self):\r\n return self.to_numpy().tolist()\r\n```\r\n\r\n- Second, I noticed that `ExtensionArray2D.to_numpy()` always return a (5, 5) shape in your example. I thought `ExtensionArray` was for possibly multiple examples and so I was expecting a shape like (1, 5, 5) for example. Did I miss something ?\r\nTherefore when I apply the fix I mentioned (adding to_pylist), it returns one example per row in each image (in your example of 2 images of shape 5x5, I get `len(dataset._data.to_pydict()[\"image\"]) == 10 # True`)\r\n\r\n[EDIT] I changed the reshape step in `ExtensionArray2D.to_numpy()` by\r\n```python\r\nnumpy_arr = numpy_arr.reshape(len(self), *ExtensionArray2D._construct_shape(self.storage))\r\n```\r\nand it did the job: `len(dataset._data.to_pydict()[\"image\"]) == 2 # True`\r\n\r\n- Finally, I was able to make `to_pandas` work though, by implementing custom array dtype in pandas with arrow conversion (I got inspiration from [here](https://gist.github.com/Eastsun/a59fb0438f65e8643cd61d8c98ec4c08) and [here](https://pandas.pydata.org/pandas-docs/version/1.0.0/development/extending.html#compatibility-with-apache-arrow))\r\n\r\nMaybe you could add me in your repo so I can open a PR to add these changes to your branch ?",
"`combine_chunks` doesn't seem to work btw:\r\n`ArrowNotImplementedError: concatenation of extension<arrow.py_extension_type>`",
"> > @lhoestq Nope all should be good!\r\n> \r\n> Awesome :)\r\n> \r\n> I think it would be good to start to add some tests then.\r\n> You already have `test_multi_array.py` which is a good start, maybe you can place it in /tests and make it a `unittest.TestCase` ?\r\n> \r\n> > Would you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?\r\n> \r\n> That would be interesting. We don't want reading/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:\r\n> \r\n> * write speed + read speed a dataset with `nlp.Array2D` features\r\n> * write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value(\"float32\")))` features\r\n> * write speed + read speed a dataset with `nlp.Sequence(nlp.Value(\"float32\"))` features (same data but flatten)\r\n> It will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.\r\n> \r\n> What do you think ?\r\n\r\nYa! that should be no problem at all, Ill use the timeit module and get back to you with the results sometime over the weekend.",
"Thank you for all your help getting the pandas and row indexing for the dataset to work! For `print(dataset[0])`, I considered the workaround of doing `print(dataset[\"col_name\"][0])` a temporary solution, but ya, I was never able to figure out how to previously get it to work. I'll add you to my repo right now, let me know if you do not see the invite. Also lastly, it is strange how the to_batches method is not working, so I can check that out while I add some speed tests + add the multi dim test under the unit tests this weekend. ",
"I created the PR :)\r\nI also tested `to_batches` and it works on my side",
"Sorry for the bit of delay! I just added the tests, the PR into my fork, and some speed tests. It should be fairly easy to add more tests if we need. Do you think there is anything else to checkout?",
"Cool thanks for adding the tests :) \r\n\r\nNext step is merge master into this branch.\r\nNot sure I understand what you did in your last commit, but it looks like you discarded all the changes from master ^^'\r\n\r\nWe've done some changes in the features logic on master, so let me know if you need help merging it.\r\n\r\nAs soon as we've merged from master, we'll have to make sure that we have extensive tests and we'll be good to do !\r\nAbout the lxmert dataset, we can probably keep it for another PR as soon as we have working 2d features. What do you think ?",
"We might want to merge this after tomorrow's release though to avoid potential side effects @lhoestq ",
"Yep I'm sure we can have it not for tomorrow's release but for the next one ;)",
"haha, when I tried to rebase, I ran into some conflicts. In that last commit, I restored the features.py from the previous commit on the branch in my fork because upon updating to master, the pandasdtypemanger and pandas extension types disappeared. If you actually could help me with merging in what is needed, that would actually help a lot. \r\n\r\nOther than that, ya let me go ahead and move the dataloader code out of this PR. Perhaps we could discuss in the slack channelk soon about what to do with that because we can either just support the pretraining corpus for lxmert or try to implement the full COCO and visual genome datasets (+VQA +GQA) which im sure people would be pretty happy about. \r\n\r\nAlso we can talk more tests soon too when you are free. \r\n\r\nGoodluck on the release tomorrow guys!",
"Not sure why github thinks there are conflicts here, as I just rebased from the current master branch.\r\nMerging into master locally works on my side without conflicts\r\n```\r\ngit checkout master\r\ngit reset --hard origin/master\r\ngit merge --no-ff eltoto1219/support_multi_dim_tensors_for_images\r\nMerge made by the 'recursive' strategy.\r\n datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py | 89 +++++++++++++++++++++++++++++++++++++\r\n datasets/lxmert_pretraining_beta/test_multi_array.py | 45 +++++++++++++++++++\r\n datasets/lxmert_pretraining_beta/to_arrow_data.py | 371 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n src/nlp/arrow_dataset.py | 24 +++++-----\r\n src/nlp/arrow_writer.py | 22 ++++++++--\r\n src/nlp/features.py | 229 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---\r\n tests/test_array_2d.py | 210 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n 7 files changed, 969 insertions(+), 21 deletions(-)\r\n create mode 100644 datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py\r\n create mode 100644 datasets/lxmert_pretraining_beta/test_multi_array.py\r\n create mode 100644 datasets/lxmert_pretraining_beta/to_arrow_data.py\r\n create mode 100644 tests/test_array_2d.py\r\n```",
"I put everything inside one commit from the master branch but the merge conflicts on github'side were still there for some reason.\r\nClosing and re-opening the PR fixed the conflict check on github's side.",
"Almost done ! It still needs a pass on the docs/comments and maybe a few more tests.\r\n\r\nI had to do several changes for type inference in the ArrowWriter to make it support custom types.",
"Ok this is now ready for review ! Thanks for your awesome work in this @eltoto1219 \r\n\r\nSummary of the changes:\r\n- added new feature type `Array2D`, that can be instantiated like `Array2D(\"float32\")` for example\r\n- added pyarrow extension type `Array2DExtensionType` and array `Array2DExtensionArray` that take care of converting from and to arrow. `Array2DExtensionType`'s storage is a list of list of any pyarrow array.\r\n- added pandas extension type `PandasArrayExtensionType` and array `PandasArrayExtensionArray` for conversion from and to arrow/python objects\r\n- refactor of the `ArrowWriter` write and write_batch functions to support extension types while preserving the type inference behavior.\r\n- added a utility object `TypedSequence` that is helpful to combine extension arrays and type inference inside the writer's methods.\r\n- added speed test for sequences writing (printed as warnings in pytest)\r\n- breaking: set disable_nullable to False by default as pyarrow's type inference returns nullable fields\r\n\r\nAnd there are plenty of new tests, mainly in `test_array2d.py` and `test_arrow_writer.py`.\r\n\r\nNote that there are some collisions in `arrow_dataset.py` with #513 so let's be careful when we'll merge this one.\r\n\r\nI know this is a big PR so feel free to ask questions",
"I'll add Array3D, 4D.. tomorrow but it should take only a few lines. The rest won't change",
"I took your comments into account and I added Array[3-5]D.\r\nI changed the storage type to fixed lengths lists. I had to update the `to_numpy` function because of that. Indeed slicing a FixedLengthListArray returns a view a of the original array, while in the previous case slicing a ListArray copies the storage.\r\n"
] | 1,594,278,630,000 | 1,598,263,175,000 | 1,598,263,175,000 | CONTRIBUTOR | null | nlp/features.py:
The main factory class is MultiArray: every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py.
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema only refers to a generic ExtensionArray, while each ExtensionArray subclass has a different shape)... possibly I am missing something here and would be grateful if anyone else could take a look!
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490), hosted here: https://github.com/airsplay/lxmert. The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy:).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(still working on the pretraining, just wanted to push out the new functionality sooner than later) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/363/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/363",
"html_url": "https://github.com/huggingface/datasets/pull/363",
"diff_url": "https://github.com/huggingface/datasets/pull/363.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/363.patch",
"merged_at": 1598263175000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/362/comments | https://api.github.com/repos/huggingface/datasets/issues/362/events | https://github.com/huggingface/datasets/issues/362 | 653,766,245 | MDU6SXNzdWU2NTM3NjYyNDU= | 362 | [dateset subset missing] xtreme paws-x | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You're right, thanks for pointing it out. We will update it "
] | 1,594,271,094,000 | 1,594,298,322,000 | 1,594,298,322,000 | CONTRIBUTOR | null | I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but got a ValueError.
It turns out that the subset for Spanish is missing
https://github.com/google-research-datasets/paws/tree/master/pawsx | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/362/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/362/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/361/comments | https://api.github.com/repos/huggingface/datasets/issues/361/events | https://github.com/huggingface/datasets/issues/361 | 653,757,376 | MDU6SXNzdWU2NTM3NTczNzY= | 361 | 🐛 [Metrics] ROUGE is non-deterministic | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, can you give a full self-contained example to reproduce this behavior?",
"> Hi, can you give a full self-contained example to reproduce this behavior?\r\n\r\nThere is a notebook in the post ;)",
"> If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.\r\n> \r\n> Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.\r\n> \r\n> Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differents run :\r\n> \r\n> > ['0.3350', '0.1470', '0.2329']\r\n> > ['0.3358', '0.1451', '0.2332']\r\n> \r\n> Why ROUGE is not deterministic ?\r\n\r\nThis is because of rouge's `BootstrapAggregator` that uses sampling to get confidence intervals (low, mid, high).\r\nYou can get deterministic scores per sentence pair by using\r\n```python\r\nscore = rouge.compute(rouge_types=[\"rouge1\", \"rouge2\", \"rougeL\"], use_agregator=False)\r\n```\r\nOr you can set numpy's random seed if you still want to use the aggregator.",
"Maybe we can set all the random seeds of numpy/torch etc. while running `metric.compute` ?",
"We should probably indeed!",
"Now if you re-run the notebook, the two printed results are the same @colanim\r\n```\r\n['0.3356', '0.1466', '0.2318']\r\n['0.3356', '0.1466', '0.2318']\r\n```\r\nHowever across sessions, the results may change (as numpy's random seed can be different). You can prevent that by setting your seed:\r\n```python\r\nrouge = nlp.load_metric('rouge', seed=42)\r\n```"
] | 1,594,269,577,000 | 1,595,288,917,000 | 1,595,288,917,000 | NONE | null | If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.
Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.
Example of F-scores for ROUGE-1, ROUGE-2 and ROUGE-L in 2 different runs:
> ['0.3350', '0.1470', '0.2329']
['0.3358', '0.1451', '0.2332']
---
Why is ROUGE not deterministic? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/361/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/360/comments | https://api.github.com/repos/huggingface/datasets/issues/360/events | https://github.com/huggingface/datasets/issues/360 | 653,687,176 | MDU6SXNzdWU2NTM2ODcxNzY= | 360 | [Feature request] Add dataset.ragged_map() function for many-to-many transformations | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Actually `map(batched=True)` can already change the size of the dataset.\r\nIt can accept examples of length `N` and returns a batch of length `M` (can be null or greater than `N`).\r\n\r\nI'll make that explicit in the doc that I'm currently writing.",
"You're two steps ahead of me :) In my testing, it also works if `M` < `N`.\r\n\r\nA batched map of different length seems to work if you directly overwrite all of the original keys, but fails if any of the original keys are preserved.\r\n\r\nFor example,\r\n```python\r\n# Create a dummy dataset\r\ndset = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")[\"test\"]\r\ndset = dset.map(lambda ex: {\"length\": len(ex[\"text\"]), \"foo\": 1})\r\n\r\n# Do an allreduce on each batch, overwriting both keys\r\ndset.map(lambda batch: {\"length\": [sum(batch[\"length\"])], \"foo\": [1]})\r\n# Dataset(schema: {'length': 'int64', 'foo': 'int64'}, num_rows: 5)\r\n\r\n# Now attempt an allreduce without touching the `foo` key\r\ndset.map(lambda batch: {\"length\": [sum(batch[\"length\"])]})\r\n# This fails with the error message below\r\n```\r\n\r\n```bash\r\n File \"/path/to/nlp/src/nlp/arrow_dataset.py\", line 728, in map\r\n arrow_schema = pa.Table.from_pydict(test_output).schema\r\n File \"pyarrow/io.pxi\", line 1532, in pyarrow.lib.Codec.detect\r\n File \"pyarrow/table.pxi\", line 1503, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow/public-api.pxi\", line 390, in pyarrow.lib.pyarrow_wrap_table\r\n File \"pyarrow/error.pxi\", line 85, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Column 1 named foo expected length 1 but got length 2\r\n```\r\n\r\nAdding the `remove_columns=[\"length\", \"foo\"]` argument to `map()` solves the issue. Leaving the above error for future visitors. Perfect, thank you!"
] | 1,594,256,683,000 | 1,594,323,111,000 | 1,594,323,111,000 | CONTRIBUTOR | null | `dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines.
`dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from the dataset.
However, some dataset transformations are many-to-many. Consider constructing BERT training examples from a dataset of sentences, where you map `["a", "b", "c"] -> ["a[SEP]b", "a[SEP]c", "b[SEP]c", "c[SEP]b", ...]`
I propose a more general `ragged_map()` method that takes in a batch of examples of length `N` and returns a batch of `M` examples. This is different from the `map(batched=True)` method, which takes examples of length `N` and returns a batch of length `N`, processing individual examples in parallel. I don't have a clear vision of how this would be implemented efficiently and lazily, but would love to hear the community's feedback on this.
My specific use case is creating an end-to-end ELECTRA data pipeline. I would like to take the raw WikiText data and generate training examples from this using the `ragged_map()` method, then export to TFRecords and train quickly. This would be a reproducible pipeline with no bash scripts. Currently I'm relying on scripts like https://github.com/google-research/electra/blob/master/build_pretraining_dataset.py, which are less general.
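A minimal sketch of how such a many-to-many transform can already be approximated with `map(batched=True)` returning more rows than it receives (as noted in the comments above); the `sentence` column and the pairing scheme are hypothetical:
```python
from itertools import permutations

def make_pairs(batch):
    # N input sentences -> N * (N - 1) ordered "a[SEP]b" pairs.
    return {"pair": ["{}[SEP]{}".format(a, b) for a, b in permutations(batch["sentence"], 2)]}

dset = dset.map(make_pairs, batched=True, remove_columns=dset.column_names)
```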
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/360/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/359/comments | https://api.github.com/repos/huggingface/datasets/issues/359/events | https://github.com/huggingface/datasets/issues/359 | 653,656,279 | MDU6SXNzdWU2NTM2NTYyNzk= | 359 | ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, it depends on what it is in your `dataset_builder.py` file. Can you share it?\r\n\r\nIf you are just loading `json` files, you can also directly use the `json` script (which will find the schema/features from your JSON structure):\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nds = load_dataset(\"json\", data_files=rel_datafiles)\r\n```",
"The behavior I'm seeing is from the `json` script. \r\nI hacked this together to overcome the error with the `JSON` dataloader\r\n\r\n```\r\nclass DatasetBuilder(hf_nlp.ArrowBasedBuilder):\r\n BUILDER_CONFIG_CLASS = BuilderConfig\r\n\r\n def _info(self):\r\n return DatasetInfo()\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [SplitGenerator(name=Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [Split.TRAIN, Split.VALIDATION, Split.TEST]:\r\n if split_name in self.config.data_files:\r\n files = self.config.data_files[split_name]\r\n if isinstance(files, str):\r\n files = [files]\r\n splits.append(SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n return splits\r\n\r\n def _prepare_split(self, split_generator):\r\n fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n fpath = os.path.join(self._cache_dir, fname)\r\n\r\n writer = ArrowWriter(path=fpath)\r\n\r\n generator = self._generate_tables(**split_generator.gen_kwargs)\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n writer.write_table(table)\r\n num_examples, num_bytes = writer.finalize()\r\n\r\n split_generator.split_info.num_examples = num_examples\r\n split_generator.split_info.num_bytes = num_bytes\r\n # this is where the error is coming from\r\n # def parse_schema(schema, schema_dict):\r\n # for field in schema:\r\n # if pa.types.is_struct(field.type):\r\n # schema_dict[field.name] = {}\r\n # parse_schema(field.type, schema_dict[field.name])\r\n # elif pa.types.is_list(field.type) and pa.types.is_struct(field.type.value_type):\r\n # schema_dict[field.name] = {}\r\n # parse_schema(field.type.value_type, schema_dict[field.name])\r\n # else:\r\n # schema_dict[field.name] = Value(str(field.type))\r\n # \r\n # parse_schema(writer.schema, features)\r\n # self.info.features = Features(features)\r\n\r\n def _generate_tables(self, files):\r\n for i, file in enumerate(files):\r\n pa_table = paj.read_json(\r\n file\r\n )\r\n yield i, pa_table\r\n```\r\n\r\nSo I basically just don't populate the `self.info.features` though this doesn't seem to cause any problems in my downstream applications. \r\n\r\nThe other workaround I was doing was to just use pyarrow.json to build a table and then to create the Dataset with its constructor or from_table methods. `load_dataset` has nice split logic, so I'd prefer to use that.\r\n\r\n",
"Also noticed that if you for example in a loader script\r\n\r\n```\r\nfrom nlp import ArrowBasedBuilder\r\n\r\nclass MyBuilder(ArrowBasedBuilder):\r\n...\r\n\r\n```\r\nand use that in the subclass, it will be on the module's __dict__ and will be selected before the `MyBuilder` subclass, and it will raise `NotImplementedError` on its `_generate_examples` method... In the code it check for abstract classes but Builder and ArrowBasedBuilder aren't abstract classes, they're regular classes with `@abstract_methods`.",
"Indeed this is part of a more general limitation which is the fact that we should generate and update the `features` from the auto-inferred Arrow schema when they are not provided (also happen when a user change the schema using `map()`, the features should be auto-generated and guessed as much as possible to keep the `features` synced with the underlying Arrow table schema).\r\n\r\nWe will try to solve this soon."
] | 1,594,250,645,000 | 1,594,392,726,000 | 1,594,392,726,000 | NONE | null | I tried using the JSON dataloader to load some JSON Lines files, but got an exception in the parse_schema function.
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-9aecfbee53bd> in <module>
55 from nlp import load_dataset
56
---> 57 ds = load_dataset("../text2struct/model/dataset_builder.py", data_files=rel_datafiles)
58
59
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
481 try:
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator)
736 schema_dict[field.name] = Value(str(field.type))
737
--> 738 parse_schema(writer.schema, features)
739 self.info.features = Features(features)
740
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict)
734 parse_schema(field.type.value_type, schema_dict[field.name])
735 else:
--> 736 schema_dict[field.name] = Value(str(field.type))
737
738 parse_schema(writer.schema, features)
<string> in __init__(self, dtype, id, _type)
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self)
55
56 def __post_init__(self):
---> 57 self.pa_type = string_to_arrow(self.dtype)
58
59 def __call__(self):
~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str)
32 if str(type_str + "_") not in pa.__dict__:
33 raise ValueError(
---> 34 f"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. "
35 f"Please make sure to use a correct data type, see: "
36 f"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions"
ValueError: Neither list<item: string> nor list<item: string>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions
```
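(A minimal sketch of the imperative workaround described in the next paragraph — reading the JSON Lines file with `pyarrow.json` and wrapping the resulting table in a `Dataset` directly; the file path is hypothetical and the exact `Dataset` constructor signature may differ between `nlp` versions.)
```python
import pyarrow.json as paj
from nlp import Dataset

table = paj.read_json("path/to/train.jsonl")  # pyarrow infers the nested schema itself
dset = Dataset(table)
```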
If I create the dataset imperatively, using a pyarrow table, the dataset is created correctly. If I override the `_prepare_split` method to skip the schema validation, the dataset loads as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/359/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/358/comments | https://api.github.com/repos/huggingface/datasets/issues/358/events | https://github.com/huggingface/datasets/pull/358 | 653,645,121 | MDExOlB1bGxSZXF1ZXN0NDQ2NTI0NjQ5 | 358 | Starting to add some real doc | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Ok this is starting to be really big so it's probably good to merge this first version of the doc and continue in another PR :)\r\n\r\nThis first version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html"
] | 1,594,248,783,000 | 1,594,720,697,000 | 1,594,720,695,000 | MEMBER | null | Adding a lot of documentation for:
- load a dataset
- explore the dataset object
- process data with the dataset
- add a new dataset script
- share a dataset script
- full package reference
This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html
Also:
- fix a bug in `train_test_split`
- update the `csv` script
- add a verbose argument to the dataset processing methods
Still missing:
- doc for the metrics
- how to directly upload a community provided dataset with the CLI
- clean up more docstrings
- add the `features` argument to `load_dataset` (should be another PR) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/358/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/358",
"html_url": "https://github.com/huggingface/datasets/pull/358",
"diff_url": "https://github.com/huggingface/datasets/pull/358.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/358.patch",
"merged_at": 1594720695000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/357/comments | https://api.github.com/repos/huggingface/datasets/issues/357/events | https://github.com/huggingface/datasets/pull/357 | 653,642,292 | MDExOlB1bGxSZXF1ZXN0NDQ2NTIyMzU2 | 357 | Add hashes to cnn_dailymail | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"repos_url": "https://api.github.com/users/jbragg/repos",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks you to me :)\r\n\r\nCould you also update the json file that goes with the dataset script by doing \r\n```\r\nnlp-cli test ./datasets/cnn_dailymail --save_infos --all_configs\r\n```\r\nIt will update the features metadata and the size of the dataset with your changes.",
"@lhoestq I ran that command.\r\n\r\nThanks for the helpful repository!"
] | 1,594,248,321,000 | 1,594,649,798,000 | 1,594,649,798,000 | CONTRIBUTOR | null | The URL hashes are helpful for comparing results from other sources. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/357/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/357",
"html_url": "https://github.com/huggingface/datasets/pull/357",
"diff_url": "https://github.com/huggingface/datasets/pull/357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/357.patch",
"merged_at": 1594649798000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/356/comments | https://api.github.com/repos/huggingface/datasets/issues/356/events | https://github.com/huggingface/datasets/pull/356 | 653,537,388 | MDExOlB1bGxSZXF1ZXN0NDQ2NDM3MDQ5 | 356 | Add text dataset | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,236,113,000 | 1,594,390,743,000 | 1,594,390,743,000 | CONTRIBUTOR | null | Usage:
```python
from nlp import load_dataset
dset = load_dataset("text", data_files="/path/to/file.txt")["train"]
```
I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes
```bash
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_text
```
but I would like a second set of eyes to ensure I did it right.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/356/reactions",
"total_count": 6,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/356/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/356",
"html_url": "https://github.com/huggingface/datasets/pull/356",
"diff_url": "https://github.com/huggingface/datasets/pull/356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/356.patch",
"merged_at": 1594390743000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/355/comments | https://api.github.com/repos/huggingface/datasets/issues/355/events | https://github.com/huggingface/datasets/issues/355 | 653,451,013 | MDU6SXNzdWU2NTM0NTEwMTM= | 355 | can't load SNLI dataset | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just added the processed files of `snli` on our google storage, so that when you do `load_dataset` it can download the processed files from there :)\r\n\r\nWe are thinking about having available those processed files for more datasets in the future, because sometimes files aren't available (like for `snli`), or the download speed is too slow, or sometimes the files take time to be processed.",
"Closing this one. Feel free to re-open if you have other questions :)",
"Thank you!"
] | 1,594,227,254,000 | 1,595,049,357,000 | 1,594,799,941,000 | CONTRIBUTOR | null | `nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't.
Is there a plan to move these datasets to huggingface servers for a more stable solution?
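For reference, hitting the error takes nothing more than the standard call (minimal repro):
```python
from nlp import load_dataset

dataset = load_dataset("snli")  # fails while trying to reach nlp.stanford.edu
```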
Btw, here's the stack trace:
```
File "/content/nlp/src/nlp/builder.py", line 432, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/content/nlp/src/nlp/builder.py", line 466, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/content/nlp/src/nlp/datasets/snli/e417f6f2e16254938d977a17ed32f3998f5b23e4fcab0f6eb1d28784f23ea60d/snli.py", line 76, in _split_generators
dl_dir = dl_manager.download_and_extract(_DATA_URL)
File "/content/nlp/src/nlp/utils/download_manager.py", line 217, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in download
lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,
File "/content/nlp/src/nlp/utils/py_utils.py", line 190, in map_nested
return function(data_struct)
File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in <lambda>
lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,
File "/content/nlp/src/nlp/utils/file_utils.py", line 198, in cached_path
local_files_only=download_config.local_files_only,
File "/content/nlp/src/nlp/utils/file_utils.py", line 356, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://nlp.stanford.edu/projects/snli/snli_1.0.zip
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/355/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/354/comments | https://api.github.com/repos/huggingface/datasets/issues/354/events | https://github.com/huggingface/datasets/pull/354 | 653,357,617 | MDExOlB1bGxSZXF1ZXN0NDQ2MjkyMTc4 | 354 | More faiss control | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Ok, so we're getting rid of the `FaissGpuOptions`?\r\n\r\nWe support `device=...` because it's simple, but faiss GPU options can be used in so many ways (you can set different gpu options for the different parts of your index for example) that it's probably better to let the user create and configure its index and then use `custom_index=...`"
] | 1,594,219,520,000 | 1,594,288,494,000 | 1,594,288,491,000 | MEMBER | null | Allow users to specify a faiss index they created themselves, as sometimes indexes can be composite, for example | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/354/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/354",
"html_url": "https://github.com/huggingface/datasets/pull/354",
"diff_url": "https://github.com/huggingface/datasets/pull/354.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/354.patch",
"merged_at": 1594288491000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/353/comments | https://api.github.com/repos/huggingface/datasets/issues/353/events | https://github.com/huggingface/datasets/issues/353 | 653,250,611 | MDU6SXNzdWU2NTMyNTA2MTE= | 353 | [Dataset requests] New datasets for Text Classification | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892884,
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
},
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Pinging @mariamabarham as well",
"- `nlp` has MR! It's called `rotten_tomatoes`\r\n- SST is part of GLUE, or is that just SST-2?\r\n- `nlp` also has `ag_news`, a popular news classification dataset\r\n\r\nI'd also like to see:\r\n- the Yahoo Answers topic classification dataset\r\n- the Kaggle Fake News classification dataset",
"Thanks @jxmorris12 for pointing this out. \r\n\r\nIn glue we only have SST-2 maybe we can add separately SST-1.\r\n",
"This is the homepage for the Amazon dataset: https://www.kaggle.com/datafiniti/consumer-reviews-of-amazon-products\r\n\r\nIs there an easy way to download kaggle datasets programmatically? If so, I can add this one!",
"Hi @jxmorris12 for now I think our `dl_manager` does not download from Kaggle.\r\n@thomwolf , @lhoestq",
"Pretty sure the quora dataset is the same one I implemented here: https://github.com/huggingface/nlp/pull/366",
"Great list. Any idea if Amazon Reviews has been added?\r\n\r\n- ~40 GB of text (sadly no emoji)\r\n- popular MLM pre-training dataset before bigger datasets like WebText https://arxiv.org/abs/1808.01371\r\n- turns out that binarizing the 1-5 star rating leads to great Pos/Neg/Neutral dataset, T5 paper claims to get very high accuracy (98%!) on this with small amount of finetuning https://arxiv.org/abs/2004.14546\r\n\r\nApologies if it's been included (great to see where) and if not, it's one of the better medium/large NLP dataset for semi-supervised learning, albeit a bit out of date. \r\n\r\nThanks!! \r\n\r\ncc @sshleifer ",
"On the Amazon Reviews dataset, the original UCSD website has noted these are now updated to include product reviews through 2018 -- actually quite recent compared to many other datasets. Almost certainly the largest NLP dataset out there with labels!\r\nhttps://jmcauley.ucsd.edu/data/amazon/ \r\n\r\nAny chance someone has time to onboard this dataset in a HF way?\r\n\r\ncc @sshleifer "
] | 1,594,210,678,000 | 1,603,165,283,000 | null | MEMBER | null | We are missing a few datasets for Text Classification which is an important field.
Namely, it would be really nice to add:
- TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]**
- Yelp-5
- Movie review (Movie Review (MR) dataset [156]) **[done (same as rotten_tomatoes)]**
- SST (Stanford Sentiment Treebank) **[include in glue]**
- Multi-Perspective Question Answering (MPQA) dataset **[require authentication (indeed manual download)]**
- Amazon. This is a popular corpus of product reviews collected from the Amazon website [159]. It contains labels for both binary classification and multi-class (5-class) classification
- 20 Newsgroups. The 20 Newsgroups dataset **[done]**
- Sogou News dataset **[done]**
- Reuters news. The Reuters-21578 dataset [165] **[done]**
- DBpedia. The DBpedia dataset [170]
- Ohsumed. The Ohsumed collection [171] is a subset of the MEDLINE database
- EUR-Lex. The EUR-Lex dataset
- WOS. The Web Of Science (WOS) dataset **[done]**
- PubMed. PubMed [173]
- TREC-QA. TREC-QA
- Quora. The Quora dataset [180]
All these datasets are cited in https://arxiv.org/abs/2004.03705 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/353/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/353/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/352/comments | https://api.github.com/repos/huggingface/datasets/issues/352/events | https://github.com/huggingface/datasets/pull/352 | 653,128,883 | MDExOlB1bGxSZXF1ZXN0NDQ2MTA1Mjky | 352 | 🐛[BugFix]fix seqeval | {
"login": "AlongWY",
"id": 20281571,
"node_id": "MDQ6VXNlcjIwMjgxNTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/20281571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlongWY",
"html_url": "https://github.com/AlongWY",
"followers_url": "https://api.github.com/users/AlongWY/followers",
"following_url": "https://api.github.com/users/AlongWY/following{/other_user}",
"gists_url": "https://api.github.com/users/AlongWY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlongWY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlongWY/subscriptions",
"organizations_url": "https://api.github.com/users/AlongWY/orgs",
"repos_url": "https://api.github.com/users/AlongWY/repos",
"events_url": "https://api.github.com/users/AlongWY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlongWY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think this is good but can you detail a bit the behavior before and after your fix?",
"examples:\r\n\r\ninput: `['B', 'I', 'I', 'O', 'B', 'I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2), ('B', 4, 4), ('I', 5, 5)]`\r\nafter: `[('_', 0, 2), ('_', 4, 5)]`\r\n\r\ninput: `['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']`\r\nbefore: `[('LOC', 0, 2), ('TIME', 4, 5)]`\r\nafter: `[('ARGM-LOC', 0, 2), ('ARGM-TIME', 4, 5)]`\r\n\r\nThis is my test code:\r\n\r\n```python\r\nfrom metrics.seqeval.seqeval import end_of_chunk, start_of_chunk\r\n\r\n\r\ndef before_get_entities(seq, suffix=False):\r\n \"\"\"Gets entities from sequence.\r\n Args:\r\n seq (list): sequence of labels.\r\n Returns:\r\n list: list of (chunk_type, chunk_start, chunk_end).\r\n \"\"\"\r\n if any(isinstance(s, list) for s in seq):\r\n seq = [item for sublist in seq for item in sublist + ['O']]\r\n\r\n prev_tag = 'O'\r\n prev_type = ''\r\n begin_offset = 0\r\n chunks = []\r\n for i, chunk in enumerate(seq + ['O']):\r\n if suffix:\r\n tag = chunk[-1]\r\n type_ = chunk.split('-')[0]\r\n else:\r\n tag = chunk[0]\r\n type_ = chunk.split('-')[-1]\r\n\r\n if end_of_chunk(prev_tag, tag, prev_type, type_):\r\n chunks.append((prev_type, begin_offset, i - 1))\r\n if start_of_chunk(prev_tag, tag, prev_type, type_):\r\n begin_offset = i\r\n prev_tag = tag\r\n prev_type = type_\r\n\r\n return chunks\r\n\r\n\r\ndef after_get_entities(seq, suffix=False):\r\n \"\"\"Gets entities from sequence.\r\n Args:\r\n seq (list): sequence of labels.\r\n Returns:\r\n list: list of (chunk_type, chunk_start, chunk_end).\r\n \"\"\"\r\n if any(isinstance(s, list) for s in seq):\r\n seq = [item for sublist in seq for item in sublist + ['O']]\r\n\r\n prev_tag = 'O'\r\n prev_type = ''\r\n begin_offset = 0\r\n chunks = []\r\n for i, chunk in enumerate(seq + ['O']):\r\n if suffix:\r\n tag = chunk[-1]\r\n type_ = chunk[:-1].rsplit('-', maxsplit=1)[0] or '_'\r\n else:\r\n tag = chunk[0]\r\n type_ = chunk[1:].split('-', maxsplit=1)[-1] or '_'\r\n\r\n if end_of_chunk(prev_tag, tag, prev_type, type_):\r\n chunks.append((prev_type, begin_offset, i - 1))\r\n if start_of_chunk(prev_tag, tag, prev_type, type_):\r\n begin_offset = i\r\n prev_tag = tag\r\n prev_type = type_\r\n\r\n return chunks\r\n\r\n\r\ndef main():\r\n examples_1 = ['B', 'I', 'I', 'O', 'B', 'I']\r\n print(before_get_entities(examples_1))\r\n print(after_get_entities(examples_1))\r\n examples_2 = ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']\r\n print(before_get_entities(examples_2))\r\n print(after_get_entities(examples_2))\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"And we can get more examples not correct, such as:\r\n\r\ninput: `['B', 'I', 'I-I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2)]`\r\nafter: `[('_', 0, 1), ('I', 2, 2)]`\r\n\r\ninput: `['B-ARGM-TIME', 'I-ARGM-TIME', 'I-TIME']`\r\nbefore: `[('TIME', 0, 2)]`\r\nafter: `[('ARGM-TIME', 0, 1), ('TIME', 2, 2)]`",
"I think i didn't break any thing. Maybe the checks should be restart?",
"Could you please rebase from master @AlongWY ? This should fix the CI stuff",
"ok, i will do it",
"Indeed the official repo is quite stale. Let's merge it here, thanks @AlongWY "
] | 1,594,199,532,000 | 1,594,888,006,000 | 1,594,888,006,000 | CONTRIBUTOR | null | Fix seqeval process labels such as 'B', 'B-ARGM-LOC' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/352/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/352",
"html_url": "https://github.com/huggingface/datasets/pull/352",
"diff_url": "https://github.com/huggingface/datasets/pull/352.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/352.patch",
"merged_at": 1594888006000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/351/comments | https://api.github.com/repos/huggingface/datasets/issues/351/events | https://github.com/huggingface/datasets/pull/351 | 652,424,048 | MDExOlB1bGxSZXF1ZXN0NDQ1NDk0NTE4 | 351 | add pandas dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,136,287,000 | 1,594,217,716,000 | 1,594,217,715,000 | MEMBER | null | Create a dataset from serialized pandas dataframes.
Usage:
```python
from nlp import load_dataset
dset = load_dataset("pandas", data_files="df.pkl")["train"]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/351/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/351/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/351",
"html_url": "https://github.com/huggingface/datasets/pull/351",
"diff_url": "https://github.com/huggingface/datasets/pull/351.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/351.patch",
"merged_at": 1594217715000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/350/comments | https://api.github.com/repos/huggingface/datasets/issues/350/events | https://github.com/huggingface/datasets/pull/350 | 652,398,691 | MDExOlB1bGxSZXF1ZXN0NDQ1NDczODYz | 350 | add from_pandas and from_dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,594,134,233,000 | 1,594,217,673,000 | 1,594,217,672,000 | MEMBER | null | I added two new methods to the `Dataset` class:
- `from_pandas()` to create a dataset from a pandas dataframe
- `from_dict()` to create a dataset from a dictionary (keys = columns)
It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so.
It is also possible to specify the feature types via `features=...` if there are ambiguities (null/nan values); otherwise the arrow schema is inferred from the data automatically by pyarrow.
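A quick usage sketch (the toy columns and values below are just illustrative):
```python
import pandas as pd
import nlp

# from a pandas DataFrame: the arrow schema is inferred by pyarrow
df = pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]})
dset = nlp.Dataset.from_pandas(df)

# from a python dict, optionally forcing the feature types to resolve ambiguities
features = nlp.Features({"text": nlp.Value("string"), "label": nlp.Value("int64")})
dset = nlp.Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]}, features=features)
```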
One question that I have right now:
+ Should we also add a `save()` method that would write the dataset on the disk ? Right now if we create a `Dataset` using those two new methods, the data are kept in RAM. Then to reload it we can call the `from_file()` method. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/350/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/350",
"html_url": "https://github.com/huggingface/datasets/pull/350",
"diff_url": "https://github.com/huggingface/datasets/pull/350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/350.patch",
"merged_at": 1594217672000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/349/comments | https://api.github.com/repos/huggingface/datasets/issues/349/events | https://github.com/huggingface/datasets/pull/349 | 652,231,571 | MDExOlB1bGxSZXF1ZXN0NDQ1MzQwMTQ1 | 349 | Hyperpartisan news detection | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you so much for working on this! This is awesome!\r\n\r\nHow much would it help you if we would remove the manual request?\r\n\r\nWe are naturally interested in getting some broad idea of how many people and who are using our dataset. But if you consider hosting the dataset yourself, I would rather remove this small barrier on our side (so that we then still get the download count from your library).",
"This is an interesting aspect indeed!\r\nDo you want to send me an email (see my homepage) and I'll invite you on our slack channel to talk about that?\r\n@ghomasHudson wanna reach out to me as well? I tried to find your email to invite you without success."
] | 1,594,119,997,000 | 1,594,154,847,000 | 1,594,133,831,000 | CONTRIBUTOR | null | Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display.
Implementation notes:
- As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this (see the loading sketch after this list), although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to?
- The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data?
- Should we always subclass `nlp.BuilderConfig`?
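Regarding the manual download point above, loading would look roughly like this (the dataset folder name and the local path are assumptions on my side):
```python
from nlp import load_dataset

# data_dir points at the files manually requested/downloaded from Zenodo
dataset = load_dataset(
    "hyperpartisan_news_detection",  # assumed name of the dataset folder in this PR
    "byarticle",
    data_dir="/path/to/manually/downloaded/files",
)
```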
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/349/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/349",
"html_url": "https://github.com/huggingface/datasets/pull/349",
"diff_url": "https://github.com/huggingface/datasets/pull/349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/349.patch",
"merged_at": 1594133831000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/348/comments | https://api.github.com/repos/huggingface/datasets/issues/348/events | https://github.com/huggingface/datasets/pull/348 | 652,158,308 | MDExOlB1bGxSZXF1ZXN0NDQ1MjgwNjk3 | 348 | Add OSCAR dataset | {
"login": "pjox",
"id": 635220,
"node_id": "MDQ6VXNlcjYzNTIyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjox",
"html_url": "https://github.com/pjox",
"followers_url": "https://api.github.com/users/pjox/followers",
"following_url": "https://api.github.com/users/pjox/following{/other_user}",
"gists_url": "https://api.github.com/users/pjox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjox/subscriptions",
"organizations_url": "https://api.github.com/users/pjox/orgs",
"repos_url": "https://api.github.com/users/pjox/repos",
"events_url": "https://api.github.com/users/pjox/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).\r\n\r\n ",
"> @pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).\r\n\r\nBut can I do the dummy data without running `python nlp-cli test datasets/<your-dataset-folder> --save_infos --all_configs` first? 🤔 ",
"You make a good point! Do you know how big is it uncompressed?",
"Between 7T and 9T I think.",
"Hi ! I've been busy but I plan to compute the missing metadata soon !\r\nLooking forward to be able to load a memory mapped version of OSCAR :) ",
"> Hi ! I've been busy but I plan to compute the missing metadata soon !\r\n> Looking forward to be able to load a memory mapped version of OSCAR :)\r\n\r\nAmazing! Thanks! 😄 ",
"Hi there, are there any plans to complete this issue soon? I'm planning to use this dataset on a project. Let me know if there's anything I can do to help to finish this 🤗 ",
"Yes it will be added soon :) \r\nRecently the OSCAR data files were moved to another host. We just need to update the script and compute the dataset_infos.json (it will probably take a few days).",
"@lhoestq I've seen in oscar.py that it isn't a dataset script with manual download way. Is that correct? \r\nSome time ago, @pjox had some troubles with his servers providing that dataset 'cause it's really huge. Providing it on an automatic download way seems to be a little bit dangerous for me 😄 ",
"Now thanks to @pjox 's help OSCAR is hosted on HF's S3, which is probably more robust that the previous servers :)\r\n\r\nAlso small update on my side:\r\nI launched the computation of the dataset_infos.json file, it will take a few days.",
"Now it seems to be a good plan for me 🤗 ",
"But is there a plan to provide the OSCAR's unshuffled version too?",
"The one we have on S3 is currently the unshuffled version",
"I've thought that you won't provide the unshuffled version 'cause this comment on oscar.py:\r\n\r\n`# TODO(oscar): Implement unshuffled OSCAR`\r\n\r\n",
"That TODO is normal, I haven't touched the python script in months (I haven't had the time, sorry), but I guess @lhoestq fixed the paths if he's already working on the metadata. In any case from now on, only the unshuffled versions of OSCAR will be distributed through the hf/datasets library as in any case it is the version most people use to train language models.\r\n\r\nIf for any reason, you need the shuffled version it will always be available on the [OSCAR website](https://oscar-corpus.com).\r\n\r\nAlso future versions of OSCAR will be unshuffled only.",
"Should we close this PR now that the other one was merged?",
"Sure.\r\nClosing since #1694 is merged",
"@lhoestq just a little detail, is the Oscar version that HF offers the same one that was available on INRIA? By that I mean, have you done any further filtering or removing of data inside it? Thanks a lot! ",
"Hello @jchwenger, this is exactly the same (unshuffled) version that's available at Inria. Sadly no further filtering is provided, but after the latest OSCAR audit (https://arxiv.org/abs/2103.12028) we're already working on future versions of OSCAR that will be \"filtered\" and that will be available on the OSCAR website and hopefully here as well.",
"@pjox brilliant, in my case I was hoping it would be unfiltered, good news!"
] | 1,594,113,727,000 | 1,620,079,628,000 | 1,612,865,959,000 | CONTRIBUTOR | null | I don't know if the tests pass: when I run them, they try to download the whole corpus, which is around 3.5TB compressed, and I don't have that kind of space. I'll really need some help with it 😅
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/348/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/348/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/348",
"html_url": "https://github.com/huggingface/datasets/pull/348",
"diff_url": "https://github.com/huggingface/datasets/pull/348.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/348.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/347/comments | https://api.github.com/repos/huggingface/datasets/issues/347/events | https://github.com/huggingface/datasets/issues/347 | 652,106,567 | MDU6SXNzdWU2NTIxMDY1Njc= | 347 | 'cp950' codec error from load_dataset('xtreme', 'tydiqa') | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\nTry to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\nSee issues #242 and #307 ",
"It should be in `xtreme.py:L755`:\r\n```python\r\n if self.config.name == \"tydiqa\" or self.config.name.startswith(\"MLQA\") or self.config.name == \"SQuAD\":\r\n with open(filepath) as f:\r\n data = json.load(f)\r\n```\r\n\r\nCould you try to add the encoding parameter:\r\n```python\r\nopen(filepath, encoding='utf-8')\r\n```",
"Hello @jerryIsHere :) Did it work ?\r\nIf so we may change the dataset script to force the utf-8 encoding",
"@lhoestq sorry for being that late, I found 4 copy of xtreme.py. I did the changes as what has been told to all of them.\r\nThe problem is not solved",
"Could you provide a better error message so that we can make sure it comes from the opening of the `tydiqa`'s json files ?\r\n",
"@lhoestq \r\nThe error message is same as before:\r\nException has occurred: UnicodeDecodeError\r\n'cp950' codec can't decode byte 0xe2 in position 111: illegal multibyte sequence\r\n File \"D:\\python\\test\\test.py\", line 3, in <module>\r\n dataset = load_dataset('xtreme', 'tydiqa')\r\n\r\n![image](https://user-images.githubusercontent.com/50871412/87748794-7c216880-c829-11ea-94f0-7caeacb4d865.png)\r\n\r\nI said that I found 4 copy of xtreme.py and add the 「, encoding='utf-8'」 parameter to the open() function\r\nthese python script was found under this directory\r\nC:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python37\\Lib\\site-packages\\nlp\\datasets\\xtreme\r\n",
"Hi there !\r\nI encountered the same issue with the IMDB dataset on windows. It threw an error about charmap not being able to decode a symbol during the first time I tried to download it. I checked on a remote linux machine I have, and it can't be reproduced.\r\nI added ```encoding='UTF-8'``` to both lines that have ```open``` in ```imdb.py``` (108 and 114) and it worked for me.\r\nThank you !",
"> Hi there !\r\n> I encountered the same issue with the IMDB dataset on windows. It threw an error about charmap not being able to decode a symbol during the first time I tried to download it. I checked on a remote linux machine I have, and it can't be reproduced.\r\n> I added `encoding='UTF-8'` to both lines that have `open` in `imdb.py` (108 and 114) and it worked for me.\r\n> Thank you !\r\n\r\nHello !\r\nGlad you managed to fix this issue on your side.\r\nDo you mind opening a PR for IMDB ?",
"> This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\n> Try to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\n> See issues #242 and #307\r\n\r\nSorry for not responding for about a month.\r\nI have just found that it is necessary to change / add the environment variable as what was told in #242.\r\nEverything works after I add the new environment variable and restart my PC.\r\n\r\nI think the encoding issue for windows isn't limited to the open() function call specific to few dataset, but actually in the entire library, depends on the machine / os you use.",
"Since #481 we shouldn't have other issues with encodings as they need to be set to \"utf-8\" be default.\r\n\r\nClosing this one, but feel free to re-open if you gave other questions"
] | 1,594,109,663,000 | 1,599,490,305,000 | 1,599,490,305,000 | CONTRIBUTOR | null | ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png)
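For reference, the failing call is just the one from the title:
```python
from nlp import load_dataset

dataset = load_dataset('xtreme', 'tydiqa')  # raises UnicodeDecodeError: 'cp950' codec can't decode byte ...
```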
I guess the error is related to a Python source-encoding issue: my PC seems to be trying to decode the source code with the wrong codec, perhaps as described in:
https://www.python.org/dev/peps/pep-0263/
I guess the error was triggered by the code " module = importlib.import_module(module_path)" at line 57 in the source code: nlp/src/nlp/load.py / (https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/src/nlp/load.py#L51)
Any ideas?
P.S. I tried the same code on Colab, and it runs perfectly there.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/347/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/346/comments | https://api.github.com/repos/huggingface/datasets/issues/346/events | https://github.com/huggingface/datasets/pull/346 | 652,044,151 | MDExOlB1bGxSZXF1ZXN0NDQ1MTg4MTUz | 346 | Add emotion dataset | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I've tried it and am getting the same error as you.\r\n\r\nYou could use the text files rather than the pickle:\r\n```\r\nhttps://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt\r\nhttps://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt\r\nhttps://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt\r\n```\r\n\r\nThen you would get all 3 splits rather than just the train split.",
"Thanks a lot @ghomasHudson - silly me for not spotting that! \r\n\r\nI'll keep the PR open for now since I'm quite close to wrapping it up.",
"Hi @ghomasHudson your suggestion worked like a charm - the PR is now ready for review 😎 ",
"Hello, I probably have a silly question but the labels of the emotion dataset are in the form of numbers and not string, so I can not use the function classification_report because it mixes numbers and string (prediction). How can I access the label in the form of a string and not a number?\r\nThank you in advance.",
"Hi @juliette-sch! Yes, I believe that having the labels as integers is now the default for many classification datasets. You can access the string label via the `ClassLabel.int2str` function ([docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=int2str#datasets.ClassLabel.int2str)), so you could add a new column to the dataset as follows:\r\n\r\n```python\r\nfrom datasets import load_dataset \r\n\r\nemotions = load_dataset(\"emotion\")\r\n\r\ndef label_int2str(row):\r\n return {\"label_name\": emotions[\"train\"].features[\"label\"].int2str(row[\"label\"])}\r\n\r\n# adds a new column called `label_name`\r\nemotions = emotions.map(label_int2str)\r\n```",
"Great, thank you very much @lewtun !"
] | 1,594,103,741,000 | 1,619,162,023,000 | 1,594,651,178,000 | MEMBER | null | Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)).
With the current implementation, running
```bash
python nlp-cli test datasets/emotion --save_infos --all_configs
```
throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).
Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`.
Any pointers on what I'm doing wrong would be greatly appreciated!
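For context, the part of the loading script that fails is essentially the snippet below (a simplified sketch reconstructed from the stack trace; the column names are placeholders):
```python
import pickle

def generate_examples(filepath):
    """Simplified version of what emotion.py's `_generate_examples` does (see line 87 in the trace)."""
    with open(filepath, "rb") as f:
        data = pickle.load(f)  # this is the call that raises the UnpicklingError
    for idx, row in enumerate(data.itertuples()):
        # column names here are placeholders -- the real DataFrame columns may differ
        yield idx, {"text": row.text, "emotions": row.emotions}
```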
**Stack trace**
```
INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports.
INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock
INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion
INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b
INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py
INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json
INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json
INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock
INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0)
INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source
Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0...
INFO:nlp.builder:Generating split train
0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490
Traceback (most recent call last):
File "nlp-cli", line 37, in <module>
service.run()
File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run
builder.download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split
for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__
for obj in iterable:
File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples
data = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '<'.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/346/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/346",
"html_url": "https://github.com/huggingface/datasets/pull/346",
"diff_url": "https://github.com/huggingface/datasets/pull/346.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/346.patch",
"merged_at": 1594651178000
} | true |