url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.12B) | node_id (stringlengths 18-32) | number (int64 1-3.66k) | title (stringlengths 1-276) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64 1,587B-1,644B) | updated_at (int64 1,587B-1,644B) | closed_at (int64 1,587B-1,644B ⌀) | author_association (stringclasses 3 values) | active_lock_reason (null) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | draft (bool 2 classes) | pull_request (dict) | is_pull_request (bool 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/823/comments | https://api.github.com/repos/huggingface/datasets/issues/823/events | https://github.com/huggingface/datasets/issues/823 | 739,815,763 | MDU6SXNzdWU3Mzk4MTU3NjM= | 823 | how processing in batch works in datasets | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi I don’t think this is a request for a dataset like you labeled it.\r\n\r\nI also think this would be better suited for the forum at https://discuss.huggingface.co. we try to keep the issue for the repo for bug reports and new features/dataset requests and have usage questions discussed on the forum. Thanks.",
"Hi Thomas,\nwhat I do not get from documentation is that why when you set batched=True,\nthis is processed in batch, while data is not divided to batched\nbeforehand, basically this is a question on the documentation and I do not\nget the batched=True, but sure, if you think this is more appropriate in\nforum I will post it there.\nthanks\nBest\nRabeeh\n\nOn Tue, Nov 10, 2020 at 12:21 PM Thomas Wolf <notifications@github.com>\nwrote:\n\n> Hi I don’t think this is a request for a dataset like you labeled it.\n>\n> I also think this would be better suited for the forum at\n> https://discuss.huggingface.co. we try to keep the issue for the repo for\n> bug reports and new features/dataset requests and have usage questions\n> discussed on the forum. Thanks.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/823#issuecomment-724639476>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHH4FIPFHVVUHANAE4F3SPEO2JANCNFSM4TQQVEXQ>\n> .\n>\n",
"Yes the forum is perfect for that. You can post in the `datasets` section.\r\nThanks a lot!"
] | 1,605,006,677,000 | 1,605,013,870,000 | 1,605,013,869,000 | NONE | null | Hi,
I need to process my datasets before it is passed to dataloader in batch,
here is my codes
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
tokenizer: Callable = NotImplemented
max_source_length: str = NotImplemented
max_target_length: str = NotImplemented
# TODO: should not be a task item, but cannot see other ways.
tpu_num_cores: int = None
# The arguments set are for all tasks and needs to be kept common.
def __init__(self, config):
self.max_source_length = config['max_source_length']
self.max_target_length = config['max_target_length']
self.tokenizer = config['tokenizer']
self.tpu_num_cores = config['tpu_num_cores']
def _encode(self, batch) -> Dict[str, torch.Tensor]:
batch_encoding = self.tokenizer.prepare_seq2seq_batch(
[x["src_texts"] for x in batch],
tgt_texts=[x["tgt_texts"] for x in batch],
max_length=self.max_source_length,
max_target_length=self.max_target_length,
padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack
return_tensors="pt"
)
return batch_encoding.data
def data_split(self, split):
return self.split_to_data_split[split]
def get_dataset(self, split, n_obs=None):
split = self.data_split(split)
if n_obs is not None:
split = split+"[:{}]".format(n_obs)
dataset = load_dataset(self.task_name, split=split)
dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names)
dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])
return dataset
```
I call it like
`AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train)
`
This gives the following error, to me because the data inside the dataset = dataset.map(lambda batch: self._encode(batch), batched=True) is not processed in batch, could you tell me how I can process dataset in batch inside my function? thanks
File "finetune_multitask_trainer.py", line 192, in main
if training_args.do_train else None
File "finetune_multitask_trainer.py", line 191, in <dictcomp>
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset
dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda>
dataset = dataset.map(lambda batch: self._encode(batch), batched=True)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode
[x["src_texts"] for x in batch],
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp>
[x["src_texts"] for x in batch],
TypeError: string indices must be integers
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/823/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/822/comments | https://api.github.com/repos/huggingface/datasets/issues/822/events | https://github.com/huggingface/datasets/issues/822 | 739,579,314 | MDU6SXNzdWU3Mzk1NzkzMTQ= | 822 | datasets freezes | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Pytorch is unable to convert strings to tensors unfortunately.\r\nYou can use `set_format(type=\"torch\")` on columns that can be converted to tensors, such as token ids.\r\n\r\nThis makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text columns"
] | 1,604,985,019,000 | 1,605,223,383,000 | null | NONE | null | Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
dataset2 = load_dataset("imdb", split="train[:10]")
dataset2 = dataset2.set_format(type="torch", columns=["text", "label"])
print(len(dataset1))
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/822/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/821/comments | https://api.github.com/repos/huggingface/datasets/issues/821/events | https://github.com/huggingface/datasets/issues/821 | 739,506,859 | MDU6SXNzdWU3Mzk1MDY4NTk= | 821 | `kor_nli` dataset doesn't being loaded properly | {
"login": "sackoh",
"id": 30492059,
"node_id": "MDQ6VXNlcjMwNDkyMDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/30492059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sackoh",
"html_url": "https://github.com/sackoh",
"followers_url": "https://api.github.com/users/sackoh/followers",
"following_url": "https://api.github.com/users/sackoh/following{/other_user}",
"gists_url": "https://api.github.com/users/sackoh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sackoh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sackoh/subscriptions",
"organizations_url": "https://api.github.com/users/sackoh/orgs",
"repos_url": "https://api.github.com/users/sackoh/repos",
"events_url": "https://api.github.com/users/sackoh/events{/privacy}",
"received_events_url": "https://api.github.com/users/sackoh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,973,852,000 | 1,605,535,152,000 | 1,605,535,152,000 | NONE | null | There are two issues from `kor_nli` dataset
1. csv.DictReader failed to split features by tab
- Should not exist `None` value in label feature, but there it is.
```python
kor_nli_train['train'].unique('gold_label')
# ['neutral', 'entailment', 'contradiction', None]
```
- I found a reason why there is `None` values in label feature as following code
```python
from datasets import load_dataset
kor_nli_train = load_dataset('kor_nli', 'multi_nli')
for idx, example in enumerate(kor_nli_train['train']):
if example['gold_label'] is None:
print(idx, example)
break
# 16835 {'gold_label': None, 'sentence1': '그는 전쟁 전에 가벼운 벅스킨 암말을 가지고 달리기 위해 우유처럼 하얀 스터드를 넣었다.\t전쟁 전에 다인종 여성들과 함께 있는 백인 남자가 있었다.\tentailment\n슬림은 재빨리 옷을 입었고, 순간적으로 미지근한 물을 뿌릴 수 있는 아침 세탁물을 기꺼이 가두었다.\t슬림은 직장에 늦었다.\tneutral\n뉴욕에서 그 식사를 해봤는데, 거기서 소고기의 멋진 소고기 부분을 요리하고 바베큐로 만든 널빤지 같은 걸 가져왔는데, 정말 대단해.\t그들이 거기서 요리하는 쇠고기는 역겹다. 거기서 절대 먹지 마라.\tcontradiction\n판매원의 죽음에서 브라이언 데네히... 크리스 켈리\t크리스 켈리는 세일즈맨의 죽음을 언급하지 않는다.\tcontradiction\n그러는 동안 요리사는 그냥 화가 났어.\t스튜가 끓는 동안 요리사는 화가 났다.\tneutral\n마지막 로마의 맹공격 전날 밤, 900명 이상의 유대인 수비수들이 로마인들에게 그들을 사로잡는 승리를 주기 보다는 대량 자살을 저질렀다.\t로마인들이 그들의 포획에 승리하도록 내버려두기 보다는 900명의 유대인 수비수들이 자살했다.\tentailment\n앞으로 발사하라.\t발사.\tneutral\n그리고 당신은 우리 땅이 에이커에 있다는 것을 알고 있다. 우리 사람들은 어떤 것이 얼마나 많은지 이해하지 못할 것이다.\t모든 사람들은 우리의 측정 시스템이 어떻게 작동하는지 알고 이해합니다.\tcontradiction\n주미게스\tJumiyges는 도시의 이름이다.\tneutral\n사람은 자기 민족을 돌봐야 한다...\t사람은 조국에 공감해야 한다.\tentailment\n또한 PDD 63은 정부와 업계가 컴퓨터 기반 공격에 대해 경고하고 방어할 준비를 더 잘할 수 있도록 시스템 취약성, 위협, 침입 및 이상에 대한 정보를 공유하는 메커니즘을 수립하는 것이 중요하다는 것을 인식했습니다.\t정보 전송 프로토콜을 만드는 것은 중요하다.\tentailment\n카페 링 피아자 델라 레퓌블리카 바로 남쪽에는 피렌체가 알려진 짚 제품 때문에 한때 스트로 마켓이라고 불렸던 16세기 로지아인 메르카토 누오보(Mercato Nuovo)가 있다.\t피아자 델라 레퓌블리카에는 카페가 많이 있다.\tentailment\n우리가 여기 있는 한 트린판이 뭘 주웠는지 살펴봐야겠어\t우리는 트린판이 무엇을 주웠는지 보는 데 시간을 낭비하지 않을 것이다.\tcontradiction\n그러나 켈트족의 문화적 기반을 가진 아일랜드 교회는 유럽의 신흥 기독교 세계와는 다르게 발전했고 결국 로마와 중앙집권적 행정으로 대체되었다.\t아일랜드 교회에는 켈트족의 기지가 있었다.\tentailment\n글쎄, 넌 선택의 여지가 없어\t글쎄, 너에겐 많은 선택권이 있어.\tcontradiction\n사실, 공식적인 보장은 없다.\t내가 산 물건에 대한 보증이 없었다.\tneutral\n덜 활기차긴 하지만, 안시와 르 부르젯의 사랑스러운 호수에서도 삶은 똑같이 상쾌하다.\t안시와 르 부르겟에서는 호수에서의 활동이 서두르고 바쁜 분위기를 연출한다.\tcontradiction\n그의 여행 소식이 이미 퍼졌다면 공격 소식도 퍼졌을 테지만 마을에서는 전혀 공황의 기미가 보이지 않았다.\t그는 왜 마을이 당황하지 않았는지 알 수 없었다.\tneutral\n과거에는 죽음의 위협이 토지의 판매를 막는 데 거의 도움이 되지 않았다.\t토지 판매는 어떠한 위협도 교환하지 않고 이루어진다.\tcontradiction\n어느 시점에 이르러 나는 지금 다가오는 새로운 것들과 나오는 많은 새로운 것들이 내가 늙어가고 있다고 말하는 시대로 접어들고 있다.\t나는 여전히 내가 보는 모든 새로운 것을 사랑한다.\tcontradiction\n뉴스위크는 물리학자들이 경기장 행사에서 고속도로의 자동차 교통과 보행자 교통을 개선하기 위해 새떼의 움직임을 연구하고 있다고 말한다.\t고속도로의 자동차 교통 흐름을 개선하는 것은 물리학자들이 새떼를 연구하는 이유 중 하나이다.\tentailment\n얼마나 다른가? 그는 잠시 말을 멈추었다가 말을 이었다.\t그는 그 소녀가 어디에 있는지 알고 싶었다.\tentailment\n글쎄, 그에게 너무 많은 것을 주지마.\t그는 훨씬 더 많은 것을 요구할 것이다.\tneutral\n아무리 그의 창작물이 완벽해 보인다고 해도, 그들을 믿는 것은 아마도 좋은 생각이 아닐 것이다.\'\t도자기를 잘 만든다고 해서 누군가를 믿는 것은 아마 좋지 않을 것이다.\tneutral\n버스틀링 그란 비아(Bustling Gran Via)는 호텔, 상점, 극장, 나이트클럽, 카페 등이 어우러져 산책과 창가를 볼 수 있다.\tGran Via는 호텔, 상점, 극장, 나이트클럽, 카페의 번화한 조합이다.\tentailment\n정부 인쇄소\t그 사무실은 워싱턴에 위치해 있다.\tneutral\n실제 문화 전쟁이 어디 있는지 알고 싶다면 학원을 잊어버리고 실리콘 밸리와 레드몬드를 생각해 보라.\t실제 문화 전쟁은 레드몬드에서 일어난다.\tentailment\n그리고 페니실린을 주지 않기 위해 침대 위에 올려놨어\t그녀의 방에는 페니실린이 없다는 징후가 전혀 없었다.\tcontradiction\nL.A.의 야외 시장을 활보하는 것은 맛있고 저렴한 그루브를 잡고, 끝이 없는 햇빛을 즐기고, 신선한 농산물, 꽃, 향, 그리고 가젯 갈로어를 구입하면서 현지인들과 어울릴 수 있는 훌륭한 방법이다.\tLA의 야외 시장을 돌아다니는 것은 시간 낭비다.\tcontradiction\n안나는 밖으로 나와 안도의 한숨을 내쉬었다. 단 한 번, 그리고 마리후아쉬 맛의 술로 끝내자는 결심이 뒤섞여 있었다.\t안나는 안심하고 마리후아쉬 맛의 술을 다 마시기로 결심했다.\tentailment\n5 월에 Vajpayee는 핵 실험의 성공적인 완료를 발표했는데, 인도인들은 주권의 표시로 선전했지만 이웃 국가와 서구와의 인도 관계를 복잡하게 만들 수 있습니다.\t인도는 성공적인 핵실험을 한 적이 없다.\tcontradiction\n플라노 원에서 보통 얼마나 많은 것을 가지고 있는가?\t저 사람들 중에 플라노 원에 가본 사람 있어?\tcontradiction\n그것의 전체적인 형태의 우아함은 운하 건너편에서 가장 잘 볼 수 있다. 
왜냐하면, 로마에 있는 성 베드로처럼, 돔은 길쭉한 본당 뒤로 더 가까운 곳에 사라지기 때문이다.\t성 베드로의 길쭉한 본당은 돔을 가린다.\tentailment\n당신은 수틴이 살에 강박적인 기쁨을 가지고 누드를 그릴 것이라고 생각하겠지만, 아니오; 그는 그의 모든 경력에서 단 한 점만을 그렸고, 그것은 사소한 그림이다.\t그는 그것이 그를 불편하게 만들었기 때문에 하나만 그렸다.\tneutral\n이 인상적인 풍경은 원래 나포 레온이 루브르 박물관의 침실에서 볼 수 있도록 계획되었는데, 그 당시 궁전이었습니다.\t나폴레옹은 그의 모든 궁전에 있는 그의 침실에서 보는 경치에 많은 관심을 가졌다.\tneutral\n그는 우리에게 문 열쇠를 건네주고는 급히 떠났다.\t그는 긴장해서 우리에게 열쇠를 빨리 주었다.\tneutral\n위원회는 또한 최종 규칙을 OMB에 제출했다.\t위원회는 또한 이 규칙을 다른 그룹에 제출했지만 최종 규칙은 OMB가 평가하기 위한 것이 었습니다.\tneutral\n정원가게에 가보면 올리비아의 복제 화합물 같은 유쾌한 이름을 가진 제품들을 찾을 수 있을 겁니다.이 제품이 뿌리를 내리도록 돕기 위해 촬영의 절단된 끝에 덩크슛을 하는 호르몬의 혼합물이죠.\t정원 가꾸기 가게의 제품들은 종종 그들의 목적을 설명하기 위해 기술적으로나 과학적으로 파생된 이름(올리비아의 복제 화합물처럼)을 부여받는다.\tneutral\n스타는 스틸 자신이나 왜 그녀의 이야기를 바꾸었는지에 훨씬 더 관심이 있을 것이다.\t스틸의 이야기는 조금도 변하지 않았다.\tcontradiction\n남편과의 마지막 대결로 맥티어는 노라의 변신을 너무나 능숙하게 예고해 왔기 때문에, 그녀에게는 당황스러울 정도로 갑작스러운 것처럼 보이지만, 우리에게는 감정적으로 불가피해 보인다.\t노라의 변신은 분명하고 필연적이었다.\tcontradiction\n이집트 최남단 도시인 아스완은 오랜 역사를 통해 중요한 역할을 해왔다.\t아스완은 이집트 국경 바로 위에 위치해 있습니다.\tneutral\n그러나 훨씬 더 우아한 건축적 터치는 신성한 춤인 Bharatanatyam에서 수행된 108 가지 기본 포즈를 시바 패널에서 볼 수 있습니다.\t패널에 대한 시바의 묘사는 일반적인 모티브다.\tneutral\n호화롭게 심어진 계단식 정원은 이탈리아 형식의 가장 훌륭한 앙상블 중 하나입니다.\t아름다운 정원과 희귀한 꽃꽂이 모두 이탈리아의 형식적인 스타일을 보여준다.\tneutral\n음, 그랬으면 좋았을 텐데\t나는 그것을 다르게 할 기회를 몹시 갈망한다.\tentailment\n폐허가 된 성의 기슭에 자리잡고 있는 예쁜 중세 도시 케이서스버그는 노벨 평화상 수상자 알버트 슈바이처(1875년)의 출생지로 널리 알려져 있다.\t알버트 슈바이처는 둘 다 케이서스버그 마을에 있었다.\tentailment\n고감도는 문제가 있는 대부분의 환자들이 발견될 것을 보장한다.\t장비 민감도는 문제 탐지와 관련이 없습니다.\tcontradiction\n오늘은 확실히 반바지 같은 날이었어\t오늘 사무실에 있는 모든 사람들은 반바지를 입었다.\tneutral\n못생긴 턱시도를 입고.\t그것은 분홍색과 주황색입니다.\tneutral\n이주 노동 수용소 오 마이 갓 그들은 판지 상자에 산다.\t노동 수용소에는 판지 상자에 사는 이주 노동자들의 사진이 있다.\tneutral\n그래, 그가 전 세계를 여행한 후에 그런 거야\t그것은 사람들의 세계 여행을 따른다.\tentailment\n건너편에 크고 큰 참나무 몇 그루가 있다.\t우리는 여기 오크나 어떤 종류의 미국 나무도 없다.\tcontradiction\nFort-de-France에서 출발하는 자동차나 여객선으로, 당신은 안세 ? 바다 포도가 그늘을 제공하는 쾌적한 갈색 모래 해변과 피크닉 테이블, 어린이 미끄럼틀, 식당이 있는 안느에 도착할 수 있다.\t프랑스 요새에서 자동차나 페리를 타고 안세로 갈 수 있다.\tentailment\n그리고 그것은 앨라배마주가 예상했던 대로 예산에서 50만 달러를 삭감하지 않을 것이라는 것을 의미한다.\t앨라배마 주는 예산 삭감을 하지 않았다. 왜냐하면 그렇게 하는 것에 대한 초기 정당성이 정밀 조사에 맞서지 않았기 때문이다.\tneutral\n알았어 먼저 어 .. 어 .. 노인이나 가족을 요양원에 보내는 것에 대해 어떻게 생각하니?\t가족을 요양원에 보내서 사는 것에 대해 어떻게 생각하는지 알 필요가 없다.\tcontradiction\n나머지는 너에게 달렸어.\t나머지는 너에게 달렸지만 시간이 많지 않다.\tneutral\n음-흠, 3월에 햇볕에 타는 것에 대해 걱정하면 안 된다는 것을 알고 있는 3월이야.\t3월은 그렇게 덥지 않다.\tneutral\n그리고 어, 그런 작은 것들로 다시 시작해봐. 아직 훨씬 싸. 어, 그 특별한 모델 차는 150달러야.\t그 모형차는 4천 달러가 든다.\tcontradiction\n내일 돌아가야 한다면, 칼이 말했다.\t돌아갈 수 없어. 오늘은 안 돼. 내일은 안 돼. 절대 안 돼." 칼이 말했다.', 'sentence2': 'contradiction'}
```
2. (Optional) Preferred to change the name of the features for the compatibility with `run_glue.py` in 🤗 Transformers
- `kor_nli` dataset has same data structure of multi_nli, xnli
- Changing the name of features and the feature type of 'gold_label' to ClassLabel might be helpful
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"premise": datasets.Value("string"),
"hypothesis": datasets.Value("string"),
"label": datasets.features.ClassLabel(names=["entailment", "neutral", "contradiction"]),
}
),
```
If you don't mind, I would like to fix this.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/821/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/820/comments | https://api.github.com/repos/huggingface/datasets/issues/820/events | https://github.com/huggingface/datasets/pull/820 | 739,387,617 | MDExOlB1bGxSZXF1ZXN0NTE4MDYwMjQ0 | 820 | Update quail dataset to v1.3 | {
"login": "ngdodd",
"id": 4889636,
"node_id": "MDQ6VXNlcjQ4ODk2MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngdodd",
"html_url": "https://github.com/ngdodd",
"followers_url": "https://api.github.com/users/ngdodd/followers",
"following_url": "https://api.github.com/users/ngdodd/following{/other_user}",
"gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions",
"organizations_url": "https://api.github.com/users/ngdodd/orgs",
"repos_url": "https://api.github.com/users/ngdodd/repos",
"events_url": "https://api.github.com/users/ngdodd/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngdodd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,958,566,000 | 1,604,999,195,000 | 1,604,999,195,000 | CONTRIBUTOR | null | Updated quail to most recent version, to address the problem originally discussed [here](https://github.com/huggingface/datasets/issues/806). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/820/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/820",
"html_url": "https://github.com/huggingface/datasets/pull/820",
"diff_url": "https://github.com/huggingface/datasets/pull/820.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/820.patch",
"merged_at": 1604999195000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/819/comments | https://api.github.com/repos/huggingface/datasets/issues/819/events | https://github.com/huggingface/datasets/pull/819 | 739,250,624 | MDExOlB1bGxSZXF1ZXN0NTE3OTQ2MjYy | 819 | Make save function use deterministic global vars order | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Sorry, asking for help here, but the dill thread stop around 2013. Is it possible to use dill deterministically? I tried to monkeypatch the solution presented here into dill, but I suppose it requires forking their project.",
"Hi ! What we did was to subclass `dill`'s Pickler to fix the non-deterministic behaviors, and it's been working fine. A fork should also do the job"
] | 1,604,945,523,000 | 1,638,279,249,000 | 1,605,108,051,000 | MEMBER | null | The `dumps` function need to be deterministic for the caching mechanism.
However in #816 I noticed that one of dill's method to recursively check the globals of a function may return the globals in different orders each time it's used. To fix that I sort the globals by key in the `globs` dictionary.
I had to add a rectified `save_function` to the saving functions registry of the Pickler to make it work.
This should fix #816 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/819/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/819",
"html_url": "https://github.com/huggingface/datasets/pull/819",
"diff_url": "https://github.com/huggingface/datasets/pull/819.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/819.patch",
"merged_at": 1605108050000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/818/comments | https://api.github.com/repos/huggingface/datasets/issues/818/events | https://github.com/huggingface/datasets/pull/818 | 739,173,861 | MDExOlB1bGxSZXF1ZXN0NTE3ODgzMzk0 | 818 | Fix type hints pickling in python 3.6 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,939,267,000 | 1,604,999,223,000 | 1,604,999,222,000 | MEMBER | null | Type hints can't be properly pickled in python 3.6. This was causing errors the `run_mlm.py` script from `transformers` with python 3.6
However Cloupickle proposed a [fix](https://github.com/cloudpipe/cloudpickle/pull/318/files) to make it work anyway.
The idea is just to implement the pickling/unpickling of parameterized type hints. There is one detail though: since in python 3.6 we can't use `isinstance` on type hints, then we can't use pickle saving functions registry directly. Therefore we just wrap the `save_global` method of the Pickler.
This should fix https://github.com/huggingface/transformers/issues/8212 for python 3.6 and make `run_mlm.py` support python 3.6
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/818/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/818/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/818",
"html_url": "https://github.com/huggingface/datasets/pull/818",
"diff_url": "https://github.com/huggingface/datasets/pull/818.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/818.patch",
"merged_at": 1604999221000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/817/comments | https://api.github.com/repos/huggingface/datasets/issues/817/events | https://github.com/huggingface/datasets/issues/817 | 739,145,369 | MDU6SXNzdWU3MzkxNDUzNjk= | 817 | Add MRQA dataset | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Done! cf #1117 and #1022"
] | 1,604,937,139,000 | 1,607,096,682,000 | 1,607,096,681,000 | MEMBER | null | ## Adding a Dataset
- **Name:** MRQA
- **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. This dataset was collected as part of MRQA 2019's shared task
- **Paper:** https://arxiv.org/abs/1910.09753
- **Data:** https://github.com/mrqa/MRQA-Shared-Task-2019
- **Motivation:** Out-of-domain generalization is becoming (has become) a de-factor evaluation for NLU systems
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/817/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/817/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/816/comments | https://api.github.com/repos/huggingface/datasets/issues/816/events | https://github.com/huggingface/datasets/issues/816 | 739,102,686 | MDU6SXNzdWU3MzkxMDI2ODY= | 816 | [Caching] Dill globalvars() output order is not deterministic and can cause cache issues. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"To show the issue:\r\n```\r\npython -c \"from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))\"\r\n```\r\ndoesn't always return the same ouput since `globs` is a dictionary with \"a\" and \"len\" as keys but sometimes not in the same order"
] | 1,604,934,080,000 | 1,605,108,050,000 | 1,605,108,050,000 | MEMBER | null | Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues.
To fix that one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the globals keys before dumping a function. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/816/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/815/comments | https://api.github.com/repos/huggingface/datasets/issues/815/events | https://github.com/huggingface/datasets/issues/815 | 738,842,092 | MDU6SXNzdWU3Mzg4NDIwOTI= | 815 | Is dataset iterative or not? | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hello !\r\nCould you give more details ?\r\n\r\nIf you mean iter through one dataset then yes, `Dataset` object does implement the `__iter__` method so you can use \r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\n\r\nIf you want to iter through several datasets you can first concatenate them\r\n```python\r\nfrom datasets import concatenate_datasets\r\n\r\nnew_dataset = concatenate_datasets([dataset1, dataset2])\r\n```\r\nLet me know if this helps !",
"Hi Huggingface/Datasets team,\nI want to use the datasets inside Seq2SeqDataset here\nhttps://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py\nand there I need to return back each line from the datasets and I am not\nsure how to access each line and implement this?\nIt seems it also has get_item attribute? so I was not sure if this is\niterative dataset? or if this is non-iterable datasets?\nthanks.\n\n\n\nOn Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Hello !\n> Could you give more details ?\n>\n> If you mean iter through one dataset then yes, Dataset object does\n> implement the __iter__ method so you can use\n>\n> for example in dataset:\n> # do something\n>\n> If you want to iter through several datasets you can first concatenate them\n>\n> from datasets import concatenate_datasets\n> new_dataset = concatenate_datasets([dataset1, dataset2])\n>\n> Let me know if this helps !\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA>\n> .\n>\n",
"could you tell me please if datasets also has __getitem__ any idea on how\nto integrate it with Seq2SeqDataset is appreciated thanks\n\nOn Mon, Nov 9, 2020 at 10:22 AM Rabeeh Karimi Mahabadi <rabeeh@google.com>\nwrote:\n\n> Hi Huggingface/Datasets team,\n> I want to use the datasets inside Seq2SeqDataset here\n> https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py\n> and there I need to return back each line from the datasets and I am not\n> sure how to access each line and implement this?\n> It seems it also has get_item attribute? so I was not sure if this is\n> iterative dataset? or if this is non-iterable datasets?\n> thanks.\n>\n>\n>\n> On Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com>\n> wrote:\n>\n>> Hello !\n>> Could you give more details ?\n>>\n>> If you mean iter through one dataset then yes, Dataset object does\n>> implement the __iter__ method so you can use\n>>\n>> for example in dataset:\n>> # do something\n>>\n>> If you want to iter through several datasets you can first concatenate\n>> them\n>>\n>> from datasets import concatenate_datasets\n>> new_dataset = concatenate_datasets([dataset1, dataset2])\n>>\n>> Let me know if this helps !\n>>\n>> —\n>> You are receiving this because you authored the thread.\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA>\n>> .\n>>\n>\n",
"`datasets.Dataset` objects implement indeed `__getitem__`. It returns a dictionary with one field per column.\r\n\r\nWe've not added the integration of the datasets library for the seq2seq utilities yet. The current seq2seq utilities are based on text files.\r\n\r\nHowever as soon as you have a `datasets.Dataset` with columns \"tgt_texts\" (str), \"src_texts\" (str), and \"id\" (int) you should be able to implement your own Seq2SeqDataset class that wraps your dataset object. Does that make sense to you ?",
"Hi\nI am sorry for asking it multiple times but I am not getting the dataloader\ntype, could you confirm if the dataset library returns back an iterable\ntype dataloader or a mapping type one where one has access to __getitem__,\nin the former case, one can iterate with __iter__, and how I can configure\nit to return the data back as the iterative type? I am dealing with\nlarge-scale datasets and I do not want to bring all in memory\nthanks for your help\nBest regards\nRabeeh\n\nOn Mon, Nov 9, 2020 at 11:17 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> datasets.Dataset objects implement indeed __getitem__. It returns a\n> dictionary with one field per column.\n>\n> We've not added the integration of the datasets library for the seq2seq\n> utilities yet. The current seq2seq utilities are based on text files.\n>\n> However as soon as you have a datasets.Dataset with columns \"tgt_texts\"\n> (str), \"src_texts\" (str), and \"id\" (int) you should be able to implement\n> your own Seq2SeqDataset class that wraps your dataset object. Does that\n> make sense ?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/815#issuecomment-723915556>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHHYOC22EM7F666BZSOTSO66R3ANCNFSM4TPB7OWA>\n> .\n>\n",
"`datasets.Dataset` objects are both iterative and mapping types: it has both `__iter__` and `__getitem__`\r\nFor example you can do\r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\nor\r\n```python\r\nfor i in range(len(dataset)):\r\n example = dataset[i]\r\n # do something\r\n```\r\nWhen you do that, one and only one example is loaded into memory at a time.",
"Hi there, \r\nHere is what I am trying, this is not working for me in map-style datasets, could you please tell me how to use datasets with being able to access ___getitem__ ? could you assist me please correcting this example? I need map-style datasets which is formed from concatenation of two datasets from your library. thanks \r\n\r\n\r\n```\r\nimport datasets\r\ndataset1 = load_dataset(\"squad\", split=\"train[:10]\")\r\ndataset1 = dataset1.map(lambda example: {\"src_texts\": \"question: {0} context: {1} \".format(\r\n example[\"question\"], example[\"context\"]),\r\n \"tgt_texts\": example[\"answers\"][\"text\"][0]}, remove_columns=dataset1.column_names)\r\ndataset2 = load_dataset(\"imdb\", split=\"train[:10]\")\r\ndataset2 = dataset2.map(lambda example: {\"src_texts\": \"imdb: \" + example[\"text\"],\r\n \"tgt_texts\": str(example[\"label\"])}, remove_columns=dataset2.column_names)\r\ntrain_dataset = datasets.concatenate_datasets([dataset1, dataset2])\r\ntrain_dataset.set_format(type='torch', columns=['src_texts', 'tgt_texts'])\r\ndataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)\r\nfor id, batch in enumerate(dataloader):\r\n print(batch)\r\n\r\n```",
"closed since I found this response on the issue https://github.com/huggingface/datasets/issues/469"
] | 1,604,913,108,000 | 1,605,005,403,000 | 1,605,005,403,000 | NONE | null | Hi
I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?
could you provide me with example how I can use datasets as iterative datasets?
thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/815/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/814/comments | https://api.github.com/repos/huggingface/datasets/issues/814/events | https://github.com/huggingface/datasets/issues/814 | 738,500,443 | MDU6SXNzdWU3Mzg1MDA0NDM= | 814 | Joining multiple datasets | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"found a solution here https://discuss.pytorch.org/t/train-simultaneously-on-two-datasets/649/35, closed for now, thanks "
] | 1,604,852,370,000 | 1,604,864,328,000 | 1,604,864,328,000 | NONE | null | Hi
I have multiple iterative datasets from your library with different size and I want to join them in a way that each datasets is sampled equally, so smaller datasets more, larger one less, could you tell me how to implement this in pytorch? thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/814/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/813/comments | https://api.github.com/repos/huggingface/datasets/issues/813/events | https://github.com/huggingface/datasets/issues/813 | 738,489,852 | MDU6SXNzdWU3Mzg0ODk4NTI= | 813 | How to implement DistributedSampler with datasets | {
"login": "rabeehkarimimahabadi",
"id": 73364383,
"node_id": "MDQ6VXNlcjczMzY0Mzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehkarimimahabadi",
"html_url": "https://github.com/rabeehkarimimahabadi",
"followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers",
"following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs",
"repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos",
"events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks. ",
"Hey @rabeehkarimimahabadi I'm actually looking for the same feature. Did you manage to get somewhere?",
"@rabeehkarimimahabadi need the same feature"
] | 1,604,849,231,000 | 1,635,158,199,000 | null | NONE | null | Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them.
I need a distributedSampler to be able to train the models on TPUs being able to distribute the load across the TPU cores. Could you tell me how I can implement the distribued sampler when using datasets in which datasets are iterative? To give you more context, I have multiple of datasets and I need to write sampler for this case. thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/813/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/812/comments | https://api.github.com/repos/huggingface/datasets/issues/812/events | https://github.com/huggingface/datasets/issues/812 | 738,340,217 | MDU6SXNzdWU3MzgzNDAyMTc= | 812 | Too much logging | {
"login": "dspoka",
"id": 6183050,
"node_id": "MDQ6VXNlcjYxODMwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dspoka",
"html_url": "https://github.com/dspoka",
"followers_url": "https://api.github.com/users/dspoka/followers",
"following_url": "https://api.github.com/users/dspoka/following{/other_user}",
"gists_url": "https://api.github.com/users/dspoka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dspoka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dspoka/subscriptions",
"organizations_url": "https://api.github.com/users/dspoka/orgs",
"repos_url": "https://api.github.com/users/dspoka/repos",
"events_url": "https://api.github.com/users/dspoka/events{/privacy}",
"received_events_url": "https://api.github.com/users/dspoka/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Thanks for reporting :) \r\nI agree these one should be hidden when the logging level is warning, we'll fix that",
"+1, the amount of logging is excessive.\r\n\r\nMost of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`)\r\n\r\n```\r\nI1109 21:26:01.742688 139785006901056 filelock.py:318] Lock 139778216292192 released on /home/kitaev/.cache/huggingface/datasets/9ed4f2e133395826175a892c70611f68522c7bc61a35476e8b51a31afb76e4bf.e6f3e3f3e3875a07469d1cfd32e16e1d06b149616b11eef2d081c43d515b492d.py.lock\r\nI1109 21:26:01.747898 139785006901056 filelock.py:274] Lock 139778216290176 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748258 139785006901056 filelock.py:318] Lock 139778216290176 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748412 139785006901056 filelock.py:274] Lock 139778215853024 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748497 139785006901056 filelock.py:318] Lock 139778215853024 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:07:17.029001 140301730502464 filelock.py:274] Lock 140289479304360 acquired on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.029341 140301730502464 filelock.py:318] Lock 140289479304360 released on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.058964 140301730502464 filelock.py:274] Lock 140251889388120 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.060933 140301730502464 filelock.py:318] Lock 140251889388120 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.061067 140301730502464 filelock.py:274] Lock 140296072521488 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.069736 140301730502464 metric.py:400] Removing /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow\r\nI1109 21:07:17.069949 140301730502464 filelock.py:318] Lock 140296072521488 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\n```",
"So how to solve this problem?",
"In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default.\r\nAlso `set_verbosity_warning` does take into account these logs now.\r\nCan you try to update the lib ?\r\n```\r\npip install --upgrade datasets\r\n```",
"Thanks. For some reason I have to use the older version. Is that possible I can fix this by some surface-level trick?\r\n\r\nI'm still using 1.13 version datasets.",
"On older versions you can use\r\n```python\r\nimport logging\r\n\r\nlogging.getLogger(\"filelock\").setLevel(logging.WARNING)\r\n```",
"Whoa Thank you! It works!"
] | 1,604,793,390,000 | 1,611,671,494,000 | 1,605,546,402,000 | NONE | null | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
[2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
using datasets version = 1.1.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/812/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
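The thread in the record above (issue #812) settles on two fixes for the noisy `filelock` INFO messages, depending on the installed version of `datasets`. Below is a minimal sketch that combines both, built only from the snippets quoted in the thread; the GLUE/MNLI call mirrors the paths that appear in the quoted logs and is just an example workload.

```python
# Sketch of the two workarounds discussed in issue #812 for silencing filelock INFO logs.
# Assumes a datasets 1.x install; the "filelock" logger name comes from the comment thread.
import logging

from datasets import load_dataset
from datasets.utils import logging as datasets_logging

# Newer releases: lock messages are DEBUG-level, so a WARNING verbosity hides them.
datasets_logging.set_verbosity_warning()

# Older releases (e.g. 1.1.2): silence the third-party filelock logger directly.
logging.getLogger("filelock").setLevel(logging.WARNING)

dataset = load_dataset("glue", "mnli", split="validation")  # no lock spam expected now
print(dataset)
```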
https://api.github.com/repos/huggingface/datasets/issues/811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/811/comments | https://api.github.com/repos/huggingface/datasets/issues/811/events | https://github.com/huggingface/datasets/issues/811 | 738,280,132 | MDU6SXNzdWU3MzgyODAxMzI= | 811 | nlp viewer error | {
"login": "jc-hou",
"id": 30210529,
"node_id": "MDQ6VXNlcjMwMjEwNTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jc-hou",
"html_url": "https://github.com/jc-hou",
"followers_url": "https://api.github.com/users/jc-hou/followers",
"following_url": "https://api.github.com/users/jc-hou/following{/other_user}",
"gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions",
"organizations_url": "https://api.github.com/users/jc-hou/orgs",
"repos_url": "https://api.github.com/users/jc-hou/repos",
"events_url": "https://api.github.com/users/jc-hou/events{/privacy}",
"received_events_url": "https://api.github.com/users/jc-hou/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"and also for 'blog_authorship_corpus'\r\nhttps://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus\r\n![image](https://user-images.githubusercontent.com/30210529/98557329-5c182800-22a4-11eb-9b01-5b910fb8fcd4.png)\r\n",
"Is this the problem of my local computer or ??"
] | 1,604,768,938,000 | 1,605,540,383,000 | null | NONE | null | Hello,
when I select amazon_us_reviews in nlp viewer, it shows error.
https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews
![image](https://user-images.githubusercontent.com/30210529/98447334-4aa81200-2124-11eb-9dca-82c3ab34ccc2.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/811/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/810/comments | https://api.github.com/repos/huggingface/datasets/issues/810/events | https://github.com/huggingface/datasets/pull/810 | 737,878,370 | MDExOlB1bGxSZXF1ZXN0NTE2ODQzMzQ3 | 810 | Fix seqeval metric | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,679,103,000 | 1,604,930,669,000 | 1,604,930,668,000 | MEMBER | null | The current seqeval metric returns the following error when computed:
```
~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix)
102 scores = {}
103 for type_name, score in report.items():
--> 104 scores[type_name]["precision"] = score["precision"]
105 scores[type_name]["recall"] = score["recall"]
106 scores[type_name]["f1"] = score["f1-score"]
KeyError: 'LOC'
```
This is because the current code basically tries to do:
```
scores = {}
scores["LOC"]["precision"] = some_value
```
which does not work in python. This PR fixes that while keeping the previous nested structure of results, with the same keys. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/810/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/810",
"html_url": "https://github.com/huggingface/datasets/pull/810",
"diff_url": "https://github.com/huggingface/datasets/pull/810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/810.patch",
"merged_at": 1604930667000
} | true |
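The PR body above (#810) pins the seqeval failure on assigning into `scores[type_name]["precision"]` before the inner dict exists. The sketch below illustrates that pattern and the style of fix the PR describes (keeping the same nested keys); it is not the merged diff, and the `report` shape is an assumption modeled on the traceback.

```python
# Illustrative sketch of the KeyError described in PR #810 and the fix pattern:
# build the inner dict for each entity type instead of indexing a missing key.
report = {
    "LOC": {"precision": 0.9, "recall": 0.8, "f1-score": 0.85},  # assumed shape
}

# Buggy pattern from the PR description -> raises KeyError: 'LOC'
scores = {}
try:
    for type_name, score in report.items():
        scores[type_name]["precision"] = score["precision"]
except KeyError as err:
    print("buggy pattern fails:", err)

# Fixed pattern: create the nested entry in a single assignment, same keys as before
scores = {
    type_name: {
        "precision": score["precision"],
        "recall": score["recall"],
        "f1": score["f1-score"],
    }
    for type_name, score in report.items()
}
print(scores)
```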
https://api.github.com/repos/huggingface/datasets/issues/809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/809/comments | https://api.github.com/repos/huggingface/datasets/issues/809/events | https://github.com/huggingface/datasets/issues/809 | 737,832,701 | MDU6SXNzdWU3Mzc4MzI3MDE= | 809 | Add Google Taskmaster dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hey @yjernite. Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now?",
"You are absolutely right :) \r\n\r\nClosed by https://github.com/huggingface/datasets/pull/1193 https://github.com/huggingface/datasets/pull/1197 https://github.com/huggingface/datasets/pull/1213"
] | 1,604,675,441,000 | 1,618,924,166,000 | 1,618,924,166,000 | MEMBER | null | ## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-datasets/Taskmaster
- **Motivation:** One of few annotated datasets of this size for goal-oriented dialogue
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/809/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/808/comments | https://api.github.com/repos/huggingface/datasets/issues/808/events | https://github.com/huggingface/datasets/pull/808 | 737,638,942 | MDExOlB1bGxSZXF1ZXN0NTE2NjQ0NDc0 | 808 | dataset(dgs): initial dataset loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi @AmitMY, \r\n\r\nWere you able to figure this out?",
"I did not.\r\nWith all the limitations this repo currently has, I had to create a repo of my own using tfds to mitigate them. \r\nhttps://github.com/sign-language-processing/datasets/tree/master/sign_language_datasets/datasets/dgs_corpus\r\n\r\nClosing as I don't know how to support this PR further"
] | 1,604,657,683,000 | 1,616,480,335,000 | 1,616,480,335,000 | CONTRIBUTOR | null | When trying to create dummy data I get:
> Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data.
I am not sure how to manually create the dummy_data (what exactly it should contain)
Also note, this library says:
> ImportError: To be able to use this dataset, you need to install the following dependencies['pympi'] using 'pip install pympi' for instance'
When you actually need to `pip install pympi-ling`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/808/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/808",
"html_url": "https://github.com/huggingface/datasets/pull/808",
"diff_url": "https://github.com/huggingface/datasets/pull/808.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/808.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/807/comments | https://api.github.com/repos/huggingface/datasets/issues/807/events | https://github.com/huggingface/datasets/issues/807 | 737,509,954 | MDU6SXNzdWU3Mzc1MDk5NTQ= | 807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | {
"login": "shexuan",
"id": 25664170,
"node_id": "MDQ6VXNlcjI1NjY0MTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/25664170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shexuan",
"html_url": "https://github.com/shexuan",
"followers_url": "https://api.github.com/users/shexuan/followers",
"following_url": "https://api.github.com/users/shexuan/following{/other_user}",
"gists_url": "https://api.github.com/users/shexuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shexuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shexuan/subscriptions",
"organizations_url": "https://api.github.com/users/shexuan/orgs",
"repos_url": "https://api.github.com/users/shexuan/repos",
"events_url": "https://api.github.com/users/shexuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/shexuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi !\r\nThe url works on my side.\r\n\r\nIs the url working in your navigator ?\r\nAre you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?",
"> Hi !\r\n> The url works on my side.\r\n> \r\n> Is the url working in your navigator ?\r\n> Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n\r\nI tried another server, it's working now. Thanks a lot.\r\n\r\nAnd I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?",
"It seems my network frequently crashed so most time it cannot work.",
"\r\n\r\n\r\n> > Hi !\r\n> > The url works on my side.\r\n> > Is the url working in your navigator ?\r\n> > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> \r\n> I tried another server, it's working now. Thanks a lot.\r\n> \r\n> And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n\r\nI download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`? \r\n\r\nThanks :D",
"hello, how did you solve this problems?\r\n\r\n> > > Hi !\r\n> > > The url works on my side.\r\n> > > Is the url working in your navigator ?\r\n> > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > \r\n> > \r\n> > I tried another server, it's working now. Thanks a lot.\r\n> > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> \r\n> I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> \r\n> Thanks :D\r\n\r\nhello, I tried this. but it still failed. how do you fix this error?",
"> hello, how did you solve this problems?\r\n> \r\n> > > > Hi !\r\n> > > > The url works on my side.\r\n> > > > Is the url working in your navigator ?\r\n> > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > \r\n> > > \r\n> > > I tried another server, it's working now. Thanks a lot.\r\n> > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > \r\n> > \r\n> > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> > Thanks :D\r\n> \r\n> hello, I tried this. but it still failed. how do you fix this error?\r\n\r\n你把那个脚本下载到你本地安装目录下,然后 `load_dataset(csv_script_path, data_fiels)`\r\n\r\n",
"> > hello, how did you solve this problems?\r\n> > > > > Hi !\r\n> > > > > The url works on my side.\r\n> > > > > Is the url working in your navigator ?\r\n> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried another server, it's working now. Thanks a lot.\r\n> > > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > > \r\n> > > \r\n> > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> > > Thanks :D\r\n> > \r\n> > \r\n> > hello, I tried this. but it still failed. how do you fix this error?\r\n> \r\n> 你把那个脚本下载到你本地安装目录下,然后 `load_dataset(csv_script_path, data_fiels)`\r\n\r\n好的好的!解决了,感谢感谢!!!",
"> \r\n> \r\n> > hello, how did you solve this problems?\r\n> > > > > Hi !\r\n> > > > > The url works on my side.\r\n> > > > > Is the url working in your navigator ?\r\n> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried another server, it's working now. Thanks a lot.\r\n> > > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > > \r\n> > > \r\n> > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`?\r\n> > > Thanks :D\r\n> > \r\n> > \r\n> > hello, I tried this. but it still failed. how do you fix this error?\r\n> \r\n> 你把那个脚本下载到你本地安装目录下,然后 `load_dataset(csv_script_path, data_fiels)`\r\n\r\n我照着做了,然后报错。\r\nValueError: unable to parse C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\\dataset_infos.json as a URL or as a local path\r\n\r\n`---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-5-fd2106a3f053> in <module>\r\n----> 1 dataset = load_dataset('C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets/csv.py', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)\r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 588 # Download/copy dataset processing script\r\n 589 module_path, hash = prepare_module(\r\n--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n 591 )\r\n 592 \r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 296 local_dataset_infos_path = cached_path(\r\n 297 dataset_infos,\r\n--> 298 download_config=download_config,\r\n 299 )\r\n 300 except (FileNotFoundError, ConnectionError):\r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\utils\\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 316 else:\r\n 317 # Something unknown\r\n--> 318 raise ValueError(\"unable to parse {} as a URL or as a local path\".format(url_or_filename))\r\n 319 \r\n 320 if download_config.extract_compressed_file and output_path is not None:\r\n\r\nValueError: unable to parse C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\\dataset_infos.json as a URL or as a local path\r\n\r\n`",
"I also experienced this issue this morning. Looks like something specific to windows.\r\nI'm working on a fix",
"I opened a PR @wn1652400018",
"> \r\n> \r\n> I opened a PR @wn1652400018\r\n\r\nThanks you!, It works very well."
] | 1,604,644,384,000 | 1,610,328,627,000 | 1,605,331,834,000 | NONE | null | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=False)
print('datasets version: ', datasets.__version__)
print('pytorch version: ', torch.__version__)
print('transformers version: ', transformers.__version__)
# output:
datasets version: 1.1.2
pytorch version: 1.5.0
transformers version: 3.2.0
```
when I load data through `dataset`:
```
dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
```
Error infos:
```
ConnectionError Traceback (most recent call last)
<ipython-input-17-bbdadb9a0c78> in <module>
----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False)
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
588 # Download/copy dataset processing script
589 module_path, hash = prepare_module(
--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
591 )
592
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)
267 try:
--> 268 local_path = cached_path(file_path, download_config=download_config)
269 except FileNotFoundError:
270 if script_version is not None:
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
306 user_agent=download_config.user_agent,
307 local_files_only=download_config.local_files_only,
--> 308 use_etag=download_config.use_etag,
309 )
310 elif os.path.exists(url_or_filename):
~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py
```
And I try to connect to the site with requests:
```
import requests
requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
```
Similarly Error occurs:
```
---------------------------------------------------------------------------
ConnectionRefusedError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
159 conn = connection.create_connection(
--> 160 (self._dns_host, self.port), self.timeout, **extra_kw
161 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
83 if err is not None:
---> 84 raise err
85
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
73 sock.bind(source_address)
---> 74 sock.connect(sa)
75 return sock
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
676 headers=headers,
--> 677 chunked=chunked,
678 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
380 try:
--> 381 self._validate_conn(conn)
382 except (SocketTimeout, BaseSSLError) as e:
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
--> 976 conn.connect()
977
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self)
307 # Add certificate verification
--> 308 conn = self._new_conn()
309 hostname = self.host
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self)
171 raise NewConnectionError(
--> 172 self, "Failed to establish a new connection: %s" % e
173 )
NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
MaxRetryError Traceback (most recent call last)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 retries=self.max_retries,
--> 449 timeout=timeout
450 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
724 retries = retries.increment(
--> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
726 )
~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
438 if new_retry.is_exhausted():
--> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause))
440
MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
<ipython-input-20-18cc3eb4a049> in <module>
1 import requests
2
----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py")
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs)
102
103 kwargs.setdefault('allow_redirects', False)
--> 104 return request('head', url, **kwargs)
105
106
~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs)
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
62
63
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
528 }
529 send_kwargs.update(settings)
--> 530 resp = self.send(prep, **send_kwargs)
531
532 return resp
~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs)
641
642 # Send the request
--> 643 r = adapter.send(request, **kwargs)
644
645 # Total elapsed time of the request (approximately)
~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
514 raise SSLError(e, request=request)
515
--> 516 raise ConnectionError(e, request=request)
517
518 except ClosedPoolError as e:
ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/807/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
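The workaround that resolves issue #807 above is to download `csv.py` once and point `load_dataset` at the local copy, so no request to `raw.githubusercontent.com` is needed. A sketch of that approach, assuming datasets 1.1.x; the script path is a placeholder for wherever the downloaded `csv.py` was saved, and the demo CSV is recreated exactly as in the issue body.

```python
# Sketch of the offline workaround from issue #807, assuming datasets 1.1.x.
# The path below is a placeholder: it should point at a local copy of
# https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py
import numpy as np
import pandas as pd
from datasets import load_dataset

# Recreate the demo file from the issue body
pd.DataFrame(np.arange(1200).reshape(300, 4)).to_csv("test.csv", header=False, index=False)

local_csv_script = "/path/to/local/csv.py"  # hypothetical location of the downloaded script
dataset = load_dataset(local_csv_script, data_files="test.csv", delimiter=",")
print(dataset["train"][0])
```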
https://api.github.com/repos/huggingface/datasets/issues/806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/806/comments | https://api.github.com/repos/huggingface/datasets/issues/806/events | https://github.com/huggingface/datasets/issues/806 | 737,215,430 | MDU6SXNzdWU3MzcyMTU0MzA= | 806 | Quail dataset urls are out of date | {
"login": "ngdodd",
"id": 4889636,
"node_id": "MDQ6VXNlcjQ4ODk2MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4889636?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngdodd",
"html_url": "https://github.com/ngdodd",
"followers_url": "https://api.github.com/users/ngdodd/followers",
"following_url": "https://api.github.com/users/ngdodd/following{/other_user}",
"gists_url": "https://api.github.com/users/ngdodd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngdodd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngdodd/subscriptions",
"organizations_url": "https://api.github.com/users/ngdodd/orgs",
"repos_url": "https://api.github.com/users/ngdodd/repos",
"events_url": "https://api.github.com/users/ngdodd/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngdodd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Thanks for reporting.\r\nWe should fix the urls and use quail 1.3.\r\nIf you want to contribute feel free to fix the urls and open a PR :) ",
"Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820)\r\n\r\nUpdated links and also regenerated the metadata and dummy data for v1.3 in order to pass verifications as described here: [https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset](https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset). ",
"Closing since #820 is merged.\r\nThanks again for fixing the urls :)"
] | 1,604,605,219,000 | 1,605,016,971,000 | 1,605,016,971,000 | CONTRIBUTOR | null | <h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/806/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/805/comments | https://api.github.com/repos/huggingface/datasets/issues/805/events | https://github.com/huggingface/datasets/issues/805 | 737,019,360 | MDU6SXNzdWU3MzcwMTkzNjA= | 805 | On loading a metric from datasets, I get the following error | {
"login": "laibamehnaz",
"id": 36405283,
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laibamehnaz",
"html_url": "https://github.com/laibamehnaz",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.\r\nCould you update pyarrow and try again ?\r\n```\r\npip install --upgrade pyarrow\r\n```"
] | 1,604,589,278,000 | 1,604,913,155,000 | null | NONE | null | `from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212 ndims: int = None
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
Any help will be appreciated. Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/805/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/805/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/804/comments | https://api.github.com/repos/huggingface/datasets/issues/804/events | https://github.com/huggingface/datasets/issues/804 | 736,858,507 | MDU6SXNzdWU3MzY4NTg1MDc= | 804 | Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') | {
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"cc @yjernite is this expected ?",
"Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208)\r\n\r\nFor the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here:\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md",
"Oh ok, I guess I read the paper too fast 😅, thank you for your answer!"
] | 1,604,576,281,000 | 1,604,931,299,000 | 1,604,931,298,000 | CONTRIBUTOR | null | # The issue
It's all in the title, it appears to be fine on the train and validation sets.
Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ?
# How to reproduce
```py
from datasets import load_dataset
kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')
# both in "kilt_tasks"
In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']])
Out[18]: False
# and "trivia_qa"
In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']])
Out[13]: True
# appears to be fine on the train and validation sets.
In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']])
Out[14]: False
In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']])
Out[15]: False
In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']])
Out[16]: True
In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']])
Out[17]: True
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/804/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/803/comments | https://api.github.com/repos/huggingface/datasets/issues/803/events | https://github.com/huggingface/datasets/pull/803 | 736,818,917 | MDExOlB1bGxSZXF1ZXN0NTE1OTY1ODE2 | 803 | fix: typos in tutorial to map KILT and TriviaQA | {
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,572,920,000 | 1,604,999,287,000 | 1,604,999,287,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/803/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/803",
"html_url": "https://github.com/huggingface/datasets/pull/803",
"diff_url": "https://github.com/huggingface/datasets/pull/803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/803.patch",
"merged_at": 1604999287000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/802/comments | https://api.github.com/repos/huggingface/datasets/issues/802/events | https://github.com/huggingface/datasets/pull/802 | 736,296,343 | MDExOlB1bGxSZXF1ZXN0NTE1NTM1MDI0 | 802 | Add XGlue | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Really cool to add XGlue, this will be a nice addition !\r\n\r\nSplits shouldn't depend on the language. There must be configurations for each language, as we're doing for xnli, xtreme, etc.\r\nFor example for XGlue we'll have these configurations: NER.de, NER.en etc."
] | 1,604,510,994,000 | 1,606,838,308,000 | 1,606,838,307,000 | MEMBER | null | Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline that we will give the dataset the following API, *e.g.* for
```python
load_dataset("xglue", "ner") # would give the splits 'train', 'validation.en', 'test.en', 'validation.es', 'test.es', ...
```
=> therefore one can load a single language test via
```python
load_dataset("xglue", "ner", split="test.es")
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/802/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/802",
"html_url": "https://github.com/huggingface/datasets/pull/802",
"diff_url": "https://github.com/huggingface/datasets/pull/802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/802.patch",
"merged_at": 1606838307000
} | true |
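A short usage sketch of the split layout described in the PR body above (#802). Only `en` and `es` are named in the PR text, so the language list here is an assumption; any other codes would need checking against the dataset card.

```python
# Usage sketch for the per-language splits described in PR #802.
from datasets import load_dataset

ner_test_es = load_dataset("xglue", "ner", split="test.es")
print(len(ner_test_es))

for lang in ["en", "es"]:  # assumed subset of the available languages
    val = load_dataset("xglue", "ner", split=f"validation.{lang}")
    print(lang, val.num_rows)
```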
https://api.github.com/repos/huggingface/datasets/issues/801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/801/comments | https://api.github.com/repos/huggingface/datasets/issues/801/events | https://github.com/huggingface/datasets/issues/801 | 735,790,876 | MDU6SXNzdWU3MzU3OTA4NzY= | 801 | How to join two datasets? | {
"login": "shangw-nvidia",
"id": 66387198,
"node_id": "MDQ6VXNlcjY2Mzg3MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shangw-nvidia",
"html_url": "https://github.com/shangw-nvidia",
"followers_url": "https://api.github.com/users/shangw-nvidia/followers",
"following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}",
"gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions",
"organizations_url": "https://api.github.com/users/shangw-nvidia/orgs",
"repos_url": "https://api.github.com/users/shangw-nvidia/repos",
"events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}",
"received_events_url": "https://api.github.com/users/shangw-nvidia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi this is also my question. thanks ",
"Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset\r\n",
"Closing this one. Feel free to re-open if you have other questions about this issue.\r\n\r\nAlso linking another discussion about joining datasets: #853 "
] | 1,604,461,991,000 | 1,608,732,178,000 | 1,608,732,178,000 | NONE | null | Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is **not** the next sentence (i.e., from a different article) of the first sentence.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/801/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
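The answer given in issue #801 above is that new fields are added with `.map`, picking rows from the other dataset. A minimal sketch of that pattern, assuming both datasets have the same number of rows (as stated in the question); the column names are made up for illustration, while `with_indices=True` is the real `map` parameter that exposes the row index.

```python
# Sketch of the ".map and pick items from the other dataset" approach from issue #801.
# "sentence_a"/"sentence_b" are illustrative column names, not from the thread.
from datasets import Dataset

ds_a = Dataset.from_dict({"sentence_a": ["first text", "second text"]})
ds_b = Dataset.from_dict({"sentence_b": ["paired text 1", "paired text 2"]})

assert ds_a.num_rows == ds_b.num_rows  # the thread assumes equal row counts

# Add ds_b's column to ds_a row by row, using the row index to line them up.
joined = ds_a.map(
    lambda example, idx: {"sentence_b": ds_b[idx]["sentence_b"]},
    with_indices=True,
)
print(joined[0])
```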
https://api.github.com/repos/huggingface/datasets/issues/800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/800/comments | https://api.github.com/repos/huggingface/datasets/issues/800/events | https://github.com/huggingface/datasets/pull/800 | 735,772,775 | MDExOlB1bGxSZXF1ZXN0NTE1MTAyMjc3 | 800 | Update loading_metrics.rst | {
"login": "ayushidalmia",
"id": 5400513,
"node_id": "MDQ6VXNlcjU0MDA1MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5400513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayushidalmia",
"html_url": "https://github.com/ayushidalmia",
"followers_url": "https://api.github.com/users/ayushidalmia/followers",
"following_url": "https://api.github.com/users/ayushidalmia/following{/other_user}",
"gists_url": "https://api.github.com/users/ayushidalmia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ayushidalmia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ayushidalmia/subscriptions",
"organizations_url": "https://api.github.com/users/ayushidalmia/orgs",
"repos_url": "https://api.github.com/users/ayushidalmia/repos",
"events_url": "https://api.github.com/users/ayushidalmia/events{/privacy}",
"received_events_url": "https://api.github.com/users/ayushidalmia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,458,631,000 | 1,605,108,512,000 | 1,605,108,512,000 | CONTRIBUTOR | null | Minor bug | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/800/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/800",
"html_url": "https://github.com/huggingface/datasets/pull/800",
"diff_url": "https://github.com/huggingface/datasets/pull/800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/800.patch",
"merged_at": 1605108512000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/799/comments | https://api.github.com/repos/huggingface/datasets/issues/799/events | https://github.com/huggingface/datasets/pull/799 | 735,551,165 | MDExOlB1bGxSZXF1ZXN0NTE0OTIzNDMx | 799 | switch amazon reviews class label order | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,428,738,000 | 1,604,429,054,000 | 1,604,429,050,000 | CONTRIBUTOR | null | Switches the label order to be more intuitive for amazon reviews, #791. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/799/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/799",
"html_url": "https://github.com/huggingface/datasets/pull/799",
"diff_url": "https://github.com/huggingface/datasets/pull/799.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/799.patch",
"merged_at": 1604429050000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/798/comments | https://api.github.com/repos/huggingface/datasets/issues/798/events | https://github.com/huggingface/datasets/issues/798 | 735,518,805 | MDU6SXNzdWU3MzU1MTg4MDU= | 798 | Cannot load TREC dataset: ConnectionError | {
"login": "kaletap",
"id": 25740957,
"node_id": "MDQ6VXNlcjI1NzQwOTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/25740957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaletap",
"html_url": "https://github.com/kaletap",
"followers_url": "https://api.github.com/users/kaletap/followers",
"following_url": "https://api.github.com/users/kaletap/following{/other_user}",
"gists_url": "https://api.github.com/users/kaletap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaletap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaletap/subscriptions",
"organizations_url": "https://api.github.com/users/kaletap/orgs",
"repos_url": "https://api.github.com/users/kaletap/repos",
"events_url": "https://api.github.com/users/kaletap/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaletap/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Indeed there's an issue with those links.\r\nWe should probably use the target urls of the redirections instead",
"Hi, the same issue here, could you tell me how to download it through datasets? thanks ",
"Same issue. ",
"Actually it's already fixed on the master branch since #740 \r\nI'll do the 1.1.3 release soon",
"Hi\nthanks, but I did tried to install from the pip install git+... and it does\nnot work for me,. thanks for the help. I have the same issue with wmt16,\n\"ro-en\"\nthanks.\nBest\nRabeeh\n\nOn Mon, Nov 16, 2020 at 10:29 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Actually it's already fixed on the master branch since #740\n> <https://github.com/huggingface/datasets/pull/740>\n> I'll do the 1.1.3 release soon\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/798#issuecomment-727854736>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCEUBJKPOCLABXCKMPDSQDWH3ANCNFSM4TJBUKSA>\n> .\n>\n",
"I just tested on google colab using\r\n```python\r\n!pip install git+https://github.com/huggingface/datasets.git\r\nfrom datasets import load_dataset\r\nload_dataset(\"trec\")\r\n```\r\nand it works.\r\nCan you detail how you got the issue even when using the latest version on master ?\r\n\r\nAlso about wmt we'll look into it, thanks for reporting !",
"I think the new url with .edu is also broken:\r\n```\r\nConnectionError: Couldn't reach https://cogcomp.seas.upenn.edu/Data/QA/QC/train_5500.label\r\n```\r\nCant download the dataset anymore.",
"Hi ! The URL seems to work fine on my side, can you try again ?",
"Forgot to update, i wrote an email to the webmaster of seas.upenn.edu because i couldnt reach the url on any machine. This was the answer:\r\n```\r\nThank you for your report. The server was offline for maintenance and is now available again.\r\n```\r\nGuess all back to normal now 🙂 "
] | 1,604,425,522,000 | 1,637,322,472,000 | null | NONE | null | ## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried both on Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.`
* Opening `http://cogcomp.org/Data/QA/QC/train_5500.label` in a browser works, but opens a different address
* Increasing max_redirects to 100 doesn't help
Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant.
* datasets.__version__ == '1.1.2'
* requests.__version__ == '2.24.0'
## Error trace
```
>>> import datasets
>>> datasets.__version__
'1.1.2'
>>> dataset = load_dataset("trec", split="train")
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators
dl_files = dl_manager.download_and_extract(_URLs)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested
return function(data_struct)
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
```
I would appreciate some suggestions here. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/798/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/798/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/797/comments | https://api.github.com/repos/huggingface/datasets/issues/797/events | https://github.com/huggingface/datasets/issues/797 | 735,420,332 | MDU6SXNzdWU3MzU0MjAzMzI= | 797 | Token classification labels are strings and we don't have the list of labels | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Indeed. Pinging @stefan-it here if he want to give an expert opinion :)",
"Related is https://github.com/huggingface/datasets/pull/636",
"Should definitely be a ClassLabel 👍 "
] | 1,604,417,610,000 | 1,605,017,231,000 | null | MEMBER | null | Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the likes are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some types that gives easy access to the underlying labels.
The main problem for preprocessing those datasets is that the list of possible labels is not stored inside the `Dataset` object which makes converting the labels to IDs quite difficult (you either have to know the list of labels in advance or run a full pass through the dataset to get the list of labels, the `unique` method being useless with the type `Sequence[str]`). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/797/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/796/comments | https://api.github.com/repos/huggingface/datasets/issues/796/events | https://github.com/huggingface/datasets/issues/796 | 735,414,881 | MDU6SXNzdWU3MzU0MTQ4ODE= | 796 | Seq2Seq Metrics QOL: Bleu, Rouge | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Thanks for letting us know your experience :) \r\nWe should at least improve the error messages indeed",
"So what is the right way to add a batch to compute BLEU?",
"prediction = [['Hey', 'how', 'are', 'you', '?']] \r\nreference=[['Hey', 'how', 'are', 'you', '?']]\r\nbleu.compute(predictions=prediction,references=reference)\r\n\r\nalso tried this kind of things lol\r\nI definitely need help too",
"Hi !\r\n\r\nAs described in the documentation for `bleu`:\r\n```\r\nArgs:\r\n predictions: list of translations to score.\r\n Each translation should be tokenized into a list of tokens.\r\n references: list of lists of references for each translation.\r\n Each reference should be tokenized into a list of tokens.\r\n```\r\n\r\nTherefore you can use this metric this way:\r\n```python\r\nfrom datasets import load_metric\r\n\r\npredictions = [\r\n [\"hello\", \"there\", \"general\", \"kenobi\"], # tokenized prediction of the first sample\r\n [\"foo\", \"bar\", \"foobar\"] # tokenized prediction of the second sample\r\n]\r\nreferences = [\r\n [[\"hello\", \"there\", \"general\", \"kenobi\"], [\"hello\", \"there\", \"!\"]], # tokenized references for the first sample (2 references)\r\n [[\"foo\", \"bar\", \"foobar\"]] # tokenized references for the second sample (1 reference)\r\n]\r\n\r\nbleu = load_metric(\"bleu\")\r\nbleu.compute(predictions=predictions, references=references)\r\n# Or you can also add batches before calling compute()\r\n# bleu.add_batch(predictions=predictions, references=references)\r\n# bleu.compute()\r\n```\r\n\r\nHope this helps :)"
] | 1,604,417,189,000 | 1,611,843,228,000 | null | CONTRIBUTOR | null | Putting all my QOL issues here, idt I will have time to propose fixes, but I didn't want these to be lost, in case they are useful. I tried using `rouge` and `bleu` for the first time and wrote down everything I didn't immediately understand:
+ Bleu expects tokenization, can I just kwarg it like sacrebleu?
+ different signatures, means that I would have had to add a lot of conditionals + pre and post processing: if I were going to replace the `calculate_rouge` and `calculate_bleu` functions here: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L61
#### What I tried
Rouge experience:
```python
rouge = load_metric('rouge')
rouge.add_batch(['hi im sam'], ['im daniel']) # fails
rouge.add_batch(predictions=['hi im sam'], references=['im daniel']) # works
rouge.compute() # huge messy output, but reasonable. Not worth integrating b/c don't want to rewrite all the postprocessing.
```
BLEU experience:
```python
bleu = load_metric('bleu')
bleu.add_batch(predictions=['hi im sam'], references=['im daniel'])
bleu.add_batch(predictions=[['hi im sam']], references=[['im daniel']])
bleu.add_batch(predictions=[['hi im sam']], references=[['im daniel']])
```
All of these raise `ValueError: Got a string but expected a list instead: 'im daniel'`
#### Doc Typo
This says `dataset=load_metric(...)` which seems wrong, will cause `NameError`
![image](https://user-images.githubusercontent.com/6045025/98004483-ff0d0580-1dbd-11eb-9f35-6f35904611bb.png)
cc @lhoestq, feel free to ignore. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/796/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/796/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/795/comments | https://api.github.com/repos/huggingface/datasets/issues/795/events | https://github.com/huggingface/datasets/issues/795 | 735,198,265 | MDU6SXNzdWU3MzUxOTgyNjU= | 795 | Descriptions of raw and processed versions of wikitext are inverted | {
"login": "fraboniface",
"id": 16835358,
"node_id": "MDQ6VXNlcjE2ODM1MzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/16835358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fraboniface",
"html_url": "https://github.com/fraboniface",
"followers_url": "https://api.github.com/users/fraboniface/followers",
"following_url": "https://api.github.com/users/fraboniface/following{/other_user}",
"gists_url": "https://api.github.com/users/fraboniface/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fraboniface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fraboniface/subscriptions",
"organizations_url": "https://api.github.com/users/fraboniface/orgs",
"repos_url": "https://api.github.com/users/fraboniface/repos",
"events_url": "https://api.github.com/users/fraboniface/events{/privacy}",
"received_events_url": "https://api.github.com/users/fraboniface/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Yes indeed ! Thanks for reporting"
] | 1,604,399,091,000 | 1,605,017,145,000 | null | NONE | null | Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselves.
Also it would be nice if those descriptions appeared in the dataset explorer.
https://github.com/huggingface/datasets/blob/87bd0864845ea0a1dd7167918dc5f341bf807bd3/datasets/wikitext/wikitext.py#L52 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/795/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/794/comments | https://api.github.com/repos/huggingface/datasets/issues/794/events | https://github.com/huggingface/datasets/issues/794 | 735,158,725 | MDU6SXNzdWU3MzUxNTg3MjU= | 794 | self.options cannot be converted to a Python object for pickling | {
"login": "hzqjyyx",
"id": 9635713,
"node_id": "MDQ6VXNlcjk2MzU3MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9635713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hzqjyyx",
"html_url": "https://github.com/hzqjyyx",
"followers_url": "https://api.github.com/users/hzqjyyx/followers",
"following_url": "https://api.github.com/users/hzqjyyx/following{/other_user}",
"gists_url": "https://api.github.com/users/hzqjyyx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hzqjyyx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hzqjyyx/subscriptions",
"organizations_url": "https://api.github.com/users/hzqjyyx/orgs",
"repos_url": "https://api.github.com/users/hzqjyyx/repos",
"events_url": "https://api.github.com/users/hzqjyyx/events{/privacy}",
"received_events_url": "https://api.github.com/users/hzqjyyx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Thanks for reporting that's a bug on master indeed.\r\nWe'll fix that soon"
] | 1,604,395,654,000 | 1,605,807,338,000 | 1,605,807,338,000 | NONE | null | Hi,
Currently I am trying to load a CSV file with customized read_options, and the latest master seems broken if we pass the ReadOptions object.
Here is a code snippet
```python
from datasets import load_dataset
from pyarrow.csv import ReadOptions
load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
```
error is `self.options cannot be converted to a Python object for pickling`
Would you mind to take a look? Thanks!
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-28-ab83fec2ded4> in <module>
----> 1 load_dataset("csv", data_files=["out.csv"], read_options=ReadOptions(block_size=16*1024*1024))
/tmp/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
602 hash=hash,
603 features=features,
--> 604 **config_kwargs,
605 )
606
/tmp/datasets/src/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
162 name,
163 custom_features=features,
--> 164 **config_kwargs,
165 )
166
/tmp/datasets/src/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
281 )
282 else:
--> 283 suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
284
285 if builder_config.data_files is not None:
/tmp/datasets/src/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/tmp/datasets/src/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/tmp/datasets/src/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/tmp/datasets/src/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
~/.local/lib/python3.6/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/usr/lib/python3.6/pickle.py in dump(self, obj)
407 if self.proto >= 4:
408 self.framer.start_framing()
--> 409 self.save(obj)
410 self.write(STOP)
411 self.framer.end_framing()
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
474 f = self.dispatch.get(t)
475 if f is not None:
--> 476 f(self, obj) # Call unbound method with explicit self
477 return
478
~/.local/lib/python3.6/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/usr/lib/python3.6/pickle.py in save_dict(self, obj)
819
820 self.memoize(obj)
--> 821 self._batch_setitems(obj.items())
822
823 dispatch[dict] = save_dict
/usr/lib/python3.6/pickle.py in _batch_setitems(self, items)
850 k, v = tmp[0]
851 save(k)
--> 852 save(v)
853 write(SETITEM)
854 # else tmp is empty, and we're done
/usr/lib/python3.6/pickle.py in save(self, obj, save_persistent_id)
494 reduce = getattr(obj, "__reduce_ex__", None)
495 if reduce is not None:
--> 496 rv = reduce(self.proto)
497 else:
498 reduce = getattr(obj, "__reduce__", None)
~/.local/lib/python3.6/site-packages/pyarrow/_csv.cpython-36m-x86_64-linux-gnu.so in pyarrow._csv.ReadOptions.__reduce_cython__()
TypeError: self.options cannot be converted to a Python object for pickling
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/794/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/793/comments | https://api.github.com/repos/huggingface/datasets/issues/793/events | https://github.com/huggingface/datasets/pull/793 | 735,105,907 | MDExOlB1bGxSZXF1ZXN0NTE0NTU2NzY5 | 793 | [Datasets] fix discofuse links | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,390,625,000 | 1,604,391,401,000 | 1,604,391,400,000 | MEMBER | null | The discofuse links were changed: https://github.com/google-research-datasets/discofuse/commit/d27641016eb5b3eb2af03c7415cfbb2cbebe8558.
The old links are broken
I changed the links and created the new dataset_infos.json.
Pinging @thomwolf @lhoestq for notification. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/793/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/793",
"html_url": "https://github.com/huggingface/datasets/pull/793",
"diff_url": "https://github.com/huggingface/datasets/pull/793.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/793.patch",
"merged_at": 1604391400000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/792/comments | https://api.github.com/repos/huggingface/datasets/issues/792/events | https://github.com/huggingface/datasets/issues/792 | 734,693,652 | MDU6SXNzdWU3MzQ2OTM2NTI= | 792 | KILT dataset: empty string in triviaqa input field | {
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md\r\n(Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :))"
] | 1,604,338,434,000 | 1,604,572,499,000 | 1,604,572,499,000 | CONTRIBUTOR | null | # What happened
Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark)
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1)
# How to reproduce
```py
In [1]: from datasets import load_dataset
In [4]: dataset = load_dataset("kilt_tasks")
# everything works fine, removed output for a better readibility
Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data.
# empty string in triviaqa input field
In [36]: dataset['train_triviaqa'][0]
Out[36]:
{'id': 'dpql_5197',
'input': '',
'meta': {'left_context': '',
'mention': '',
'obj_surface': {'text': []},
'partial_evidence': {'end_paragraph_id': [],
'meta': [],
'section': [],
'start_paragraph_id': [],
'title': [],
'wikipedia_id': []},
'right_context': '',
'sub_surface': {'text': []},
'subj_aliases': {'text': []},
'template_questions': {'text': []}},
'output': {'answer': ['five £', '5 £', '£5', 'five £'],
'meta': [],
'provenance': [{'bleu_score': [1.0],
'end_character': [248],
'end_paragraph_id': [30],
'meta': [],
'section': ['Section::::Question of legal tender.\n'],
'start_character': [246],
'start_paragraph_id': [30],
'title': ['Banknotes of the pound sterling'],
'wikipedia_id': ['270680']}]}}
In [35]: dataset['train_triviaqa']['input'][:10]
Out[35]: ['', '', '', '', '', '', '', '', '', '']
# same with test set
In [37]: dataset['test_triviaqa']['input'][:10]
Out[37]: ['', '', '', '', '', '', '', '', '', '']
# works fine with natural questions
In [34]: dataset['train_nq']['input'][:10]
Out[34]:
['how i.met your mother who is the mother',
'who had the most wins in the nfl',
'who played mantis guardians of the galaxy 2',
'what channel is the premier league on in france',
"god's not dead a light in the darkness release date",
'who is the current president of un general assembly',
'when do the eclipse supposed to take place',
'what is the name of the sea surrounding dubai',
'who holds the nba record for most points in a career',
'when did the new maze runner movie come out']
```
Stay safe :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/792/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/791/comments | https://api.github.com/repos/huggingface/datasets/issues/791/events | https://github.com/huggingface/datasets/pull/791 | 734,656,518 | MDExOlB1bGxSZXF1ZXN0NTE0MTg0MzU5 | 791 | add amazon reviews | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"@patrickvonplaten Yeah this is adapted from tfds so a lot is just how they wrote the code. Addressed your comments and also simplified the weird `AmazonUSReviewsConfig` definition. Will merge once tests pass.",
"Thanks for checking this one :) \r\nLooks good to me \r\n\r\nJust one question : is there a particular reason to use `names=[\"Y\", \"N\"]` in this order ? Usually the positive label is at index 1 and the negative one at index 0 for binary classification",
"> is there a particular reason to use `names=[\"Y\", \"N\"]` in this order ? Usually the positive label is at index 1 and the negative one at index 0 for binary classification\r\n\r\nHmm that's a good point. I'll submit a quick fix.\r\n\r\n"
] | 1,604,335,377,000 | 1,604,434,506,000 | 1,604,421,837,000 | CONTRIBUTOR | null | Adds the Amazon US Reviews dataset as requested in #353. Converted from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/amazon_us_reviews). cc @clmnt @sshleifer | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/791/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/791/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/791",
"html_url": "https://github.com/huggingface/datasets/pull/791",
"diff_url": "https://github.com/huggingface/datasets/pull/791.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/791.patch",
"merged_at": 1604421837000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/790/comments | https://api.github.com/repos/huggingface/datasets/issues/790/events | https://github.com/huggingface/datasets/issues/790 | 734,470,197 | MDU6SXNzdWU3MzQ0NzAxOTc= | 790 | Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist | {
"login": "shawwn",
"id": 59632,
"node_id": "MDQ6VXNlcjU5NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/59632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shawwn",
"html_url": "https://github.com/shawwn",
"followers_url": "https://api.github.com/users/shawwn/followers",
"following_url": "https://api.github.com/users/shawwn/following{/other_user}",
"gists_url": "https://api.github.com/users/shawwn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shawwn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shawwn/subscriptions",
"organizations_url": "https://api.github.com/users/shawwn/orgs",
"repos_url": "https://api.github.com/users/shawwn/repos",
"events_url": "https://api.github.com/users/shawwn/events{/privacy}",
"received_events_url": "https://api.github.com/users/shawwn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macos. It should work now",
"Closing this one.\r\nFeel free to re-open if you still have issues"
] | 1,604,320,595,000 | 1,605,017,102,000 | 1,605,017,102,000 | NONE | null | I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.
```sh
git clone https://github.com/huggingface/datasets
cd datasets
virtualenv venv -p python3 --system-site-packages
source venv/bin/activate
pip install -e ".[dev]"
```
![image](https://user-images.githubusercontent.com/59632/97868518-72871800-1cd5-11eb-9cd2-37d4e9d20b39.png)
![image](https://user-images.githubusercontent.com/59632/97868592-977b8b00-1cd5-11eb-8f3c-0c409616149c.png)
Python 3.7.7
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/790/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/789/comments | https://api.github.com/repos/huggingface/datasets/issues/789/events | https://github.com/huggingface/datasets/pull/789 | 734,237,839 | MDExOlB1bGxSZXF1ZXN0NTEzODM1MzE0 | 789 | dataset(ncslgr): add initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi @AmitMY, sorry for leaving you hanging for a minute :) \r\n\r\nWe've developed a new pipeline for adding datasets with a few extra steps, including adding a dataset card. You can find the full process [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)\r\n\r\nWould you be up for adding the tags and description in the README.md so we can merge this cool dataset?",
"@lhoestq should be ready for another review :) ",
"Awesome thank you !\r\n\r\nIt looks like the PR now includes changes from other PR that were previously merged. \r\nFeel free to create another branch and another PR so that we can have a clean diff.\r\n",
"Closing for #958 "
] | 1,604,299,810,000 | 1,606,830,097,000 | 1,606,830,096,000 | CONTRIBUTOR | null | Its a small dataset, but its heavily annotated
https://www.bu.edu/asllrp/ncslgr.html
![image](https://user-images.githubusercontent.com/5757359/97838609-3c539380-1ce9-11eb-885b-a15d4c91ea49.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/789/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/789",
"html_url": "https://github.com/huggingface/datasets/pull/789",
"diff_url": "https://github.com/huggingface/datasets/pull/789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/789.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/788/comments | https://api.github.com/repos/huggingface/datasets/issues/788/events | https://github.com/huggingface/datasets/issues/788 | 734,136,124 | MDU6SXNzdWU3MzQxMzYxMjQ= | 788 | failed to reuse cache | {
"login": "WangHexie",
"id": 31768052,
"node_id": "MDQ6VXNlcjMxNzY4MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31768052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WangHexie",
"html_url": "https://github.com/WangHexie",
"followers_url": "https://api.github.com/users/WangHexie/followers",
"following_url": "https://api.github.com/users/WangHexie/following{/other_user}",
"gists_url": "https://api.github.com/users/WangHexie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WangHexie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WangHexie/subscriptions",
"organizations_url": "https://api.github.com/users/WangHexie/orgs",
"repos_url": "https://api.github.com/users/WangHexie/repos",
"events_url": "https://api.github.com/users/WangHexie/events{/privacy}",
"received_events_url": "https://api.github.com/users/WangHexie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,284,956,000 | 1,604,319,975,000 | 1,604,319,975,000 | NONE | null | I packed the `load_dataset ` in a function of class, and cached data in a directory. But when I import the class and use the function, the data still have to be downloaded again. The information (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to ******) which logged to terminal shows the path is right to the cache directory, but the files still have to be downloaded again. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/788/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/787/comments | https://api.github.com/repos/huggingface/datasets/issues/787/events | https://github.com/huggingface/datasets/pull/787 | 734,070,162 | MDExOlB1bGxSZXF1ZXN0NTEzNjk5MTQz | 787 | Adding nli_tr dataset | {
"login": "e-budur",
"id": 2246791,
"node_id": "MDQ6VXNlcjIyNDY3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-budur",
"html_url": "https://github.com/e-budur",
"followers_url": "https://api.github.com/users/e-budur/followers",
"following_url": "https://api.github.com/users/e-budur/following{/other_user}",
"gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-budur/subscriptions",
"organizations_url": "https://api.github.com/users/e-budur/orgs",
"repos_url": "https://api.github.com/users/e-budur/repos",
"events_url": "https://api.github.com/users/e-budur/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-budur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thank you @lhoestq for the time you take to review our pull request. We appreciate your help.\r\n\r\nWe've made the changes you described. Hope that it is ready for being merged. Please let me know if you have any additional requests for revisions. "
] | 1,604,267,384,000 | 1,605,207,962,000 | 1,605,207,962,000 | CONTRIBUTOR | null | Hello,
In this pull request, we have implemented the necessary interface to add our recent dataset [NLI-TR](https://github.com/boun-tabi/NLI-TR). The datasets will be presented in a full paper at EMNLP 2020 this month. [[arXiv link]](https://arxiv.org/pdf/2004.14963.pdf)
The dataset is the neural machine translation of SNLI and MultiNLI datasets into Turkish. So, we followed a similar format with the original datasets hosted in the HuggingFace datasets hub.
Our dataset is designed to be accessed as follows by following the interface of the GLUE dataset that provides multiple datasets in a single interface over the HuggingFace datasets hub.
```
from datasets import load_dataset
multinli_tr = load_dataset("nli_tr", "multinli_tr")
snli_tr = load_dataset("nli_tr", "snli_tr")
```
Thanks for your help in reviewing our pull request. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/787/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/787",
"html_url": "https://github.com/huggingface/datasets/pull/787",
"diff_url": "https://github.com/huggingface/datasets/pull/787.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/787.patch",
"merged_at": 1605207962000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/786/comments | https://api.github.com/repos/huggingface/datasets/issues/786/events | https://github.com/huggingface/datasets/issues/786 | 733,761,717 | MDU6SXNzdWU3MzM3NjE3MTc= | 786 | feat(dataset): multiprocessing _generate_examples | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I agree that would be cool :)\r\nRight now the only distributed dataset builder is based on Apache Beam so you can use distributed processing frameworks like Dataflow, Spark, Flink etc. to build your dataset but it's not really well suited for single-worker parallel processing afaik"
] | 1,604,163,136,000 | 1,604,911,118,000 | null | CONTRIBUTOR | null | forking this out of #741, this issue is only regarding multiprocessing
I'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool.
In my use case, I would instead of:
```python
for datum in data:
yield self.load_datum(datum)
```
do:
```python
return pool.map(self.load_datum, data)
```
As an example, the dataset in question has **only** 7000 rows and takes 10 seconds on average to load each row, so it takes almost 20 hours to load the entire dataset.
If this was a larger dataset (and many such datasets exist), it would take multiple days to complete.
Using multiprocessing, for example, 40 cores, could speed it up dramatically. For this dataset, hopefully to fully load in under an hour. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/786/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/785/comments | https://api.github.com/repos/huggingface/datasets/issues/785/events | https://github.com/huggingface/datasets/pull/785 | 733,719,419 | MDExOlB1bGxSZXF1ZXN0NTEzNDMyNTM1 | 785 | feat(aslg_pc12): add dev and test data splits | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! I'm not sure we should make this split decision arbitrarily on our side. Users can split it afterwards to whatever they want using `dataset.train_test_split` for example.\r\nMoreover it looks like there's already papers that use this dataset and propose their own splits ([here](http://xanthippi.ceid.upatras.gr/HealthSign/resources/Publications/sitis_paper_25_10.pdf) 80-20) \r\nWhat do you think ?",
"I was not aware of the `train_test_split` method, thanks!\r\nSoe ven though it contributes to reproducibility, no need to do this split then."
] | 1,604,150,738,000 | 1,605,022,170,000 | 1,605,022,170,000 | CONTRIBUTOR | null | For reproducibility sake, it's best if there are defined dev and test splits.
The original paper author did not define splits for the entire dataset, nor for the sample loaded via this library, so I decided to define:
- 5/7th for train
- 1/7th for dev
- 1/7th for test
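For reference, a rough sketch of how equivalent splits could be produced downstream with `dataset.train_test_split` (ratios are approximate and the seed is arbitrary):
```python
from datasets import load_dataset

ds = load_dataset("aslg_pc12", split="train")

# 5/7 train vs. 2/7 held out, then split the held-out part in half
first = ds.train_test_split(test_size=2 / 7, seed=42)
second = first["test"].train_test_split(test_size=0.5, seed=42)

train, dev, test = first["train"], second["train"], second["test"]
print(len(train), len(dev), len(test))
```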
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/785/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/785",
"html_url": "https://github.com/huggingface/datasets/pull/785",
"diff_url": "https://github.com/huggingface/datasets/pull/785.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/785.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/784/comments | https://api.github.com/repos/huggingface/datasets/issues/784/events | https://github.com/huggingface/datasets/issues/784 | 733,700,463 | MDU6SXNzdWU3MzM3MDA0NjM= | 784 | Issue with downloading Wikipedia data for low resource language | {
"login": "SamuelCahyawijaya",
"id": 2826602,
"node_id": "MDQ6VXNlcjI4MjY2MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelCahyawijaya",
"html_url": "https://github.com/SamuelCahyawijaya",
"followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers",
"following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs",
"repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos",
"events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hello, maybe you could ty to use another date for the wikipedia dump (see the available [dates](https://dumps.wikimedia.org/jvwiki) here for `jv`) ?",
"@lhoestq\r\n\r\nI've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya.\r\n\r\nAlso, using another date (e.g. `load_dataset('wikipedia', '20201120.zh', beam_runner='DirectRunner')`) will give the following error message.\r\n\r\n```\r\nValueError: BuilderConfig 20201120.zh not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', 
'20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']\r\n```\r\n\r\nI am pretty sure that `https://dumps.wikimedia.org/enwiki/20201120/dumpstatus.json` exists.",
"Thanks for reporting I created a PR to make the custom config work (language=\"zh\", date=\"20201120\").",
"@lhoestq Thanks!"
] | 1,604,144,400,000 | 1,624,584,931,000 | 1,606,318,933,000 | NONE | null | Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet
```
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner')
```
And I get the following error for these two languages:
Javanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json
```
Sundanese
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json
```
I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid.
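If a more recent dump exists for these languages, a possible workaround (assuming a datasets version that supports a custom language/date config, as discussed in the comments above; this is not guaranteed to work on older releases) might be:
```python
from datasets import load_dataset

# hypothetical: point the builder at a dump date that is still listed
# under https://dumps.wikimedia.org/jvwiki/
jv_wiki = load_dataset(
    "wikipedia",
    language="jv",
    date="20201120",
    beam_runner="DirectRunner",
)
```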
Any suggestions on how to handle this issue? Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/784/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/784/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/783/comments | https://api.github.com/repos/huggingface/datasets/issues/783/events | https://github.com/huggingface/datasets/pull/783 | 733,536,254 | MDExOlB1bGxSZXF1ZXN0NTEzMzAwODUz | 783 | updated links to v1.3 of quail, fixed the description | {
"login": "annargrs",
"id": 1450322,
"node_id": "MDQ6VXNlcjE0NTAzMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1450322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/annargrs",
"html_url": "https://github.com/annargrs",
"followers_url": "https://api.github.com/users/annargrs/followers",
"following_url": "https://api.github.com/users/annargrs/following{/other_user}",
"gists_url": "https://api.github.com/users/annargrs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/annargrs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/annargrs/subscriptions",
"organizations_url": "https://api.github.com/users/annargrs/orgs",
"repos_url": "https://api.github.com/users/annargrs/repos",
"events_url": "https://api.github.com/users/annargrs/events{/privacy}",
"received_events_url": "https://api.github.com/users/annargrs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"we're using quail 1.3 now thanks.\r\nclosing this one"
] | 1,604,094,453,000 | 1,606,691,119,000 | 1,606,691,118,000 | NONE | null | updated links to v1.3 of quail, fixed the description | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/783/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/783",
"html_url": "https://github.com/huggingface/datasets/pull/783",
"diff_url": "https://github.com/huggingface/datasets/pull/783.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/783.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/782/comments | https://api.github.com/repos/huggingface/datasets/issues/782/events | https://github.com/huggingface/datasets/pull/782 | 733,316,463 | MDExOlB1bGxSZXF1ZXN0NTEzMTE2MTM0 | 782 | Fix metric deletion when attributes are missing | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,074,570,000 | 1,604,076,473,000 | 1,604,076,472,000 | MEMBER | null | When you call `del` on a metric we want to make sure that the arrow attributes are not already deleted.
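As a general illustration of the pattern (simplified, with hypothetical attribute names; this is not the actual `Metric` implementation):
```python
class MetricLike:
    def __del__(self):
        # __init__ may have failed early or cleanup may already have run,
        # so guard every attribute access before touching it
        if hasattr(self, "filelock") and self.filelock is not None:
            self.filelock.release()
        if hasattr(self, "writer") and self.writer is not None:
            del self.writer
        if hasattr(self, "data") and self.data is not None:
            del self.data
```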
I just added `if hasattr(...)` checks to make sure it doesn't crash. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/782/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/782",
"html_url": "https://github.com/huggingface/datasets/pull/782",
"diff_url": "https://github.com/huggingface/datasets/pull/782.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/782.patch",
"merged_at": 1604076472000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/781/comments | https://api.github.com/repos/huggingface/datasets/issues/781/events | https://github.com/huggingface/datasets/pull/781 | 733,168,609 | MDExOlB1bGxSZXF1ZXN0NTEyOTkyMzQw | 781 | Add XNLI train set | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,604,064,113,000 | 1,604,946,170,000 | 1,604,946,169,000 | MEMBER | null | I added the train set that was built using the translated MNLI.
Now you can load the dataset specifying one language:
```python
from datasets import load_dataset
xnli_en = load_dataset("xnli", "en")
print(xnli_en["train"][0])
# {'hypothesis': 'Product and geography are what make cream skimming work .', 'label': 1, 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography .'}
print(xnli_en["test"][0])
# {'hypothesis': 'I havent spoken to him again.', 'label': 2, 'premise': "Well, I wasn't even thinking about that, but I was so frustrated, and, I ended up talking to him again."}
```
Cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/781/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/781/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/781",
"html_url": "https://github.com/huggingface/datasets/pull/781",
"diff_url": "https://github.com/huggingface/datasets/pull/781.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/781.patch",
"merged_at": 1604946169000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/780/comments | https://api.github.com/repos/huggingface/datasets/issues/780/events | https://github.com/huggingface/datasets/pull/780 | 732,738,647 | MDExOlB1bGxSZXF1ZXN0NTEyNjM0MzI0 | 780 | Add ASNQ dataset | {
"login": "mkserge",
"id": 2992022,
"node_id": "MDQ6VXNlcjI5OTIwMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2992022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkserge",
"html_url": "https://github.com/mkserge",
"followers_url": "https://api.github.com/users/mkserge/followers",
"following_url": "https://api.github.com/users/mkserge/following{/other_user}",
"gists_url": "https://api.github.com/users/mkserge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkserge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkserge/subscriptions",
"organizations_url": "https://api.github.com/users/mkserge/orgs",
"repos_url": "https://api.github.com/users/mkserge/repos",
"events_url": "https://api.github.com/users/mkserge/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkserge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Very nice !\r\nWhat do the `sentence1` and `sentence2` correspond to exactly ?\r\nAlso maybe you could use the `ClassLabel` feature type for the `label` field (see [snli](https://github.com/huggingface/datasets/blob/master/datasets/snli/snli.py) for example)",
"> What do the `sentence1` and `sentence2` correspond to exactly ?\r\n\r\n`sentence1` is a question, and `sentence2` is a candidate answer sentence. The labels are [1, 2, 3, 4] defining a relation between the answer sentence and the question. For example, label 4 means that the answer sentence is inside the _long_answer_ passage AND that the _short_answer_ is within the answer sentence. All the other labels are the negatives with different characteristics. (the short_answer, long_answer terminology is borrowed from Google's NQ dataset)\r\n\r\nShould I label them simply as `question` and `answer`? I was going more with what I saw in the examples/run_glue.py script, but I realize now there is no restriction around this.\r\n\r\n> Also maybe you could use the `ClassLabel` feature type for the `label` field (see [snli](https://github.com/huggingface/datasets/blob/master/datasets/snli/snli.py) for example)\r\n\r\nI am finding it difficult to assign names to each class, but perhaps it's possible. Here's the description of each class from the paper.\r\n\r\n1. Sentences from the document that are in the long answer but do not contain the annotated short answers. It is possible that these sentences might contain the short answer.\r\n2. Sentences from the document that are not in the long answer but contain the short answer string, that is, such occurrence is purely accidental.\r\n3. Sentences from the document that are neither in the long answer nor contain the short answer.\r\n4. Sentences from the document that are in the long answer and do contain the annotated short answers.\r\n\r\nAny ideas?\r\n\r\n",
"Yes it's better to have explicit feature names. Maybe go with question/answer or question/sentence.\r\nI read in the paper that 1,2 and 3 are considered negative and 4 positive.\r\nWe could have a binary classification label `label` (either positive of negative) and then two boolean fields `short_answser_in_sentence` and `sentence_in_long_answer`. What do you think ?",
"> Yes it's better to have explicit feature names. Maybe go with question/answer or question/sentence.\r\n> I read in the paper that 1,2 and 3 are considered negative and 4 positive.\r\n> We could have a binary classification label `label` (either positive of negative) and then two boolean fields `short_answser_in_sentence` and `sentence_in_long_answer`. What do you think ?\r\n\r\nOk, sounds good. I went with `sentence` to keep it consistent with `short_answer_in_sentence` and `sentence_in_long_answer`. \r\n\r\nI changed it to a ClassLabel with pos and neg classes and added the two above as features. Let me know if this is not what you had in mind.\r\n\r\n"
] | 1,604,014,316,000 | 1,605,000,383,000 | 1,605,000,383,000 | CONTRIBUTOR | null | This pull request adds the ASNQ dataset. It is a dataset for answer sentence selection derived from Google Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). The dataset details can be found in the paper at https://arxiv.org/abs/1911.04118
The dataset is authored by Siddhant Garg, Thuy Vu and Alessandro Moschitti.
_Please note that I have no affiliation with the authors._
Repo: https://github.com/alexa/wqa_tanda
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/780/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/780",
"html_url": "https://github.com/huggingface/datasets/pull/780",
"diff_url": "https://github.com/huggingface/datasets/pull/780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/780.patch",
"merged_at": 1605000383000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/779/comments | https://api.github.com/repos/huggingface/datasets/issues/779/events | https://github.com/huggingface/datasets/pull/779 | 732,514,887 | MDExOlB1bGxSZXF1ZXN0NTEyNDQzMjY0 | 779 | Feature/fidelity metrics from emnlp2020 evaluating and characterizing human rationales | {
"login": "rathoreanirudh",
"id": 11327413,
"node_id": "MDQ6VXNlcjExMzI3NDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/11327413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rathoreanirudh",
"html_url": "https://github.com/rathoreanirudh",
"followers_url": "https://api.github.com/users/rathoreanirudh/followers",
"following_url": "https://api.github.com/users/rathoreanirudh/following{/other_user}",
"gists_url": "https://api.github.com/users/rathoreanirudh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rathoreanirudh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rathoreanirudh/subscriptions",
"organizations_url": "https://api.github.com/users/rathoreanirudh/orgs",
"repos_url": "https://api.github.com/users/rathoreanirudh/repos",
"events_url": "https://api.github.com/users/rathoreanirudh/events{/privacy}",
"received_events_url": "https://api.github.com/users/rathoreanirudh/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! This looks interesting, thanks for adding it :) \r\n\r\nFor metrics there should only be two features fields: references and predictions.\r\nBoth of them can be defined as you want using nested structures if you need to.\r\nAlso I'm not sure what goes into references and what goes into predictions, could you give more details please ?\r\nAll the other computations parameters (model etc.) are fine though. Maybe explain a bit more what they're used for",
"> Hi ! This looks interesting, thanks for adding it :)\r\n> \r\n> For metrics there should only be two features fields: references and predictions.\r\n> Both of them can be defined as you want using nested structures if you need to.\r\n> Also I'm not sure what goes into references and what goes into predictions, could you give more details please ?\r\n> All the other computations parameters (model etc.) are fine though. Maybe explain a bit more what they're used for\r\n\r\nThe `predictions` are the predicted labels by a model for a particular input. Do you mean making `prob_y_hat` - the probability of the prediction being the predicted label, `prob_y_hat_alpha` - the probability of the prediction being the predicted label when the input is reduced subject to alpha and the `null_difference` is the difference between the probability of the prediction being the predicted label in full information minus the probability in zero information a part of references? Also, I have added the description for other parameters in kwargs_description. I can expand it if that makes sense?",
"I think every value that is generated by the model (so label, prob_y_hat, prob_y_hat_alpha etc.) should be in `predictions`.\r\nFeel free to add more details in the kwargs_description, this is very useful for the end user.",
"Hi @lhoestq , I have updated the code according to your feedback. Please, let me know if it looks good and can be merged now."
] | 1,603,992,674,000 | 1,605,291,082,000 | null | NONE | null | This metric computes fidelity (Yu et al. 2019, DeYoung et al. 2019) and normalized fidelity (Carton et al. 2020). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/779/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/779",
"html_url": "https://github.com/huggingface/datasets/pull/779",
"diff_url": "https://github.com/huggingface/datasets/pull/779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/779.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/778/comments | https://api.github.com/repos/huggingface/datasets/issues/778/events | https://github.com/huggingface/datasets/issues/778 | 732,449,652 | MDU6SXNzdWU3MzI0NDk2NTI= | 778 | Unexpected behavior when loading cached csv file? | {
"login": "dcfidalgo",
"id": 15979778,
"node_id": "MDQ6VXNlcjE1OTc5Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcfidalgo",
"html_url": "https://github.com/dcfidalgo",
"followers_url": "https://api.github.com/users/dcfidalgo/followers",
"following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}",
"gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions",
"organizations_url": "https://api.github.com/users/dcfidalgo/orgs",
"repos_url": "https://api.github.com/users/dcfidalgo/repos",
"events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcfidalgo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Thanks for reporting.\r\nThe same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770 .\r\nThe fix will be available in the next release :)",
"Thanks for the prompt reply and terribly sorry for the spam! \r\nLooking forward to the new release! "
] | 1,603,987,570,000 | 1,604,006,487,000 | 1,604,006,487,000 | CONTRIBUTOR | null | I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be nice if the information about which `delimiter` or which `column_names` were used influenced the identifier of the cached dataset.
Small snippet to reproduce the behavior:
```python
import datasets
with open("dummy_data.csv", "w") as file:
file.write("test,this;text\n")
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names)
# ["test", "this;text"]
print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names)
# still ["test", "this;text"]
```
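For reference, a workaround that forces the csv to be re-read so the new delimiter takes effect (in the datasets version used here the flag is the `GenerateMode` enum; later versions may accept the plain string or a `DownloadMode` value instead):
```python
import datasets

dataset = datasets.load_dataset(
    "csv",
    data_files="dummy_data.csv",
    split="train",
    delimiter=";",
    download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD,
)
print(dataset.column_names)  # columns are now split on ';' instead of ','
```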
By the way, thanks a lot for this amazing library! :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/778/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/777/comments | https://api.github.com/repos/huggingface/datasets/issues/777/events | https://github.com/huggingface/datasets/pull/777 | 732,376,648 | MDExOlB1bGxSZXF1ZXN0NTEyMzI2ODM2 | 777 | Better error message for uninitialized metric | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,982,570,000 | 1,603,984,706,000 | 1,603,984,704,000 | MEMBER | null | When calling `metric.compute()` without having called `metric.add` or `metric.add_batch` at least once, the error was quite cryptic. I added a better error message
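For context, a short sketch of the intended usage (the metric name is arbitrary); calling `compute()` before any `add`/`add_batch` is the case that now gets a clearer error:
```python
from datasets import load_metric

metric = load_metric("accuracy")

# metric.compute() here, with nothing added yet, used to fail cryptically

metric.add_batch(predictions=[0, 1, 1], references=[0, 1, 0])
print(metric.compute())  # {'accuracy': 0.666...}
```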
Fix #729 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/777/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/777",
"html_url": "https://github.com/huggingface/datasets/pull/777",
"diff_url": "https://github.com/huggingface/datasets/pull/777.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/777.patch",
"merged_at": 1603984703000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/776/comments | https://api.github.com/repos/huggingface/datasets/issues/776/events | https://github.com/huggingface/datasets/pull/776 | 732,343,550 | MDExOlB1bGxSZXF1ZXN0NTEyMjk5NzQx | 776 | Allow custom split names in text dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Awesome! This will make the behaviour much more intuitive for some non-standard code.\r\n\r\nThanks!"
] | 1,603,980,246,000 | 1,604,065,605,000 | 1,604,064,232,000 | MEMBER | null | The `text` dataset used to return only splits like train, test and validation. Other splits were ignored.
Now any split name is allowed.
I did the same for `json`, `pandas` and `csv`
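For example, something like this should now work (the file names are made up):
```python
from datasets import load_dataset

dataset = load_dataset(
    "text",
    data_files={
        "train": "my_train.txt",
        "validation": "my_valid.txt",
        "sanity_check": "my_sanity_check.txt",  # custom, non-standard split name
    },
)
print(dataset["sanity_check"][0])
```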
Fix #735 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/776/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/776/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/776",
"html_url": "https://github.com/huggingface/datasets/pull/776",
"diff_url": "https://github.com/huggingface/datasets/pull/776.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/776.patch",
"merged_at": 1604064232000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/775/comments | https://api.github.com/repos/huggingface/datasets/issues/775/events | https://github.com/huggingface/datasets/pull/775 | 732,287,504 | MDExOlB1bGxSZXF1ZXN0NTEyMjUyODI3 | 775 | Properly delete metrics when a process is killed | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,975,927,000 | 1,603,980,080,000 | 1,603,980,079,000 | MEMBER | null | Tests are flaky when using metrics in distributed setup.
There is because of one test that make sure that using two possibly incompatible metric computation (same exp id) either works or raises the right error.
However if the error is raised, all the processes of the metric are killed, and the open files (arrow + lock files) are not closed correctly. This causes PermissionError on Windows when deleting the temporary directory.
To fix that I added a `finally` clause in the function passed to multiprocess to properly close the files when the process exits. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/775/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/775",
"html_url": "https://github.com/huggingface/datasets/pull/775",
"diff_url": "https://github.com/huggingface/datasets/pull/775.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/775.patch",
"merged_at": 1603980079000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/774/comments | https://api.github.com/repos/huggingface/datasets/issues/774/events | https://github.com/huggingface/datasets/pull/774 | 732,265,741 | MDExOlB1bGxSZXF1ZXN0NTEyMjM0NjA0 | 774 | [ROUGE] Add description to Rouge metric | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,973,972,000 | 1,603,994,150,000 | 1,603,994,148,000 | MEMBER | null | Add information about case sensitivity to ROUGE. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/774/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/774",
"html_url": "https://github.com/huggingface/datasets/pull/774",
"diff_url": "https://github.com/huggingface/datasets/pull/774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/774.patch",
"merged_at": 1603994148000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/773/comments | https://api.github.com/repos/huggingface/datasets/issues/773/events | https://github.com/huggingface/datasets/issues/773 | 731,684,153 | MDU6SXNzdWU3MzE2ODQxNTM= | 773 | Adding CC-100: Monolingual Datasets from Web Crawl Data | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"cc @aconneau ;) ",
"These dataset files are no longer available. https://data.statmt.org/cc-100/ files provided in this link are no longer available. Can anybody fix that issue?\r\n@abhishekkrthakur @yjernite ",
"Hi ! Can you open an issue to report this problem ? This will help keep track of the fix :)",
"Ok"
] | 1,603,909,241,000 | 1,643,203,374,000 | 1,607,941,207,000 | MEMBER | null | ## Adding a Dataset
- **Name:** CC-100: Monolingual Datasets from Web Crawl Data
- **Description:** https://twitter.com/alex_conneau/status/1321507120848625665
- **Paper:** https://arxiv.org/abs/1911.02116
- **Data:** http://data.statmt.org/cc-100/
- **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl.
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/773/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/772/comments | https://api.github.com/repos/huggingface/datasets/issues/772/events | https://github.com/huggingface/datasets/pull/772 | 731,612,430 | MDExOlB1bGxSZXF1ZXN0NTExNjg4ODMx | 772 | Fix metric with cache dir | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,903,393,000 | 1,603,964,084,000 | 1,603,964,083,000 | MEMBER | null | The cache_dir provided by the user was concatenated twice and therefore causing FileNotFound errors.
The tests didn't cover the case of providing `cache_dir=` for metrics because of a small issue (the test was not using the right parameter).
I removed the double concatenation and fixed the tests.
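The affected call is simply a metric loaded with an explicit cache directory, e.g. (metric name and path are arbitrary):
```python
from datasets import load_metric

# this used to raise FileNotFoundError because the cache_dir
# ended up being joined onto itself
metric = load_metric("accuracy", cache_dir="./my_metrics_cache")
metric.add_batch(predictions=[1, 0], references=[1, 1])
print(metric.compute())
```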
Fix #728 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/772/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/772",
"html_url": "https://github.com/huggingface/datasets/pull/772",
"diff_url": "https://github.com/huggingface/datasets/pull/772.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/772.patch",
"merged_at": 1603964082000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/771/comments | https://api.github.com/repos/huggingface/datasets/issues/771/events | https://github.com/huggingface/datasets/issues/771 | 731,482,213 | MDU6SXNzdWU3MzE0ODIyMTM= | 771 | Using `Dataset.map` with `n_proc>1` print multiple progress bars | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Yes it allows to monitor the speed of each process. Currently each process takes care of one shard of the dataset.\r\n\r\nAt one point we can consider using streaming batches to a pool of processes instead of sharding the dataset in `num_proc` parts. At that point it will be easy to use only one progress bar"
] | 1,603,894,407,000 | 1,603,894,697,000 | null | MEMBER | null | When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/771/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/770/comments | https://api.github.com/repos/huggingface/datasets/issues/770/events | https://github.com/huggingface/datasets/pull/770 | 731,445,222 | MDExOlB1bGxSZXF1ZXN0NTExNTQ5MTg1 | 770 | Fix custom builder caching | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,891,944,000 | 1,603,964,163,000 | 1,603,964,161,000 | MEMBER | null | The cache directory of a dataset didn't take into account additional parameters that the user could specify such as `features` or any parameter of the builder configuration kwargs (ex: `encoding` for the `text` dataset).
To fix that, the cache directory name now has a suffix that depends on all of them.
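Conceptually (this is a simplified sketch, not the actual implementation), the suffix can be derived from a hash of every user-supplied parameter, so that different parameters map to different cache directories:
```python
import hashlib
import json

def config_suffix(config_kwargs: dict, features=None) -> str:
    # any change in features or builder kwargs (e.g. encoding="latin-1")
    # changes the hash, and therefore the cache directory name
    payload = {"config_kwargs": config_kwargs, "features": str(features)}
    raw = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hashlib.sha256(raw).hexdigest()[:16]

print(config_suffix({"encoding": "latin-1"}))
print(config_suffix({"encoding": "utf-8"}))  # different suffix
```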
Fix #730
Fix #750 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/770/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/770",
"html_url": "https://github.com/huggingface/datasets/pull/770",
"diff_url": "https://github.com/huggingface/datasets/pull/770.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/770.patch",
"merged_at": 1603964161000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/769/comments | https://api.github.com/repos/huggingface/datasets/issues/769/events | https://github.com/huggingface/datasets/issues/769 | 731,257,104 | MDU6SXNzdWU3MzEyNTcxMDQ= | 769 | How to choose proper download_mode in function load_dataset? | {
"login": "jzq2000",
"id": 48550398,
"node_id": "MDQ6VXNlcjQ4NTUwMzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/48550398?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzq2000",
"html_url": "https://github.com/jzq2000",
"followers_url": "https://api.github.com/users/jzq2000/followers",
"following_url": "https://api.github.com/users/jzq2000/following{/other_user}",
"gists_url": "https://api.github.com/users/jzq2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzq2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzq2000/subscriptions",
"organizations_url": "https://api.github.com/users/jzq2000/orgs",
"repos_url": "https://api.github.com/users/jzq2000/repos",
"events_url": "https://api.github.com/users/jzq2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzq2000/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"`download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD` should work.\r\nThis makes me think we we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing",
"Can we just use `features=...` in `load_dataset` for this @lhoestq?",
"Indeed you should use `features` in this case. \r\n```python\r\nfeatures = Features({'text': Value('string'), 'label': Value('float32')})\r\ndataset = load_dataset('csv', data_files=['sst_test.csv'], features=features)\r\n```\r\nNote that because of an issue with the caching when you change the features (see #750 ) you still need to specify the `FORCE_REDOWNLOAD ` flag. I'm working on a fix for this one"
] | 1,603,876,579,000 | 1,603,881,299,000 | null | NONE | null | Hi, I am a beginner to datasets and I try to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5
```
First I try to use this command to load my csv file.
``` python
dataset=load_dataset('csv', data_files=['sst_test.csv'])
```
It seems to work, but when I try to override the convert_options to convert the 'label' column from int64 to float32 like this:
``` python
import pyarrow as pa
from pyarrow import csv
read_options = csv.ReadOptions(block_size=1024*1024)
parse_options = csv.ParseOptions()
convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()})
dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options,
parse_options=parse_options, convert_options=convert_options)
```
The result stays the same:
```shell
Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210)
```
I think this issue is caused by the parameter "download_mode", which defaults to REUSE_DATASET_IF_EXISTS, because after I delete the cache_dir the conversion works as expected.
Is this a bug? How should I choose the proper download_mode to avoid this issue?
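A minimal sketch of the workaround suggested in the comments of this issue: force a fresh download/preparation so the new convert options are not shadowed by the cached arrow files (the flag spelling is taken from the maintainer's comment and may differ in other versions).
```python
import datasets

dataset = datasets.load_dataset(
    "csv",
    data_files=["sst_test.csv"],
    download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD,
    # plus the read_options / parse_options / convert_options shown above
)
```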
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/769/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/768/comments | https://api.github.com/repos/huggingface/datasets/issues/768/events | https://github.com/huggingface/datasets/issues/768 | 730,908,060 | MDU6SXNzdWU3MzA5MDgwNjA= | 768 | Add a `lazy_map` method to `Dataset` and `DatasetDict` | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"This is cool! I think some aspects to think about and decide in terms of API are:\r\n- do we allow several methods (chained i guess)\r\n- how do we inspect the currently set method(s)\r\n- how do we control/reset them"
] | 1,603,837,983,000 | 1,603,875,493,000 | null | MEMBER | null | The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item, but only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random function and get different outputs at each epoch (like data augmentation or randomly masking a part of a sentence for BERT-like objectives). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/768/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/767/comments | https://api.github.com/repos/huggingface/datasets/issues/767/events | https://github.com/huggingface/datasets/issues/767 | 730,771,610 | MDU6SXNzdWU3MzA3NzE2MTA= | 767 | Add option for named splits when using ds.train_test_split | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Yes definitely we should give more flexibility to control the name of the splits outputted by `train_test_split`.\r\n\r\nRelated is the very interesting feedback from @bramvanroy on how we should improve this method: https://discuss.huggingface.co/t/how-to-split-main-dataset-into-train-dev-test-as-datasetdict/1090/5\r\n\r\nAnd in particular that it should advantageously be able to split in 3 splits as well instead of just 2 like we copied from sklearn."
] | 1,603,828,784,000 | 1,605,017,121,000 | null | CONTRIBUTOR | null | ### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kinda useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep.
### Workaround
this is my hack for dealing with this, for now :slightly_smiling_face:
```python
from datasets import load_dataset
ds = load_dataset('imdb')
ds['train'], ds['validation'] = ds['train'].train_test_split(.1).values()
```
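A slightly more explicit variant of the same hack (a sketch, not a library feature): build a new `DatasetDict` so the generated split is named `validation` explicitly and the real `test` split is kept untouched (only the splits listed here are kept).
```python
from datasets import load_dataset, DatasetDict

ds = load_dataset('imdb')
split = ds['train'].train_test_split(test_size=0.1)
ds = DatasetDict(
    train=split['train'],
    validation=split['test'],  # the generated "test" part becomes the validation split
    test=ds['test'],           # the original test split stays as-is
)
```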
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/767/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/767/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/766/comments | https://api.github.com/repos/huggingface/datasets/issues/766/events | https://github.com/huggingface/datasets/issues/766 | 730,669,596 | MDU6SXNzdWU3MzA2Njk1OTY= | 766 | [GEM] add DART data-to-text generation dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Is this a duplicate of #924 ?",
"Yup, closing! Haven't been keeping track of the solved issues during the sprint."
] | 1,603,820,044,000 | 1,607,002,638,000 | 1,607,002,638,000 | MEMBER | null | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** the dataset will likely be included in the GEM benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/766/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/765/comments | https://api.github.com/repos/huggingface/datasets/issues/765/events | https://github.com/huggingface/datasets/issues/765 | 730,668,332 | MDU6SXNzdWU3MzA2NjgzMzI= | 765 | [GEM] Add DART data-to-text generation dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,819,943,000 | 1,603,820,061,000 | 1,603,820,061,000 | MEMBER | null | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **Paper:** https://arxiv.org/abs/2007.02871v1
- **Data:** https://github.com/Yale-LILY/dart
- **Motivation:** It will likely be included in the GEM generation evaluation benchmark
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/765/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/764/comments | https://api.github.com/repos/huggingface/datasets/issues/764/events | https://github.com/huggingface/datasets/pull/764 | 730,617,828 | MDExOlB1bGxSZXF1ZXN0NTEwODkyMTk2 | 764 | Adding Issue Template for Dataset Requests | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,816,628,000 | 1,603,819,526,000 | 1,603,819,525,000 | MEMBER | null | adding .github/ISSUE_TEMPLATE/add-dataset.md | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/764/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/764",
"html_url": "https://github.com/huggingface/datasets/pull/764",
"diff_url": "https://github.com/huggingface/datasets/pull/764.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/764.patch",
"merged_at": 1603819525000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/763/comments | https://api.github.com/repos/huggingface/datasets/issues/763/events | https://github.com/huggingface/datasets/pull/763 | 730,593,631 | MDExOlB1bGxSZXF1ZXN0NTEwODcyMDYx | 763 | Fixed errors in bertscore related to custom baseline | {
"login": "juanjucm",
"id": 36761132,
"node_id": "MDQ6VXNlcjM2NzYxMTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/36761132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juanjucm",
"html_url": "https://github.com/juanjucm",
"followers_url": "https://api.github.com/users/juanjucm/followers",
"following_url": "https://api.github.com/users/juanjucm/following{/other_user}",
"gists_url": "https://api.github.com/users/juanjucm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juanjucm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juanjucm/subscriptions",
"organizations_url": "https://api.github.com/users/juanjucm/orgs",
"repos_url": "https://api.github.com/users/juanjucm/repos",
"events_url": "https://api.github.com/users/juanjucm/events{/privacy}",
"received_events_url": "https://api.github.com/users/juanjucm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,814,915,000 | 1,603,907,965,000 | 1,603,907,965,000 | CONTRIBUTOR | null | [bertscore version 0.3.6 ](https://github.com/Tiiiger/bert_score) added support for custom baseline files. This update added extra argument `baseline_path` to BERTScorer class as well as some extra boolean parameters `use_custom_baseline` in functions like `get_hash(model, num_layers, idf, rescale_with_baseline, use_custom_baseline)`.
This PR fixes those matching errors in the bertscore metric implementation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/763/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/763",
"html_url": "https://github.com/huggingface/datasets/pull/763",
"diff_url": "https://github.com/huggingface/datasets/pull/763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/763.patch",
"merged_at": 1603907965000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/762/comments | https://api.github.com/repos/huggingface/datasets/issues/762/events | https://github.com/huggingface/datasets/issues/762 | 730,586,972 | MDU6SXNzdWU3MzA1ODY5NzI= | 762 | [GEM] Add Czech Restaurant data-to-text generation dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,814,447,000 | 1,607,002,664,000 | 1,607,002,664,000 | MEMBER | null | - Paper: https://www.aclweb.org/anthology/W19-8670.pdf
- Data: https://github.com/UFAL-DSG/cs_restaurant_dataset
- The dataset will likely be part of the GEM benchmark | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/762/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/761/comments | https://api.github.com/repos/huggingface/datasets/issues/761/events | https://github.com/huggingface/datasets/issues/761 | 729,898,867 | MDU6SXNzdWU3Mjk4OTg4Njc= | 761 | Downloaded datasets are not usable offline | {
"login": "ghazi-f",
"id": 25091538,
"node_id": "MDQ6VXNlcjI1MDkxNTM4",
"avatar_url": "https://avatars.githubusercontent.com/u/25091538?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghazi-f",
"html_url": "https://github.com/ghazi-f",
"followers_url": "https://api.github.com/users/ghazi-f/followers",
"following_url": "https://api.github.com/users/ghazi-f/following{/other_user}",
"gists_url": "https://api.github.com/users/ghazi-f/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghazi-f/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghazi-f/subscriptions",
"organizations_url": "https://api.github.com/users/ghazi-f/orgs",
"repos_url": "https://api.github.com/users/ghazi-f/repos",
"events_url": "https://api.github.com/users/ghazi-f/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghazi-f/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Yes currently you need an internet connection because the lib tries to check for the etag of the dataset script online to see if you don't have it locally already.\r\n\r\nIf we add a way to store the etag/hash locally after the first download, it would allow users to first download the dataset with an internet connection, and still have it working without an internet connection.\r\n\r\nI'll let you know when we add this feature."
] | 1,603,745,686,000 | 1,603,807,469,000 | null | CONTRIBUTOR | null | I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library trying to reach the online dataset.
Is this the intended behavior ?
(Sorry, I wrote the first version of this issue while still on nlp 0.3.0). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/761/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/760/comments | https://api.github.com/repos/huggingface/datasets/issues/760/events | https://github.com/huggingface/datasets/issues/760 | 729,637,917 | MDU6SXNzdWU3Mjk2Mzc5MTc= | 760 | Add meta-data to the HANS dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,603,724,213,000 | 1,607,002,714,000 | 1,607,002,714,000 | MEMBER | null | The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/760/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/759/comments | https://api.github.com/repos/huggingface/datasets/issues/759/events | https://github.com/huggingface/datasets/issues/759 | 729,046,916 | MDU6SXNzdWU3MjkwNDY5MTY= | 759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | {
"login": "AI678",
"id": 63541083,
"node_id": "MDQ6VXNlcjYzNTQxMDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI678",
"html_url": "https://github.com/AI678",
"followers_url": "https://api.github.com/users/AI678/followers",
"following_url": "https://api.github.com/users/AI678/following{/other_user}",
"gists_url": "https://api.github.com/users/AI678/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI678/subscriptions",
"organizations_url": "https://api.github.com/users/AI678/orgs",
"repos_url": "https://api.github.com/users/AI678/repos",
"events_url": "https://api.github.com/users/AI678/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI678/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Are you running the script on a machine with an internet connection ?",
"Yes , I can browse the url through Google Chrome.",
"Does this HEAD request return 200 on your machine ?\r\n```python\r\nimport requests \r\nrequests.head(\"https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py\")\r\n```\r\n\r\nIf it returns 200, could you try again to load the dataset ?",
"Thank you very much for your response.\r\nWhen I run \r\n``` \r\nimport requests \r\nrequests.head(\"https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py\")\r\n```\r\nIt returns 200.\r\n\r\nAnd I try again to load the dataset. I got the following errors again. \r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\", line 608, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 475, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"C:\\Users\\666666\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\cnn_dailymail\\0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\\cnn_dailymail.py\", line 253, in _split_generators\r\n dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\download_manager.py\", line 175, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 224, in map_nested\r\n mapped = [\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 225, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 163, in _single_map_nested\r\n return function(data_struct)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 300, in cached_path\r\n output_path = get_from_cache(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 475, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\r\n\r\nConnection error happened but the url was different.\r\n\r\nI add the following code.\r\n```\r\nrequests.head(\"https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\")\r\n```\r\nThis didn't return 200\r\nIt returned like this:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 159, in _new_conn\r\n conn = connection.create_connection(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\util\\connection.py\", line 84, in create_connection\r\n raise err\r\n File 
\"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\util\\connection.py\", line 74, in create_connection\r\n sock.connect(sa)\r\nTimeoutError: [WinError 10060] \r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 670, in urlopen\r\n httplib_response = self._make_request(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 381, in _make_request\r\n self._validate_conn(conn)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 978, in _validate_conn\r\n conn.connect()\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 309, in connect\r\n conn = self._new_conn()\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 171, in _new_conn\r\n raise NewConnectionError(\r\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001F6060618E0>: Failed to establish a new connection: [WinError 10060] ",
"Is google drive blocked on your network ?\r\nFor me \r\n```python\r\nrequests.head(\"https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\")\r\n```\r\nreturns 200",
"I can browse the google drive through google chrome. It's weird. I can download the dataset through google drive manually.",
"Could you try to update `requests` maybe ?\r\nIt works with 2.23.0 on my side",
"My ```requests``` is 2.24.0 . It still can't return 200.",
"Is it possible I download the dataset manually from google drive and use it for further test ? How can I do this ? I want to reproduce the model in this link https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16. But I can't download the dataset through load_dataset method . I have tried many times and the connection error always happens .\r\n",
"The head request should definitely work, not sure what's going on on your side.\r\nIf you find a way to make it work, please post it here since other users might encounter the same issue.\r\n\r\nIf you don't manage to fix it you can use `load_dataset` on google colab and then save it using `dataset.save_to_disk(\"path/to/dataset\")`.\r\nThen you can download the directory on your machine and do\r\n```python\r\nfrom datasets import load_from_disk\r\ndataset = load_from_disk(\"path/to/local/dataset\")\r\n```",
"Hi\r\nI want to know if this problem has been solved because I encountered a similar issue. Thanks.\r\n`train_data = datasets.load_dataset(\"xsum\", `split=\"train\")`\r\n`ConnectionError:` Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/xsum/xsum.py`",
"Hi @smile0925 ! Do you have an internet connection ? Are you using some kind of proxy that may block the access to this file ?\r\n\r\nOtherwise you can try to update `datasets` since we introduced retries for http requests in the 1.2.0 version\r\n```\r\npip install --upgrade datasets\r\n```\r\nLet me know if that helps.",
"Hi @lhoestq \r\nOh, may be you are right. I find that my server uses some kind of proxy that block the access to this file.\r\n![image](https://user-images.githubusercontent.com/46243662/106456211-2ca24180-64c8-11eb-831e-47e9b40e7da4.png)\r\n\r\n",
"> Hi @lhoestq\r\n> Oh, may be you are right. I find that my server uses some kind of proxy that block the access to this file.\r\n> ![image](https://user-images.githubusercontent.com/46243662/106456211-2ca24180-64c8-11eb-831e-47e9b40e7da4.png)\r\n\r\nI have the same problem, have you solved it? Many thanks",
"Hi @ZhengxiangShi \r\nYou can first try whether your network can access these files. I need to use VPN to access these files, so I download the files that cannot be accessed to the local in advance, and then use them in the code. Like this,\r\n`train_data = datasets.load_dataset(\"xsum.py\", split=\"train\")`"
] | 1,603,640,097,000 | 1,628,100,609,000 | 1,628,100,609,000 | NONE | null | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset
module_path, hash = prepare_module(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path
output_path = get_from_cache(
File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
How can I fix this ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/759/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/758/comments | https://api.github.com/repos/huggingface/datasets/issues/758/events | https://github.com/huggingface/datasets/issues/758 | 728,638,559 | MDU6SXNzdWU3Mjg2Mzg1NTk= | 758 | Process 0 very slow when using num_procs with map to tokenizer | {
"login": "ksjae",
"id": 17930170,
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksjae",
"html_url": "https://github.com/ksjae",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksjae/subscriptions",
"organizations_url": "https://api.github.com/users/ksjae/orgs",
"repos_url": "https://api.github.com/users/ksjae/repos",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksjae/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi ! Thanks for reporting.\r\nIs the distribution of text length of your data evenly distributed across your dataset ? I mean, could it be because the examples in the first part of your dataset are slower to process ?\r\nAlso could how many CPUs can you use for multiprocessing ?\r\n```python\r\nimport multiprocessing\r\nprint(multiprocessing.cpu_count())\r\n```\r\nWhich tokenizer are you using ?",
"Using pre trained HF tokenizer. The result is the same with tokenizer multiprocessing off and on.\r\nI have (absolutely) no idea about the distribution, but since this issue occurs on all of my datasets(regardless of files), I don't think distribution is the problems.\r\n\r\nI can use up to 16 cores.",
"Ok weird, I don't manage to reproduce this issue on my side.\r\nDoes it happen even with `num_proc=2` for example ?\r\nAlso could you provide more details about your OS and the versions of tokenizers/datasets/multiprocess that you're using ?",
"Yes, I can confirm it also happens with ```num_proc=2```.\r\n```\r\ntokenizers 0.9.2\r\ndatasets 1.1.2\r\nmultiprocess 0.70.10\r\n```\r\n```\r\nLinux nipa2020-0629 4.4.0-178-generic #208-Ubuntu SMP Sun Apr 5 23:45:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux\r\n```",
"I can't reproduce on my side unfortunately with the same versions.\r\n\r\nDo you have issues when doing multiprocessing with python ?\r\n```python\r\nfrom tqdm.auto import tqdm\r\nfrom multiprocess import Pool, RLock\r\n\r\ndef process_data(shard):\r\n # implement\r\n\r\nnum_proc = 8\r\nshards = [] # implement, this must be a list of size num_proc\r\n\r\nwith Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n results = [pool.apply_async(process_data, shard=shard) for shard in shards]\r\n transformed_shards = [r.get() for r in results]\r\n```",
"Nah, I'll just wait a few hours. Thank you for helping, though."
] | 1,603,507,220,000 | 1,603,857,586,000 | 1,603,857,585,000 | NONE | null | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), num_proc=8)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path+'.arrow')
```
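A diagnostic sketch rather than a fix (the imbalance is only an assumption, echoing the question in the comments above): `map` with `num_proc=8` hands each worker a contiguous shard, so process 0 may simply hold longer texts; comparing the shards makes that easy to check.
```python
# compare the 8 contiguous shards that map(..., num_proc=8) would distribute
for rank in range(8):
    shard = dataset.shard(num_shards=8, index=rank, contiguous=True)
    avg_len = sum(len(t) for t in shard["text"]) / len(shard)
    print(f"shard {rank}: {len(shard)} rows, avg text length {avg_len:.0f} chars")
```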
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/758/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/757/comments | https://api.github.com/repos/huggingface/datasets/issues/757/events | https://github.com/huggingface/datasets/issues/757 | 728,241,494 | MDU6SXNzdWU3MjgyNDE0OTQ= | 757 | CUDA out of memory | {
"login": "li1117heex",
"id": 47059217,
"node_id": "MDQ6VXNlcjQ3MDU5MjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/47059217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/li1117heex",
"html_url": "https://github.com/li1117heex",
"followers_url": "https://api.github.com/users/li1117heex/followers",
"following_url": "https://api.github.com/users/li1117heex/following{/other_user}",
"gists_url": "https://api.github.com/users/li1117heex/gists{/gist_id}",
"starred_url": "https://api.github.com/users/li1117heex/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li1117heex/subscriptions",
"organizations_url": "https://api.github.com/users/li1117heex/orgs",
"repos_url": "https://api.github.com/users/li1117heex/repos",
"events_url": "https://api.github.com/users/li1117heex/events{/privacy}",
"received_events_url": "https://api.github.com/users/li1117heex/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Could you provide more details ? What's the code you ran ?",
"```python\r\ntokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small')\r\n\r\ndef tokenize(batch):\r\n return tokenizer(batch['text'], padding='max_length', truncation=True,max_length=512)\r\n\r\ndataset = load_dataset(\"bookcorpus\",split='train[:1000]').shuffle()\r\ndataset = dataset.map(tokenize, batched=True, batch_size=512)\r\n\r\n# dataset = LineByLineTextDataset(\r\n# tokenizer=tokenizer,\r\n# file_path=\"./wiki1000.txt\",\r\n# block_size=128\r\n# )\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n)\r\n\r\nconfig=FunnelConfig(\r\n return_dict=True\r\n)\r\n\r\nmodel= FunnelForMaskedLM(config=config)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./checkpoints\",\r\n overwrite_output_dir=True,\r\n do_train=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=16,\r\n per_device_eval_batch_size=16,\r\n save_steps=10000,\r\n logging_dir='./ptlogs'\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n)\r\ntrainer.train()\r\n```",
"`RuntimeError: CUDA out of memory. Tried to allocate 954.00 MiB (GPU 0; 15.90 GiB total capacity; 14.35 GiB already allocated; 753.75 MiB free; 14.39 GiB reserved in total by PyTorch)\r\nException raised from malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:272 (most recent call first):`\r\n\r\npart of error output",
"from funnel model to bert model : error still happened\r\n\r\nfrom your dataset to LineByLineTextDataset : error disapeared",
"notice i just loaded 1000 rows of data",
"the error happens when executing loss.backward()",
"Since you're using a data collator you don't need to tokenizer the dataset using `map`. Could you try not to use `map` and only the data collator instead ? The data collator is supposed to pad to the longest sequence in each batch afaik, instead of padding to 512.\r\n\r\nAlso cc @sgugger ",
"Closing this one.\r\nFeel free to re-open if you have other questions about this issue"
] | 1,603,461,420,000 | 1,608,732,389,000 | 1,608,732,389,000 | NONE | null | With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
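A minimal sketch of the suggestion from the comments above (the tokenizer, collator and Trainer from the posted script are assumed to stay the same): drop `padding='max_length'` so that `DataCollatorForLanguageModeling` only pads each batch to its longest sequence, which keeps batches far smaller than fixed 512-token padding.
```python
def tokenize(batch):
    # no fixed-length padding; the data collator pads per batch on the fly
    return tokenizer(batch['text'], truncation=True, max_length=512)

dataset = load_dataset("bookcorpus", split='train[:1000]').shuffle()
# drop the raw text column so only token ids reach the collator
dataset = dataset.map(tokenize, batched=True, batch_size=512, remove_columns=['text'])
```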
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/757/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/756/comments | https://api.github.com/repos/huggingface/datasets/issues/756/events | https://github.com/huggingface/datasets/pull/756 | 728,211,373 | MDExOlB1bGxSZXF1ZXN0NTA4OTYwNTc3 | 756 | Start community-provided dataset docs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Oh, really cool @sshleifer!"
] | 1,603,459,061,000 | 1,603,716,920,000 | 1,603,716,919,000 | CONTRIBUTOR | null | Continuation of #736 with clean fork.
#### Old description
This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
In Slack, @thomwolf called it a user-namespace dataset, but the docs call it a community dataset.
I think the first naming is clearer, but I didn't address that here.
I didn't add metadata, will try that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/756/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/756",
"html_url": "https://github.com/huggingface/datasets/pull/756",
"diff_url": "https://github.com/huggingface/datasets/pull/756.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/756.patch",
"merged_at": 1603716919000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/755/comments | https://api.github.com/repos/huggingface/datasets/issues/755/events | https://github.com/huggingface/datasets/pull/755 | 728,203,821 | MDExOlB1bGxSZXF1ZXN0NTA4OTU0NDI2 | 755 | Start community-provided dataset docs V2 | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,458,450,000 | 1,603,458,937,000 | 1,603,458,937,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/755/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/755",
"html_url": "https://github.com/huggingface/datasets/pull/755",
"diff_url": "https://github.com/huggingface/datasets/pull/755.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/755.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/754/comments | https://api.github.com/repos/huggingface/datasets/issues/754/events | https://github.com/huggingface/datasets/pull/754 | 727,863,105 | MDExOlB1bGxSZXF1ZXN0NTA4NjczNzM2 | 754 | Use full released xsum dataset | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"repos_url": "https://api.github.com/users/jbragg/repos",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"@lhoestq I took a shot at addressing your comments but the build scripts seem to be complaining about not being able to open dummy files. How do I resolve those errors without copying the full dataset into the dummy dir?",
"Could you check that the names of the dummy data files are right ?\r\nYou can use \r\n```\r\ndatasets-cli dummy_data ./datasets/xum\r\n```\r\nto print the expected file names",
"Ok @lhoestq looks like I got the tests to pass :)"
] | 1,603,423,789,000 | 1,609,470,716,000 | 1,603,717,018,000 | CONTRIBUTOR | null | #672 Fix xsum to expand coverage and include IDs
Code based on parser from older version of `datasets/xsum/xsum.py`
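As a quick sanity check after the fix — a sketch, assuming the post-fix schema keeps the `document`/`summary` fields and adds `id`:
```python
from datasets import load_dataset

# Sketch: the field names ("document", "summary", "id") are assumed from this PR's description
xsum = load_dataset("xsum", split="validation")
print(len(xsum))                      # should now reflect the full released split size
print(xsum[0]["id"], xsum[0]["summary"])
```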
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/754/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/754",
"html_url": "https://github.com/huggingface/datasets/pull/754",
"diff_url": "https://github.com/huggingface/datasets/pull/754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/754.patch",
"merged_at": 1603717018000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/753/comments | https://api.github.com/repos/huggingface/datasets/issues/753/events | https://github.com/huggingface/datasets/pull/753 | 727,434,935 | MDExOlB1bGxSZXF1ZXN0NTA4MzI4ODM0 | 753 | Fix doc links to viewer | {
"login": "Pierrci",
"id": 5020707,
"node_id": "MDQ6VXNlcjUwMjA3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5020707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pierrci",
"html_url": "https://github.com/Pierrci",
"followers_url": "https://api.github.com/users/Pierrci/followers",
"following_url": "https://api.github.com/users/Pierrci/following{/other_user}",
"gists_url": "https://api.github.com/users/Pierrci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pierrci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pierrci/subscriptions",
"organizations_url": "https://api.github.com/users/Pierrci/orgs",
"repos_url": "https://api.github.com/users/Pierrci/repos",
"events_url": "https://api.github.com/users/Pierrci/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pierrci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,376,416,000 | 1,603,442,531,000 | 1,603,442,531,000 | MEMBER | null | It seems #733 forgot some links in the doc :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/753/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/753",
"html_url": "https://github.com/huggingface/datasets/pull/753",
"diff_url": "https://github.com/huggingface/datasets/pull/753.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/753.patch",
"merged_at": 1603442531000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/752/comments | https://api.github.com/repos/huggingface/datasets/issues/752/events | https://github.com/huggingface/datasets/issues/752 | 726,917,801 | MDU6SXNzdWU3MjY5MTc4MDE= | 752 | Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning | {
"login": "ogabrielluiz",
"id": 24829397,
"node_id": "MDQ6VXNlcjI0ODI5Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/24829397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ogabrielluiz",
"html_url": "https://github.com/ogabrielluiz",
"followers_url": "https://api.github.com/users/ogabrielluiz/followers",
"following_url": "https://api.github.com/users/ogabrielluiz/following{/other_user}",
"gists_url": "https://api.github.com/users/ogabrielluiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ogabrielluiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ogabrielluiz/subscriptions",
"organizations_url": "https://api.github.com/users/ogabrielluiz/orgs",
"repos_url": "https://api.github.com/users/ogabrielluiz/repos",
"events_url": "https://api.github.com/users/ogabrielluiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/ogabrielluiz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks for the report, can reproduce. Will fix",
"Fixed now @ogabrielluiz "
] | 1,603,320,983,000 | 1,603,383,582,000 | 1,603,383,582,000 | NONE | null | Hi! Sorry if this isn't the right place to talk about the website, I just didn't know exactly where to write this.
Searching for a metric in https://huggingface.co/metrics gives the right results, but clicking on a metric (e.g., ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching points to the right page.
Thanks for all the great work! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/752/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/751/comments | https://api.github.com/repos/huggingface/datasets/issues/751/events | https://github.com/huggingface/datasets/issues/751 | 726,820,191 | MDU6SXNzdWU3MjY4MjAxOTE= | 751 | Error loading ms_marco v2.1 using load_dataset() | {
"login": "JainSahit",
"id": 30478979,
"node_id": "MDQ6VXNlcjMwNDc4OTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/30478979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JainSahit",
"html_url": "https://github.com/JainSahit",
"followers_url": "https://api.github.com/users/JainSahit/followers",
"following_url": "https://api.github.com/users/JainSahit/following{/other_user}",
"gists_url": "https://api.github.com/users/JainSahit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JainSahit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JainSahit/subscriptions",
"organizations_url": "https://api.github.com/users/JainSahit/orgs",
"repos_url": "https://api.github.com/users/JainSahit/repos",
"events_url": "https://api.github.com/users/JainSahit/events{/privacy}",
"received_events_url": "https://api.github.com/users/JainSahit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"There was a similar issue in #294 \r\nClearing the cache and download again the dataset did the job. Could you try to clear your cache and download the dataset again ?",
"I was able to load the dataset successfully, I'm pretty sure it's just a cache issue that you have.\r\nLet me know if clearing your cache fixes the problem",
"Yes, it indeed was a cache issue!\r\nThanks for reaching out!!"
] | 1,603,310,083,000 | 1,604,539,917,000 | 1,604,539,917,000 | NONE | null | Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a dataset
---> 11 dataset = load_dataset('ms_marco', 'v2.1')
10 frames
/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx)
353 """
354 try:
--> 355 obj, end = self.scan_once(s, idx)
356 except StopIteration as err:
357 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/751/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/750/comments | https://api.github.com/repos/huggingface/datasets/issues/750/events | https://github.com/huggingface/datasets/issues/750 | 726,589,446 | MDU6SXNzdWU3MjY1ODk0NDY= | 750 | load_dataset doesn't include `features` in its hash | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,293,401,000 | 1,603,964,161,000 | 1,603,964,161,000 | MEMBER | null | It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user passes new features for an already downloaded dataset, those features are ignored.
Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of:
```
dataset = load_dataset("glue", "mnli")
features = dataset["train"].features
features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order
dataset = load_dataset("glue", "mnli", features=features)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/750/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/749/comments | https://api.github.com/repos/huggingface/datasets/issues/749/events | https://github.com/huggingface/datasets/issues/749 | 726,366,062 | MDU6SXNzdWU3MjYzNjYwNjI= | 749 | [XGLUE] Adding new dataset | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Amazing! ",
"Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .\r\n\r\nAs stated in the XGLUE paper: https://arxiv.org/pdf/2004.01401.pdf , for each of the 11 down-stream tasks training data is only available in English, whereas development and test data is available in multiple different language *cf.* here: \r\n\r\n![Screenshot from 2020-11-04 15-02-17](https://user-images.githubusercontent.com/23423619/98120893-d7499a80-1eae-11eb-9d0b-57dfe5d4ee68.png)\r\n\r\nSo, I'd suggest to have exactly 11 \"language-independent\" configs: \"ner\", \"pos\", ... and give the sample in each dataset in the config a \"language\" label being one of \"ar\", \"bg\", .... => To me this makes more sense than making languaga specific config, *e.g.* \"ner-de\", ...especially because training data is only available in English. Do you guys agree? ",
"In this case we should have named splits, so config `ner` has splits `train`, `validation`, `test-en`, `test-ar`, `test-bg`, etc...\r\n\r\nThis is more in the spirit of the task afaiu, and will avoid making users do the filtering step themselves when testing different models or different configurations of the same model.",
"I see your point! \r\n\r\nI think this would be quite feasible to do and makes sense to me as well! In the paper results are reported per language, so it seems more natural to do it this way. \r\n\r\nGood for me @yjernite ! What do the others think? @lhoestq \r\n",
"I agree with Yacine on this!",
"Okey actually not that easy to add things like `test-de` to `datasets` => this would be the first dataset to have this.\r\nSee: https://github.com/huggingface/datasets/pull/802",
"IMO we should have one config per language. That's what we're doing for xnli, xtreme etc.\r\nHaving split names that depend on the language seems wrong. We should try to avoid split names that are not train/val/test.\r\nSorry for late response on this one",
"@lhoestq agreed on having one config per language, but we also need to be able to have different split names and people are going to want to use hyphens, so we should at the very least warn them why it's failing :) E.g. for ANLI with different stages of data (currently using underscores) or https://www.tau-nlp.org/commonsenseqa with their train-sanity or dev-sanity splits",
"Yes sure ! Could you open a separate issue for that ?",
"Really cool dataset 👍 btw. does Transformers support all 11 tasks 🤔 would be awesome to have a xglue script (like the \"normal\" glue one)",
"Just to make sure this is what we want here. If we add one config per language, \r\n\r\nthis means that this dataset ends up with well over 100 different configs most of which will have the same `train` split. The train split is always in English. Also, I'm not sure whether it's better for the user to be honest. \r\n\r\nI think it could be quite confusing for the user to have\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner-de\", split=\"train\")\r\n```\r\n\r\nin English even though it's `ner-de`.\r\n\r\nTo be honest, I'd prefer:\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test-de\")\r\ntest_dataset_fr = load_dataset(\"xglue\", \"ner\", split=\"test-fr\")\r\n```\r\n\r\nhere",
"Oh yes right I didn't notice the train set was always in english sorry.\r\nMoreover it seems that the way this dataset is used is to pick a pretrained multilingual model, fine-tune it on the english train set and then evaluate on each test set (one per language).\r\nSo to better fit the usual usage of this dataset, I agree that it's better to have one test split per language. \r\n\r\nSomething like your latest example patrick is fine imo :\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test.de\")\r\n```\r\n\r\nI just replace test-de with test.de since `-` is not allowed for split names (it has to follow the `\\w+` regex), and usually we specify the language after a point. ",
"Closing since XGLUE has been added in #802 , thanks patrick :) "
] | 1,603,277,496,000 | 1,609,927,376,000 | 1,609,927,375,000 | MEMBER | null | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/749/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/748/comments | https://api.github.com/repos/huggingface/datasets/issues/748/events | https://github.com/huggingface/datasets/pull/748 | 726,196,589 | MDExOlB1bGxSZXF1ZXN0NTA3MzAyNjE3 | 748 | New version of CompGuessWhat?! with refined annotations | {
"login": "aleSuglia",
"id": 1479733,
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aleSuglia",
"html_url": "https://github.com/aleSuglia",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"No worries. Always happy to help and thanks for your support in fixing the issue :)"
] | 1,603,263,341,000 | 1,603,270,362,000 | 1,603,269,979,000 | CONTRIBUTOR | null | This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! original split. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/748/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/748",
"html_url": "https://github.com/huggingface/datasets/pull/748",
"diff_url": "https://github.com/huggingface/datasets/pull/748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/748.patch",
"merged_at": 1603269979000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/747/comments | https://api.github.com/repos/huggingface/datasets/issues/747/events | https://github.com/huggingface/datasets/pull/747 | 725,884,704 | MDExOlB1bGxSZXF1ZXN0NTA3MDQ3MDE4 | 747 | Add Quail question answering dataset | {
"login": "sai-prasanna",
"id": 3595526,
"node_id": "MDQ6VXNlcjM1OTU1MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sai-prasanna",
"html_url": "https://github.com/sai-prasanna",
"followers_url": "https://api.github.com/users/sai-prasanna/followers",
"following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}",
"gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions",
"organizations_url": "https://api.github.com/users/sai-prasanna/orgs",
"repos_url": "https://api.github.com/users/sai-prasanna/repos",
"events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}",
"received_events_url": "https://api.github.com/users/sai-prasanna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,222,394,000 | 1,603,269,315,000 | 1,603,269,315,000 | CONTRIBUTOR | null | QuAIL is a multi-domain RC dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversity mitigates the issue of possible overlap between training and test data of large pre-trained models, which the current SOTA systems are based on. For instance, BERT is trained on Wikipedia + BookCorpus, and was tested on Wikipedia-based SQuAD (Devlin, Chang, Lee, & Toutanova, 2019).
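A minimal usage sketch, assuming the dataset ends up exposed under the name `quail` and keeps question/answer-style fields (the field names below are assumptions, not confirmed by this PR):
```python
from datasets import load_dataset

# "quail" and the field names are assumed; check the merged loading script for the real schema
quail = load_dataset("quail", split="train")
example = quail[0]
print(example["question"], example["answers"])
```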
https://text-machine-lab.github.io/blog/2020/quail/ @annargrs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/747/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/747",
"html_url": "https://github.com/huggingface/datasets/pull/747",
"diff_url": "https://github.com/huggingface/datasets/pull/747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/747.patch",
"merged_at": 1603269315000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/746/comments | https://api.github.com/repos/huggingface/datasets/issues/746/events | https://github.com/huggingface/datasets/pull/746 | 725,627,235 | MDExOlB1bGxSZXF1ZXN0NTA2ODMzNDMw | 746 | dataset(ngt): add ngt dataset initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,603,202,698,000 | 1,616,480,378,000 | 1,616,480,378,000 | CONTRIBUTOR | null | Currently only making the paths to the annotation ELAN (eaf) file and videos available.
This is the first accessible way to download this dataset that does not require fetching it manually, file by file.
Only the necessary files are downloaded: the annotation files are very small (roughly 20MB in total), while the video files are large (about 100GB in total) and are stored in `mpg` format.
I do not intend to actually store these as an uncompressed array of frames, because it will be huge.
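For context, a minimal sketch of the intended usage — the dataset name and field names below are hypothetical placeholders; the loading script in this PR defines the real ones:
```python
from datasets import load_dataset

# Hypothetical sketch: only file paths are exposed, decoding the mpg videos is left to the user
ngt = load_dataset("ngt", split="train")
sample = ngt[0]
print(sample)  # e.g. {"eaf_path": "/path/to/annotation.eaf", "video_path": "/path/to/video.mpg"}
```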
Future updates may add pose estimation files for all videos, making it easier to work with this data. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/746/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/746",
"html_url": "https://github.com/huggingface/datasets/pull/746",
"diff_url": "https://github.com/huggingface/datasets/pull/746.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/746.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/745/comments | https://api.github.com/repos/huggingface/datasets/issues/745/events | https://github.com/huggingface/datasets/pull/745 | 725,589,352 | MDExOlB1bGxSZXF1ZXN0NTA2ODAxMTI0 | 745 | Fix emotion description | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hello, I probably have a silly question but the labels of the emotion dataset are in the form of numbers and not string, so I can not use the function classification_report because it mixes numbers and string (prediction). How can I access the label in the form of a string and not a number? \r\nThank you in advance."
] | 1,603,200,519,000 | 1,619,102,851,000 | 1,603,269,507,000 | MEMBER | null | Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper.
I also took the liberty to make use of `ClassLabel` for the emotion labels. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/745/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/745",
"html_url": "https://github.com/huggingface/datasets/pull/745",
"diff_url": "https://github.com/huggingface/datasets/pull/745.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/745.patch",
"merged_at": 1603269507000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/744/comments | https://api.github.com/repos/huggingface/datasets/issues/744/events | https://github.com/huggingface/datasets/issues/744 | 724,918,448 | MDU6SXNzdWU3MjQ5MTg0NDg= | 744 | Dataset Explorer Doesn't Work for squad_es and squad_it | {
"login": "gaotongxiao",
"id": 22607038,
"node_id": "MDQ6VXNlcjIyNjA3MDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/22607038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaotongxiao",
"html_url": "https://github.com/gaotongxiao",
"followers_url": "https://api.github.com/users/gaotongxiao/followers",
"following_url": "https://api.github.com/users/gaotongxiao/following{/other_user}",
"gists_url": "https://api.github.com/users/gaotongxiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaotongxiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaotongxiao/subscriptions",
"organizations_url": "https://api.github.com/users/gaotongxiao/orgs",
"repos_url": "https://api.github.com/users/gaotongxiao/repos",
"events_url": "https://api.github.com/users/gaotongxiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaotongxiao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Oups wrong click.\r\nThis one is for you @srush"
] | 1,603,136,052,000 | 1,603,730,177,000 | 1,603,730,177,000 | NONE | null | https://huggingface.co/nlp/viewer/?dataset=squad_es
https://huggingface.co/nlp/viewer/?dataset=squad_it
Both pages show "OSError: [Errno 28] No space left on device". | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/744/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/743/comments | https://api.github.com/repos/huggingface/datasets/issues/743/events | https://github.com/huggingface/datasets/issues/743 | 724,703,980 | MDU6SXNzdWU3MjQ3MDM5ODA= | 743 | load_dataset for CSV files not working | {
"login": "iliemihai",
"id": 2815308,
"node_id": "MDQ6VXNlcjI4MTUzMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2815308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliemihai",
"html_url": "https://github.com/iliemihai",
"followers_url": "https://api.github.com/users/iliemihai/followers",
"following_url": "https://api.github.com/users/iliemihai/following{/other_user}",
"gists_url": "https://api.github.com/users/iliemihai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliemihai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliemihai/subscriptions",
"organizations_url": "https://api.github.com/users/iliemihai/orgs",
"repos_url": "https://api.github.com/users/iliemihai/repos",
"events_url": "https://api.github.com/users/iliemihai/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliemihai/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thank you !\r\nCould you provide a csv file that reproduces the error ?\r\nIt doesn't have to be one of your dataset. As long as it reproduces the error\r\nThat would help a lot !",
"I think another good example is the following:\r\n`\r\nfrom datasets import load_dataset\r\n`\r\n`\r\ndataset = load_dataset(\"csv\", data_files=[\"./sts-dev.csv\"], delimiter=\"\\t\", column_names=[\"one\", \"two\", \"three\", \"four\", \"score\", \"sentence1\", \"sentence2\"], script_version=\"master\")`\r\n`\r\n\r\nDisplayed error `CSV parse error: Expected 7 columns, got 6` even tough I put 7 columns. First four columns from the csv don't have a name, so I've named them by default. The csv file is the .dev file from STSb benchmark dataset.\r\n\r\n",
"Hi, seems I also can't read csv file. I was trying with a dummy csv with only three rows.\r\n\r\n```\r\ntext,label\r\nI hate google,negative\r\nI love Microsoft,positive\r\nI don't like you,negative\r\n```\r\nI was using the HuggingFace image in Paperspace Gradient (datasets==1.1.3). The following code doesn't work:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\")\r\n```\r\nIt outputs the following:\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset csv/default-3b6254ff4dd403e5 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/csv/default-3b6254ff4dd403e5/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...\r\nDataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-3b6254ff4dd403e5/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2. Subsequent calls will reuse this data.\r\n```\r\nBut `len(dataset)` gives `1` and I can't access rows with indexing `dataset[0]` (it gives `KeyError: 0`).\r\n\r\nHowever, loading from pandas dataframe is working.\r\n```\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\ndf = pd.read_csv('test_data.csv')\r\ndataset = Dataset.from_pandas(df)\r\n```\r\n\r\n",
"This is because load_dataset without `split=` returns a dictionary of split names (train/validation/test) to dataset.\r\nYou can do\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\")\r\nprint(dataset[\"train\"][0])\r\n```\r\n\r\nOr if you want to directly get the train split:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\", split=\"train\")\r\nprint(dataset[0])\r\n```\r\n",
"Good point\r\n\r\nDesign question for us, though: should `load_dataset` when no split is specified and only one split is present in the dataset (common use case with CSV/text/JSON datasets) return a `Dataset` instead of a `DatsetDict`? I feel like it's often what the user is expecting. I break a bit the paradigm of a unique return type but since this library is designed for widespread DS people more than CS people usage I would tend to think that UX should take precedence over CS reasons. What do you think?",
"In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.\r\nI'm ok with returning the dataset object if no split specifications are given for text/json/csv/pandas.\r\n\r\nFor the other datasets ton the other hand the user doesn't know in advance the splits so I would keep the dictionary by default. What do you think ?",
"Thanks for your quick response! I'm fine with specifying the split as @lhoestq suggested. My only concern is when I'm loading from python dict or pandas, the library returns a dataset instead of a dictionary of datasets when no split is specified. I know that they use a different function `Dataset.from_dict` or `Dataset.from_pandas` but the text/csv files use `load_dataset()`. However, to the user, they do the same task and we probably expect them to have the same behavior.",
"```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='./amazon_data/Video_Games_5.csv', delimiter=\",\", split=['train', 'test'])\r\n```\r\nI was running the above line, but got this error.\r\n\r\n```ValueError: Unknown split \"test\". Should be one of ['train'].```\r\n\r\nThe data is amazon product data. I load the Video_Games_5.json.gz data into pandas and save it as csv file. and then load the csv file using the above code. I thought, ```split=['train', 'test']``` would split the data into train and test. did I misunderstood?\r\n\r\nThank you!\r\n\r\n",
"Hi ! the `split` argument in `load_dataset` is used to select the splits you want among the available splits.\r\nHowever when loading a csv with a single file as you did, only a `train` split is available by default.\r\n\r\nIndeed since `data_files='./amazon_data/Video_Games_5.csv'` is equivalent to `data_files={\"train\": './amazon_data/Video_Games_5.csv'}`, you can get a dataset with \r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='./amazon_data/Video_Games_5.csv', delimiter=\",\", split=\"train\")\r\n```\r\n\r\nAnd then to get both a train and test split you can do\r\n```python\r\ndataset = dataset.train_test_split()\r\nprint(dataset.keys())\r\n# ['train', 'test']\r\n```\r\n\r\n\r\nAlso note that a csv dataset may have several available splits if it is defined this way:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files={\r\n \"train\": './amazon_data/Video_Games_5_train.csv',\r\n \"test\": './amazon_data/Video_Games_5_test.csv'\r\n})\r\nprint(dataset.keys())\r\n# ['train', 'test']\r\n```\r\n",
"> In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.\r\n> I'm ok with returning the dataset object if no split specifications are given for text/json/csv/pandas.\r\n> \r\n> For the other datasets ton the other hand the user doesn't know in advance the splits so I would keep the dictionary by default. What do you think ?\r\n\r\nYes maybe this would be good. I think having to select 'train' from the resulting object why the user gave no split information is a confusing and not intuitive behavior.",
"> Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.\r\n> \r\n> `from datasets import load_dataset`\r\n> `dataset = load_dataset(\"csv\", data_files=[\"./sample_data.csv\"], delimiter=\"\\t\", column_names=[\"title\", \"text\"], script_version=\"master\")`\r\n> \r\n> Displayed error:\r\n> `... ArrowInvalid: CSV parse error: Expected 2 columns, got 1`\r\n\r\nI'm also facing the same issue when trying to load from a csv file locally:\r\n\r\n```python\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('csv', data_files='sample_data.csv')\r\n```\r\n\r\nError when executed from Google Colab:\r\n```python\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-34-79a8d4f65ed6> in <module>()\r\n 1 from nlp import load_dataset\r\n----> 2 dataset = load_dataset('csv', data_files='sample_data.csv')\r\n\r\n9 frames\r\n/usr/local/lib/python3.7/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 535 try:\r\n 536 # Prepare split will record examples associated to the split\r\n--> 537 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 538 except OSError:\r\n 539 raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)\r\n 863 \r\n 864 generator = self._generate_tables(**split_generator.gen_kwargs)\r\n--> 865 for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n 866 writer.write_table(table)\r\n 867 num_examples, num_bytes = writer.finalize()\r\n\r\n/usr/local/lib/python3.7/dist-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)\r\n 213 def __iter__(self, *args, **kwargs):\r\n 214 try:\r\n--> 215 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 216 # return super(tqdm...) 
will not catch exception\r\n 217 yield obj\r\n\r\n/usr/local/lib/python3.7/dist-packages/tqdm/std.py in __iter__(self)\r\n 1102 fp_write=getattr(self.fp, 'write', sys.stderr.write))\r\n 1103 \r\n-> 1104 for obj in iterable:\r\n 1105 yield obj\r\n 1106 # Update and possibly print the progressbar.\r\n\r\n/usr/local/lib/python3.7/dist-packages/nlp/datasets/csv/ede98314803c971fef04bcee45d660c62f3332e8a74491e0b876106f3d99bd9b/csv.py in _generate_tables(self, files)\r\n 78 read_options=self.config.pa_read_options,\r\n 79 parse_options=self.config.pa_parse_options,\r\n---> 80 convert_options=self.config.convert_options,\r\n 81 )\r\n 82 yield i, pa_table\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: CSV parse error: Expected 1 columns, got 8\r\n```\r\n\r\nVersion:\r\n```\r\nnlp==0.4.0\r\n```",
"Hi @kauvinlucas\r\n\r\nYou can use the latest versions of `datasets` to do this.\r\nTo do so, just `pip install datasets` instead of `nlp` (the library was renamed) and then\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='sample_data.csv')",
"Hi \r\nI'm having a different problem with loading local csv. \r\n```Python\r\nfrom datasets import load_dataset \r\ndataset = load_dataset('csv', data_files='sample.csv') \r\n``` \r\n\r\ngives `ValueError: Specified named and prefix; you can only specify one.` error \r\n\r\nversions: \r\n- datasets: 1.1.3 \r\n- python: 3.9.6 \r\n- pyarrow: 2.0.0 ",
"Oh.. I figured it out. According to issue #[42387](https://github.com/pandas-dev/pandas/issues/42387) from pandas, this new version does not accept None for both parameters (which was being done by the repo I'm testing). Dowgrading Pandas==1.0.4 and Python==3.8 worked",
"Hi, \r\nI got an `OSError: Cannot find data file. ` when I tried to use load_dataset with tsv files. I have checked the paths, and they are correct. \r\n\r\nversions\r\n- python: 3.7.9\r\n- datasets: 1.1.3\r\n- pyarrow: 2.0.0\r\n- transformers: 4.2.2\r\n\r\n~~~\r\ndata_files = {\"train\": \"train.tsv\", \"test\",: \"test.tsv\"}\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n~~~\r\n\r\nThe entire Error message is on below:\r\n\r\n```08/14/2021 16:55:44 - INFO - __main__ - load a local file for train: /project/media-framing/transformer4/data/0/val/label1.tsv\r\n08/14/2021 16:55:44 - INFO - __main__ - load a local file for test: /project/media-framing/transformer4/data/unlabel/test.tsv\r\nUsing custom data configuration default\r\nDownloading and preparing dataset csv/default-00a4200ae8507533 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /usr4/cs542sp/hey1/.cache/huggingface/datasets/csv/default-00a4200ae8507533/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...\r\nTraceback (most recent call last):\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 592, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 944, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 307, in finalize\r\n self.stream.close()\r\n File \"pyarrow/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"run_glue.py\", line 484, in <module>\r\n main()\r\n File \"run_glue.py\", line 243, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/load.py\", line 610, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 515, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 594, in _download_and_prepare\r\n raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\nOSError: Cannot find data file. ```",
"Hi ! It looks like the error stacktrace doesn't match with your code snippet.\r\n\r\nWhat error do you get when running this ?\r\n```\r\ndata_files = {\"train\": \"train.tsv\", \"test\",: \"test.tsv\"}\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n```\r\ncan you check that both tsv files are in the same folder as the current working directory of your shell ?",
"Hi @lhoestq, Below is the entire error message after I move both tsv files to the same directory. It's the same with I got before.\r\n```\r\n/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.)\r\n return torch._C._cuda_getDeviceCount() > 0\r\n08/29/2021 22:56:43 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False\r\n08/29/2021 22:56:43 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=/projectnb/media-framing/pred_result/label1/, overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=True, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=8.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs/Aug29_22-56-43_scc1, logging_first_step=False, logging_steps=500, save_steps=3000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=/projectnb/media-framing/pred_result/label1/, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, _n_gpu=0)\r\n08/29/2021 22:56:43 - INFO - __main__ - load a local file for train: /project/media-framing/transformer4/temp_train.tsv\r\n08/29/2021 22:56:43 - INFO - __main__ - load a local file for test: /project/media-framing/transformer4/temp_test.tsv\r\n08/29/2021 22:56:43 - WARNING - datasets.builder - Using custom data configuration default-df627c23ac0e98ec\r\nDownloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /usr4/cs542sp/hey1/.cache/huggingface/datasets/csv/default-df627c23ac0e98ec/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff...\r\nTraceback (most recent call last):\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 693, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 1166, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 428, in finalize\r\n self.stream.close()\r\n File \"pyarrow/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"run_glue.py\", line 487, in <module>\r\n main()\r\n File 
\"run_glue.py\", line 244, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/load.py\", line 852, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 616, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 699, in _download_and_prepare\r\n + str(e)\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nerror closing file\r\n```",
"Hi !\r\nCan you try running this into a python shell directly ?\r\n\r\n```python\r\nimport os\r\nfrom datasets import load_dataset\r\n\r\ndata_files = {\"train\": \"train.tsv\", \"test\": \"test.tsv\"}\r\nassert all(os.path.isfile(data_file) for data_file in data_files.values()), \"Couln't find files\"\r\n\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\nprint(\"success !\")\r\n```\r\n\r\nThis way all the code from `run_glue.py` doesn't interfere with our tests :)",
"Hi @lhoestq, \r\n\r\nBelow is what I got from terminal after I copied and run your code. I think the files themselves are good since there is no assertion error. \r\n\r\n```\r\nUsing custom data configuration default-df627c23ac0e98ec\r\nDownloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /usr4/cs542sp/hey1/.cache/huggingface/datasets/csv/default-df627c23ac0e98ec/0.0.0/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff...\r\nTraceback (most recent call last):\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 693, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 1166, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 428, in finalize\r\n self.stream.close()\r\n File \"pyarrow/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"test.py\", line 7, in <module>\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/load.py\", line 852, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 616, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/datasets/builder.py\", line 699, in _download_and_prepare\r\n + str(e)\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nerror closing file\r\n```",
"Hi, could this be a permission error ? I think it fails to close the arrow file that contains the data from your CSVs in the cache.\r\n\r\nBy default datasets are cached in `~/.cache/huggingface/datasets`, could you check that you have the right permissions ?\r\nYou can also try to change the cache directory by passing `cache_dir=\"path/to/my/cache/dir\"` to `load_dataset`.",
"Thank you!! @lhoestq\r\n\r\nFor some reason, I don't have the default path for datasets to cache, maybe because I work from a remote system. The issue solved after I pass the `cache_dir` argument to the function. Thank you very much!!"
] | 1,603,119,231,000 | 1,631,212,006,000 | null | CONTRIBUTOR | null | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInvalid: CSV parse error: Expected 2 columns, got 1
`
I should mention that when I've tried to read data from `https://github.com/lhoestq/transformers/tree/custom-dataset-in-rag-retriever/examples/rag/test_data/my_knowledge_dataset.csv` it worked without a problem. I've read that there might be some problems with /r character, so I've removed them from the custom dataset, but the problem still remains.
I've added a colab reproducing the bug, but unfortunately I cannot provide the dataset.
https://colab.research.google.com/drive/1Qzu7sC-frZVeniiWOwzoCe_UHZsrlxu8?usp=sharing
Are there any work around for it ?
Thank you | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/743/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/742/comments | https://api.github.com/repos/huggingface/datasets/issues/742/events | https://github.com/huggingface/datasets/pull/742 | 724,509,974 | MDExOlB1bGxSZXF1ZXN0NTA1ODgzNjI3 | 742 | Add OCNLI, a new CLUE dataset | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks :) merging it"
] | 1,603,105,593,000 | 1,603,383,589,000 | 1,603,383,588,000 | MEMBER | null | OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for
Chinese Natural Language Inference, collected following closely the procedures of MNLI,
but with enhanced strategies aiming for more challenging inference pairs. We want to
emphasize we did not use human/machine translation in creating the dataset, and thus
our Chinese texts are original and not translated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/742/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/742",
"html_url": "https://github.com/huggingface/datasets/pull/742",
"diff_url": "https://github.com/huggingface/datasets/pull/742.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/742.patch",
"merged_at": 1603383587000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/741/comments | https://api.github.com/repos/huggingface/datasets/issues/741/events | https://github.com/huggingface/datasets/issues/741 | 723,924,275 | MDU6SXNzdWU3MjM5MjQyNzU= | 741 | Creating dataset consumes too much memory | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks for reporting.\r\nIn theory since the dataset script is just made to yield examples to write them into an arrow file, it's not supposed to create memory issues.\r\n\r\nCould you please try to run this exact same loop in a separate script to see if it's not an issue with `PIL` ?\r\nYou can just copy paste what's inside `_generate_examples` and remove all the code for `datasets` (remove yield).\r\n\r\nIf the RAM usage stays low after 600 examples it means that it comes from some sort of memory leak in the library, or with pyarrow.",
"Here's an equivalent loading code:\r\n```python\r\nimages_path = \"PHOENIX-2014-T-release-v3/PHOENIX-2014-T/features/fullFrame-210x260px/train\"\r\n\r\nfor dir_path in tqdm(os.listdir(images_path)):\r\n frames_path = os.path.join(images_path, dir_path)\r\n np_frames = []\r\n for frame_name in os.listdir(frames_path):\r\n frame_path = os.path.join(frames_path, frame_name)\r\n im = Image.open(frame_path)\r\n np_frames.append(np.asarray(im))\r\n im.close()\r\n```\r\n\r\nThe process takes 0.3% of memory, even after 1000 examples on the small machine with 120GB RAM.\r\n\r\nI guess something in the datasets library doesn't release the reference to the objects I'm yielding, but no idea how to test for this",
"I've had similar issues with Arrow once. I'll investigate...\r\n\r\nFor now maybe we can simply use the images paths in the dataset you want to add. I don't expect to fix this memory issue until 1-2 weeks unfortunately. Then we can just update the dataset with the images. What do you think ?",
"If it's just 1-2 weeks, I think it's best if we wait. I don't think it is very urgent to add it, and it will be much more useful with the images loaded rather than not (the images are low resolution, and thus papers using this dataset actually fit the entire video into memory anyway)\r\n\r\nI'll keep working on other datasets in the meanwhile :) ",
"Ok found the issue. This is because the batch size used by the writer is set to 10 000 elements by default so it would load your full dataset in memory (the writer has a buffer that flushes only after each batch). Moreover to write in Apache Arrow we have to use python objects so what's stored inside the ArrowWriter's buffer is actually python integers (32 bits).\r\n\r\nLowering the batch size to 10 should do the job.\r\n\r\nI will add a flag to the DatasetBuilder class of dataset scripts, so that we can customize the batch size.",
"Thanks, that's awesome you managed to find the problem.\r\n\r\nAbout the 32 bits - really? there isn't a way to serialize the numpy array somehow? 32 bits would take 4 times the memory / disk space needed to store these videos.\r\n\r\nPlease let me know when the batch size is customizable and I'll try again!",
"The 32 bit integrers are only used in the writer's buffer because Arrow doesn't take numpy arrays correctly as input. On disk it's stored as uint8 in arrow format ;)",
"> I don't expect to fix this memory issue until 1-2 weeks unfortunately.\r\n\r\nHi @lhoestq \r\nnot to rush of course, but I was wondering if you have a new timeline so I know how to plan my work around this :) ",
"Hi ! Next week for sure :) ",
"Alright it should be good now.\r\nYou just have to specify `_writer_batch_size = 10` for example as a class attribute of the dataset builder class.",
"I added it, but still it consumes as much memory\r\n\r\nhttps://github.com/huggingface/datasets/pull/722/files#diff-2e0d865dd4a60dedd1861d6f8c5ed281ded71508467908e1e0b1dbe7d2d420b1R66\r\n\r\nDid I not do it correctly?",
"Yes you did it right.\r\nDid you rebase to include the changes of #828 ?\r\n\r\nEDIT: looks like you merged from master in the PR. Not sure why you still have an issue then, I will investigate",
"Hi @lhoestq, any update on this?\r\nPerhaps even a direction I could try myself?",
"Sorry for the delay, I was busy with the dataset sprint and the incredible amount of contributions to the library ^^'\r\n\r\nWhat you can try to do to find what's wrong is check at which frequency the arrow writer writes all the examples from its in-memory buffer on disk. This happens [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L257-L258) in the code.\r\n\r\nThe idea is that `write_on_file` writes the examples every `writer_batch_size` examples and clear the buffer `self. current_rows`. As soon as `writer_batch_size` is small enough you shouldn't have memory issues in theory.\r\n\r\nLet me know if you have questions or if I can help.\r\n\r\nSince the dataset sprint is over and I will also be done with all the PRs soon I will be able to go back at it and take a look.",
"Thanks. I gave it a try and no success. I'm not sure what's happening there",
"I had the same issue. It works for me by setting `DEFAULT_WRITER_BATCH_SIZE = 10` of my dataset builder class. (And not `_writer_batch_size` as previously mentioned). I guess this is because `_writer_batch_size` is overwritten in `__init__` (see [here](https://github.com/huggingface/datasets/blob/0e2563e5d5c2fc193ea27d7c24607bb35607f2d5/src/datasets/builder.py#L934))",
"Yes the class attribute you can change is `DEFAULT_WRITER_BATCH_SIZE`.\r\nOtherwise in `load_dataset` you can specify `writer_batch_size=`",
"Ok thanks for the tips. Maybe the documentation should be updated accordingly https://huggingface.co/docs/datasets/add_dataset.html.",
"Thanks for reporting this mistake in the docs.\r\nI just fixed it at https://github.com/huggingface/datasets/commit/85cf7ff920c90ca2e12bedca12b36d2a043c3da2"
] | 1,603,001,226,000 | 1,617,097,628,000 | null | CONTRIBUTOR | null | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examples. """
filepath = os.path.join(base_path, "annotations", "manual", "PHOENIX-2014-T." + split + ".corpus.csv")
images_path = os.path.join(base_path, "features", "fullFrame-210x260px", split)
with open(filepath, "r", encoding="utf-8") as f:
data = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE)
for row in data:
frames_path = os.path.join(images_path, row["video"])[:-7]
np_frames = []
for frame_name in os.listdir(frames_path):
frame_path = os.path.join(frames_path, frame_name)
im = Image.open(frame_path)
np_frames.append(np.asarray(im))
im.close()
yield row["name"], {"video": np_frames}
```
The dataset creation process goes out of memory on a machine with 500GB RAM.
I was under the impression that the "generator" here is exactly for that, to avoid memory constraints.
However, even if you want the entire dataset in memory, it would be in the worst case
`260x210x3 x 400 max length x 7000 samples` in bytes (uint8) = 458.64 gigabytes
So I'm not sure why it's taking more than 500GB.
And the dataset creation fails after 170 examples on a machine with 120GB RAM, and after 672 examples on a machine with 500GB RAM.
---
## Info that might help:
Iterating over examples is extremely slow.
![image](https://user-images.githubusercontent.com/5757359/96359590-3c666780-111d-11eb-9347-1f833ad982a9.png)
If I perform this iteration in my own, custom loop (Without saving to file), it runs at 8-9 examples/sec
And you can see at this state it is using 94% of the memory:
![image](https://user-images.githubusercontent.com/5757359/96359606-7afc2200-111d-11eb-8c11-0afbdba1a6a3.png)
And it is only using one CPU core, which is probably why it's so slow:
![image](https://user-images.githubusercontent.com/5757359/96359630-a3841c00-111d-11eb-9ba0-7fd3cdf51d26.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/741/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/740/comments | https://api.github.com/repos/huggingface/datasets/issues/740/events | https://github.com/huggingface/datasets/pull/740 | 723,047,958 | MDExOlB1bGxSZXF1ZXN0NTA0NzAyNTc0 | 740 | Fix TREC urls | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,602,839,488,000 | 1,603,097,677,000 | 1,603,097,676,000 | MEMBER | null | The old TREC urls are now redirections.
I updated the urls to the new ones, since we don't support redirections for downloads.
Fix #737 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/740/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/740",
"html_url": "https://github.com/huggingface/datasets/pull/740",
"diff_url": "https://github.com/huggingface/datasets/pull/740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/740.patch",
"merged_at": 1603097675000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/739/comments | https://api.github.com/repos/huggingface/datasets/issues/739/events | https://github.com/huggingface/datasets/pull/739 | 723,044,066 | MDExOlB1bGxSZXF1ZXN0NTA0Njk5NTY3 | 739 | Add wiki dpr multiset embeddings | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"I still have to compute the dataset_infos, and build + host the indexes",
"update: I'm computing the metadata, will update the PR soon",
"Finally all green and ready to merge :)"
] | 1,602,839,149,000 | 1,606,399,370,000 | 1,606,399,369,000 | MEMBER | null | There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset.
Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset.
In the configuration you can now specify `embeddings_name="nq"` or `embeddings_name="multiset"` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/739/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/739",
"html_url": "https://github.com/huggingface/datasets/pull/739",
"diff_url": "https://github.com/huggingface/datasets/pull/739.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/739.patch",
"merged_at": 1606399369000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/738/comments | https://api.github.com/repos/huggingface/datasets/issues/738/events | https://github.com/huggingface/datasets/pull/738 | 723,033,923 | MDExOlB1bGxSZXF1ZXN0NTA0NjkxNjM4 | 738 | Replace seqeval code with original classification_report for simplicity | {
"login": "Hironsan",
"id": 6737785,
"node_id": "MDQ6VXNlcjY3Mzc3ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6737785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hironsan",
"html_url": "https://github.com/Hironsan",
"followers_url": "https://api.github.com/users/Hironsan/followers",
"following_url": "https://api.github.com/users/Hironsan/following{/other_user}",
"gists_url": "https://api.github.com/users/Hironsan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hironsan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hironsan/subscriptions",
"organizations_url": "https://api.github.com/users/Hironsan/orgs",
"repos_url": "https://api.github.com/users/Hironsan/repos",
"events_url": "https://api.github.com/users/Hironsan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hironsan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hello,\r\n\r\nI ran https://github.com/huggingface/transformers/blob/master/examples/token-classification/run.sh\r\n\r\nAnd received this error:\r\n```\r\n100%|██████████| 407/407 [21:37<00:00, 3.44s/it]Traceback (most recent call last):\r\n File \"run_ner.py\", line 445, in <module>\r\n main()\r\n File \"run_ner.py\", line 398, in main\r\n results = trainer.evaluate()\r\n File \"/data/2021/transformers/src/transformers/trainer.py\", line 1470, in evaluate\r\n metric_key_prefix=metric_key_prefix,\r\n File \"/data/2021/transformers/src/transformers/trainer.py\", line 1622, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"run_ner.py\", line 345, in compute_metrics\r\n results = metric.compute(predictions=true_predictions, references=true_labels)\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/metric.py\", line 398, in compute\r\n output = self._compute(predictions=predictions, references=references, **kwargs)\r\n File \"/root/.cache/huggingface/modules/datasets_modules/metrics/seqeval/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff/seqeval.py\", line 97, in _compute\r\n report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True)\r\nTypeError: classification_report() got an unexpected keyword argument 'output_dict'\r\n```\r\n\r\nI'm still trying multiple things to see if I can work around this, but I thought it might be useful to mention it here.\r\n\r\n```\r\nName: transformers\r\nVersion: 4.3.0.dev0\r\n\r\nName: datasets\r\nVersion: 1.2.1\r\n```",
"Hi, can you try to update your local installation of `seqeval` ?\r\n\r\n```\r\npip install --upgrade seqeval\r\n```",
"@lhoestq thanks for the reply. Indeed it was some issue with my setup. I removed the \"transformers\" and \"datasets\" (that I had previously installed from the source code), cleared the cache and installed everything again. It works great now!"
] | 1,602,838,305,000 | 1,611,245,235,000 | 1,603,103,472,000 | CONTRIBUTOR | null | Recently, the original seqeval has enabled us to get per type scores and overall scores as a dictionary.
This PR replaces the current code with the original function (`classification_report`) to simplify it.
Also, the original code has been updated to fix #352.
- Related issue: https://github.com/chakki-works/seqeval/pull/38
```python
from datasets import load_metric
metric = load_metric("seqeval")
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
metric.compute(predictions=y_pred, references=y_true)
# Output: {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/738/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/738",
"html_url": "https://github.com/huggingface/datasets/pull/738",
"diff_url": "https://github.com/huggingface/datasets/pull/738.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/738.patch",
"merged_at": 1603103471000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/737/comments | https://api.github.com/repos/huggingface/datasets/issues/737/events | https://github.com/huggingface/datasets/issues/737 | 722,463,923 | MDU6SXNzdWU3MjI0NjM5MjM= | 737 | Trec Dataset Connection Error | {
"login": "aychang95",
"id": 10554495,
"node_id": "MDQ6VXNlcjEwNTU0NDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/10554495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aychang95",
"html_url": "https://github.com/aychang95",
"followers_url": "https://api.github.com/users/aychang95/followers",
"following_url": "https://api.github.com/users/aychang95/following{/other_user}",
"gists_url": "https://api.github.com/users/aychang95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aychang95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aychang95/subscriptions",
"organizations_url": "https://api.github.com/users/aychang95/orgs",
"repos_url": "https://api.github.com/users/aychang95/repos",
"events_url": "https://api.github.com/users/aychang95/events{/privacy}",
"received_events_url": "https://api.github.com/users/aychang95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks for reporting.\r\nThat's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.\r\n\r\nI'm opening a PR to update the url"
] | 1,602,777,473,000 | 1,603,097,676,000 | 1,603,097,676,000 | NONE | null | **Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken)
<details>
<summary>Error Logs</summary>
Using custom data configuration default
Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-8-66bf1242096e> in <module>()
----> 1 load_dataset("trec")
10 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)
473 elif response is not None and response.status_code == 404:
474 raise FileNotFoundError("Couldn't find file at {}".format(url))
--> 475 raise ConnectionError("Couldn't reach {}".format(url))
476
477 # Try a second time
ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label
</details> | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/737/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/736/comments | https://api.github.com/repos/huggingface/datasets/issues/736/events | https://github.com/huggingface/datasets/pull/736 | 722,348,191 | MDExOlB1bGxSZXF1ZXN0NTA0MTE0MjMy | 736 | Start community-provided dataset docs | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"can you also reference the `--organization` flag like in https://github.com/huggingface/transformers/blob/master/docs/source/model_sharing.rst#upload-your-model-with-the-cli ?",
"done!",
"Not sure if the changes in `datasets/wmt_t2t/wmt_utils.py` are intentional.\r\nIf you want to add more configs to wmt, could you do it in a serapate PR ?",
"I don't think I changed wmt_utils (I think github is wrong or my setup is poorly configured).\r\n\r\nLocally git diff master --name-only says one file. Master is up to date.\r\nTried to make a new PR #755 and the same thing happened.",
"Trying new fork."
] | 1,602,769,299,000 | 1,603,458,928,000 | 1,603,458,928,000 | CONTRIBUTOR | null | This is one I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs.
+ In slack @thomwolf called it a `user-namespace` dataset, but the docs call it `community dataset`.
I think the first naming is clearer, but I didn't address that here.
+ I didn't add metadata, will try that. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/736/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/736",
"html_url": "https://github.com/huggingface/datasets/pull/736",
"diff_url": "https://github.com/huggingface/datasets/pull/736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/736.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/735/comments | https://api.github.com/repos/huggingface/datasets/issues/735/events | https://github.com/huggingface/datasets/issues/735 | 722,225,270 | MDU6SXNzdWU3MjIyMjUyNzA= | 735 | Throw error when an unexpected key is used in data_files | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks for reporting !\r\nWe'll add support for other keys"
] | 1,602,759,327,000 | 1,604,064,232,000 | 1,604,064,232,000 | CONTRIBUTOR | null | I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other ones, those attached files are silently ignored - leading to unexpected behaviour for the users.
So the following, unintuitively, returns only one key (namely `train`).
```python
datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f})
print(datasets.keys())
# dict_keys(['train'])
```
whereas using `validation` instead, does return the expected result:
```python
datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f})
print(datasets.keys())
# dict_keys(['train', 'validation'])
```
I would like to see more freedom in which keys one can use, but if that is not possible at least an error should be thrown when using an unexpected key. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/735/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/734/comments | https://api.github.com/repos/huggingface/datasets/issues/734/events | https://github.com/huggingface/datasets/pull/734 | 721,767,848 | MDExOlB1bGxSZXF1ZXN0NTAzNjMwMDcz | 734 | Fix GLUE metric description | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,602,708,254,000 | 1,602,754,063,000 | 1,602,754,062,000 | MEMBER | null | Small typo: the description says translation instead of prediction. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/734/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/734",
"html_url": "https://github.com/huggingface/datasets/pull/734",
"diff_url": "https://github.com/huggingface/datasets/pull/734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/734.patch",
"merged_at": 1602754062000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/733/comments | https://api.github.com/repos/huggingface/datasets/issues/733/events | https://github.com/huggingface/datasets/pull/733 | 721,366,744 | MDExOlB1bGxSZXF1ZXN0NTAzMjk2NDQw | 733 | Update link to dataset viewer | {
"login": "negedng",
"id": 12969168,
"node_id": "MDQ6VXNlcjEyOTY5MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/12969168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/negedng",
"html_url": "https://github.com/negedng",
"followers_url": "https://api.github.com/users/negedng/followers",
"following_url": "https://api.github.com/users/negedng/following{/other_user}",
"gists_url": "https://api.github.com/users/negedng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/negedng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/negedng/subscriptions",
"organizations_url": "https://api.github.com/users/negedng/orgs",
"repos_url": "https://api.github.com/users/negedng/repos",
"events_url": "https://api.github.com/users/negedng/events{/privacy}",
"received_events_url": "https://api.github.com/users/negedng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,602,674,003,000 | 1,602,684,451,000 | 1,602,684,451,000 | CONTRIBUTOR | null | Change 404 error links in quick tour to working ones | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/733/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/733",
"html_url": "https://github.com/huggingface/datasets/pull/733",
"diff_url": "https://github.com/huggingface/datasets/pull/733.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/733.patch",
"merged_at": 1602684451000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/732/comments | https://api.github.com/repos/huggingface/datasets/issues/732/events | https://github.com/huggingface/datasets/pull/732 | 721,359,448 | MDExOlB1bGxSZXF1ZXN0NTAzMjkwMjEy | 732 | dataset(wlasl): initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Followup: \r\nFrom the info in https://github.com/huggingface/datasets/pull/722, I probably should load the videos as array of frames directly into the database. \r\nThis will make the dataset generation time very long, but will make working with the dataset much easier.",
"When I run:\r\n```\r\npython datasets-cli dummy_data datasets/wlasl\r\n```\r\n\r\nI get:\r\n```\r\nChecking datasets/wlasl/wlasl.py for additional imports. \r\nFound main folder for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl \r\nFound specific version folder for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786 \r\nFound script file from datasets/wlasl/wlasl.py to /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/wlasl.py \r\nFound dataset infos file from datasets/wlasl/dataset_infos.json to /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/dataset_infos.json \r\nFound metadata file for dataset datasets/wlasl/wlasl.py at /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786/wlasl.json \r\nUsing custom data configuration default \r\nLoading Dataset Infos from /home/nlp/amit/.cache/huggingface/modules/datasets_modules/datasets/wlasl/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786\r\nCreating dummy folder structure for datasets/wlasl/dummy/0.3.0... \r\nDataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data. \r\nTraceback (most recent call last): \r\nFile \"datasets-cli\", line 36, in \r\nservice.run() File \"/home/nlp/amit/anaconda2/envs/meta-scholar/lib/python3.7/site-packages/datasets-1.1.2-py3.7.egg/datasets/commands/dummy_data.py\", line 73, in run \r\nfor split in generator_splits: \r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```"
] | 1,602,673,302,000 | 1,616,480,383,000 | 1,616,480,383,000 | CONTRIBUTOR | null | takes like 9-10 hours to download all of the videos for the dataset, but it does finish :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/732/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/732",
"html_url": "https://github.com/huggingface/datasets/pull/732",
"diff_url": "https://github.com/huggingface/datasets/pull/732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/732.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/731/comments | https://api.github.com/repos/huggingface/datasets/issues/731/events | https://github.com/huggingface/datasets/pull/731 | 721,142,985 | MDExOlB1bGxSZXF1ZXN0NTAzMTExNzc4 | 731 | dataset(aslg_pc12): initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks @lhoestq \r\nAre there any guidelines for the dummy data?\r\nIn this particular case for example, the dataset fetches from two hardcoded URLs. \r\nDo I just `head -n 10` both files and zip them?\r\n\r\n",
"> Thanks @lhoestq\r\n> Are there any guidelines for the dummy data?\r\n> In this particular case for example, the dataset fetches from two hardcoded URLs.\r\n> Do I just `head -n 10` both files and zip them?\r\n\r\nYes the idea is just to have a few examples to properly test the script and make sure it keeps working in the long run.\r\n\r\nAnd FYI there's a command to help you name the dummy data files correctly. More info in the documentation [here](https://huggingface.co/docs/datasets/share_dataset.html#adding-dummy-data)",
"@lhoestq passes all tests"
] | 1,602,652,477,000 | 1,603,898,826,000 | 1,603,898,826,000 | CONTRIBUTOR | null | This contains the only current public part of this corpus.
The rest of the corpus has not yet been made public, but this sample is still being used by researchers. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/731/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/731",
"html_url": "https://github.com/huggingface/datasets/pull/731",
"diff_url": "https://github.com/huggingface/datasets/pull/731.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/731.patch",
"merged_at": 1603898826000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/730/comments | https://api.github.com/repos/huggingface/datasets/issues/730/events | https://github.com/huggingface/datasets/issues/730 | 721,073,812 | MDU6SXNzdWU3MjEwNzM4MTI= | 730 | Possible caching bug | {
"login": "ArneBinder",
"id": 3375489,
"node_id": "MDQ6VXNlcjMzNzU0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArneBinder",
"html_url": "https://github.com/ArneBinder",
"followers_url": "https://api.github.com/users/ArneBinder/followers",
"following_url": "https://api.github.com/users/ArneBinder/following{/other_user}",
"gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions",
"organizations_url": "https://api.github.com/users/ArneBinder/orgs",
"repos_url": "https://api.github.com/users/ArneBinder/repos",
"events_url": "https://api.github.com/users/ArneBinder/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArneBinder/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Thanks for reporting. That's a bug indeed.\r\nApparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`)",
"Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command \r\n`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`\r\n\r\nchange the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html\r\n`dataset = datasets.load_dataset('json', data_files=args.dataset)`\r\n\r\nErrors:\r\n`Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-c1e124ad488911b8/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264...\r\n`"
] | 1,602,640,954,000 | 1,638,109,737,000 | 1,603,964,161,000 | NONE | null | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produces this output:
```
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
```
Just changing the order (and deleting the temp files):
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
```
produces this:
```
Using custom data configuration default
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': '🤗🤗🤗'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': '🤗🤗🤗'}
```
Is it intended that the cache path does not depend on the config entries?
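Until the cache fingerprint takes the config kwargs into account, a possible workaround (assuming `download_mode="force_redownload"` behaves as documented and forces the dataset to be regenerated) is:
```python
from datasets import load_dataset

# Force regeneration so the new `encoding` is actually applied instead of
# silently reusing the cache written with the previous encoding.
dataset = load_dataset(
    "text",
    data_files=["test1.txt"],
    split="train",
    encoding="utf-8",
    download_mode="force_redownload",
)
print(dataset[0])
```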
tested with datasets==1.1.2 and python==3.8.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/730/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/729/comments | https://api.github.com/repos/huggingface/datasets/issues/729/events | https://github.com/huggingface/datasets/issues/729 | 719,558,876 | MDU6SXNzdWU3MTk1NTg4NzY= | 729 | Better error message when one forgets to call `add_batch` before `compute` | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,602,525,562,000 | 1,603,984,704,000 | 1,603,984,704,000 | MEMBER | null | When using metrics, if for some reason a user forgets to call `add_batch` on a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer.
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
description="description",
citation="citation",
inputs_description="kwargs",
features=datasets.Features({
'predictions': datasets.Value('int64'),
'references': datasets.Value('int64'),
}),
codebase_urls=[],
reference_urls=[],
format='numpy'
)
def _compute(self, predictions, references):
return {"predictions": predictions, "labels": references}
metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
pass # User forgets to call `add_batch`
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-13-267729d187fa> in <module>
3 pass
4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 5 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
343 elif self.process_id == 0:
344 # Let's acquire a lock on each node files to be sure they are finished writing
--> 345 file_paths, filelocks = self._get_all_cache_files()
346
347 # Read the predictions and references
~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self)
280 filelocks = []
281 for process_id, file_path in enumerate(file_paths):
--> 282 filelock = FileLock(file_path + ".lock")
283 try:
284 filelock.acquire(timeout=self.timeout)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
```
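For reference, the call pattern that the reproducer deliberately skips (i.e. what a correct loop looks like) is:
```python
for i in range(0, 1024, batch_size):
    metric.add_batch(
        predictions=inputs[i : i + batch_size],
        references=targets[i : i + batch_size],
    )
result = metric.compute()  # fine once at least one batch has been added
```
A check in `compute` that raises something like "No predictions were added; did you forget to call `add_batch`?" would make the failure obvious.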
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/729/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/728/comments | https://api.github.com/repos/huggingface/datasets/issues/728/events | https://github.com/huggingface/datasets/issues/728 | 719,555,780 | MDU6SXNzdWU3MTk1NTU3ODA= | 728 | Passing `cache_dir` to a metric does not work | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,602,525,314,000 | 1,603,964,082,000 | 1,603,964,082,000 | MEMBER | null | When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
description="description",
citation="citation",
inputs_description="kwargs",
features=datasets.Features({
'predictions': datasets.Value('int64'),
'references': datasets.Value('int64'),
}),
codebase_urls=[],
reference_urls=[],
format='numpy'
)
def _compute(self, predictions, references):
return {"predictions": predictions, "labels": references}
metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
result = metric.compute()
```
## Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~/git/datasets/src/datasets/metric.py in _finalize(self)
349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))
--> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
351 except FileNotFoundError:
~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions)
227 # Prepend path to filename
--> 228 pa_table = self._read_files(files)
229 files = copy.deepcopy(files)
~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files)
166 for f_dict in files:
--> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
168 pa_tables.append(pa_table)
~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take)
291 )
--> 292 mmap = pa.memory_map(filename)
293 f = pa.ipc.open_stream(mmap)
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-17-e42d43cc981f> in <module>
2 for i in range(0, 1024, batch_size):
3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 4 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
351 except FileNotFoundError:
352 raise ValueError(
--> 353 "Error in finalize: another metric instance is already using the local cache file. "
354 "Please specify an experiment_id to avoid colision between distributed metric instances."
355 )
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.
```
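Note the doubled `test-metric/gather_metric/default/` prefix in the failing path above. A minimal, purely hypothetical illustration of how that kind of duplication can arise when a relative cache directory is prepended to file names that already contain it (this is not the library's actual code, just the failure mode):
```python
import os

cache_dir = "test-metric/gather_metric/default"
# the recorded cache file name already carries the relative prefix...
cache_file = os.path.join(cache_dir, "default_experiment-1-0.arrow")
# ...so prepending the directory again when reading doubles it
reader_path = os.path.join(cache_dir, cache_file)
print(reader_path)
# test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow
```
If that is the cause, an absolute `cache_dir` might sidestep it, since `os.path.join` discards the first argument when the second one is absolute, but I have not verified this.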
The code works when we remove the `cache_dir=...` from the metric. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/728/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/727/comments | https://api.github.com/repos/huggingface/datasets/issues/727/events | https://github.com/huggingface/datasets/issues/727 | 719,386,366 | MDU6SXNzdWU3MTkzODYzNjY= | 727 | Parallel downloads progress bar flickers | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [] | 1,602,509,765,000 | 1,602,509,765,000 | null | MEMBER | null | When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that we could simply specify `position=i`, for i = 0 to n where n is the number of files to download, when instantiating each tqdm progress bar.
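A minimal sketch of that first option (hypothetical file names, one worker thread per file, just to show the effect of `position`):
```python
import time
from concurrent.futures import ThreadPoolExecutor

from tqdm import tqdm

def download(position, name, total=100):
    # `position` pins each bar to its own terminal line, so concurrent
    # updates no longer overwrite each other.
    with tqdm(total=total, desc=name, position=position, leave=True) as bar:
        for _ in range(total):
            time.sleep(0.01)  # stand-in for fetching a chunk
            bar.update(1)

files = ["file_a", "file_b", "file_c"]
with ThreadPoolExecutor(max_workers=len(files)) as pool:
    for i, name in enumerate(files):
        pool.submit(download, i, name)
```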
Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that shows its current download. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/727/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/726/comments | https://api.github.com/repos/huggingface/datasets/issues/726/events | https://github.com/huggingface/datasets/issues/726 | 719,313,754 | MDU6SXNzdWU3MTkzMTM3NTQ= | 726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | {
"login": "SparkJiao",
"id": 16469472,
"node_id": "MDQ6VXNlcjE2NDY5NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/16469472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SparkJiao",
"html_url": "https://github.com/SparkJiao",
"followers_url": "https://api.github.com/users/SparkJiao/followers",
"following_url": "https://api.github.com/users/SparkJiao/following{/other_user}",
"gists_url": "https://api.github.com/users/SparkJiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SparkJiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SparkJiao/subscriptions",
"organizations_url": "https://api.github.com/users/SparkJiao/orgs",
"repos_url": "https://api.github.com/users/SparkJiao/repos",
"events_url": "https://api.github.com/users/SparkJiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/SparkJiao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Hi try, to provide more information please.\r\n\r\nExample code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).",
"> Hi try, to provide more information please.\r\n> \r\n> Example code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version).\r\n\r\nI have update the description, sorry for the incomplete issue by mistake.",
"Hi, I have manually downloaded the compressed dataset `openwebtext.tar.xz' and use the following command to preprocess the examples:\r\n```\r\n>>> dataset = load_dataset('/home/admin/workspace/datasets/datasets-master/datasets-master/datasets/openwebtext', data_dir='/home/admin/workspace/datasets')\r\nUsing custom data configuration default\r\nDownloading and preparing dataset openwebtext/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/admin/.cache/huggingface/datasets/openwebtext/default/0.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...\r\nDataset openwebtext downloaded and prepared to /home/admin/.cache/huggingface/datasets/openwebtext/default/0.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02. Subsequent calls will reuse this data.\r\n>>> len(dataset['train'])\r\n74571\r\n>>>\r\n```\r\nThe size of the pre-processed example file is only 354MB, however the processed bookcorpus dataset is 4.6g. Are there any problems?",
"NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n\r\ni got this issue when i try to work on my own datasets kindly tell me, from where i can get checksums of train and dev file in my github repo",
"Hi, I got the similar issue for xnli dataset while working on colab with python3.7. \r\n\r\n`nlp.load_dataset(path = 'xnli')`\r\n\r\nThe above command resulted in following issue : \r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']\r\n```\r\n\r\nAny idea how to fix this ?",
"Did anyone figure out how to fix this error?"
] | 1,602,503,110,000 | 1,633,830,741,000 | null | NONE | null | Hi,
I have encountered this problem while loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://zenodo.org/record/3834942/files/openwebtext.tar.xz']
```
I think this problem is caused by a change in the released dataset. Or should I download the dataset manually?
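If the hosted archive did change, a temporary workaround (at the cost of skipping the integrity check; the flag is visible in the traceback above) is:
```python
from datasets import load_dataset

# Skips the checksum comparison that raises NonMatchingChecksumError.
dataset = load_dataset("openwebtext", ignore_verifications=True)
```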
Sorry for releasing the unfinished issue by mistake. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/726/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/726/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/725/comments | https://api.github.com/repos/huggingface/datasets/issues/725/events | https://github.com/huggingface/datasets/pull/725 | 718,985,641 | MDExOlB1bGxSZXF1ZXN0NTAxMjUxODI1 | 725 | pretty print dataset objects | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Great, as you found it useful I improved the code a bit to automate indentation in the parent class, so that the child repr doesn't need to guess the indentation level, while repr'ing nicely on its own.\r\n\r\n- do we want indent=4 or 2?\r\n- do we want `{` ... `}` or w/o?\r\n\r\ncurrently it's indent4 and w/ curly braces, so it looks:\r\n\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 157252\r\n })\r\n validation: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5599\r\n })\r\n test: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5577\r\n })\r\n})\r\n```\r\njust child:\r\n```\r\nDataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5577\r\n})\r\n```\r\n\r\n",
"Yes! A lot better indeed!"
] | 1,602,468,226,000 | 1,603,470,275,000 | 1,603,443,646,000 | CONTRIBUTOR | null | Currently, if I do:
```
from datasets import load_dataset
load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/")
```
I get:
```
DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None),
'headline': Value(dtype='string', id=None), 'title': Value(dtype='string',
id=None)}, num_rows: 157252), 'validation': Dataset(features: {'text':
Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None)}, num_rows: 5599), 'test':
Dataset(features: {'text': Value(dtype='string', id=None), 'headline':
Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)},
num_rows: 5577)})
```
This is not very readable.
Can we either have a better `__repr__` or have a custom method to nicely pprint the dataset object?
Here is my very simple attempt. With this PR, it produces:
```
DatasetDict({
train: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 157252
})
validation: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 5599
})
test: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 5577
})
})
```
I did omit the data types on purpose to make it more readable, but it shouldn't be too difficult to integrate those too.
Note that this PR also fixes an inconsistency in the output: in master, the enclosing `{}` is missing for `Dataset` but present for `DatasetDict` - or perhaps that was by design.
I'm not at all attached to this format; I just want something more readable. One approach could be to serialize with `json.dumps` or something similar, which would make the indentation simpler.
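For context, here is a stripped-down sketch of the repr pattern this aims for (illustrative only, not the exact code in the diff):
```python
class Dataset:
    def __init__(self, features, num_rows):
        self.features = features
        self.num_rows = num_rows

    def __repr__(self):
        return (
            "Dataset({\n"
            f"    features: {list(self.features)},\n"
            f"    num_rows: {self.num_rows}\n"
            "})"
        )


class DatasetDict(dict):
    def __repr__(self):
        # indent each child repr by one extra level so nesting stays readable
        inner = "\n".join(
            f"    {name}: " + repr(ds).replace("\n", "\n    ")
            for name, ds in self.items()
        )
        return "DatasetDict({\n" + inner + "\n})"
```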
Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/725/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/725",
"html_url": "https://github.com/huggingface/datasets/pull/725",
"diff_url": "https://github.com/huggingface/datasets/pull/725.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/725.patch",
"merged_at": 1603443646000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/724/comments | https://api.github.com/repos/huggingface/datasets/issues/724/events | https://github.com/huggingface/datasets/issues/724 | 718,947,700 | MDU6SXNzdWU3MTg5NDc3MDA= | 724 | need to redirect /nlp to /datasets and remove outdated info | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": null,
"id": null,
"node_id": null,
"avatar_url": null,
"gravatar_id": null,
"url": null,
"html_url": null,
"followers_url": null,
"following_url": null,
"gists_url": null,
"starred_url": null,
"subscriptions_url": null,
"organizations_url": null,
"repos_url": null,
"events_url": null,
"received_events_url": null,
"type": null,
"site_admin": null
} | [] | null | [
"Should be fixed now: \r\n\r\n![image](https://user-images.githubusercontent.com/35882/95917301-040b0600-0d78-11eb-9655-c4ac0e788089.png)\r\n\r\nNot sure I understand what you mean by the second part?\r\n",
"Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* https://huggingface.co/datasets/wikihow\r\n* https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all\r\nCan you see the difference? 2nd has formatting, 1st doesn't.\r\n",
"For context, those are two different pages (not an old vs new one), one is from the dataset viewer (you can browse data inside the datasets) while the other is just a basic reference page displayed some metadata about the dataset.\r\n\r\nFor the second one, we'll move to markdown parsing soon, so it'll be formatted better.",
"I understand. I was just flagging the lack of markup issue."
] | 1,602,457,932,000 | 1,602,694,812,000 | 1,602,694,812,000 | CONTRIBUTOR | null | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
Also, for some reason the new information is slightly borked: the old page was nicely formatted and had the links marked up, while the new one is just a jumble of text in one chunk with no markup for links (i.e. not clickable). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/724/timeline | null | null | {
"url": null,
"html_url": null,
"diff_url": null,
"patch_url": null,
"merged_at": null
} | true |