url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4507/comments | https://api.github.com/repos/huggingface/datasets/issues/4507/events | https://github.com/huggingface/datasets/issues/4507 | 1,272,615,932 | I_kwDODunzps5L2pP8 | 4,507 | How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script | {
"login": "liyucheng09",
"id": 27999909,
"node_id": "MDQ6VXNlcjI3OTk5OTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liyucheng09",
"html_url": "https://github.com/liyucheng09",
"followers_url": "https://api.github.com/users/liyucheng09/followers",
"following_url": "https://api.github.com/users/liyucheng09/following{/other_user}",
"gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions",
"organizations_url": "https://api.github.com/users/liyucheng09/orgs",
"repos_url": "https://api.github.com/users/liyucheng09/repos",
"events_url": "https://api.github.com/users/liyucheng09/events{/privacy}",
"received_events_url": "https://api.github.com/users/liyucheng09/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,655,319,394,000 | 1,655,319,464,000 | null | NONE | null | If the dataset does not need splits (i.e., no training and validation split; it is more like a single table), how can I make the `load_dataset` function return a `Dataset` object directly rather than a `DatasetDict` object with only one key-value pair?
Or, to paraphrase the question: how can I skip the `_split_generators` step in `DatasetBuilder` so that `as_dataset` gives a single `Dataset` rather than a `List[Dataset]`?
Many thanks for any help. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4507/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4506/comments | https://api.github.com/repos/huggingface/datasets/issues/4506/events | https://github.com/huggingface/datasets/issues/4506 | 1,272,516,895 | I_kwDODunzps5L2REf | 4,506 | Failure to hash (and cache) a `.map(...)` (almost always) | {
"login": "DrMatters",
"id": 22641583,
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrMatters",
"html_url": "https://github.com/DrMatters",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Important info:\r\n\r\nAs hashes are generated randomly for functions, it leads to **false identifying some results as already hashed** (mapping is not executed after a method update) when there's a `pytorch_lightning.seed_everything(123)`"
] | 1,655,313,091,000 | 1,655,315,434,000 | null | NONE | null | ## Describe the bug
Sometimes I get messages about not being able to hash a method:
`Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`
Whilst the function looks like this:
```python
@staticmethod
def _separate_speaker_id_from_dialogue(example: arrow_dataset.Example):
speaker_id, dialogue = tuple(zip(*(example["dialogue"])))
example["speaker_id"] = speaker_id
example["dialogue"] = dialogue
return example
```
This is the first step in my preprocessing pipeline, but sometimes the message about the failure to hash does not appear on the first step and only appears on a later step.
This error sometimes prevents cached data from being used, so all steps are re-run from scratch.
## Steps to reproduce the bug
```python
import copy
import datasets
from datasets import arrow_dataset
def main():
dataset = datasets.load_dataset("blended_skill_talk")
res = dataset.map(method)
print(res)
def method(example: arrow_dataset.Example):
example['previous_utterance_copy'] = copy.deepcopy(example['previous_utterance'])
return example
if __name__ == '__main__':
main()
```
Run with:
```
python -m reproduce_error
```
## Expected results
Dataset is mapped and cached correctly.
## Actual results
The code outputs this at some point:
`Parameter 'function'=<function method at 0x7faa83d2a160> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 20.04.3
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Datasets version: 2.3.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4506/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4505/comments | https://api.github.com/repos/huggingface/datasets/issues/4505/events | https://github.com/huggingface/datasets/pull/4505 | 1,272,477,226 | PR_kwDODunzps45uH-o | 4,505 | Fix double dots in data files | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI fails are unrelated to this PR (apparently something related to `seqeval` on windows) - merging :)"
] | 1,655,310,664,000 | 1,655,313,358,000 | 1,655,312,753,000 | MEMBER | null | As mentioned in https://github.com/huggingface/transformers/pull/17715 `data_files` can't find a file if the path contains double dots `/../`. This has been introduced in https://github.com/huggingface/datasets/pull/4412, by trying to ignore hidden files and directories (i.e. if they start with a dot)
I fixed this and added a test
cc @sgugger @ydshieh | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4505/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4505/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4505",
"html_url": "https://github.com/huggingface/datasets/pull/4505",
"diff_url": "https://github.com/huggingface/datasets/pull/4505.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4505.patch",
"merged_at": 1655312753000
} | true |
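The corner case behind PR 4505 above is that a naive "starts with a dot" test for hidden files also matches the `.` and `..` path components. A stand-alone illustration of the distinction (this is not the library's actual implementation, just the general pattern):

```python
def has_hidden_component(path: str) -> bool:
    """Treat a path as hidden only if some component starts with '.'
    and is not the '.' or '..' navigation component."""
    return any(
        part.startswith(".") and part not in (".", "..")
        for part in path.split("/")
    )

print(has_hidden_component("data/../train/file.csv"))  # False: '..' is navigation, not hidden
print(has_hidden_component("data/.cache/file.csv"))    # True: '.cache' is a hidden directory
print(has_hidden_component("./train/file.csv"))        # False: a leading './' is not hidden either
```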
https://api.github.com/repos/huggingface/datasets/issues/4504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4504/comments | https://api.github.com/repos/huggingface/datasets/issues/4504/events | https://github.com/huggingface/datasets/issues/4504 | 1,272,418,480 | I_kwDODunzps5L15Cw | 4,504 | Can you please add the Stanford dog dataset? | {
"login": "dgrnd4",
"id": 69434832,
"node_id": "MDQ6VXNlcjY5NDM0ODMy",
"avatar_url": "https://avatars.githubusercontent.com/u/69434832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dgrnd4",
"html_url": "https://github.com/dgrnd4",
"followers_url": "https://api.github.com/users/dgrnd4/followers",
"following_url": "https://api.github.com/users/dgrnd4/following{/other_user}",
"gists_url": "https://api.github.com/users/dgrnd4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dgrnd4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dgrnd4/subscriptions",
"organizations_url": "https://api.github.com/users/dgrnd4/orgs",
"repos_url": "https://api.github.com/users/dgrnd4/repos",
"events_url": "https://api.github.com/users/dgrnd4/events{/privacy}",
"received_events_url": "https://api.github.com/users/dgrnd4/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"would you like to give it a try, @dgrnd4? (maybe with the help of the dataset author?)",
"@julien-c i am sorry but I have no idea about how it works: can I add the dataset by myself, following \"instructions to add a new dataset\"?\r\nCan I add a dataset even if it's not mine? (it's public in the link that I wrote on the post)\r\n"
] | 1,655,307,575,000 | 1,655,314,455,000 | null | NONE | null | ## Adding a Dataset
- **Name:** *Stanford dog dataset*
- **Description:** *The dataset contains 120 classes for a total of 20,580 images. You can find the dataset here: http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Data:** *[link to the Github repository or current dataset location](http://vision.stanford.edu/aditya86/ImageNetDogs/)*
- **Motivation:** *The dataset has been built using images and annotations from ImageNet for the task of fine-grained image categorization. It is useful for fine-grained classification purposes.*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4504/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4503/comments | https://api.github.com/repos/huggingface/datasets/issues/4503/events | https://github.com/huggingface/datasets/pull/4503 | 1,272,367,055 | PR_kwDODunzps45twLR | 4,503 | Add feverous config to fever dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4503). All of your documentation changes will be reflected on that endpoint."
] | 1,655,305,187,000 | 1,655,305,665,000 | null | MEMBER | null | Related to: #4452 and #3792. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4503/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4503",
"html_url": "https://github.com/huggingface/datasets/pull/4503",
"diff_url": "https://github.com/huggingface/datasets/pull/4503.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4503.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4502/comments | https://api.github.com/repos/huggingface/datasets/issues/4502/events | https://github.com/huggingface/datasets/issues/4502 | 1,272,353,700 | I_kwDODunzps5L1pOk | 4,502 | Logic bug in arrow_writer? | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,655,304,600,000 | 1,655,304,600,000 | null | CONTRIBUTOR | null | https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488
I got an error, and I found it is caused by `batch_examples` being `{}`. I wonder if the code should be as follows:
```
- if batch_examples and len(next(iter(batch_examples.values()))) == 0:
+ if not batch_examples or len(next(iter(batch_examples.values()))) == 0:
return
```
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4502/timeline | null | null | null | null | false |
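The fix suggested in issue 4502 above comes down to how Python truthiness handles an empty dict in the guard clause. A small stand-alone illustration (not the actual `arrow_writer` code, just the two conditions side by side):

```python
batch_examples = {}

# Original guard: an empty dict is falsy, so the whole condition is False,
# the early return is skipped, and the empty batch falls through to code
# that expects at least one column.
original_guard = bool(batch_examples and len(next(iter(batch_examples.values()))) == 0)
print(original_guard)  # False

# Proposed guard: returns early both for an empty dict and for empty columns.
proposed_guard = bool(not batch_examples or len(next(iter(batch_examples.values()))) == 0)
print(proposed_guard)  # True
```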
https://api.github.com/repos/huggingface/datasets/issues/4501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4501/comments | https://api.github.com/repos/huggingface/datasets/issues/4501/events | https://github.com/huggingface/datasets/pull/4501 | 1,272,300,646 | PR_kwDODunzps45th2M | 4,501 | Corrected broken links in doc | {
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655,302,337,000 | 1,655,305,865,000 | 1,655,305,256,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4501/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4501/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4501",
"html_url": "https://github.com/huggingface/datasets/pull/4501",
"diff_url": "https://github.com/huggingface/datasets/pull/4501.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4501.patch",
"merged_at": 1655305256000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4500/comments | https://api.github.com/repos/huggingface/datasets/issues/4500/events | https://github.com/huggingface/datasets/pull/4500 | 1,272,281,992 | PR_kwDODunzps45tdxk | 4,500 | Add `concatenate_datasets` for iterable datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4500). All of your documentation changes will be reflected on that endpoint."
] | 1,655,301,530,000 | 1,655,302,799,000 | null | MEMBER | null | `concatenate_datasets` currently only supports lists of `datasets.Dataset`, not lists of `datasets.IterableDataset` like `interleave_datasets`
Fix https://github.com/huggingface/datasets/issues/2564
I also moved `_interleave_map_style_datasets` from combine.py to arrow_dataset.py, since the logic depends a lot on the `Dataset` object internals
And I moved `concatenate_datasets` from arrow_dataset.py to combine.py to have it with `interleave_datasets` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4500/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4500",
"html_url": "https://github.com/huggingface/datasets/pull/4500",
"diff_url": "https://github.com/huggingface/datasets/pull/4500.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4500.patch",
"merged_at": null
} | true |
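As a usage note for PR 4500 above: once iterable datasets are supported, combining streamed datasets reads the same as for map-style ones. A rough sketch with placeholder dataset names (both repositories are hypothetical and must share the same columns):

```python
from datasets import concatenate_datasets, interleave_datasets, load_dataset

# Streaming mode returns `IterableDataset` objects.
ds1 = load_dataset("some_user/corpus_part1", split="train", streaming=True)
ds2 = load_dataset("some_user/corpus_part2", split="train", streaming=True)

# Already supported: alternate examples from each source.
mixed = interleave_datasets([ds1, ds2])

# What this PR adds: exhaust the first source, then continue with the next.
combined = concatenate_datasets([ds1, ds2])
print(next(iter(combined)))
```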
https://api.github.com/repos/huggingface/datasets/issues/4499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4499/comments | https://api.github.com/repos/huggingface/datasets/issues/4499/events | https://github.com/huggingface/datasets/pull/4499 | 1,272,118,162 | PR_kwDODunzps45s6Jh | 4,499 | fix ETT m1/m2 test/val dataset | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thansk for the fix ! Can you regenerate the datasets_infos.json please ? This way it will update the expected number of examples in the test and val splits",
"ah yes!"
] | 1,655,293,862,000 | 1,655,304,956,000 | 1,655,304,313,000 | CONTRIBUTOR | null | https://huggingface.co/datasets/ett/discussions/1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4499/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4499/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4499",
"html_url": "https://github.com/huggingface/datasets/pull/4499",
"diff_url": "https://github.com/huggingface/datasets/pull/4499.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4499.patch",
"merged_at": 1655304312000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4498/comments | https://api.github.com/repos/huggingface/datasets/issues/4498/events | https://github.com/huggingface/datasets/issues/4498 | 1,272,100,549 | I_kwDODunzps5L0rbF | 4,498 | WER and CER > 1 | {
"login": "sadrasabouri",
"id": 43045767,
"node_id": "MDQ6VXNlcjQzMDQ1NzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/43045767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sadrasabouri",
"html_url": "https://github.com/sadrasabouri",
"followers_url": "https://api.github.com/users/sadrasabouri/followers",
"following_url": "https://api.github.com/users/sadrasabouri/following{/other_user}",
"gists_url": "https://api.github.com/users/sadrasabouri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sadrasabouri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sadrasabouri/subscriptions",
"organizations_url": "https://api.github.com/users/sadrasabouri/orgs",
"repos_url": "https://api.github.com/users/sadrasabouri/repos",
"events_url": "https://api.github.com/users/sadrasabouri/events{/privacy}",
"received_events_url": "https://api.github.com/users/sadrasabouri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"WER can have values bigger than 1.0, this is expected when there are too many insertions\r\n\r\nFrom [wikipedia](https://en.wikipedia.org/wiki/Word_error_rate):\r\n> Note that since N is the number of words in the reference, the word error rate can be larger than 1.0"
] | 1,655,292,912,000 | 1,655,311,085,000 | 1,655,311,085,000 | NONE | null | ## Describe the bug
It seems that in some cases where the `prediction` is longer than the `reference`, we may get a word/character error rate higher than 1, which is a bit odd.
If it's a real bug, I think I can solve it with a PR changing [this](https://github.com/huggingface/datasets/blob/master/metrics/wer/wer.py#L105) line to
```python
return min(incorrect / total, 1.0)
```
## Steps to reproduce the bug
```python
from datasets import load_metric
wer = load_metric("wer")
wer_value = wer.compute(predictions=["Hi World vka"], references=["Hello"])
print(wer_value)
```
## Expected results
```
1.0
```
## Actual results
```
3.0
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4498/timeline | null | completed | null | null | false |
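For reference on issue 4498 above, the behaviour follows directly from the standard definition WER = (S + D + I) / N, where N is the number of words in the *reference*; enough insertions push the value above 1. Re-doing the arithmetic for the example in the issue:

```python
# Reference: "Hello" -> 1 word. Prediction: "Hi World vka" -> 3 words.
substitutions, deletions, insertions = 1, 0, 2
reference_words = 1

wer = (substitutions + deletions + insertions) / reference_words
print(wer)  # 3.0 -- matches the "Actual results" above; values > 1 are expected here
```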
https://api.github.com/repos/huggingface/datasets/issues/4497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4497/comments | https://api.github.com/repos/huggingface/datasets/issues/4497/events | https://github.com/huggingface/datasets/pull/4497 | 1,271,964,338 | PR_kwDODunzps45sYns | 4,497 | Re-add download_manager module in utils | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the fix.\r\n\r\nI'm wondering how this fixes backward compatibility...\r\n\r\nExecuting this code:\r\n```python\r\nfrom datasets.utils.download_manager import DownloadMode\r\n```\r\nwe will have\r\n```python\r\nDownloadMode = None\r\n```\r\n\r\nIf afterwards we use something like:\r\n```python\r\nif download_mode == DownloadMode.FORCE_REDOWNLOAD\r\n```\r\nthat will raise an exception.",
"It works fine on my side:\r\n```python\r\n>>> from datasets.utils.download_manager import DownloadMode\r\n>>> DownloadMode is not None\r\nTrue\r\n```",
"As reported in https://github.com/huggingface/evaluate/pull/143\r\n```python\r\nfrom datasets.utils import DownloadConfig\r\n```\r\nis also missing, I'm re-adding it",
"Took the liberty of merging this one, to do a patch release soon. If we think of a better approach we can improve it later"
] | 1,655,286,273,000 | 1,655,289,208,000 | 1,655,288,624,000 | MEMBER | null | https://github.com/huggingface/datasets/pull/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager`
This breaks `evaluate` which imports `DownloadMode` from `datasets.utils.download_manager`
This PR re-adds `datasets.utils.download_manager` without circular imports.
We could also show a message that says that accessing it is deprecated, but I think we can do this in a subsequent PR, and just focus on doing a patch release for now | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4497/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4497",
"html_url": "https://github.com/huggingface/datasets/pull/4497",
"diff_url": "https://github.com/huggingface/datasets/pull/4497.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4497.patch",
"merged_at": 1655288624000
} | true |
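The backward-compatibility fix described in PR 4497 above amounts to keeping a thin re-export module at the old import path. The snippet below is only a schematic sketch of that pattern (not the actual file contents merged in the PR):

```python
# datasets/utils/download_manager.py -- deprecated shim kept so that old imports
# such as `from datasets.utils.download_manager import DownloadMode` keep working.
from datasets.download.download_manager import DownloadManager, DownloadMode  # noqa: F401
```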
https://api.github.com/repos/huggingface/datasets/issues/4496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4496/comments | https://api.github.com/repos/huggingface/datasets/issues/4496/events | https://github.com/huggingface/datasets/pull/4496 | 1,271,945,704 | PR_kwDODunzps45sUnW | 4,496 | Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4496). All of your documentation changes will be reflected on that endpoint.",
"FYI I used the following regex to look for the `assertEqual` statements where the assertion was being done over a Tuple: `self.assertEqual(.*, \\(.*,)(\\)\\))$`, hope this is useful!"
] | 1,655,285,356,000 | 1,655,286,253,000 | null | CONTRIBUTOR | null | As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between Tuples, in order to make the tests more verbose. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4496/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4496",
"html_url": "https://github.com/huggingface/datasets/pull/4496",
"diff_url": "https://github.com/huggingface/datasets/pull/4496.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4496.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4495/comments | https://api.github.com/repos/huggingface/datasets/issues/4495/events | https://github.com/huggingface/datasets/pull/4495 | 1,271,851,025 | PR_kwDODunzps45sAgO | 4,495 | Fix patching module that doesn't exist | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655,281,070,000 | 1,655,311,249,000 | 1,655,283,249,000 | MEMBER | null | Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true
When trying to patch `scipy.io.loadmat`:
```python
ModuleNotFoundError: No module named 'scipy'
```
Instead, it shouldn't raise an error; it should just do nothing
Bug introduced by #4375
Fix https://github.com/huggingface/datasets/issues/4494 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4495/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4495",
"html_url": "https://github.com/huggingface/datasets/pull/4495",
"diff_url": "https://github.com/huggingface/datasets/pull/4495.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4495.patch",
"merged_at": 1655283249000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4494/comments | https://api.github.com/repos/huggingface/datasets/issues/4494/events | https://github.com/huggingface/datasets/issues/4494 | 1,271,850,599 | I_kwDODunzps5LzuZn | 4,494 | Patching fails for modules that are not installed or don't exist | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,655,281,049,000 | 1,655,283,249,000 | 1,655,283,249,000 | MEMBER | null | Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true
When trying to patch `scipy.io.loadmat`:
```python
ModuleNotFoundError: No module named 'scipy'
```
Instead, it shouldn't raise an error; it should just do nothing
We use patching to extend such functions to support remote URLs and work in streaming mode | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4494/timeline | null | completed | null | null | false |
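A generic way to get the behaviour requested in issue 4494 above is to attempt the import of the optional dependency and silently skip patching when it is missing. This is a sketch of the pattern, not the `datasets` implementation:

```python
import importlib

def patch_if_available(module_path: str, attribute: str, replacement) -> bool:
    """Set `module_path.attribute = replacement` if the module can be imported.

    Returns True if the patch was applied, False if the module is not installed.
    """
    try:
        module = importlib.import_module(module_path)
    except ImportError:  # also covers ModuleNotFoundError
        return False  # missing optional dependency: do nothing instead of raising
    setattr(module, attribute, replacement)
    return True

# Example: only patch scipy.io.loadmat when scipy is actually installed.
patched = patch_if_available("scipy.io", "loadmat", lambda *args, **kwargs: None)
print(patched)
```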
https://api.github.com/repos/huggingface/datasets/issues/4493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4493/comments | https://api.github.com/repos/huggingface/datasets/issues/4493/events | https://github.com/huggingface/datasets/pull/4493 | 1,271,306,385 | PR_kwDODunzps45qL7J | 4,493 | Add `@transmit_format` in `flatten`, `rename_column`, and `rename_columns` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@mariosasko please let me know whether we need to include some sort of tests to make sure that the decorator is working as expected. Thanks! 🤗 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4493). All of your documentation changes will be reflected on that endpoint.",
"Hi, thanks for working on this! Yes, please add (simple) tests so we can avoid any unexpected behavior in the future.\r\n\r\n`@transmit_format` doesn't handle column renaming, so I removed it from `rename_column` and `rename_columns` and added a comment to explain this."
] | 1,655,237,349,000 | 1,655,310,206,000 | null | CONTRIBUTOR | null | As suggested by @mariosasko in https://github.com/huggingface/datasets/pull/4411, we should include the `@format_columns` decorator to `flatten`, `rename_column`, and `rename_columns` so as to ensure that the value of `_format_columns` in an `ArrowDataset` is properly updated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4493/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4493",
"html_url": "https://github.com/huggingface/datasets/pull/4493",
"diff_url": "https://github.com/huggingface/datasets/pull/4493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4493.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4492/comments | https://api.github.com/repos/huggingface/datasets/issues/4492/events | https://github.com/huggingface/datasets/pull/4492 | 1,271,112,497 | PR_kwDODunzps45pktu | 4,492 | Pin the revision in imagenet download links | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655,226,917,000 | 1,655,228,113,000 | 1,655,227,545,000 | MEMBER | null | Use the commit sha in the data files URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example, we may split it into many more shards for better parallelism.
cc @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4492/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4492",
"html_url": "https://github.com/huggingface/datasets/pull/4492",
"diff_url": "https://github.com/huggingface/datasets/pull/4492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4492.patch",
"merged_at": 1655227545000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4491/comments | https://api.github.com/repos/huggingface/datasets/issues/4491/events | https://github.com/huggingface/datasets/issues/4491 | 1,270,803,822 | I_kwDODunzps5Lvu1u | 4,491 | Dataset Viewer issue for Pavithree/test | {
"login": "Pavithree",
"id": 23344465,
"node_id": "MDQ6VXNlcjIzMzQ0NDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pavithree",
"html_url": "https://github.com/Pavithree",
"followers_url": "https://api.github.com/users/Pavithree/followers",
"following_url": "https://api.github.com/users/Pavithree/following{/other_user}",
"gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions",
"organizations_url": "https://api.github.com/users/Pavithree/orgs",
"repos_url": "https://api.github.com/users/Pavithree/repos",
"events_url": "https://api.github.com/users/Pavithree/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pavithree/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This issue can be resolved according to this post https://stackoverflow.com/questions/70566660/parquet-with-null-columns-on-pyarrow. It looks like first data entry in the json file must not have any null values as pyarrow uses this first file to infer schema for entire dataset."
] | 1,655,212,990,000 | 1,655,217,441,000 | 1,655,217,273,000 | NONE | null | ### Link
https://huggingface.co/datasets/Pavithree/test
### Description
I have extracted a subset of the original ELI5 dataset found on Hugging Face. However, while loading the dataset, it throws an `ArrowNotImplementedError: Unsupported cast from string to null using function cast_null` error. Is there anything missing from my end? Kindly help.
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4491/timeline | null | completed | null | null | false |
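For schema-inference errors like the one in issue 4491 above, a user-side workaround is to declare the column types explicitly so that PyArrow does not infer a `null` type from a first file whose column is entirely empty. A hedged sketch with made-up column names and file paths:

```python
from datasets import Features, Value, load_dataset

# Hypothetical columns; adjust the names and types to the actual JSON files.
features = Features(
    {
        "q_id": Value("string"),
        "title": Value("string"),
        "selftext": Value("string"),  # may be null in the first rows of one file
    }
)

dataset = load_dataset(
    "json",
    data_files={"train": "train.json"},  # placeholder path
    features=features,
)
```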
https://api.github.com/repos/huggingface/datasets/issues/4490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4490/comments | https://api.github.com/repos/huggingface/datasets/issues/4490/events | https://github.com/huggingface/datasets/issues/4490 | 1,270,719,074 | I_kwDODunzps5LvaJi | 4,490 | Use `torch.nested_tensor` for arrays of varying length in torch formatter | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,655,209,180,000 | 1,655,209,180,000 | null | CONTRIBUTOR | null | Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter`.
The PyTorch API of nested tensors is in the prototype stage, so wait for it to become more mature. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4490/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4490/timeline | null | null | null | null | false |
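As a pointer for issue 4490 above: the prototype nested-tensor API can already represent ragged batches without padding, though the entry points have moved between PyTorch releases, so treat this as a sketch written against the `torch.nested` namespace of recent versions:

```python
import torch

# Two sequences of different lengths, as they might come out of a text dataset.
a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([4.0, 5.0])

# Prototype API: packs variable-length tensors without padding them.
nt = torch.nested.nested_tensor([a, b])
print(nt.is_nested)  # True

# A padded dense tensor is still available when downstream code needs one.
padded = torch.nested.to_padded_tensor(nt, padding=0.0)
print(padded.shape)  # torch.Size([2, 3])
```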
https://api.github.com/repos/huggingface/datasets/issues/4489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4489/comments | https://api.github.com/repos/huggingface/datasets/issues/4489/events | https://github.com/huggingface/datasets/pull/4489 | 1,270,706,195 | PR_kwDODunzps45oONF | 4,489 | Add SV-Ident dataset | {
"login": "e-tornike",
"id": 20404466,
"node_id": "MDQ6VXNlcjIwNDA0NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20404466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-tornike",
"html_url": "https://github.com/e-tornike",
"followers_url": "https://api.github.com/users/e-tornike/followers",
"following_url": "https://api.github.com/users/e-tornike/following{/other_user}",
"gists_url": "https://api.github.com/users/e-tornike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-tornike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-tornike/subscriptions",
"organizations_url": "https://api.github.com/users/e-tornike/orgs",
"repos_url": "https://api.github.com/users/e-tornike/repos",
"events_url": "https://api.github.com/users/e-tornike/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-tornike/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,655,208,540,000 | 1,655,292,918,000 | null | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4489/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4489",
"html_url": "https://github.com/huggingface/datasets/pull/4489",
"diff_url": "https://github.com/huggingface/datasets/pull/4489.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4489.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4488/comments | https://api.github.com/repos/huggingface/datasets/issues/4488/events | https://github.com/huggingface/datasets/pull/4488 | 1,270,613,857 | PR_kwDODunzps45n6Ja | 4,488 | Update PASS dataset version | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655,203,634,000 | 1,655,224,915,000 | 1,655,224,348,000 | CONTRIBUTOR | null | Update the PASS dataset to version v3 (the newest one) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt).
PS: The older versions are not exposed as configs in the script because v1 was removed from Zenodo, and the same thing will probably happen to v2. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4488/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4488",
"html_url": "https://github.com/huggingface/datasets/pull/4488",
"diff_url": "https://github.com/huggingface/datasets/pull/4488.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4488.patch",
"merged_at": 1655224348000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4487/comments | https://api.github.com/repos/huggingface/datasets/issues/4487/events | https://github.com/huggingface/datasets/pull/4487 | 1,270,525,163 | PR_kwDODunzps45nm5J | 4,487 | Support streaming UDHR dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655,199,213,000 | 1,655,269,762,000 | 1,655,269,189,000 | MEMBER | null | This PR:
- Adds support for streaming the UDHR dataset
- Adds the BCP 47 language code as a feature
"url": "https://api.github.com/repos/huggingface/datasets/issues/4487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4487/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4487",
"html_url": "https://github.com/huggingface/datasets/pull/4487",
"diff_url": "https://github.com/huggingface/datasets/pull/4487.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4487.patch",
"merged_at": 1655269189000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4486/comments | https://api.github.com/repos/huggingface/datasets/issues/4486/events | https://github.com/huggingface/datasets/pull/4486 | 1,269,518,084 | PR_kwDODunzps45kP88 | 4,486 | Add CCAgT dataset | {
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4486). All of your documentation changes will be reflected on that endpoint.",
"Hi! Excellent job @johnnv1! There were typos/missing words in the card, so I took the liberty to rewrite some parts to make them easier to understand. Let me know if you are ok with the changes. Also, feel free to add some info under the `Who are the annotators?` section.\r\n\r\nAdditionally, I fixed the issue with streaming and renamed the `digits` feature to `objects`.\r\n\r\n@lhoestq Are you ok with skipping the dummy data test here as it's tricky to generate it due to the splits separation logic?",
"I think I can also add instance segmentation: by exposing the segment of each instance, so it will be similar with object detection:\r\n\r\n- `instances`: a dictionary containing bounding boxes, segments, and labels of the cell objects \r\n - `bbox`: a list of bounding boxes\r\n - `segment`: a list of segments in format of `[polygon]`, where each polygon is `[x0, y0, ..., xn, yn]`\r\n - `label`: a list of integers representing the category\r\n\r\nDo you think it would be ok?",
"Don't you think it makes sense to keep the same category IDs for all approaches? \r\n\r\nNow we have:\r\n - nucleus category ID equals 0 for object detection and instance segmentation\r\n - background category ID equals 0 (on the masks) for semantic segmentation",
"I find it weird to have a dummy label in object detection just to align the mapping with semantic segmentation. Instead, let's explain in the card (under Data Fields -> annotation) what the pixel values mean (background + object detection labels)"
] | 1,655,130,019,000 | 1,655,314,747,000 | null | NONE | null | As described in #4075
I could not generate the dummy data. Also, the data repository does not provide the split IDs, but I copied the functions that produce the correct data split. In summary, to get a better distribution, the data in this dataset should be split based on the number of NORs in each image. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4486/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4486/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4486",
"html_url": "https://github.com/huggingface/datasets/pull/4486",
"diff_url": "https://github.com/huggingface/datasets/pull/4486.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4486.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4485/comments | https://api.github.com/repos/huggingface/datasets/issues/4485/events | https://github.com/huggingface/datasets/pull/4485 | 1,269,463,054 | PR_kwDODunzps45kD7A | 4,485 | Fix cast to null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,655,127,872,000 | 1,655,214,234,000 | 1,655,213,654,000 | MEMBER | null | It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast integer to null type.
Because of this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new type).
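For illustration, here is a minimal sketch of the underlying pyarrow behavior being wrapped here (an illustration only, assuming a standalone pyarrow install; the values are just an example):
```python
import pyarrow as pa

# Arrow has no implemented cast from a non-empty int64 array to the null type,
# so this raises ArrowNotImplementedError instead of a plain TypeError.
arr = pa.array([1, 2, 3], type=pa.int64())
arr.cast(pa.null())
```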
Fix https://github.com/huggingface/datasets/issues/4483 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4485/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4485/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4485",
"html_url": "https://github.com/huggingface/datasets/pull/4485",
"diff_url": "https://github.com/huggingface/datasets/pull/4485.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4485.patch",
"merged_at": 1655213654000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4484/comments | https://api.github.com/repos/huggingface/datasets/issues/4484/events | https://github.com/huggingface/datasets/pull/4484 | 1,269,383,811 | PR_kwDODunzps45jywZ | 4,484 | Better ImportError message when a dataset script dependency is missing | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Discussed offline with @mariosasko, merging :)"
] | 1,655,124,277,000 | 1,655,128,831,000 | 1,655,128,247,000 | MEMBER | null | When a dependency is missing for a dataset script, an ImportError message is shown, with a tip to install the missing dependencies. This message is not ideal at the moment: it may show duplicate dependencies, and is not very readable.
I improved it from
```
ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance'
```
to
```
ImportError: To be able to use bigbench, you need to install the following dependency: bigbench.
Please install it using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' for instance'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4484/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4484/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4484",
"html_url": "https://github.com/huggingface/datasets/pull/4484",
"diff_url": "https://github.com/huggingface/datasets/pull/4484.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4484.patch",
"merged_at": 1655128247000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4483/comments | https://api.github.com/repos/huggingface/datasets/issues/4483/events | https://github.com/huggingface/datasets/issues/4483 | 1,269,253,840 | I_kwDODunzps5Lp0bQ | 4,483 | Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists | {
"login": "sanderland",
"id": 48946947,
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanderland",
"html_url": "https://github.com/sanderland",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"repos_url": "https://api.github.com/users/sanderland/repos",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @sanderland ! Thanks for reporting :) This is a bug, I opened a PR to fix it. We'll do a new release soon\r\n\r\nIn the meantime you can fix it by specifying in advance that the \"label\" are integers:\r\n```python\r\nimport numpy as np\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"text\": [\"the lazy dog jumps over the quick fox\", \"another sentence\"],\r\n \"label\": [[], []],\r\n }\r\n)\r\n# explicitly say that the \"label\" type is int64, even though it contains only null values\r\nds = ds.cast_column(\"label\", Sequence(Value(\"int64\")))\r\n\r\ndef mapper(features):\r\n features['label'] = [\r\n [0,0,0] for l in features['label']\r\n ]\r\n return features\r\n\r\nds_mapped = ds.map(mapper,batched=True)\r\n```"
] | 1,655,117,272,000 | 1,655,213,654,000 | 1,655,213,654,000 | NONE | null | ## Describe the bug
Dataset.map throws pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null when converting from a type of 'empty lists' to 'lists with some type'.
This appears to be due to the interaction of arrow internals and some assumptions made by datasets.
The bug appeared when binarizing some labels and then adding a dataset which had all these labels absent (to force the model not to label such empty strings with anything).
Particularly the fact that this only happens in batched mode is strange.
## Steps to reproduce the bug
```python
import numpy as np
from datasets import Dataset
ds = Dataset.from_dict(
{
"text": ["the lazy dog jumps over the quick fox", "another sentence"],
"label": [[], []],
}
)
def mapper(features):
features['label'] = [
[0,0,0] for l in features['label']
]
return features
ds_mapped = ds.map(mapper,batched=True)
```
## Expected results
Not crashing
## Actual results
```
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2346: in map
return self._map_single(
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:532: in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:499: in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/fingerprint.py:458: in wrapper
out = func(self, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2751: in _map_single
writer.write_batch(batch)
../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:503: in write_batch
arrays.append(pa.array(typed_sequence))
pyarrow/array.pxi:230: in pyarrow.lib.array
???
pyarrow/array.pxi:110: in pyarrow.lib._handle_arrow_array_protocol
???
../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:198: in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper
return func(array, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/table.py:1812: in cast_array_to_feature
casted_values = _c(array.values, feature.feature)
../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper
return func(array, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/table.py:1843: in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper
return func(array, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/table.py:1752: in array_cast
return array.cast(pa_type)
pyarrow/array.pxi:915: in pyarrow.lib.Array.cast
???
../.venv/lib/python3.8/site-packages/pyarrow/compute.py:376: in cast
return call_function("cast", [arr], options)
pyarrow/_compute.pyx:542: in pyarrow._compute.call_function
???
pyarrow/_compute.pyx:341: in pyarrow._compute.Function.call
???
pyarrow/error.pxi:144: in pyarrow.lib.pyarrow_internal_check_status
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null
pyarrow/error.pxi:121: ArrowNotImplementedError
```
## Workarounds
* Not using batched=True
* Using an `np.array([], dtype=float)` or similar instead of `[]` in the input (see the sketch after this list)
* Naming the output column differently from the input column
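A minimal sketch of the second workaround (reusing the example strings from above and assuming float labels are acceptable):
```python
import numpy as np
from datasets import Dataset

# Giving the empty lists an explicit dtype means the column is never
# inferred as null, so the cast performed inside `map` can succeed.
ds = Dataset.from_dict(
    {
        "text": ["the lazy dog jumps over the quick fox", "another sentence"],
        "label": [np.array([], dtype=float), np.array([], dtype=float)],
    }
)

def mapper(features):
    features["label"] = [[0, 0, 0] for _ in features["label"]]
    return features

ds_mapped = ds.map(mapper, batched=True)
```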
## Environment info
- `datasets` version: 2.2.2
- Platform: Ubuntu
- Python version: 3.8
- PyArrow version: 8.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4483/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4482/comments | https://api.github.com/repos/huggingface/datasets/issues/4482/events | https://github.com/huggingface/datasets/pull/4482 | 1,269,237,447 | PR_kwDODunzps45jS_c | 4,482 | Test that TensorFlow is not imported on startup | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4482). All of your documentation changes will be reflected on that endpoint."
] | 1,655,116,429,000 | 1,655,122,701,000 | null | MEMBER | null | TF takes some time to be imported, and also uses some GPU memory.
I just added a test to make sure that in the future it's never imported by default when
```python
import datasets
```
is called.
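As a rough illustration (an assumption about the approach, not necessarily the exact test added in this PR), such a check can run the import in a fresh interpreter and inspect `sys.modules`:
```python
import subprocess
import sys

# Import datasets in a clean interpreter and fail if TensorFlow was pulled in.
code = "import datasets, sys; assert 'tensorflow' not in sys.modules"
subprocess.check_call([sys.executable, "-c", code])
```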
Right now this fails because `huggingface_hub` does import tensorflow (though this is fixed now on their `main` branch)
I'll mark this PR as ready for review once `huggingface_hub` has a new release | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4482/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4482/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4482",
"html_url": "https://github.com/huggingface/datasets/pull/4482",
"diff_url": "https://github.com/huggingface/datasets/pull/4482.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4482.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4481/comments | https://api.github.com/repos/huggingface/datasets/issues/4481/events | https://github.com/huggingface/datasets/pull/4481 | 1,269,187,792 | PR_kwDODunzps45jIRi | 4,481 | Fix iwslt2017 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"CI fails are just abut missing tags in the dataset card, merging !"
] | 1,655,113,881,000 | 1,655,117,397,000 | 1,655,116,818,000 | MEMBER | null | The files were moved to Google Drive, so I hosted them on the Hub instead (OK according to the license).
I also updated the `datasets_infos.json` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4481/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4481/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4481",
"html_url": "https://github.com/huggingface/datasets/pull/4481",
"diff_url": "https://github.com/huggingface/datasets/pull/4481.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4481.patch",
"merged_at": 1655116818000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4480/comments | https://api.github.com/repos/huggingface/datasets/issues/4480/events | https://github.com/huggingface/datasets/issues/4480 | 1,268,921,567 | I_kwDODunzps5LojTf | 4,480 | Bigbench tensorflow GPU dependency | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting ! :) cc @andersjohanandreassen can you take a look at this ?\r\n\r\nAlso @cceyda feel free to open an issue at [BIG-Bench](https://github.com/google/BIG-bench) as well regarding the `AttributeError`",
"I'm on vacation for the next week, so won't be able to do much debugging at the moment. Sorry for the inconvenience.\r\nBut I did quickly take a look:\r\n\r\n**pypi**:\r\nI managed to reproduce the above error with the pypi version begin out of date. \r\nThe version on `https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz` should be up to date, but it was my understanding that there was some issue with the pypi upload, so I don't even understand why there is a version [on pypi from April 1](https://pypi.org/project/bigbench/0.0.1/). Perhaps @ethansdyer, who's handling the pypi upload, knows the answer to that?\r\n\r\n**OOM error**:\r\nBut, I'm unable to reproduce the OOM error in a google colab with GPU enabled.\r\nThis is what I ran:\r\n```\r\n!pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz\r\n!pip install datasets\r\n\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"bigbench\",\"swedish_to_german_proverbs\")\r\n``` \r\nThe `swedish_to_german_proverbs`task is only 72 examples, so I don't understand what could be causing the OOM error. Loading the task has no effect on the RAM for me. @cceyda Can you confirm that this does not occur in a [colab](https://colab.research.google.com/)?\r\nIf the GPU is somehow causing issues on your system, disabling the GPU from TF might be an option too\r\n```\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\"\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"Solved.\r\nYes it works on colab, and somehow magically on my machine too now. hmm not sure what was wrong before I had used a fresh venv both times with just the dataloading code, and tried multiple times. (maybe just a wrong tensorflow version got mixed up somehow) The tensorflow call seems to come from the bigbench side anyway.\r\n\r\nabout bigbench pypi version update, I opened an issue over there https://github.com/google/BIG-bench/issues/846\r\n\r\nanyway closing this now. If anyone else has the same problem can re-open."
] | 1,655,097,846,000 | 1,655,235,924,000 | 1,655,235,923,000 | CONTRIBUTOR | null | ## Describe the bug
Loading bigbench
```py
from datasets import load_dataset
dataset = load_dataset("bigbench","swedish_to_german_proverbs")
```
tries to use the GPU and fails with OOM, with the following error
```
Downloading and preparing dataset bigbench/swedish_to_german_proverbs (download: Unknown size, generated: 68.92 KiB, post-processed: Unknown size, total: 68.92 KiB) to /home/ceyda/.cache/huggingface/datasets/bigbench/swedish_to_german_proverbs/1.0.0/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0...
Generating default split: 0%| | 0/72 [00:00<?, ? examples/s]2022-06-13 14:11:04.154469: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-13 14:11:05.133600: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 3: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 25396838400
Aborted (core dumped)
```
I think this is because the bigbench dependency (below) installs tensorflow (GPU version) and data loading tries to use the GPU by default.
`pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz`
while just doing 'pip install bigbench' results in the following error
```
File "/home/ceyda/.local/lib/python3.7/site-packages/datasets/load.py", line 109, in import_main_class
module = importlib.import_module(module_path)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 118, in <module>
class Bigbench(datasets.GeneratorBasedBuilder):
File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 127, in Bigbench
BigBenchConfig(name=name, version=datasets.Version("1.0.0")) for name in bb_utils.get_all_json_task_names()
AttributeError: module 'bigbench.api.util' has no attribute 'get_all_json_task_names'
```
## Steps to avoid the bug
Not ideal, but it can be solved with the following (since I don't really use tensorflow elsewhere):
`pip uninstall tensorflow`
`pip install tensorflow-cpu`
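Alternatively (as also suggested in the comments, assuming the GPU is simply not needed for data loading), the GPU can be hidden from TensorFlow before loading the dataset:
```python
import os

# Hide all CUDA devices so the TensorFlow pulled in by bigbench stays on CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

from datasets import load_dataset

dataset = load_dataset("bigbench", "swedish_to_german_proverbs")
```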
## Environment info
- datasets @ master
- Python version: 3.7
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4480/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4480/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4479/comments | https://api.github.com/repos/huggingface/datasets/issues/4479/events | https://github.com/huggingface/datasets/pull/4479 | 1,268,558,237 | PR_kwDODunzps45hHtZ | 4,479 | Include entity positions as feature in ReCoRD | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4479). All of your documentation changes will be reflected on that endpoint."
] | 1,655,034,988,000 | 1,655,036,946,000 | null | CONTRIBUTOR | null | https://huggingface.co/datasets/super_glue/viewer/record/validation
TLDR: We need to record entity positions, which are included in the source data but excluded by the loading script, to enable efficient and effective training for ReCoRD.
Currently, the loading script ignores the entity positions ("entity_start", "entity_end") and only records the entity text. This might be because the training method of the official baseline is to make n training instances from a datapoint by replacing "@placeholder" in the query with each entity individually.
But this increases the already heavy computation multiple-fold. So DeBERTa uses a method that takes entity embeddings by their positions in the passage, and thus makes one training instance from one datapoint. It is far more efficient and has proved effective for the ReCoRD task. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4479/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4479/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4479",
"html_url": "https://github.com/huggingface/datasets/pull/4479",
"diff_url": "https://github.com/huggingface/datasets/pull/4479.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4479.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4478/comments | https://api.github.com/repos/huggingface/datasets/issues/4478/events | https://github.com/huggingface/datasets/issues/4478 | 1,268,358,213 | I_kwDODunzps5LmZxF | 4,478 | Dataset slow during model training | {
"login": "lehrig",
"id": 9555494,
"node_id": "MDQ6VXNlcjk1NTU0OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9555494?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lehrig",
"html_url": "https://github.com/lehrig",
"followers_url": "https://api.github.com/users/lehrig/followers",
"following_url": "https://api.github.com/users/lehrig/following{/other_user}",
"gists_url": "https://api.github.com/users/lehrig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lehrig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lehrig/subscriptions",
"organizations_url": "https://api.github.com/users/lehrig/orgs",
"repos_url": "https://api.github.com/users/lehrig/repos",
"events_url": "https://api.github.com/users/lehrig/events{/privacy}",
"received_events_url": "https://api.github.com/users/lehrig/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! cc @Rocketknight1 maybe you know better ?\r\n\r\nI'm not too familiar with `tf.data.experimental.save`. Note that `datasets` uses memory mapping, so depending on your hardware and the disk you are using you can expect performance differences with a dataset loaded in RAM",
"Hi @lehrig, I suspect what's happening here is that our `to_tf_dataset()` method has some performance issues when streaming samples. This is usually not a problem, but they become apparent when streaming a vision dataset into a very small vision model, which will need a lot of sample throughput to saturate the GPU.\r\n\r\nWhen you save a `tf.data.Dataset` with `tf.data.experimental.save`, all of the samples from the dataset (which are, in this case, batches of images), are saved to disk. When you load this saved dataset, you're effectively bypassing `to_tf_dataset()` entirely, which alleviates this performance bottleneck.\r\n\r\n`to_tf_dataset()` is something we're actively working on overhauling right now - particularly for image datasets, we want to make it possible to access the underlying images with `tf.data` without going through the current layer of indirection with `Arrow`, which should massively improve simplicity and performance. \r\n\r\nHowever, if you just want this to work quickly but without needing your save/load hack, my advice would be to simply load the dataset into memory if it's small enough to fit. Since all your samples have the same dimensions, you can do this simply with:\r\n\r\n```\r\ndataset = load_from_disk(prep_data_dir)\r\ndataset = dataset.with_format(\"numpy\")\r\ndata_in_memory = dataset[:]\r\n```\r\n\r\nThen you can simply do something like:\r\n\r\n```\r\nmodel.fit(data_in_memory[\"pixel_values\"], data_in_memory[\"labels\"])\r\n```",
"Thanks for the information! \r\n\r\nI have now updated the training code like so:\r\n\r\n```\r\ndataset = load_from_disk(prep_data_dir)\r\ntrain_dataset = dataset[\"train\"][:]\r\nvalidation_dataset = dataset[\"dev\"][:]\r\n\r\n...\r\n\r\nmodel.fit(\r\n train_dataset[\"pixel_values\"],\r\n train_dataset[\"label\"],\r\n epochs=epochs,\r\n validation_data=(\r\n validation_dataset[\"pixel_values\"],\r\n validation_dataset[\"label\"]\r\n ),\r\n callbacks=[earlyStopping, mcp_save, reduce_lr_loss]\r\n)\r\n```\r\n\r\n- Creating the in-memory dataset is quite quick\r\n- But: There is now a long wait (~4-5 Minutes) before the training starts (why?)\r\n- And: Training times have improved but the very first epoch leaves me wondering why it takes so long (why?)\r\n\r\n**Epoch Breakdown:**\r\n- Epoch 1/10\r\n78s 12s/step - loss: 3.1307 - accuracy: 0.0737 - val_loss: 2.2827 - val_accuracy: 0.1273 - lr: 0.0010\r\n- Epoch 2/10\r\n1s 168ms/step - loss: 2.3616 - accuracy: 0.2350 - val_loss: 2.2679 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 3/10\r\n1s 189ms/step - loss: 2.0221 - accuracy: 0.3180 - val_loss: 2.2670 - val_accuracy: 0.1818 - lr: 0.0010\r\n- Epoch 4/10\r\n0s 67ms/step - loss: 1.8895 - accuracy: 0.3548 - val_loss: 2.2771 - val_accuracy: 0.1273 - lr: 0.0010\r\n- Epoch 5/10\r\n0s 67ms/step - loss: 1.7846 - accuracy: 0.3963 - val_loss: 2.2860 - val_accuracy: 0.1455 - lr: 0.0010\r\n- Epoch 6/10\r\n0s 65ms/step - loss: 1.5946 - accuracy: 0.4516 - val_loss: 2.2938 - val_accuracy: 0.1636 - lr: 0.0010\r\n- Epoch 7/10\r\n0s 63ms/step - loss: 1.4217 - accuracy: 0.5115 - val_loss: 2.2968 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 8/10\r\n0s 67ms/step - loss: 1.3089 - accuracy: 0.5438 - val_loss: 2.2842 - val_accuracy: 0.2182 - lr: 0.0010\r\n- Epoch 9/10\r\n1s 184ms/step - loss: 1.2480 - accuracy: 0.5806 - val_loss: 2.2652 - val_accuracy: 0.1818 - lr: 0.0010\r\n- Epoch 10/10\r\n0s 65ms/step - loss: 1.2699 - accuracy: 0.5622 - val_loss: 2.2670 - val_accuracy: 0.2000 - lr: 0.0010\r\n\r\n",
"Regarding the new long ~5 min. wait introduced by the in-memory dataset update: this might be causing it? https://datascience.stackexchange.com/questions/33364/why-model-fit-generator-in-keras-is-taking-so-much-time-even-before-picking-the\r\n\r\nFor now, my save/load hack is still more performant, even though having more boiler-plate code :/ ",
"That 5 minute wait is quite surprising! I don't have a good explanation for why it's happening, but it can't be an issue with `datasets` or `tf.data` because you're just fitting directly on Numpy arrays at this point. All I can suggest is seeing if you can isolate the issue - for example, does fitting on a smaller dataset containing only 10% of the original data reduce the wait? This might indicate the delay is caused by your data being copied or converted somehow. Alternatively, you could try removing things like callbacks and seeing if you could isolate the issue there."
] | 1,654,976,419,000 | 1,655,208,271,000 | null | NONE | null | ## Describe the bug
While migrating towards 🤗 Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training.
First, I have optimized my dataset following https://discuss.huggingface.co/t/solved-image-dataset-seems-slow-for-larger-image-size/10960/6, which actually improved the situation from what I had before but did not completely solve it.
Second, I saved and loaded my dataset using `tf.data.experimental.save` and `tf.data.experimental.load` before training (for which I would have expected no performance change). However, I ended up with the performance I had before tinkering with 🤗 Datasets.
Any idea what the reason for this is and how to speed up training with 🤗 Datasets?
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
import os
dataset_dir = "./dataset"
prep_dataset_dir = "./prepdataset"
model_dir = "./model"
# Load Data
dataset = load_dataset("Lehrig/Monkey-Species-Collection", "downsized")
def read_image_file(example):
with open(example["image"].filename, "rb") as f:
example["image"] = {"bytes": f.read()}
return example
dataset = dataset.map(read_image_file)
dataset.save_to_disk(dataset_dir)
# Preprocess
from datasets import (
Array3D,
DatasetDict,
Features,
load_from_disk,
Sequence,
Value
)
import numpy as np
from transformers import ImageFeatureExtractionMixin
dataset = load_from_disk(dataset_dir)
num_classes = dataset["train"].features["label"].num_classes
one_hot_matrix = np.eye(num_classes)
feature_extractor = ImageFeatureExtractionMixin()
def to_pixels(image):
image = feature_extractor.resize(image, size=size)
image = feature_extractor.to_numpy_array(image, channel_first=False)
image = image / 255.0
return image
def process(examples):
examples["pixel_values"] = [
to_pixels(image) for image in examples["image"]
]
examples["label"] = [
one_hot_matrix[label] for label in examples["label"]
]
return examples
features = Features({
"pixel_values": Array3D(dtype="float32", shape=(size, size, 3)),
"label": Sequence(feature=Value(dtype="int32"), length=num_classes)
})
prep_dataset = dataset.map(
process,
remove_columns=["image"],
batched=True,
batch_size=batch_size,
num_proc=2,
features=features,
)
prep_dataset = prep_dataset.with_format("numpy")
# Split
train_dev_dataset = prep_dataset['test'].train_test_split(
test_size=test_size,
shuffle=True,
seed=seed
)
train_dev_test_dataset = DatasetDict({
'train': train_dev_dataset['train'],
'dev': train_dev_dataset['test'],
'test': prep_dataset['test'],
})
train_dev_test_dataset.save_to_disk(prep_dataset_dir)
# Train Model
import datetime
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D, BatchNormalization
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from transformers import DefaultDataCollator
dataset = load_from_disk(prep_dataset_dir)
data_collator = DefaultDataCollator(return_tensors="tf")
train_dataset = dataset["train"].to_tf_dataset(
columns=['pixel_values'],
label_cols=['label'],
shuffle=True,
batch_size=batch_size,
collate_fn=data_collator
)
validation_dataset = dataset["dev"].to_tf_dataset(
columns=['pixel_values'],
label_cols=['label'],
shuffle=False,
batch_size=batch_size,
collate_fn=data_collator
)
print(f'{datetime.datetime.now()} - Saving Data')
tf.data.experimental.save(train_dataset, model_dir+"/train")
tf.data.experimental.save(validation_dataset, model_dir+"/val")
print(f'{datetime.datetime.now()} - Loading Data')
train_dataset = tf.data.experimental.load(model_dir+"/train")
validation_dataset = tf.data.experimental.load(model_dir+"/val")
shape = np.shape(dataset["train"][0]["pixel_values"])
backbone = InceptionV3(
include_top=False,
weights='imagenet',
input_shape=shape
)
for layer in backbone.layers:
layer.trainable = False
model = Sequential()
model.add(backbone)
model.add(GlobalAveragePooling2D())
model.add(Dense(128, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(64, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(10, activation='softmax'))
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
print(model.summary())
earlyStopping = EarlyStopping(
monitor='val_loss',
patience=10,
verbose=0,
mode='min'
)
mcp_save = ModelCheckpoint(
f'{model_dir}/best_model.hdf5',
save_best_only=True,
monitor='val_loss',
mode='min'
)
reduce_lr_loss = ReduceLROnPlateau(
monitor='val_loss',
factor=0.1,
patience=7,
verbose=1,
min_delta=0.0001,
mode='min'
)
hist = model.fit(
train_dataset,
epochs=epochs,
validation_data=validation_dataset,
callbacks=[earlyStopping, mcp_save, reduce_lr_loss]
)
```
## Expected results
Same performance when training without my "save/load hack" or a good explanation/recommendation about the issue.
## Actual results
Performance slower without my "save/load hack".
**Epoch Breakdown (without my "save/load hack"):**
- Epoch 1/10
41s 2s/step - loss: 1.6302 - accuracy: 0.5048 - val_loss: 1.4713 - val_accuracy: 0.3273 - lr: 0.0010
- Epoch 2/10
32s 2s/step - loss: 0.5357 - accuracy: 0.8510 - val_loss: 1.0447 - val_accuracy: 0.5818 - lr: 0.0010
- Epoch 3/10
36s 3s/step - loss: 0.3547 - accuracy: 0.9231 - val_loss: 0.6245 - val_accuracy: 0.7091 - lr: 0.0010
- Epoch 4/10
36s 3s/step - loss: 0.2721 - accuracy: 0.9231 - val_loss: 0.3395 - val_accuracy: 0.9091 - lr: 0.0010
- Epoch 5/10
32s 2s/step - loss: 0.1676 - accuracy: 0.9856 - val_loss: 0.2187 - val_accuracy: 0.9636 - lr: 0.0010
- Epoch 6/10
42s 3s/step - loss: 0.2066 - accuracy: 0.9615 - val_loss: 0.1635 - val_accuracy: 0.9636 - lr: 0.0010
- Epoch 7/10
32s 2s/step - loss: 0.1814 - accuracy: 0.9423 - val_loss: 0.1418 - val_accuracy: 0.9636 - lr: 0.0010
- Epoch 8/10
32s 2s/step - loss: 0.1301 - accuracy: 0.9856 - val_loss: 0.1388 - val_accuracy: 0.9818 - lr: 0.0010
- Epoch 9/10
loss: 0.1102 - accuracy: 0.9856 - val_loss: 0.1185 - val_accuracy: 0.9818 - lr: 0.0010
- Epoch 10/10
32s 2s/step - loss: 0.1013 - accuracy: 0.9808 - val_loss: 0.0978 - val_accuracy: 0.9818 - lr: 0.0010
**Epoch Breakdown (with my "save/load hack"):**
- Epoch 1/10
13s 625ms/step - loss: 3.0478 - accuracy: 0.1146 - val_loss: 2.3061 - val_accuracy: 0.0727 - lr: 0.0010
- Epoch 2/10
0s 80ms/step - loss: 2.3105 - accuracy: 0.2656 - val_loss: 2.3085 - val_accuracy: 0.0909 - lr: 0.0010
- Epoch 3/10
0s 77ms/step - loss: 1.8608 - accuracy: 0.3542 - val_loss: 2.3130 - val_accuracy: 0.0909 - lr: 0.0010
- Epoch 4/10
1s 98ms/step - loss: 1.8677 - accuracy: 0.3750 - val_loss: 2.3157 - val_accuracy: 0.0909 - lr: 0.0010
- Epoch 5/10
1s 204ms/step - loss: 1.5561 - accuracy: 0.4583 - val_loss: 2.3049 - val_accuracy: 0.0909 - lr: 0.0010
- Epoch 6/10
1s 210ms/step - loss: 1.4657 - accuracy: 0.4896 - val_loss: 2.2944 - val_accuracy: 0.0909 - lr: 0.0010
- Epoch 7/10
1s 205ms/step - loss: 1.4018 - accuracy: 0.5312 - val_loss: 2.2917 - val_accuracy: 0.0909 - lr: 0.0010
- Epoch 8/10
1s 207ms/step - loss: 1.2370 - accuracy: 0.5729 - val_loss: 2.2814 - val_accuracy: 0.0909 - lr: 0.0010
- Epoch 9/10
1s 214ms/step - loss: 1.1190 - accuracy: 0.6250 - val_loss: 2.2733 - val_accuracy: 0.0909 - lr: 0.0010
- Epoch 10/10
1s 207ms/step - loss: 1.1484 - accuracy: 0.6302 - val_loss: 2.2624 - val_accuracy: 0.0909 - lr: 0.0010
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-4.18.0-305.45.1.el8_4.ppc64le-ppc64le-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.2
- TensorFlow: 2.8.0
- GPU (used during training): Tesla V100-SXM2-32GB
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4478/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4478/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4477/comments | https://api.github.com/repos/huggingface/datasets/issues/4477/events | https://github.com/huggingface/datasets/issues/4477 | 1,268,308,986 | I_kwDODunzps5LmNv6 | 4,477 | Dataset Viewer issue for fgrezes/WIESP2022-NER | {
"login": "AshTayade",
"id": 42551754,
"node_id": "MDQ6VXNlcjQyNTUxNzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/42551754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AshTayade",
"html_url": "https://github.com/AshTayade",
"followers_url": "https://api.github.com/users/AshTayade/followers",
"following_url": "https://api.github.com/users/AshTayade/following{/other_user}",
"gists_url": "https://api.github.com/users/AshTayade/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AshTayade/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AshTayade/subscriptions",
"organizations_url": "https://api.github.com/users/AshTayade/orgs",
"repos_url": "https://api.github.com/users/AshTayade/repos",
"events_url": "https://api.github.com/users/AshTayade/events{/privacy}",
"received_events_url": "https://api.github.com/users/AshTayade/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"https://huggingface.co/datasets/fgrezes/WIESP2022-NER\r\n\r\nThe error:\r\n\r\n```\r\nMessage: Couldn't find a dataset script at /src/services/worker/fgrezes/WIESP2022-NER/WIESP2022-NER.py or any data file in the same directory. Couldn't find 'fgrezes/WIESP2022-NER' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**test*', '**eval*'] in dataset repository fgrezes/WIESP2022-NER with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n```\r\n\r\nI understand the issue is not related to the dataset viewer in itself, but with the autodetection of the data files without a loading script in the datasets library. cc @lhoestq @albertvillanova @mariosasko ",
"Apparently it finds `scoring-scripts/compute_seqeval.py` which matches `**eval*`, a regex that detects a test split. We should probably improve the regex because it's not supposed to catch this kind of files. It must also only check for files with supported extensions: txt, csv, png etc."
] | 1,654,962,557,000 | 1,655,114,366,000 | null | NONE | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4477/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4477/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4476/comments | https://api.github.com/repos/huggingface/datasets/issues/4476/events | https://github.com/huggingface/datasets/issues/4476 | 1,267,987,499 | I_kwDODunzps5Lk_Qr | 4,476 | `to_pandas` doesn't take into account format. | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Thanks for opening a discussion :)\r\n\r\nNote that you can use `.remove_columns(...)` to keep only the ones you're interested in before calling `.to_pandas()`",
"Yes I can do that thank you!\r\n\r\nDo you think that conceptually my example should work? If not, I'm happy to close this issue. \r\n\r\nIf yes, I can start working on it.",
"Hi! Instead of `with_format(columns=['a', 'b']).to_pandas()`, use `with_format(\"pandas\", columns=[\"a\", \"b\"])` for easy conversion of the parts of the dataset to pandas via indexing/slicing.\r\n\r\nThe full code:\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]})\r\npandas_df = ds.with_format(\"pandas\", columns=['a', 'b'])[:]\r\n```",
"Ahhhh Thank you!\r\n\r\nclosing then :)"
] | 1,654,892,731,000 | 1,655,314,901,000 | 1,655,314,901,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
I have a large dataset, part of which I need to convert to pandas for some further analysis. Calling `to_pandas` directly on it is expensive, so I thought I could simply select the columns that I want and then call `to_pandas`.
**Describe the solution you'd like**
```python
from datasets import Dataset
ds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]})
pandas_df = ds.with_format(columns=['a', 'b']).to_pandas()
# I would expect `pandas_df` to only include a and b as columns.
```
**Describe alternatives you've considered**
I could remove all columns that I don't want? But I don't know all of them in advance.
**Additional context**
I can probably make a PR with some pointers.
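In the meantime, a minimal sketch of the column-dropping alternative mentioned above (purely illustrative; it assumes the toy dataset from the example and that the wanted columns are known at this point):
```python
from datasets import Dataset

ds = Dataset.from_dict({'a': [1, 2, 3], 'b': [5, 6, 7], 'c': [8, 9, 10]})
keep = ['a', 'b']
# Drop every column that is not requested, then convert only what remains.
pandas_df = ds.remove_columns([c for c in ds.column_names if c not in keep]).to_pandas()
print(pandas_df.columns.tolist())  # ['a', 'b']
```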
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4476/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4476/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4475/comments | https://api.github.com/repos/huggingface/datasets/issues/4475/events | https://github.com/huggingface/datasets/pull/4475 | 1,267,798,451 | PR_kwDODunzps45eufw | 4,475 | Improve error message for missing packages from inside dataset script | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I opened a PR before I noticed yours ^^' You can find it here: https://github.com/huggingface/datasets/pull/4484\r\n\r\nThe only comment I have regarding your message is that it possibly shows several `pip install` commands, whereas one can run one single `pip install` command with the list of missing dependencies, which is maybe simpler.\r\n\r\nLet me know which one your prefer",
"Closing in favor of #4484. "
] | 1,654,880,376,000 | 1,655,126,787,000 | 1,655,126,203,000 | CONTRIBUTOR | null | Improve the error message for missing packages from inside a dataset script:
With this change, the error message for missing packages for `bigbench` looks as follows:
```
ImportError: To be able to use bigbench, you need to install the following dependencies:
- 'bigbench' using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"'
```
And this is how it looked before:
```
ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4475/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4475/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4475",
"html_url": "https://github.com/huggingface/datasets/pull/4475",
"diff_url": "https://github.com/huggingface/datasets/pull/4475.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4475.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4474/comments | https://api.github.com/repos/huggingface/datasets/issues/4474/events | https://github.com/huggingface/datasets/pull/4474 | 1,267,767,541 | PR_kwDODunzps45en98 | 4,474 | [Docs] How to use with PyTorch page | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,878,349,000 | 1,655,217,632,000 | 1,655,215,473,000 | MEMBER | null | Currently the docs about PyTorch are scattered around different pages, and we were missing a place to explain more in depth how to use and optimize a dataset for PyTorch. This PR is related to #4457 which is the TF counterpart :)
cc @Rocketknight1 we can try to align both documentations contents now I think
cc @stevhliu let me know what you think ! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4474/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4474/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4474",
"html_url": "https://github.com/huggingface/datasets/pull/4474",
"diff_url": "https://github.com/huggingface/datasets/pull/4474.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4474.patch",
"merged_at": 1655215472000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4473/comments | https://api.github.com/repos/huggingface/datasets/issues/4473/events | https://github.com/huggingface/datasets/pull/4473 | 1,267,555,994 | PR_kwDODunzps45d5-R | 4,473 | Add SST-2 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"on the hub this dataset is referenced as `sst-2` not `sst2` – is there a canonical orthography? If not, could we name it `sst-2`?",
"@julien-c, we normally do not use hyphens for dataset names: whenever the original dataset name contains a hyphen, we usually:\r\n- either suppress it: CoNLL-2000 (`conll2000`), CORD-19 (`cord19`)\r\n- or replace it with underscore: CC-News (`cc_news`), SQuAD-es (`squad_es`)\r\n\r\nThere are some exceptions though... (I wonder why)\r\n\r\nI think, the reason is there was a 1-to-1 relation with the corresponding Python module name.\r\n\r\nI personally find confusing not having a rule and using both hyphens and underscores indistinctly: you never know which is the right orthography.\r\n\r\nWhichever the decision we make, I would prefer to be applied consistently.\r\n\r\nAlso note that we already implemented this dataset as part of GLUE: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py#L163\r\n- dataset name: `glue`\r\n- config name: `sst2`\r\n\r\nOn the other hand, let's see how other libraries name it:\r\n- torchtext: `SST2` https://pytorch.org/text/stable/datasets.html#sst2\r\n- OpenAI CLIP: `rendered-sst2` https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md\r\n- Kaggle: `SST2` https://www.kaggle.com/datasets/atulanandjha/stanford-sentiment-treebank-v2-sst2/version/22\r\n- TensorFlow Datasets: `glue/sst2` https://www.tensorflow.org/datasets/catalog/glue#gluesst2",
"Ok, another option is to open PRs against the models in https://huggingface.co/models?datasets=sst-2 to change their dataset reference to `sst2`\r\n\r\n(BTW some models refer to `sst2` already – but they're less popular: https://huggingface.co/models?datasets=sst2)",
"OK, I'm taking care of the subsequent PRs on models to align with this dataset name."
] | 1,654,868,246,000 | 1,655,129,494,000 | 1,655,128,869,000 | MEMBER | null | Add SST-2 dataset.
Currently it is part of the GLUE benchmark.
This PR adds it as a standalone dataset.
CC: @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4473/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4473/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4473",
"html_url": "https://github.com/huggingface/datasets/pull/4473",
"diff_url": "https://github.com/huggingface/datasets/pull/4473.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4473.patch",
"merged_at": 1655128869000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4472/comments | https://api.github.com/repos/huggingface/datasets/issues/4472/events | https://github.com/huggingface/datasets/pull/4472 | 1,267,488,523 | PR_kwDODunzps45drcb | 4,472 | Fix 401 error for unauthenticated requests to non-existing repos | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,864,691,000 | 1,654,866,311,000 | 1,654,865,757,000 | MEMBER | null | The hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos.
This PR adds support for the 401 error and fixes the CI failures on `master` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4472/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4472/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4472",
"html_url": "https://github.com/huggingface/datasets/pull/4472",
"diff_url": "https://github.com/huggingface/datasets/pull/4472.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4472.patch",
"merged_at": 1654865756000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4471/comments | https://api.github.com/repos/huggingface/datasets/issues/4471/events | https://github.com/huggingface/datasets/issues/4471 | 1,267,475,268 | I_kwDODunzps5LjCNE | 4,471 | CI error with repo lhoestq/_dummy | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"fixed by https://github.com/huggingface/datasets/pull/4472"
] | 1,654,863,966,000 | 1,654,867,493,000 | 1,654,867,493,000 | MEMBER | null | ## Describe the bug
CI is failing because of repo "lhoestq/_dummy". See: https://app.circleci.com/pipelines/github/huggingface/datasets/12461/workflows/1b040b45-9578-4ab9-8c44-c643c4eb8691/jobs/74269
```
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/datasets/lhoestq/_dummy?full=true
```
The repo seems to no longer exist: https://huggingface.co/api/datasets/lhoestq/_dummy
```
error: "Repository not found"
```
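For reference, a quick way to reproduce the status check locally (a sketch; it sends an unauthenticated request, so the status code is whatever the Hub currently returns for this URL):
```python
import requests

r = requests.get("https://huggingface.co/api/datasets/lhoestq/_dummy")
# The CI failure above corresponds to a 401 Unauthorized response for this URL.
print(r.status_code)
print(r.text)
```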
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4471/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4471/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4470/comments | https://api.github.com/repos/huggingface/datasets/issues/4470/events | https://github.com/huggingface/datasets/pull/4470 | 1,267,470,051 | PR_kwDODunzps45dnYw | 4,470 | Reorder returned validation/test splits in script template | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,863,673,000 | 1,654,884,250,000 | 1,654,883,690,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4470/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4470/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4470",
"html_url": "https://github.com/huggingface/datasets/pull/4470",
"diff_url": "https://github.com/huggingface/datasets/pull/4470.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4470.patch",
"merged_at": 1654883690000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4469 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4469/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4469/comments | https://api.github.com/repos/huggingface/datasets/issues/4469/events | https://github.com/huggingface/datasets/pull/4469 | 1,267,213,849 | PR_kwDODunzps45cweQ | 4,469 | Replace data URLs in wider_face dataset once hosted on the Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,848,805,000 | 1,654,879,328,000 | 1,654,878,766,000 | MEMBER | null | This PR replaces the URLs of data files in Google Drive with our Hub ones, once the data owners have approved to host their data on the Hub.
They also informed us that their dataset is licensed under CC BY-NC-ND. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4469/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4469/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4469",
"html_url": "https://github.com/huggingface/datasets/pull/4469",
"diff_url": "https://github.com/huggingface/datasets/pull/4469.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4469.patch",
"merged_at": 1654878766000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4468/comments | https://api.github.com/repos/huggingface/datasets/issues/4468/events | https://github.com/huggingface/datasets/pull/4468 | 1,266,715,742 | PR_kwDODunzps45bERK | 4,468 | Generalize tutorials for audio and vision | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,812,044,000 | 1,655,223,722,000 | 1,655,223,120,000 | MEMBER | null | This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic/common type of preprocessing (tokenization, resampling, applying transforms) depending on their dataset.
Other changes include:
- Removed the sections about a dataset's metadata, features, and columns because we cover this in an earlier tutorial about inspecting the `DatasetInfo` through the dataset builder.
- Separated the sharing dataset tutorial into two sections: (1) uploading via the web interface and (2) using the `huggingface_hub` library.
- Renamed some tutorials in the TOC to be more clear and specific.
- Added more text to nudge users towards joining the community and asking questions on the forums.
- If it's okay with everyone, I'd also like to remove the section about loading and using metrics since we have the `evaluate` docs now.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4468/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4468/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4468",
"html_url": "https://github.com/huggingface/datasets/pull/4468",
"diff_url": "https://github.com/huggingface/datasets/pull/4468.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4468.patch",
"merged_at": 1655223120000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4467/comments | https://api.github.com/repos/huggingface/datasets/issues/4467/events | https://github.com/huggingface/datasets/issues/4467 | 1,266,218,358 | I_kwDODunzps5LePV2 | 4,467 | Transcript string 'null' converted to [None] by load_dataset() | {
"login": "mbarnig",
"id": 1360633,
"node_id": "MDQ6VXNlcjEzNjA2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1360633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbarnig",
"html_url": "https://github.com/mbarnig",
"followers_url": "https://api.github.com/users/mbarnig/followers",
"following_url": "https://api.github.com/users/mbarnig/following{/other_user}",
"gists_url": "https://api.github.com/users/mbarnig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbarnig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbarnig/subscriptions",
"organizations_url": "https://api.github.com/users/mbarnig/orgs",
"repos_url": "https://api.github.com/users/mbarnig/repos",
"events_url": "https://api.github.com/users/mbarnig/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbarnig/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @mbarnig, thanks for reporting.\r\n\r\nPlease note that is an expected behavior by `pandas` (we use the `pandas` library to parse CSV files): https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html\r\n```\r\nBy default the following values are interpreted as NaN: \r\n‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1.#IND’, ‘-1.#QNAN’, ‘-NaN’, ‘-nan’, ‘1.#IND’, ‘1.#QNAN’, ‘<NA>’, ‘N/A’, ‘NA’, ‘NULL’, ‘NaN’, ‘n/a’, ‘nan’, ‘null’.\r\n```\r\n(see \"null\" in the last position in the above list).\r\n\r\nIn order to prevent `pandas` from performing that automatic conversion from the string \"null\" to a NaN value, you should pass the `pandas` parameter `keep_default_na=False`:\r\n```python\r\nIn [2]: dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}, keep_default_na=False)\r\nIn [3]: dataset[\"train\"][0][\"transcript\"]\r\nOut[3]: 'null'\r\n```",
"Thanks for the quick answer."
] | 1,654,784,760,000 | 1,654,797,337,000 | 1,654,792,142,000 | NONE | null | ## Issue
I am training a Luxembourgish speech-recognition model in Colab with a custom dataset, including a dictionary of Luxembourgish words, for example the spoken numbers 0 to 9. When preparing the dataset with the script
`ds_train1 = mydataset.map(prepare_dataset)`
the following error was raised:
```
ValueError Traceback (most recent call last)
<ipython-input-69-1e8f2b37f5bc> in <module>()
----> 1 ds_train = mydataset_train.map(prepare_dataset)
11 frames
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2450 if not _is_valid_text_input(text):
2451 raise ValueError(
-> 2452 "text input must of type str (single example), List[str] (batch or single pretokenized example) "
2453 "or List[List[str]] (batch of pretokenized examples)."
2454 )
ValueError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples).
```
Debugging this problem was not easy, as all transcriptions in the dataset are correct strings. Finally I discovered that the transcription string 'null' is interpreted as [None] by the `load_dataset()` function. After deleting this row from the dataset, the training worked fine.
## Expected result:
transcription 'null' interpreted as 'str' instead of 'None'.
## Reproduction
Here is the code to reproduce the error with a one-row dataset.
```
with open("null-test.csv") as f:
reader = csv.reader(f)
for row in reader:
print(row)
```
['wav_filename', 'wav_filesize', 'transcript']
['wavs/female/NULL1.wav', '17530', 'null']
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'null-test.csv'})
```
Using custom data configuration default-81ac0c0e27af3514
Downloading and preparing dataset csv/default to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519...
Downloading data files: 100%
1/1 [00:00<00:00, 29.55it/s]
Extracting data files: 100%
1/1 [00:00<00:00, 23.66it/s]
Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data.
100%
1/1 [00:00<00:00, 25.84it/s]
```
print(dataset['train']['transcript'])
```
[None]
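For reference, a minimal pandas-only sketch of this behaviour (the csv loader relies on pandas under the hood, as noted in the comments; `keep_default_na=False` disables the automatic conversion of the string "null" to NaN):
```python
import pandas as pd
from io import StringIO

csv_text = "wav_filename,wav_filesize,transcript\nwavs/female/NULL1.wav,17530,null\n"

# Default parsing: the literal string "null" is treated as a missing value (NaN).
print(pd.read_csv(StringIO(csv_text))["transcript"].tolist())
# With the default NA detection disabled, the string "null" is preserved.
print(pd.read_csv(StringIO(csv_text), keep_default_na=False)["transcript"].tolist())
```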
## Environment info
```
!pip install datasets==2.2.2
!pip install transformers==4.19.2
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4467/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4467/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4466/comments | https://api.github.com/repos/huggingface/datasets/issues/4466/events | https://github.com/huggingface/datasets/pull/4466 | 1,266,159,920 | PR_kwDODunzps45ZLsd | 4,466 | Optimize contiguous shard and select | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I thought of just mentioning the benefits I got. Here's the code that @lhoestq provided:\r\n\r\n```py\r\nimport os\r\nfrom datasets import load_dataset\r\nfrom tqdm.auto import tqdm\r\n\r\nds = load_dataset(\"squad\", split=\"train\")\r\nos.makedirs(\"tmp\")\r\n\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n size = len(ds) // num_shards\r\n shard = Dataset(ds.data.slice(size * index, size), fingerprint=f\"{ds._fingerprint}_{index}\")\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt is 1.64s. Previously the code was:\r\n\r\n```py\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n # upload_to_gcs(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt was 2min31s. \r\n\r\nI ran it on my humble MacBook Pro:\r\n\r\n<img width=\"574\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22957388/172864881-f1db489a-2305-47f2-a07f-7d3df610b1b8.png\">\r\n",
"I addressed your comments @albertvillanova , let me know what you think :)"
] | 1,654,782,339,000 | 1,655,222,670,000 | 1,655,222,085,000 | MEMBER | null | Currently `.shard()` and `.select()` always create an indices mapping. However, if the requested data are contiguous, it's much more efficient to simply slice the Arrow table instead of building an indices mapping. In particular:
- the shard/select operation will be much faster
- reading speed will be much faster in the resulting dataset, since it won't have to do a lookup step in the indices mapping
Since `.shard()` is also used for `.map()` with `num_proc>1`, it will also significantly improve the reading speed of multiprocessed `.map()` operations
Here is an example of speed-up:
```python
>>> import io
>>> import numpy as np
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"a": np.random.rand(10_000_000)})
>>> shard = ds.shard(num_shards=4, index=0, contiguous=True) # this calls `.select(range(2_500_000))`
>>> buf = io.BytesIO()
>>> %time shard.to_json(buf)
Creating json from Arrow format: 100%|██████████████████| 100/100 [00:00<00:00, 376.17ba/s]
CPU times: user 258 ms, sys: 9.06 ms, total: 267 ms
Wall time: 266 ms
```
while previously it was
```python
Creating json from Arrow format: 100%|███████████████████| 100/100 [00:03<00:00, 29.41ba/s]
CPU times: user 3.33 s, sys: 69.1 ms, total: 3.39 s
Wall time: 3.4 s
```
In this simple case the speed-up is x10, but @sayakpaul experienced a x100 speed-up on their data when exporting to JSON.
## Implementation details
I mostly improved `.select()`: it now checks if the input corresponds to a contiguous chunk of data and then it slices the main Arrow table (or the indices mapping table if it exists). To check if the input indices are contiguous it checks two possibilities:
- if the indices are of type `range`, it checks that start >= 0 and step = 1
- otherwise, in the general case, it iterates over the indices. If all the indices are contiguous then we're good, otherwise we have to build an indices mapping (a rough sketch of this check is shown below).
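A rough sketch of that contiguity test (illustrative only, with made-up names; not the actual code of this PR):
```python
def is_contiguous(indices) -> bool:
    # A run of indices is contiguous when each index is the previous one + 1.
    if isinstance(indices, range):
        return indices.start >= 0 and indices.step == 1
    indices = list(indices)
    if indices and indices[0] < 0:
        return False
    # Short-circuits at the first non-consecutive pair, as described above.
    return all(b == a + 1 for a, b in zip(indices, indices[1:]))

print(is_contiguous(range(2_500_000)))  # True  -> slice the Arrow table directly
print(is_contiguous([0, 1, 2, 5]))      # False -> build an indices mapping
```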
Having to iterate over the indices doesn't cause performance issues IMO because:
- either they are contiguous and in this case the cost of iterating over the indices is much less than the cost of creating an indices mapping
- or they are not contiguous, and then iterating generally stops quickly when it encounters the first index that is not contiguous. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4466/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4466/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4466",
"html_url": "https://github.com/huggingface/datasets/pull/4466",
"diff_url": "https://github.com/huggingface/datasets/pull/4466.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4466.patch",
"merged_at": 1655222085000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4465/comments | https://api.github.com/repos/huggingface/datasets/issues/4465/events | https://github.com/huggingface/datasets/pull/4465 | 1,265,754,479 | PR_kwDODunzps45X0XY | 4,465 | Fix bigbench config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,761,979,000 | 1,654,785,516,000 | 1,654,784,959,000 | MEMBER | null | Fix https://github.com/huggingface/datasets/issues/4462 in the case of bigbench | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4465/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4465/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4465",
"html_url": "https://github.com/huggingface/datasets/pull/4465",
"diff_url": "https://github.com/huggingface/datasets/pull/4465.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4465.patch",
"merged_at": 1654784958000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4464/comments | https://api.github.com/repos/huggingface/datasets/issues/4464/events | https://github.com/huggingface/datasets/pull/4464 | 1,265,682,931 | PR_kwDODunzps45XlWW | 4,464 | Extend support for streaming datasets that use xml.dom.minidom.parse | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,757,905,000 | 1,654,764,204,000 | 1,654,763,656,000 | MEMBER | null | This PR extends the support in streaming mode for datasets that use `xml.dom.minidom.parse`, by patching that function.
This PR adds support for streaming datasets like "Yaxin/SemEval2015".
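A generic illustration of the patching idea (hypothetical sketch, not the library's actual implementation; it assumes an fsspec-style opener for remote files):
```python
import fsspec
import xml.dom.minidom as minidom

_original_parse = minidom.parse

def streaming_parse(file, *args, **kwargs):
    # Route string paths/URLs through a streaming-capable opener so remote XML
    # files can be parsed without materializing the whole dataset locally.
    if isinstance(file, str):
        with fsspec.open(file, "rb") as f:
            return _original_parse(f, *args, **kwargs)
    return _original_parse(file, *args, **kwargs)
```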
Fix #4453. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4464/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4464/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4464",
"html_url": "https://github.com/huggingface/datasets/pull/4464",
"diff_url": "https://github.com/huggingface/datasets/pull/4464.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4464.patch",
"merged_at": 1654763655000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4463/comments | https://api.github.com/repos/huggingface/datasets/issues/4463/events | https://github.com/huggingface/datasets/pull/4463 | 1,265,093,211 | PR_kwDODunzps45Vnzu | 4,463 | Use config_id to check split sizes instead of config name | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"closing in favor of https://github.com/huggingface/datasets/pull/4465"
] | 1,654,710,324,000 | 1,654,762,543,000 | 1,654,761,997,000 | MEMBER | null | Fix https://github.com/huggingface/datasets/issues/4462 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4463/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4463/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4463",
"html_url": "https://github.com/huggingface/datasets/pull/4463",
"diff_url": "https://github.com/huggingface/datasets/pull/4463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4463.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4462/comments | https://api.github.com/repos/huggingface/datasets/issues/4462/events | https://github.com/huggingface/datasets/issues/4462 | 1,265,079,347 | I_kwDODunzps5LZ5Qz | 4,462 | NonMatchingSplitsSizesError when passing a dataset configuration parameter | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Why not adding `max_examples` as part of the config name?",
"Yup it can also work, and maybe it's simpler this way. Opening a PR to fix bigbench instead of https://github.com/huggingface/datasets/pull/4463"
] | 1,654,709,484,000 | 1,654,784,958,000 | 1,654,784,958,000 | MEMBER | null | As noticed in https://github.com/huggingface/datasets/pull/4125 when a dataset config class has a parameter that reduces the number of examples (e.g. named `max_examples`), then loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`.
This is because it checks the expected number of examples of the config with the same name, without taking into account the `max_examples` parameter. This can be fixed by checking the expected number of examples using the **config id** instead of the name. Indeed, the config id corresponds to the config name + an optional suffix that depends on the config parameters. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4462/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4462/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4461/comments | https://api.github.com/repos/huggingface/datasets/issues/4461/events | https://github.com/huggingface/datasets/issues/4461 | 1,264,800,451 | I_kwDODunzps5LY1LD | 4,461 | AttributeError: module 'datasets' has no attribute 'load_dataset' | {
"login": "AlexNLP",
"id": 59248970,
"node_id": "MDQ6VXNlcjU5MjQ4OTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/59248970?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlexNLP",
"html_url": "https://github.com/AlexNLP",
"followers_url": "https://api.github.com/users/AlexNLP/followers",
"following_url": "https://api.github.com/users/AlexNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/AlexNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlexNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlexNLP/subscriptions",
"organizations_url": "https://api.github.com/users/AlexNLP/orgs",
"repos_url": "https://api.github.com/users/AlexNLP/repos",
"events_url": "https://api.github.com/users/AlexNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlexNLP/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,654,696,760,000 | 1,654,699,260,000 | 1,654,699,260,000 | NONE | null | ## Describe the bug
I have pip-installed datasets, but this package doesn't have these attributes: load_dataset, load_metric.
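A quick check worth running (this assumes a common cause, e.g. a local file or folder named `datasets` shadowing the installed package, or an outdated install being picked up):
```python
import datasets

# If this prints a path inside your project instead of site-packages, a local
# `datasets.py` / `datasets/` directory is shadowing the installed library.
print(datasets.__file__)
print(getattr(datasets, "__version__", "<no __version__ attribute>"))
```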
## Environment info
- `datasets` version: 1.9.0
- Platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.6.13
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4461/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4461/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4460/comments | https://api.github.com/repos/huggingface/datasets/issues/4460/events | https://github.com/huggingface/datasets/pull/4460 | 1,264,644,205 | PR_kwDODunzps45UHIs | 4,460 | Drop Python 3.6 support | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4460). All of your documentation changes will be reflected on that endpoint."
] | 1,654,690,218,000 | 1,655,133,773,000 | null | CONTRIBUTOR | null | Remove the fallback imports/checks in the code needed for Python 3.6 and update the requirements/CI files.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4460/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4460/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4460",
"html_url": "https://github.com/huggingface/datasets/pull/4460",
"diff_url": "https://github.com/huggingface/datasets/pull/4460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4460.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4459/comments | https://api.github.com/repos/huggingface/datasets/issues/4459/events | https://github.com/huggingface/datasets/pull/4459 | 1,264,636,481 | PR_kwDODunzps45UFc8 | 4,459 | Add and fix language tags for udhr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,689,822,000 | 1,654,691,784,000 | 1,654,691,233,000 | MEMBER | null | Related to #4362. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4459/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4459/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4459",
"html_url": "https://github.com/huggingface/datasets/pull/4459",
"diff_url": "https://github.com/huggingface/datasets/pull/4459.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4459.patch",
"merged_at": 1654691233000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4457 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4457/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4457/comments | https://api.github.com/repos/huggingface/datasets/issues/4457/events | https://github.com/huggingface/datasets/pull/4457 | 1,263,531,911 | PR_kwDODunzps45QZCU | 4,457 | First draft of the docs for TF + Datasets | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Some links are still missing I think :)",
"This is probably quite close to being ready, so cc some TF people @gante @amyeroberts @merveenoyan just so they see it! No need for a full review, but if you have any comments or suggestions feel free.",
"Thanks ! We plan to make a new release later today for `to_tf_dataset` FYI, so I think we can merge it soon and include this documentation in the new release"
] | 1,654,618,008,000 | 1,655,222,921,000 | 1,655,222,348,000 | MEMBER | null | I might cc a few of the other TF people to take a look when this is closer to being finished, but it's still a draft for now. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4457/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4457/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4457",
"html_url": "https://github.com/huggingface/datasets/pull/4457",
"diff_url": "https://github.com/huggingface/datasets/pull/4457.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4457.patch",
"merged_at": 1655222348000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4456/comments | https://api.github.com/repos/huggingface/datasets/issues/4456/events | https://github.com/huggingface/datasets/issues/4456 | 1,263,241,449 | I_kwDODunzps5LS4jp | 4,456 | Workflow for Tabular data | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [] | 1,654,606,102,000 | 1,654,703,156,000 | null | MEMBER | null | Tabular data are treated very differently than data for NLP, audio, vision, etc. and therefore the workflow for tabular data in `datasets` is not ideal.
For example, for tabular data it is common to use pandas/spark/dask to process the data, then load the data into X and y (X is an array of features and y an array of labels), then run train_test_split and finally feed the data to a machine learning model.
In `datasets` the workflow is different: we use load_dataset, then map, then train_test_split (if we only have a train split) and we end up with columnar dataset splits, not formatted as X and y.
Right now, it is already possible to convert a dataset from and to pandas, but there are still many things that could improve the workflow for tabular data:
- be able to load the data into X and y
- be able to load a dataset from the output of spark or dask (as far as I know it's usually csv or parquet files on S3/GCS/HDFS etc.)
- support "unsplit" datasets explicitly, instead of putting everything in "train" by default
cc @adrinjalali @merveenoyan feel free to complete/correct this :)
Feel free to also share ideas of APIs that would be super intuitive in your opinion ! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4456/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/4456/timeline | null | null | null | null | false |
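A rough sketch of how the columnar `datasets` workflow can already be bridged to the X/y workflow described above, going through pandas (the file name and the "label" column are assumptions for illustration, not a proposed API):

```python
# Hedged sketch: load one "unsplit" tabular file, convert to pandas, and build
# scikit-learn style X / y arrays. "my_table.csv" and "label" are illustrative.
from datasets import load_dataset
from sklearn.model_selection import train_test_split

ds = load_dataset("csv", data_files="my_table.csv", split="train")
df = ds.to_pandas()

X = df.drop(columns=["label"]).to_numpy()
y = df["label"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```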
https://api.github.com/repos/huggingface/datasets/issues/4455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4455/comments | https://api.github.com/repos/huggingface/datasets/issues/4455/events | https://github.com/huggingface/datasets/pull/4455 | 1,263,089,067 | PR_kwDODunzps45O5F9 | 4,455 | Update data URLs in fever dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,598,454,000 | 1,654,673,094,000 | 1,654,672,577,000 | MEMBER | null | As stated on their website, data owners updated their URLs on 28/04/2022.
This PR updates the data URLs.
Fix #4452. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4455/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4455",
"html_url": "https://github.com/huggingface/datasets/pull/4455",
"diff_url": "https://github.com/huggingface/datasets/pull/4455.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4455.patch",
"merged_at": 1654672576000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4454/comments | https://api.github.com/repos/huggingface/datasets/issues/4454/events | https://github.com/huggingface/datasets/issues/4454 | 1,262,674,973 | I_kwDODunzps5LQuQd | 4,454 | Dataset Viewer issue for Yaxin/SemEval2015 | {
"login": "WithYouTo",
"id": 18160852,
"node_id": "MDQ6VXNlcjE4MTYwODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/18160852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WithYouTo",
"html_url": "https://github.com/WithYouTo",
"followers_url": "https://api.github.com/users/WithYouTo/followers",
"following_url": "https://api.github.com/users/WithYouTo/following{/other_user}",
"gists_url": "https://api.github.com/users/WithYouTo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WithYouTo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WithYouTo/subscriptions",
"organizations_url": "https://api.github.com/users/WithYouTo/orgs",
"repos_url": "https://api.github.com/users/WithYouTo/repos",
"events_url": "https://api.github.com/users/WithYouTo/events{/privacy}",
"received_events_url": "https://api.github.com/users/WithYouTo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Closing since it's a duplicate of https://github.com/huggingface/datasets/issues/4453"
] | 1,654,572,706,000 | 1,654,602,791,000 | 1,654,602,791,000 | NONE | null | ### Link
_No response_
### Description
the link could not be visited
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4454/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4453/comments | https://api.github.com/repos/huggingface/datasets/issues/4453/events | https://github.com/huggingface/datasets/issues/4453 | 1,262,674,105 | I_kwDODunzps5LQuC5 | 4,453 | Dataset Viewer issue for Yaxin/SemEval2015 | {
"login": "WithYouTo",
"id": 18160852,
"node_id": "MDQ6VXNlcjE4MTYwODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/18160852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WithYouTo",
"html_url": "https://github.com/WithYouTo",
"followers_url": "https://api.github.com/users/WithYouTo/followers",
"following_url": "https://api.github.com/users/WithYouTo/following{/other_user}",
"gists_url": "https://api.github.com/users/WithYouTo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WithYouTo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WithYouTo/subscriptions",
"organizations_url": "https://api.github.com/users/WithYouTo/orgs",
"repos_url": "https://api.github.com/users/WithYouTo/repos",
"events_url": "https://api.github.com/users/WithYouTo/events{/privacy}",
"received_events_url": "https://api.github.com/users/WithYouTo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I understand that the issue is that a remote file (URL) is being loaded as a local file. Right @albertvillanova @lhoestq?\r\n\r\n```\r\nMessage: [Errno 2] No such file or directory: 'https://raw.githubusercontent.com/YaxinCui/ABSADataset/main/SemEval2015Task12Corrected/train/restaurants_train.xml'\r\n```",
"`xml.dom.minidom.parse` is not supported in streaming mode. I opened a PR here to fix it:\r\nhttps://huggingface.co/datasets/Yaxin/SemEval2015/discussions/1\r\n\r\nPlease review the PR @WithYouTo and let me know if it works !",
"Additionally, I'm also patching our library, so that we support streaming datasets that use `xml.dom.minidom.parse`."
] | 1,654,572,608,000 | 1,654,763,656,000 | 1,654,763,656,000 | NONE | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4453/timeline | null | completed | null | null | false |
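A small illustration of the failure mode discussed above: `xml.dom.minidom.parse` treats a URL string as a local path (hence the `[Errno 2] No such file or directory` message), but it does accept an already-opened file object, which is roughly why a streaming-friendly fix is possible. This sketch is not the patch applied to `datasets`, and the URL (taken from the error message) may not stay available.

```python
# minidom.parse accepts a file-like object, so a remote XML file can be parsed
# from an opened stream; passing the URL string itself raises FileNotFoundError.
import urllib.request
import xml.dom.minidom

URL = (
    "https://raw.githubusercontent.com/YaxinCui/ABSADataset/main/"
    "SemEval2015Task12Corrected/train/restaurants_train.xml"
)

with urllib.request.urlopen(URL) as f:  # requires network access
    dom = xml.dom.minidom.parse(f)

print(dom.documentElement.tagName)
```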
https://api.github.com/repos/huggingface/datasets/issues/4452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4452/comments | https://api.github.com/repos/huggingface/datasets/issues/4452/events | https://github.com/huggingface/datasets/issues/4452 | 1,262,529,654 | I_kwDODunzps5LQKx2 | 4,452 | Trying to load FEVER dataset results in NonMatchingChecksumError | {
"login": "santhnm2",
"id": 5347982,
"node_id": "MDQ6VXNlcjUzNDc5ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5347982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/santhnm2",
"html_url": "https://github.com/santhnm2",
"followers_url": "https://api.github.com/users/santhnm2/followers",
"following_url": "https://api.github.com/users/santhnm2/following{/other_user}",
"gists_url": "https://api.github.com/users/santhnm2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/santhnm2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/santhnm2/subscriptions",
"organizations_url": "https://api.github.com/users/santhnm2/orgs",
"repos_url": "https://api.github.com/users/santhnm2/repos",
"events_url": "https://api.github.com/users/santhnm2/events{/privacy}",
"received_events_url": "https://api.github.com/users/santhnm2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @santhnm2. We are fixing it.\r\n\r\nData owners updated their URLs recently. We have to align with them, otherwise you do not download anything (that is why ignore_verifications does not work)."
] | 1,654,557,195,000 | 1,654,672,576,000 | 1,654,672,576,000 | NONE | null | ## Describe the bug
Trying to load the `fever` dataset fails with `datasets.utils.info_utils.NonMatchingChecksumError`.
I tried with `download_mode="force_redownload"` but that did not fix the error. I also tried with `ignore_verification=True` but then that raised a `json.decoder.JSONDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('fever', 'v1.0') # Fails with NonMatchingChecksumError
dataset = load_dataset('fever', 'v1.0', download_mode="force_redownload") # Fails with NonMatchingChecksumError
dataset = load_dataset('fever', 'v1.0', ignore_verification=True) # Fails with JSONDecodeError
```
## Expected results
I expect this call to return with no error raised.
## Actual results
With `ignore_verification=False`:
```
*** datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://s3-eu-west-1.amazonaws.com/fever.public/train.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev_public.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_test.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_test.jsonl']
```
With `ignore_verification=True`:
```
*** json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.3.dev0
- Platform: Linux-4.15.0-50-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4452/timeline | null | completed | null | null | false |
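The error above comes from comparing checksums recorded at dataset-creation time against freshly downloaded files whose content changed upstream. A simplified, illustrative version of that comparison (not the library's actual implementation; the recorded hash is a placeholder):

```python
# Hash each downloaded file and compare with the recorded value; any mismatch is
# what surfaces as NonMatchingChecksumError. All values here are placeholders.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

recorded = {"train.jsonl": "0123456789abcdef..."}      # e.g. from dataset_infos.json
downloaded = {"train.jsonl": "downloads/train.jsonl"}  # local paths after download

mismatched = [name for name, path in downloaded.items() if sha256_of(path) != recorded[name]]
if mismatched:
    raise ValueError(f"Checksums didn't match for dataset source files: {mismatched}")
```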
https://api.github.com/repos/huggingface/datasets/issues/4451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4451/comments | https://api.github.com/repos/huggingface/datasets/issues/4451/events | https://github.com/huggingface/datasets/pull/4451 | 1,262,103,323 | PR_kwDODunzps45LkGc | 4,451 | Use newer version of multi-news with fixes | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Awesome thanks @mariosasko!"
] | 1,654,534,628,000 | 1,654,623,601,000 | 1,654,622,084,000 | CONTRIBUTOR | null | Closes #4430. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4451/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4451",
"html_url": "https://github.com/huggingface/datasets/pull/4451",
"diff_url": "https://github.com/huggingface/datasets/pull/4451.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4451.patch",
"merged_at": 1654622084000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4450/comments | https://api.github.com/repos/huggingface/datasets/issues/4450/events | https://github.com/huggingface/datasets/pull/4450 | 1,261,878,324 | PR_kwDODunzps45Kzwh | 4,450 | Update README.md of fquad | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,523,561,000 | 1,654,527,109,000 | 1,654,526,583,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4450/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4450/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4450",
"html_url": "https://github.com/huggingface/datasets/pull/4450",
"diff_url": "https://github.com/huggingface/datasets/pull/4450.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4450.patch",
"merged_at": 1654526583000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4449/comments | https://api.github.com/repos/huggingface/datasets/issues/4449/events | https://github.com/huggingface/datasets/issues/4449 | 1,261,262,326 | I_kwDODunzps5LLVX2 | 4,449 | Rj | {
"login": "Aeckard45",
"id": 87345839,
"node_id": "MDQ6VXNlcjg3MzQ1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aeckard45",
"html_url": "https://github.com/Aeckard45",
"followers_url": "https://api.github.com/users/Aeckard45/followers",
"following_url": "https://api.github.com/users/Aeckard45/following{/other_user}",
"gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions",
"organizations_url": "https://api.github.com/users/Aeckard45/orgs",
"repos_url": "https://api.github.com/users/Aeckard45/repos",
"events_url": "https://api.github.com/users/Aeckard45/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aeckard45/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,654,482,272,000 | 1,654,530,290,000 | 1,654,530,290,000 | NONE | null | import android.content.DialogInterface;
import android.database.Cursor;
import android.os.Bundle;
import android.view.View;
import android.widget.ArrayAdapter;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;
import androidx.appcompat.app.AlertDialog;
import androidx.appcompat.app.AppCompatActivity;
public class MainActivity extends AppCompatActivity {
private EditText editTextID;
private EditText editTextName;
private EditText editTextNum;
private String name;
private int number;
private String ID;
private dbHelper db;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
db = new dbHelper(this);
editTextID = findViewById(R.id.editText1);
editTextName = findViewById(R.id.editText2);
editTextNum = findViewById(R.id.editText3);
Button buttonSave = findViewById(R.id.button);
Button buttonRead = findViewById(R.id.button2);
Button buttonUpdate = findViewById(R.id.button3);
Button buttonDelete = findViewById(R.id.button4);
Button buttonSearch = findViewById(R.id.button5);
Button buttonDeleteAll = findViewById(R.id.button6);
buttonSave.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
name = editTextName.getText().toString();
String num = editTextNum.getText().toString();
if (name.isEmpty() || num.isEmpty()) {
Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show();
} else {
number = Integer.parseInt(num);
try {
// Insert Data
db.insertData(name, number);
// Clear the fields
editTextID.getText().clear();
editTextName.getText().clear();
editTextNum.getText().clear();
} catch (Exception e) {
e.printStackTrace();
}
}
}
});
buttonRead.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
final ArrayAdapter<String> adapter = new ArrayAdapter<>(MainActivity.this, android.R.layout.simple_list_item_1);
String name;
String num;
String id;
try {
Cursor cursor = db.readData();
if (cursor != null && cursor.getCount() > 0) {
while (cursor.moveToNext()) {
id = cursor.getString(0); // get data in column index 0
name = cursor.getString(1); // get data in column index 1
num = cursor.getString(2); // get data in column index 2
// Add SQLite data to listView
adapter.add("ID :- " + id + "\n" +
"Name :- " + name + "\n" +
"Number :- " + num + "\n\n");
}
} else {
adapter.add("No Data");
}
cursor.close();
} catch (Exception e) {
e.printStackTrace();
}
// show the saved data in alertDialog
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setTitle("SQLite saved data");
builder.setIcon(R.mipmap.app_icon_foreground);
builder.setAdapter(adapter, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
}
});
builder.setPositiveButton("OK", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.cancel();
}
});
AlertDialog dialog = builder.create();
dialog.show();
}
});
buttonUpdate.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
name = editTextName.getText().toString();
String num = editTextNum.getText().toString();
ID = editTextID.getText().toString();
if (name.isEmpty() || num.isEmpty() || ID.isEmpty()) {
Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show();
} else {
number = Integer.parseInt(num);
try {
// Update Data
db.updateData(ID, name, number);
// Clear the fields
editTextID.getText().clear();
editTextName.getText().clear();
editTextNum.getText().clear();
} catch (Exception e) {
e.printStackTrace();
}
}
}
});
buttonDelete.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
ID = editTextID.getText().toString();
if (ID.isEmpty()) {
Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show();
} else {
try {
// Delete Data
db.deleteData(ID);
// Clear the fields
editTextID.getText().clear();
editTextName.getText().clear();
editTextNum.getText().clear();
} catch (Exception e) {
e.printStackTrace();
}
}
}
});
buttonDeleteAll.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
// Delete all data
// You can simply delete all the data by calling this method --> db.deleteAllData();
// You can try this also
AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this);
builder.setIcon(R.mipmap.app_icon_foreground);
builder.setTitle("Delete All Data");
builder.setCancelable(false);
builder.setMessage("Do you really need to delete your all data ?");
builder.setPositiveButton("Yes", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
// User confirmed , now you can delete the data
db.deleteAllData();
// Clear the fields
editTextID.getText().clear();
editTextName.getText().clear();
editTextNum.getText().clear();
}
});
builder.setNegativeButton("No", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
// user not confirmed
dialog.cancel();
}
});
AlertDialog dialog = builder.create();
dialog.show();
}
});
buttonSearch.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
ID = editTextID.getText().toString();
if (ID.isEmpty()) {
Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show();
} else {
try {
// Search data
Cursor cursor = db.searchData(ID);
if (cursor.moveToFirst()) {
editTextName.setText(cursor.getString(1));
editTextNum.setText(cursor.getString(2));
Toast.makeText(MainActivity.this, "Data successfully searched", Toast.LENGTH_SHORT).show();
} else {
Toast.makeText(MainActivity.this, "ID not found", Toast.LENGTH_SHORT).show();
editTextNum.setText("ID Not found");
editTextName.setText("ID not found");
}
cursor.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}
});
}
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4449/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4448/comments | https://api.github.com/repos/huggingface/datasets/issues/4448/events | https://github.com/huggingface/datasets/issues/4448 | 1,260,966,129 | I_kwDODunzps5LKNDx | 4,448 | New Preprocessing Feature - Deduplication [Request] | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! The [datasets_sql](https://github.com/mariosasko/datasets_sql) package lets you easily find distinct rows in a dataset (an example with `SELECT DISTINCT` is in the readme). Deduplication is (still) not part of the official API because it's hard to implement for datasets bigger than RAM while only using the native PyArrow ops.\r\n\r\n(Btw, this is a duplicate of https://github.com/huggingface/datasets/issues/2514)"
] | 1,654,407,176,000 | 1,654,442,067,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
Many large datasets are full of duplicates, and it has been shown that deduplicating datasets can lead to better performance during training and more truthful evaluation at test time.
A feature that allows one to easily deduplicate a dataset can be cool!
**Describe the solution you'd like**
We can define a key function and keep only the first/last data point for each value returned by this function.
**Describe alternatives you've considered**
The obvious alternative is to repeat the same boilerplate every time someone wants to deduplicate a dataset.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4448/timeline | null | null | null | null | false |
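A rough sketch of the boilerplate this request wants to replace, written with the current API: `Dataset.filter` plus a set of already-seen keys keeps the first example per key (single-process only, since the closure holds state; the column name and key function are illustrative).

```python
# "Keep the first example per key" deduplication with the existing filter API.
# Only valid with the default single-process filter because `seen` is shared state.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a", "c", "b"]})

def key_fn(example):
    return example["text"]  # illustrative key; could combine several columns

seen = set()

def keep_first(example):
    k = key_fn(example)
    if k in seen:
        return False
    seen.add(k)
    return True

deduped = ds.filter(keep_first)
print(deduped["text"])  # ['a', 'b', 'c']
```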
https://api.github.com/repos/huggingface/datasets/issues/4447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4447/comments | https://api.github.com/repos/huggingface/datasets/issues/4447/events | https://github.com/huggingface/datasets/pull/4447 | 1,260,041,805 | PR_kwDODunzps45E4A- | 4,447 | Minor fixes/improvements in `scene_parse_150` card | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,269,754,000 | 1,654,530,625,000 | 1,654,530,097,000 | CONTRIBUTOR | null | Add `paperswithcode_id` and fix some links in the `scene_parse_150` card. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4447/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4447/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4447",
"html_url": "https://github.com/huggingface/datasets/pull/4447",
"diff_url": "https://github.com/huggingface/datasets/pull/4447.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4447.patch",
"merged_at": 1654530097000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4446/comments | https://api.github.com/repos/huggingface/datasets/issues/4446/events | https://github.com/huggingface/datasets/pull/4446 | 1,260,028,995 | PR_kwDODunzps45E1Qb | 4,446 | Add missing kwargs to docstrings | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,269,027,000 | 1,654,272,609,000 | 1,654,272,089,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4446/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4446",
"html_url": "https://github.com/huggingface/datasets/pull/4446",
"diff_url": "https://github.com/huggingface/datasets/pull/4446.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4446.patch",
"merged_at": 1654272089000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4445/comments | https://api.github.com/repos/huggingface/datasets/issues/4445/events | https://github.com/huggingface/datasets/pull/4445 | 1,259,947,568 | PR_kwDODunzps45EjtA | 4,445 | Fix missing args in docstring of load_dataset_builder | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,264,550,000 | 1,654,266,932,000 | 1,654,266,429,000 | MEMBER | null | Currently, the docstring of `load_dataset_builder` only contains the first parameter `path` (no other):
- https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/loading_methods#datasets.load_dataset_builder.path | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4445/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4445",
"html_url": "https://github.com/huggingface/datasets/pull/4445",
"diff_url": "https://github.com/huggingface/datasets/pull/4445.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4445.patch",
"merged_at": 1654266429000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4444 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4444/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4444/comments | https://api.github.com/repos/huggingface/datasets/issues/4444/events | https://github.com/huggingface/datasets/pull/4444 | 1,259,738,209 | PR_kwDODunzps45D2XX | 4,444 | Fix kwargs in docstrings | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,252,142,000 | 1,654,254,088,000 | 1,654,253,566,000 | MEMBER | null | To fix the rendering of `**kwargs` in docstrings, parentheses must be added after it.
See:
- huggingface/doc-builder/issues/235 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4444/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4444/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4444",
"html_url": "https://github.com/huggingface/datasets/pull/4444",
"diff_url": "https://github.com/huggingface/datasets/pull/4444.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4444.patch",
"merged_at": 1654253566000
} | true |
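A hypothetical docstring showing the convention the PR body describes: the `**kwargs` entry is followed by a parenthesized type note, like any other documented argument, so the doc-builder renders it correctly. The function and wording are made up, not the exact lines changed in the PR.

```python
# Hypothetical example of the "**kwargs (...)" docstring convention.
def load_something(path, **kwargs):
    """Load something from `path`.

    Args:
        path (str):
            Path or name of the resource to load.
        **kwargs (additional keyword arguments):
            Keyword arguments forwarded to the underlying builder.
    """
    return path, kwargs
```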
https://api.github.com/repos/huggingface/datasets/issues/4443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4443/comments | https://api.github.com/repos/huggingface/datasets/issues/4443/events | https://github.com/huggingface/datasets/issues/4443 | 1,259,606,334 | I_kwDODunzps5LFBE- | 4,443 | Dataset Viewer issue for openclimatefix/nimrod-uk-1km | {
"login": "ZYMXIXI",
"id": 32382826,
"node_id": "MDQ6VXNlcjMyMzgyODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/32382826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZYMXIXI",
"html_url": "https://github.com/ZYMXIXI",
"followers_url": "https://api.github.com/users/ZYMXIXI/followers",
"following_url": "https://api.github.com/users/ZYMXIXI/following{/other_user}",
"gists_url": "https://api.github.com/users/ZYMXIXI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZYMXIXI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZYMXIXI/subscriptions",
"organizations_url": "https://api.github.com/users/ZYMXIXI/orgs",
"repos_url": "https://api.github.com/users/ZYMXIXI/repos",
"events_url": "https://api.github.com/users/ZYMXIXI/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZYMXIXI/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"If I understand correctly, this is due to the key `split` missing in the line https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41 of the script.\r\nMaybe @albertvillanova could confirm.",
"I'm having a look.",
"Indeed there are several issues in this dataset loading script.\r\n\r\nThe one pointed out by @severo: for the default configuration \"crops\": https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L244\r\n- The download manager downloads `_URL`\r\n- But `_URL` is not defined: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41\r\n ```python\r\n _URL = {'train': []}\r\n ```\r\n- Afterwards, for each split, a different key in `_ULR` is used, but it only contains one key: \"train\"\r\n - \"valid\" key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L260\r\n - \"test key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L269\r\n \r\nThese keys do not exist inside `_URL`, thus the error message reported in the viewer: \r\n```\r\nException: KeyError\r\nMessage: 'valid'\r\n```",
"Would anyone want to submit a Hub PR (or open a Discussion for the authors to be aware) to this dataset? https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km",
"Hi, I'm the main author for that dataset, so I'll work on updating it! I was working on debugging some stuff awhile ago, which is what broke it. ",
"I've opened a Discussion page, so that we can ask/answer and propose fixes until the script works properly: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/discussions/1\r\n\r\nCC: @julien-c @jacobbieker "
] | 1,654,244,236,000 | 1,654,590,232,000 | null | NONE | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4443/timeline | null | null | null | null | false |
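The `KeyError: 'valid'` traced in the comments above follows directly from `_URL` defining only a `"train"` key while the split generators also look up `"valid"` and `"test"`. A minimal illustration of the failing lookup (the URLs are placeholders, not the dataset's real files):

```python
# The loading script indexes _URL with "train", "valid" and "test", so all three
# keys must exist; with _URL = {"train": []} the lookup raises KeyError: 'valid'.
_URL = {
    "train": ["https://example.com/nimrod/train-000.tar"],
    "valid": ["https://example.com/nimrod/valid-000.tar"],
    "test": ["https://example.com/nimrod/test-000.tar"],
}

for split in ("train", "valid", "test"):
    print(split, _URL[split])
```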
https://api.github.com/repos/huggingface/datasets/issues/4442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4442/comments | https://api.github.com/repos/huggingface/datasets/issues/4442/events | https://github.com/huggingface/datasets/issues/4442 | 1,258,589,276 | I_kwDODunzps5LBIxc | 4,442 | Dataset Viewer issue for amazon_polarity | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks, looking at it",
"Not sure what happened 😬, but it's fixed"
] | 1,654,197,518,000 | 1,654,627,837,000 | 1,654,627,837,000 | MEMBER | null | ### Link
https://huggingface.co/datasets/amazon_polarity/viewer/amazon_polarity/test
### Description
For some reason the train split is OK but the test split is not for this dataset:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/amazon_polarity/__init__.py'
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4442/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4441/comments | https://api.github.com/repos/huggingface/datasets/issues/4441/events | https://github.com/huggingface/datasets/issues/4441 | 1,258,568,656 | I_kwDODunzps5LBDvQ | 4,441 | Dataset Viewer issue for aeslc | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Not sure what happened 😬, but it's fixed"
] | 1,654,196,232,000 | 1,654,627,855,000 | 1,654,627,855,000 | MEMBER | null | ### Link
https://huggingface.co/datasets/aeslc
### Description
The dataset viewer can't find `dataset_infos.json` in its cache:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/aeslc/eb8e30234cf984a58ebe9f205674597ac1db2ec91e7321cd7f36864f7e3671b8/dataset_infos.json'
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4441/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4440/comments | https://api.github.com/repos/huggingface/datasets/issues/4440/events | https://github.com/huggingface/datasets/pull/4440 | 1,258,494,469 | PR_kwDODunzps44_io_ | 4,440 | Update docs around audio and vision | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4440). All of your documentation changes will be reflected on that endpoint.",
"> Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead?\r\n\r\nWe plan to address this with end-to-end examples (for each modality) more focused on preprocessing than the ones in the Transformers docs."
] | 1,654,191,723,000 | 1,655,224,235,000 | null | MEMBER | null | As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without digging too deeply into the docs.
Other changes include:
- Moved the installation guide to the Get Started section because it should be part of a user's onboarding to the library before exploring tutorials or how-to's.
- Updated the native TF code for creating a `tf.data.Dataset` because it was throwing an error. The `to_tensor()` bit was redundant and removing it fixed the error (please double-check me here!).
- Added some UI components to the quickstart so it's easier for users to navigate directly to the relevant section with context about what to expect.
- Reverted to the code tabs for content that doesn't have any framework-specific text. I think this saves space compared to the code blocks. We'll still use the code blocks if the `torch` text is different from the `tf` text.
Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4440/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4440/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4440",
"html_url": "https://github.com/huggingface/datasets/pull/4440",
"diff_url": "https://github.com/huggingface/datasets/pull/4440.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4440.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4439/comments | https://api.github.com/repos/huggingface/datasets/issues/4439/events | https://github.com/huggingface/datasets/issues/4439 | 1,258,434,111 | I_kwDODunzps5LAi4_ | 4,439 | TIMIT won't load after manual download: Errors about files that don't exist | {
"login": "drscotthawley",
"id": 13925685,
"node_id": "MDQ6VXNlcjEzOTI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/13925685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drscotthawley",
"html_url": "https://github.com/drscotthawley",
"followers_url": "https://api.github.com/users/drscotthawley/followers",
"following_url": "https://api.github.com/users/drscotthawley/following{/other_user}",
"gists_url": "https://api.github.com/users/drscotthawley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drscotthawley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drscotthawley/subscriptions",
"organizations_url": "https://api.github.com/users/drscotthawley/orgs",
"repos_url": "https://api.github.com/users/drscotthawley/repos",
"events_url": "https://api.github.com/users/drscotthawley/events{/privacy}",
"received_events_url": "https://api.github.com/users/drscotthawley/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"To have some context, please see:\r\n- #4145\r\n\r\nPlease, also note that we have recently made some fixes to the script, which are in our GitHub master branch but not yet released:\r\n- #4422\r\n- #4425 \r\n- #4436",
"Thanks Albert! I'll try pulling `datasets` from the git repo instead of PyPI, and/or just wait for the next release.\r\n",
"I'm closing this issue then. Please, feel free to reopen it again if the problem persists."
] | 1,654,187,756,000 | 1,654,245,857,000 | 1,654,245,856,000 | NONE | null | ## Describe the bug
I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to the UPenn page for manual download. (UPenn apparently wants $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it is looking for files that don't exist anywhere in the dataset: it is looking for files with lower-case letters like "**test*" (all the filenames in both my copies are uppercase) and certain file extensions that exclude the .DOC files which are provided in TIMIT:
## Steps to reproduce the bug
```python
data = load_dataset('timit_asr', 'clean')['train']
```
## Expected results
The dataset should load with no errors.
## Actual results
This error message:
```
File "/home/ubuntu/envs/data2vec/lib/python3.9/site-packages/datasets/data_files.py", line 201, in resolve_patterns_locally_or_by_urls
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at /home/ubuntu/datasets/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
But this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT data set with "TEST" in them have ".DOC" extensions? ...I wonder, how was anyone able to get this to work in the first place?
The files in the dataset look like the following:
```
│ PHONCODE.DOC
│ PROMPTS.TXT
│ SPKRINFO.TXT
│ SPKRSENT.TXT
│ TESTSET.DOC
```
...so why are these being excluded by the dataset loader?
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4439/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4438/comments | https://api.github.com/repos/huggingface/datasets/issues/4438/events | https://github.com/huggingface/datasets/pull/4438 | 1,258,255,394 | PR_kwDODunzps44-vhC | 4,438 | Fix docstring of inspect_dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,179,670,000 | 1,654,188,055,000 | 1,654,187,547,000 | MEMBER | null | As pointed out by @sgugger:
- huggingface/doc-builder/issues/235 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4438/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4438",
"html_url": "https://github.com/huggingface/datasets/pull/4438",
"diff_url": "https://github.com/huggingface/datasets/pull/4438.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4438.patch",
"merged_at": 1654187547000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4437/comments | https://api.github.com/repos/huggingface/datasets/issues/4437/events | https://github.com/huggingface/datasets/pull/4437 | 1,258,249,582 | PR_kwDODunzps44-uRW | 4,437 | Add missing columns to `blended_skill_talk` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,179,386,000 | 1,654,530,596,000 | 1,654,530,085,000 | CONTRIBUTOR | null | Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https://github.com/facebookresearch/ParlAI/blob/main/parlai/tasks/blended_skill_talk/build.py).
Fix #4426 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4437/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4437",
"html_url": "https://github.com/huggingface/datasets/pull/4437",
"diff_url": "https://github.com/huggingface/datasets/pull/4437.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4437.patch",
"merged_at": 1654530085000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4436/comments | https://api.github.com/repos/huggingface/datasets/issues/4436/events | https://github.com/huggingface/datasets/pull/4436 | 1,257,758,834 | PR_kwDODunzps449FsU | 4,436 | Fix directory names for LDC data in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,152,304,000 | 1,654,162,376,000 | 1,654,161,867,000 | MEMBER | null | Related to:
- #4422 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4436/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4436",
"html_url": "https://github.com/huggingface/datasets/pull/4436",
"diff_url": "https://github.com/huggingface/datasets/pull/4436.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4436.patch",
"merged_at": 1654161867000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4435/comments | https://api.github.com/repos/huggingface/datasets/issues/4435/events | https://github.com/huggingface/datasets/issues/4435 | 1,257,496,552 | I_kwDODunzps5K89_o | 4,435 | Load a local cached dataset that has been modified | {
"login": "mihail911",
"id": 2789441,
"node_id": "MDQ6VXNlcjI3ODk0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mihail911",
"html_url": "https://github.com/mihail911",
"followers_url": "https://api.github.com/users/mihail911/followers",
"following_url": "https://api.github.com/users/mihail911/following{/other_user}",
"gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mihail911/subscriptions",
"organizations_url": "https://api.github.com/users/mihail911/orgs",
"repos_url": "https://api.github.com/users/mihail911/repos",
"events_url": "https://api.github.com/users/mihail911/events{/privacy}",
"received_events_url": "https://api.github.com/users/mihail911/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! `datasets` caches every modification/loading, so you can either rerun the pipeline up to the `map` call or use `Dataset.from_file(modified_dataset)` to load the dataset directly from the cache file.",
"Awesome, hvala Mario! This works. "
] | 1,654,134,709,000 | 1,654,214,366,000 | 1,654,214,358,000 | NONE | null | ## Describe the bug
I have loaded a dataset as follows:
```
d = load_dataset("emotion", split="validation")
```
Afterwards I make some modifications to the dataset via a `map` call:
```
d.map(some_update_func, cache_file_name=modified_dataset)
```
This generates a cached version of the dataset on my local system in the same directory as the original download of the data (/path/to/cache). Running an `ls` returns:
```
modified_dataset
dataset_info.json
emotion-test.arrow
emotion-train.arrow
emotion-validation.arrow
```
as expected. However, when I try to load up the modified cached dataset via a call to
```
modified = load_dataset("emotion", split="validation", data_files="/path/to/cache/modified_dataset")
```
it simply redownloads a new version of the dataset and dumps to a new cache rather than loading up the original modified dataset:
```
Using custom data configuration validation-cdbf51685638421b
Downloading and preparing dataset emotion/validation to ...
```
How am I supposed to load the original modified local cache copy of the dataset?
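For reference, here is a minimal sketch of the workaround suggested in the comments above — loading the modified Arrow cache file directly with `Dataset.from_file` instead of going through `load_dataset` (the path below is the same placeholder path used in this report):
```python
from datasets import Dataset

# Load the cached, modified Arrow file directly; this bypasses load_dataset
# and therefore does not trigger a re-download.
modified = Dataset.from_file("/path/to/cache/modified_dataset")
print(modified)
```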
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4435/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4434/comments | https://api.github.com/repos/huggingface/datasets/issues/4434/events | https://github.com/huggingface/datasets/pull/4434 | 1,256,207,321 | PR_kwDODunzps443mAr | 4,434 | Fix dummy dataset generation script for handling nested types of _URLs | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,654,095,195,000 | 1,654,603,708,000 | 1,654,593,849,000 | CONTRIBUTOR | null | It seems that when a user specifies a nested _URLs structure in their dataset script, an error will be raised when generating the dummy dataset.
I think the types of all elements in `dummy_data_dict.values()` should be checked because they may have different types.
Linked to issue #4428
PS: I am not sure whether my code fixes this issue in a proper way. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4434/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4434",
"html_url": "https://github.com/huggingface/datasets/pull/4434",
"diff_url": "https://github.com/huggingface/datasets/pull/4434.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4434.patch",
"merged_at": 1654593849000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4433/comments | https://api.github.com/repos/huggingface/datasets/issues/4433/events | https://github.com/huggingface/datasets/pull/4433 | 1,255,830,758 | PR_kwDODunzps442P5L | 4,433 | Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Added back the `[:]` and a comment to explain why this is needed. "
] | 1,654,085,396,000 | 1,654,770,894,000 | 1,654,770,367,000 | CONTRIBUTOR | null | Fix #4348 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4433/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4433/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4433",
"html_url": "https://github.com/huggingface/datasets/pull/4433",
"diff_url": "https://github.com/huggingface/datasets/pull/4433.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4433.patch",
"merged_at": 1654770366000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4432/comments | https://api.github.com/repos/huggingface/datasets/issues/4432/events | https://github.com/huggingface/datasets/pull/4432 | 1,255,523,720 | PR_kwDODunzps441JmK | 4,432 | Fix builder docstring | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,076,730,000 | 1,654,191,827,000 | 1,654,191,315,000 | MEMBER | null | Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4432/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4432",
"html_url": "https://github.com/huggingface/datasets/pull/4432",
"diff_url": "https://github.com/huggingface/datasets/pull/4432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4432.patch",
"merged_at": 1654191315000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4431/comments | https://api.github.com/repos/huggingface/datasets/issues/4431/events | https://github.com/huggingface/datasets/pull/4431 | 1,254,618,948 | PR_kwDODunzps44x5aG | 4,431 | Add personaldialog datasets | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"These test errors are related to issue #4428 \r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"I only made a trivial modification in my commit https://github.com/huggingface/datasets/pull/4431/commits/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4434 for the about issue.",
"> Awesome thanks for adding this dataset :)\r\n> \r\n> I just have one comment about the licensing.\r\n> \r\n> Also it seems that you already have the dataset in https://huggingface.co/datasets/silver/personal_dialog, so it's unnecessary to add it here\r\n\r\nThank you very much for your comment.\r\n\r\nSo, should I close this PR?",
"Thanks for fixing the licensing section :)\r\n\r\n> So, should I close this PR?\r\n\r\nYes you can close this PR, it's better if your dataset is under your namespace at https://huggingface.co/datasets/silver/personal_dialog :)\r\n\r\nDon't forget to update the licensing section on https://huggingface.co/datasets/silver/personal_dialog as well"
] | 1,654,046,440,000 | 1,654,951,223,000 | 1,654,950,676,000 | CONTRIBUTOR | null | It seems that all tests have passed | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4431/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4431",
"html_url": "https://github.com/huggingface/datasets/pull/4431",
"diff_url": "https://github.com/huggingface/datasets/pull/4431.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4431.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4430/comments | https://api.github.com/repos/huggingface/datasets/issues/4430/events | https://github.com/huggingface/datasets/issues/4430 | 1,254,412,591 | I_kwDODunzps5KxNEv | 4,430 | Add ability to load newer, cleaner version of Multi-News | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Our versioning is based on Git revisions (the `revision` param in `load_dataset`), so you can just replace the old URL with the new one and open a PR :). I can also give you some pointers if needed.",
"@mariosasko Awesome thanks! I will do that. Looks like this new version of the data is not available as a zip but as three files (train/dev/test). How is this usually handled in HF Datasets, should `_URL` be a dict with keys `train`, `val`, `test` perhaps?",
"Yes! Let me help you with more detailed instructions.\r\n\r\nIn the first step, we need to update the URLs. One of the possible dictionary structures is as follows:\r\n```python\r\n_URLs = {\r\n \"train\": {\"src\": \"https://drive.google.com/uc?export=download&id=1wHAWDOwOoQWSj7HYpyJ3Aeud8WhhaJ7P\", \"tgt\": \"https://drive.google.com/uc?export=download&id=1QVgswwhVTkd3VLCzajK6eVkcrSWEK6kq\"}\r\n \"val\": ...\r\n \"test\": ...\r\n}\r\n```\r\n\r\n(You can use this page to generate direct download links: https://sites.google.com/site/gdocs2direct/)\r\n\r\nThen we move to the `split_generators` method:\r\n```python\r\ndef _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n files = dl_manager.download(_URLs)\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\"src_file\": files[\"train\"][\"src\"], \"tgt_file\": files[\"train\"][\"tgt\"]},\r\n ),\r\n ... # same for val and test\r\n ]\r\n```\r\nFinally, we adjust the signature of `_generate_examples`:\r\n```python\r\ndef _generate_examples(self, src_file, tgt_file):\r\n \"\"\"Yields examples.\"\"\"\r\n with open(src_file, encoding=\"utf-8\") as src_f, open(\r\n tgt_file, encoding=\"utf-8\"\r\n ) as tgt_f:\r\n ... # the rest is the same\r\n```\r\n\r\nAnd that's it!\r\n\r\nPS: Let me know if you need help updating the dummy data and regenerating the metadata file.",
"Awesome! Thanks for the detailed help, that was straightforward with your instruction. However, I think I am being blocked by this issue: https://github.com/huggingface/datasets/issues/4428",
"Feel free to open a PR, and I can fix this manually.",
"Awsome, done in #4451!"
] | 1,654,030,844,000 | 1,654,622,084,000 | 1,654,622,084,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https://github.com/Alex-Fabbri/Multi-News/issues/11). There exists a [newer version which fixes some of these issues](https://drive.google.com/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq).
Unfortunately I don't think you can just replace this old URL with the new one, otherwise this could lead to issues with reproducibility.
**Describe the solution you'd like**
Add a new version to the Multi-News dataloader that points to the updated dataset which has fixes for some known issues.
**Describe alternatives you've considered**
Replace the current URL to the original version to the dataset with the URL to the version with fixes.
**Additional context**
Would be happy to make a PR for this, could someone maybe point me to another dataloader that has multiple versions so I can see how this is handled in `datasets`?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4430/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4429/comments | https://api.github.com/repos/huggingface/datasets/issues/4429/events | https://github.com/huggingface/datasets/pull/4429 | 1,254,184,358 | PR_kwDODunzps44whxN | 4,429 | Update builder docstring for deprecated/added arguments | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mishig25 is investigating why deprecated/added do not affect the enclosed text format when used in args docstring: no special formatting appears: \r\n- https://moon-ci-docs.huggingface.co/docs/datasets/pr_4429/en/package_reference/builder_classes#datasets.DatasetBuilder",
"@albertvillanova please check now 👍 \r\nhttps://moon-ci-docs.huggingface.co/docs/datasets/pr_4429/en/package_reference/builder_classes#datasets.DatasetBuilder\r\n\r\n<img width=\"500\" alt=\"Screenshot 2022-06-06 at 10 20 34\" src=\"https://user-images.githubusercontent.com/11827707/172123471-fab97138-c903-4a71-ab7f-c90e5e43c58f.png\">\r\n",
"Thanks @mishig25.\r\n\r\nJust one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?",
"> Just one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?\r\n\r\nYes, that is expected 😊 because the depreacted box is being bounded by its parent box (the box for `name` argument in the screenshot above)"
] | 1,654,018,645,000 | 1,654,688,418,000 | 1,654,687,905,000 | MEMBER | null | This PR updates the builder docstring with deprecated/added directives for arguments name/config_name.
Follow up of:
- #4414
- huggingface/doc-builder#233
First merge:
- #4432 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4429/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4429",
"html_url": "https://github.com/huggingface/datasets/pull/4429",
"diff_url": "https://github.com/huggingface/datasets/pull/4429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4429.patch",
"merged_at": 1654687905000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4428/comments | https://api.github.com/repos/huggingface/datasets/issues/4428/events | https://github.com/huggingface/datasets/issues/4428 | 1,254,092,818 | I_kwDODunzps5Kv_AS | 4,428 | Errors when building dummy data if you use nested _URLS | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,654,013,457,000 | 1,654,593,849,000 | 1,654,593,849,000 | CONTRIBUTOR | null | ## Describe the bug
When making dummy data with the `datasets-cli dummy_data` tool,
an error will be raised if you use a nested `_URLS` structure in your dataset script.
Traceback (most recent call last):
File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module>
main()
File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run
self._autogenerate_dummy_data(
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data
dataset_builder._split_generators(dl_manager)
File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators
data_dir = dl_manager.download_and_extract(urls)
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract
dummy_output = self.mock_download_manager.download(url_or_urls)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download
return self.download_and_extract(data_url)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract
return self.create_dummy_data_dict(dummy_file, data_url)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict
if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
TypeError: unhashable type: 'list'
## Steps to reproduce the bug
You can use my dataset script implemented here:
https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py
```python
datasets_cli dummy_data datasets/personal_dialog --auto_generate
```
You can change https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54
to
```
"train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz"
```
before running the above script to avoid downloading the large training data.
## Expected results
The dummy data should be generated
## Actual results
An error is raised.
It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165
we only check whether the first item of `dummy_data_dict.values()` is a `str`.
However, `dummy_data_dict.values()` may contain a mix of types, e.g. `[str, list, list]`.
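To make the failure mode concrete, here is a small, self-contained sketch (the file paths are hypothetical) of why the existing check breaks as soon as the values mix `str` and `list`:
```python
# Hypothetical dummy data dict mirroring a nested _URLS layout:
# one split maps to a single file, another maps to a list of files.
dummy_data_dict = {
    "train": "dummy/train.jsonl.gz",
    "dev": ["dummy/dev_a.jsonl.gz", "dummy/dev_b.jsonl.gz"],
}

first_value = next(iter(dummy_data_dict.values()))
try:
    # This mirrors the check on line 165: building a set over the values
    # fails because lists are unhashable.
    if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
        pass
except TypeError as err:
    print(err)  # unhashable type: 'list'
```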
A simple fix would be changing https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 to
```python
if all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
```
But I don't know if this kind of change may bring any side effects, since I am not sure about the detailed logic here.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: Python 3.9.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4428/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4427/comments | https://api.github.com/repos/huggingface/datasets/issues/4427/events | https://github.com/huggingface/datasets/pull/4427 | 1,253,959,313 | PR_kwDODunzps44vyGg | 4,427 | Add HF.co for PRs/Issues for specific datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,007,481,000 | 1,654,087,062,000 | 1,654,086,542,000 | MEMBER | null | As in https://github.com/huggingface/transformers/pull/17485, issues and PRs for datasets under a namespace have to be on the HF Hub | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4427/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4427",
"html_url": "https://github.com/huggingface/datasets/pull/4427",
"diff_url": "https://github.com/huggingface/datasets/pull/4427.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4427.patch",
"merged_at": 1654086542000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4426/comments | https://api.github.com/repos/huggingface/datasets/issues/4426/events | https://github.com/huggingface/datasets/issues/4426 | 1,253,887,311 | I_kwDODunzps5KvM1P | 4,426 | Add loading variable number of columns for different splits | {
"login": "DrMatters",
"id": 22641583,
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrMatters",
"html_url": "https://github.com/DrMatters",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! Indeed the column is missing, but you shouldn't get an error? Have you made some modifications (locally) to the loading script? I've opened a PR to add the missing columns to the script. "
] | 1,654,004,416,000 | 1,654,273,525,000 | 1,654,273,525,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
The original dataset `blended_skill_talk` consists of different sets of columns for the different splits: the (test/valid) splits have an additional data column `label_candidates` that the (train) split doesn't have.
When loading such data, an exception occurs at table.py:cast_table_to_schema, because of mismatched columns. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4426/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4425/comments | https://api.github.com/repos/huggingface/datasets/issues/4425/events | https://github.com/huggingface/datasets/pull/4425 | 1,253,641,604 | PR_kwDODunzps44uuDq | 4,425 | Make extensions case-insensitive in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,991,804,000 | 1,654,092,930,000 | 1,654,092,411,000 | MEMBER | null | Related to #4422. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4425/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4425",
"html_url": "https://github.com/huggingface/datasets/pull/4425",
"diff_url": "https://github.com/huggingface/datasets/pull/4425.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4425.patch",
"merged_at": 1654092411000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4424/comments | https://api.github.com/repos/huggingface/datasets/issues/4424/events | https://github.com/huggingface/datasets/pull/4424 | 1,253,542,488 | PR_kwDODunzps44uZBD | 4,424 | Fix DuplicatedKeysError in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,986,865,000 | 1,654,005,050,000 | 1,654,004,551,000 | MEMBER | null | Fix #4422. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4424/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4424",
"html_url": "https://github.com/huggingface/datasets/pull/4424",
"diff_url": "https://github.com/huggingface/datasets/pull/4424.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4424.patch",
"merged_at": 1654004551000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4423/comments | https://api.github.com/repos/huggingface/datasets/issues/4423/events | https://github.com/huggingface/datasets/pull/4423 | 1,253,326,023 | PR_kwDODunzps44trdP | 4,423 | Add new dataset MMChat | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! As for https://github.com/huggingface/datasets/pull/4431 please also update the licensing section in https://huggingface.co/datasets/silver/mmchat ;)\r\n\r\nThen if it's fine for you feel free to close this PR"
] | 1,653,972,307,000 | 1,654,951,252,000 | 1,654,950,702,000 | CONTRIBUTOR | null | Hi, I am adding a new dataset MMChat.
It seems that all tests have passed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4423/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4423",
"html_url": "https://github.com/huggingface/datasets/pull/4423",
"diff_url": "https://github.com/huggingface/datasets/pull/4423.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4423.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4422/comments | https://api.github.com/repos/huggingface/datasets/issues/4422/events | https://github.com/huggingface/datasets/issues/4422 | 1,253,146,511 | I_kwDODunzps5KsX-P | 4,422 | Cannot load timit_asr data set | {
"login": "bhaddow",
"id": 992795,
"node_id": "MDQ6VXNlcjk5Mjc5NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/992795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhaddow",
"html_url": "https://github.com/bhaddow",
"followers_url": "https://api.github.com/users/bhaddow/followers",
"following_url": "https://api.github.com/users/bhaddow/following{/other_user}",
"gists_url": "https://api.github.com/users/bhaddow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhaddow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhaddow/subscriptions",
"organizations_url": "https://api.github.com/users/bhaddow/orgs",
"repos_url": "https://api.github.com/users/bhaddow/repos",
"events_url": "https://api.github.com/users/bhaddow/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhaddow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @bhaddow.\r\n\r\nI'm fixing it.",
"Thanks for the quick fix!",
"@bhaddow we have also made a fix so that you don't have to convert to uppercase the file extensions of the LDC data.\r\n\r\nWould you mind checking if it works OK now for you and reporting if there are any issues? Thanks. ",
"Hi @albertvillanova -It loads fine on a copy of the data from deepai - although I have to remove the copies of the .WAV files (with extension .WAV,wav). On a copy of the data that was obtained from the LDC, the glob still fails to find the files. The LDC copy looks like it was copied from CD, in 2004, so the structure may be different to a current download.",
"Ah, if I change the train/ and test/ directories to TRAIN/ and TEST/ then it works!",
"Thanks for your investigation and report, @bhaddow. I'm adding another fix for the TRAIN/train and TEST/test directory names."
] | 1,653,948,022,000 | 1,654,151,645,000 | 1,654,004,551,000 | NONE | null | ## Describe the bug
I am trying to load the timit_asr data set. I have tried with a copy from the LDC, and a copy from deepai. In both cases they fail with a "duplicate key" error. With the LDC version I have to convert the file extensions all to upper-case before I can load it at all.
## Steps to reproduce the bug
```python
timit = datasets.load_dataset("timit_asr", data_dir = "/path/to/dataset")
# Sample code to reproduce the bug
```
## Expected results
The data set should load without error. It worked for me before the LDC url change.
## Actual results
```
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: SA1
Keys should be unique and deterministic in nature
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4422/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4421/comments | https://api.github.com/repos/huggingface/datasets/issues/4421/events | https://github.com/huggingface/datasets/pull/4421 | 1,253,059,467 | PR_kwDODunzps44szxR | 4,421 | Add extractor for bzip2-compressed files | {
"login": "asivokon",
"id": 2910707,
"node_id": "MDQ6VXNlcjI5MTA3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2910707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asivokon",
"html_url": "https://github.com/asivokon",
"followers_url": "https://api.github.com/users/asivokon/followers",
"following_url": "https://api.github.com/users/asivokon/following{/other_user}",
"gists_url": "https://api.github.com/users/asivokon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asivokon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asivokon/subscriptions",
"organizations_url": "https://api.github.com/users/asivokon/orgs",
"repos_url": "https://api.github.com/users/asivokon/repos",
"events_url": "https://api.github.com/users/asivokon/events{/privacy}",
"received_events_url": "https://api.github.com/users/asivokon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,653,938,380,000 | 1,654,528,970,000 | 1,654,528,970,000 | CONTRIBUTOR | null | This change enables loading bzipped datasets, just like any other compressed dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4421/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4421",
"html_url": "https://github.com/huggingface/datasets/pull/4421",
"diff_url": "https://github.com/huggingface/datasets/pull/4421.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4421.patch",
"merged_at": 1654528969000
} | true |
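The bzip2 extractor added above means a bzip2-compressed data file can be loaded like any other compressed file. A minimal sketch, assuming a JSON Lines file at a placeholder path:

```python
from datasets import load_dataset

# Minimal sketch: "data/train.jsonl.bz2" is a placeholder path; with the
# bzip2 extractor, the file is decompressed transparently during loading.
dataset = load_dataset("json", data_files={"train": "data/train.jsonl.bz2"})
print(dataset["train"][0])
```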
https://api.github.com/repos/huggingface/datasets/issues/4420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4420/comments | https://api.github.com/repos/huggingface/datasets/issues/4420/events | https://github.com/huggingface/datasets/issues/4420 | 1,252,739,239 | I_kwDODunzps5Kq0in | 4,420 | Metric evaluation problems in multi-node, shared file system | {
"login": "gullabi",
"id": 40303490,
"node_id": "MDQ6VXNlcjQwMzAzNDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/40303490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gullabi",
"html_url": "https://github.com/gullabi",
"followers_url": "https://api.github.com/users/gullabi/followers",
"following_url": "https://api.github.com/users/gullabi/following{/other_user}",
"gists_url": "https://api.github.com/users/gullabi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gullabi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gullabi/subscriptions",
"organizations_url": "https://api.github.com/users/gullabi/orgs",
"repos_url": "https://api.github.com/users/gullabi/repos",
"events_url": "https://api.github.com/users/gullabi/events{/privacy}",
"received_events_url": "https://api.github.com/users/gullabi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"If you call `metric.compute` in a distributed setup like yours, then `metric.compute` is called in each process. `metric.compute` first calls `metric.add_batch`, and it looks like your error appears at that stage.\r\n\r\nTo make sure that all the processes have started writing their predictions/references at the same time, each process waits for process 0 to lock `slurm-{world_size}-0.arrow.lock`. Process 0 locks this file when `metric.add_batch` is called, so here when `metric.compute` is called.\r\n\r\nTherefore your error can happen when process 0 takes too much time to call `metric.compute` compared to process 3 (>100 seconds by default). I haven't tried running your code but could it be the case ?\r\n\r\nI guess it could also happen if you run multiple times the same distributed job at the same time with the same `experiment_id` because they would collide.\r\n",
"We've finally been able to isolate the problem, it wasn't a timing problem, but rather a file locking one. \r\nThe locks produced by calling `flock` where not visible between nodes (so the master node couldn't check other node's locks nor the other way around). \r\n\r\nWe are now having issues with the pre-processing in our runner script, but are not related with the rendezvous process during the evaluation phase. We will let you know about it once we address it. \r\n\r\nOur solution to the rendezvous is as follows:\r\n- We solved the problem by calling `lockf` instead of `flock`.\r\n- We had to change slightly the `_check_all_processes_locks` method so that the main process (i.e. process 0) didn't check it's own lock (because `lockf` permits recursive locks and thus checking it only replaced the current lock with a new one). \r\n\r\nWe use a shared file system between nodes using GPFS in our cluster setup. Maybe the difference between the behavior we see with respect to your usage in multi-node executions comes from that fact. Which file system scheme do you use for the multi-node executions? \r\n\r\n`lockf` seems to work in more settings than `flock`, so maybe we could write a PR so you could test it in your environment. ",
"Cool, I'm glad you managed to make evaluation work :)\r\n\r\nI'm not completely aware of the differences between lockf and flock, but I've read somewhere that flock is preferable over lockf in multithreading and multiprocessing situations. Here we definitely are in such a situation so unless it is super important I don't think we will switch to lockf"
] | 1,653,917,045,000 | 1,654,693,385,000 | null | NONE | null | ## Describe the bug
Metric evaluation fails in a multi-node setup with a shared file system, because the master process cannot find the lock files from other nodes. (This issue was originally mentioned in the transformers repo https://github.com/huggingface/transformers/issues/17412)
## Steps to reproduce the bug
1. clone [this huggingface model](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm) and replace the `run_speech_recognition_ctc.py` script with the version in the gist [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71#file-run_speech_recognition_ctc-py).
2. Setup the `venv` according to the requirements of the model file plus `datasets==2.0.0`, `transformers==4.18.0` and `torch==1.9.0`
3. Launch the runner in a distributed environment which has a shared file system for two nodes, preferably with SLURM. Example [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71)
Specifically for `datasets`, in the distributed setup `load_metric` is called as:
```
process_id=int(os.environ["RANK"])
num_process=int(os.environ["WORLD_SIZE"])
eval_metrics = {metric: load_metric(metric,
process_id=process_id,
num_process=num_process,
experiment_id="slurm")
for metric in data_args.eval_metrics}
```
## Expected results
The training should not fail, due to the failure of the `Metric.compute()` step.
## Actual results
For the test I am executing the world size is 4, with 2 GPUs in 2 nodes. However the process is not finding the necessary lock files
```
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 841, in <module>
main()
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 792, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1497, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1624, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2291, in evaluate
metric_key_prefix=metric_key_prefix,
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2535, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in compute_metrics
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in <dictcomp>
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 419, in compute
self.add_batch(**inputs)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 465, in add_batch
self._init_writer()
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 552, in _init_writer
self._check_rendez_vous() # wait for master to be ready and to let everyone go
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 342, in _check_rendez_vous
) from None
ValueError: Expected to find locked file /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock from process 3 but it doesn't exist.
```
When I look at the cache directory, I can see all the lock files in principle:
```
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-rdv.lock
```
I see that there was another related issue here https://github.com/huggingface/datasets/issues/1942, but it seems to have been resolved via https://github.com/huggingface/datasets/pull/1966. Let me know if there is a problem with how I am calling `load_metric` or whether I need to make changes to the `.compute()` steps.
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-147.8.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
- Python version: 3.7.4
- PyArrow version: 7.0.0
- Pandas version: 1.3.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4420/timeline | null | null | null | null | false |
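The failure above happens during the rendezvous step of distributed metric computation. A minimal sketch of the intended multi-process usage of `load_metric` (environment variable names and the `experiment_id` are assumptions, not taken from the issue):

```python
import os

from datasets import load_metric

# Each process loads the same metric with its rank and the world size;
# RANK/WORLD_SIZE are assumed to be set by the launcher (e.g. torchrun/SLURM).
process_id = int(os.environ.get("RANK", "0"))
num_process = int(os.environ.get("WORLD_SIZE", "1"))

wer = load_metric(
    "wer",
    process_id=process_id,
    num_process=num_process,
    experiment_id="my-distributed-run",  # placeholder; must match across processes
)

# Every process adds its own shard of predictions and references.
wer.add_batch(predictions=["hello world"], references=["hello world"])

# compute() merges the per-process cache files; only process 0 gets the score,
# the other processes receive None.
score = wer.compute()
if process_id == 0:
    print(score)
```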
https://api.github.com/repos/huggingface/datasets/issues/4419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4419/comments | https://api.github.com/repos/huggingface/datasets/issues/4419/events | https://github.com/huggingface/datasets/issues/4419 | 1,252,652,896 | I_kwDODunzps5Kqfdg | 4,419 | Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! If the only goal is to improve readability, it's better to use `assertTupleEqual` than `assertSequenceEqual` for Python tuples. Also, note that this function is called internally by `assertEqual`, but I guess we can accept a PR to be more verbose.",
"Hi @mariosasko, right! I'll update the issue title/desc with `assertTupleEqual` even though as you said it seems to be internally using `assertEqual` so I'm not sure whether it's worth it or not...\r\n\r\nhttps://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual",
"I thought we were supposed to move gradually from `unittest` to `pytest`..."
] | 1,653,912,798,000 | 1,655,286,304,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
So this is more of a readability improvement than a proposal: wouldn't it be better to use `assertTupleEqual` over the tuples rather than `assertEqual`? `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating.
Find an example of an `assertEqual` over a tuple in 🤗 `datasets` unit tests over an `ArrowDataset` at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570
**Describe the solution you'd like**
Start slowly replacing all the `assertEqual` statements with `assertTupleEqual` if the assertion is done over a Python tuple, as we're doing with the Python lists using `assertListEqual` rather than `assertEqual`.
**Additional context**
If so, please let me know and I'll try to go over the tests and create a PR if applicable, otherwise, if you consider this should stay as `assertEqual` rather than `assertSequenceEqual` feel free to close this issue! Thanks 🤗
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4419/timeline | null | null | null | null | false |
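A small illustration of the change proposed above, with an illustrative tuple standing in for something like a dataset's `.shape`:

```python
import unittest


class TupleAssertionTest(unittest.TestCase):
    def test_shape(self):
        shape = (30, 3)  # illustrative stand-in for e.g. dset.shape
        # assertEqual already dispatches to the tuple-specific comparison
        # internally, but assertTupleEqual states the intent explicitly.
        self.assertEqual(shape, (30, 3))
        self.assertTupleEqual(shape, (30, 3))


if __name__ == "__main__":
    unittest.main()
```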
https://api.github.com/repos/huggingface/datasets/issues/4418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4418/comments | https://api.github.com/repos/huggingface/datasets/issues/4418/events | https://github.com/huggingface/datasets/pull/4418 | 1,252,506,268 | PR_kwDODunzps44q9pG | 4,418 | Add dataset MMChat | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,653,905,440,000 | 1,653,922,698,000 | 1,653,922,698,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4418/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4418",
"html_url": "https://github.com/huggingface/datasets/pull/4418",
"diff_url": "https://github.com/huggingface/datasets/pull/4418.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4418.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4417/comments | https://api.github.com/repos/huggingface/datasets/issues/4417/events | https://github.com/huggingface/datasets/issues/4417 | 1,251,933,091 | I_kwDODunzps5Knvuj | 4,417 | how to convert a dict generator into a huggingface dataset. | {
"login": "StephennFernandes",
"id": 32235549,
"node_id": "MDQ6VXNlcjMyMjM1NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StephennFernandes",
"html_url": "https://github.com/StephennFernandes",
"followers_url": "https://api.github.com/users/StephennFernandes/followers",
"following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}",
"gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions",
"organizations_url": "https://api.github.com/users/StephennFernandes/orgs",
"repos_url": "https://api.github.com/users/StephennFernandes/repos",
"events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}",
"received_events_url": "https://api.github.com/users/StephennFernandes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | open | false | null | [] | null | [
"@albertvillanova @lhoestq , could you please help me on this issue. ",
"Hi ! As mentioned on the [forum](https://discuss.huggingface.co/t/how-to-wrap-a-generator-with-hf-dataset/18464), the simplest for now would be to define a [dataset script](https://huggingface.co/docs/datasets/dataset_script) which can contain your generator. But we can also explore adding something like `ds = Dataset.from_iterable(seqio_dataset)`",
"@lhoestq , hey i did as you instructed, but sadly i cannot get pass through the download_manager, as i dont have anything to download. i was skipping the ` def _split_generators(self, dl_manager):` function. but i cannot get around it. I get a `NotImplementedError: `\r\n\r\nthe following is my code for the same: \r\n\r\n\r\n\r\n```\r\nimport datasets \r\nimport functools\r\nimport glob \r\nfrom datasets import load_from_disk\r\nimport seqio\r\nimport tensorflow as tf\r\nimport t5.data\r\nfrom datasets import load_dataset\r\nfrom t5.data import postprocessors\r\nfrom t5.data import preprocessors\r\nfrom t5.evaluation import metrics\r\nfrom seqio import FunctionDataSource, utils\r\n\r\nTaskRegistry = seqio.TaskRegistry\r\n\r\ndata_path = glob.glob(\"/home/stephen/Desktop/MEGA_CORPUS/COMBINED_CORPUS/*\", recursive=False)\r\n\r\n\r\ndef gen_dataset(split, shuffle=False, seed=None, column=\"text\", dataset_path=None):\r\n dataset = load_from_disk(dataset_path)\r\n if shuffle:\r\n if seed:\r\n dataset = dataset.shuffle(seed=seed)\r\n else:\r\n dataset = dataset.shuffle()\r\n while True:\r\n for item in dataset[str(split)]:\r\n yield item[column]\r\n\r\n\r\ndef dataset_fn(split, shuffle_files, seed=None, dataset_path=None):\r\n return tf.data.Dataset.from_generator(\r\n functools.partial(gen_dataset, split, shuffle_files, seed, dataset_path=dataset_path),\r\n output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_path)\r\n )\r\n\r\n@utils.map_over_dataset\r\ndef target_to_key(x, key_map, target_key):\r\n \"\"\"Assign the value from the dataset to target_key in key_map\"\"\"\r\n return {**key_map, target_key: x}\r\n\r\n\r\n_CITATION = \"Not ready yet\"\r\n_DESCRIPTION = \"a custom seqio based mixed samples on a given temperature value, that again returns a dataset in HF dataset format well samples on the Mixture temperature\"\r\n_HOMEPAGE = \"ldcil.org\"\r\n\r\nclass CustomSeqio(datasets.GeneratorBasedBuilder):\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n }\r\n ),\r\n homepage=\"https://ldcil.org\",\r\n citation=_CITATION,)\r\n\r\ndef generate_examples(self):\r\n seqio_train_list = []\r\n for lang in data_path:\r\n dataset_name = lang.split(\"/\")[-1]\r\n dataset_shapes = None \r\n\r\n TaskRegistry.add(\r\n str(dataset_name),\r\n source=seqio.FunctionDataSource(\r\n dataset_fn=functools.partial(dataset_fn, dataset_path=lang),\r\n splits=(\"train\", \"test\"),\r\n caching_permitted=False,\r\n num_input_examples=dataset_shapes,\r\n ),\r\n preprocessors=[\r\n functools.partial(\r\n target_to_key, key_map={\r\n \"targets\": None,\r\n }, target_key=\"targets\")],\r\n output_features={\"targets\": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},\r\n metric_fns=[]\r\n )\r\n\r\n seqio_train_dataset = seqio.get_mixture_or_task(dataset_name).get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n seqio_train_list.append(seqio_train_dataset)\r\n \r\n lang_name_list = []\r\n for lang in data_path:\r\n lang_name = lang.split(\"/\")[-1]\r\n lang_name_list.append(lang_name)\r\n\r\n seqio_mixture = seqio.MixtureRegistry.add(\r\n \"seqio_mixture\",\r\n lang_name_list,\r\n default_rate=0.7)\r\n \r\n seqio_mixture_dataset = seqio.get_mixture_or_task(\"seqio_mixture\").get_dataset(\r\n 
sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n\r\n for id, ex in enumerate(seqio_mixture_dataset):\r\n yield id, {\"text\": ex[\"targets\"].numpy().decode()}\r\n```\r\n\r\nand i load it by:\r\n\r\n`seqio_mixture = load_dataset(\"seqio_loader\")`",
"@lhoestq , just to make things clear ... \r\n\r\nthe following is my original code, thats not in the HF dataset loading script: \r\n\r\n```\r\nimport functools\r\nimport seqio\r\nimport tensorflow as tf\r\nimport t5.data\r\nfrom datasets import load_from_disk\r\nfrom t5.data import postprocessors\r\nfrom t5.data import preprocessors\r\nfrom t5.evaluation import metrics\r\nfrom seqio import FunctionDataSource, utils\r\nimport glob \r\n\r\nTaskRegistry = seqio.TaskRegistry\r\n\r\n\r\n\r\ndef gen_dataset(split, shuffle=False, seed=None, column=\"text\", dataset_path=None):\r\n dataset = load_from_disk(dataset_path)\r\n if shuffle:\r\n if seed:\r\n dataset = dataset.shuffle(seed=seed)\r\n else:\r\n dataset = dataset.shuffle()\r\n while True:\r\n for item in dataset[str(split)]:\r\n yield item[column]\r\n\r\n\r\ndef dataset_fn(split, shuffle_files, seed=None, dataset_path=None):\r\n return tf.data.Dataset.from_generator(\r\n functools.partial(gen_dataset, split, shuffle_files, seed, dataset_path=dataset_path),\r\n output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_path)\r\n )\r\n\r\n\r\n@utils.map_over_dataset\r\ndef target_to_key(x, key_map, target_key):\r\n \"\"\"Assign the value from the dataset to target_key in key_map\"\"\"\r\n return {**key_map, target_key: x}\r\n\r\ndata_path = glob.glob(\"/home/stephen/Desktop/MEGA_CORPUS/COMBINED_CORPUS/*\", recursive=False)\r\n\r\nseqio_train_list = []\r\n\r\nfor lang in data_path:\r\n dataset_name = lang.split(\"/\")[-1]\r\n dataset_shapes = None \r\n\r\n TaskRegistry.add(\r\n str(dataset_name),\r\n source=seqio.FunctionDataSource(\r\n dataset_fn=functools.partial(dataset_fn, dataset_path=lang),\r\n splits=(\"train\", \"test\"),\r\n caching_permitted=False,\r\n num_input_examples=dataset_shapes,\r\n ),\r\n preprocessors=[\r\n functools.partial(\r\n target_to_key, key_map={\r\n \"targets\": None,\r\n }, target_key=\"targets\")],\r\n output_features={\"targets\": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},\r\n metric_fns=[]\r\n )\r\n\r\n seqio_train_dataset = seqio.get_mixture_or_task(dataset_name).get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n seqio_train_list.append(seqio_train_dataset)\r\n\r\nlang_name_list = []\r\nfor lang in data_path:\r\n lang_name = lang.split(\"/\")[-1]\r\n lang_name_list.append(lang_name)\r\n\r\nseqio_mixture = seqio.MixtureRegistry.add(\r\n \"seqio_mixture\",\r\n lang_name_list,\r\n default_rate=0.7\r\n)\r\n\r\nseqio_mixture_dataset = seqio.get_mixture_or_task(\"seqio_mixture\").get_dataset(\r\n sequence_length=None,\r\n split=\"train\",\r\n shuffle=True,\r\n num_epochs=1,\r\n shard_info=seqio.ShardInfo(index=0, num_shards=10),\r\n use_cached=False,\r\n seed=42)\r\n\r\nfor _, ex in zip(range(15), seqio_mixture_dataset):\r\n print(ex[\"targets\"].numpy().decode())\r\n```\r\n\r\nwhere the seqio_mixture_dataset is the generator that i wanted to be wrapped in HF dataset. \r\n\r\nalso additionally, could you please tell me how do i set the `default_rate=0.7` args where `seqio_mixture` is defined to be made as a custom option in the HF load_dataset() method,\r\n\r\nmaybe like this: \r\n`seqio_mixture_dataset = datasets.load_dataset(\"seqio_loader\",temperature=0.5)`",
"I like the idea of having `Dataset.from_iterable(iterable)` in the API. The only problem is that we also want to make this part cachable, which is tricky if `iterable` is a generator. \r\n\r\nSome resources on this issue:\r\n* https://github.com/uqfoundation/dill/issues/311\r\n* https://stackoverflow.com/questions/7180212/why-cant-generators-be-pickled\r\n* https://github.com/tonyroberts/generator_tools - python package for pickling generators; pickles bytecode, so it creates version-specific dumps",
"For the caching maybe we can have `Dataset.from_generator` as TF and pickle+hash the generator function (not the generator object itself) ?\r\n\r\nAnd then keep `Dataset.from_iterable` fo pickable objects like lists",
"@lhoestq, @mariosasko do you too have any examples where the dataset is a generator and needs to be wrapped into hf dataset ? ",
"@lhoestq, following to my previous question ... what possibly could be done in this [link1](https://github.com/huggingface/datasets/issues/4417#issuecomment-1146627404) [link2](https://github.com/huggingface/datasets/issues/4417#issuecomment-1146627593) case? do you have any ideas? ",
"@lhoestq +1 for the `Dataset.from_generator` idea.\r\n\r\nHaving thought about it, let's avoid adding `Dataset.from_iterable` to the API since dictionaries are technically iteralbles (\"iterable\" is a broad term in Python), and we already provide `Dataset.from_dict`. And for lists maybe we can add `Dataset.from_list` similar to `pa.Table.from_pylist`. WDYT?\r\n",
"Hi @StephennFernandes!\r\n\r\nTo fix the issues in the copied code, rename `generate_examples` to` _generate_examples` and add one level of indentation as this is a method of `GeneratorBasedBuilder` and define `_split_generators` as follows (again as a method of `GeneratorBasedBuilder):\r\n```python\r\n def _split_generators(self, dl_manager):\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={},\r\n ),\r\n ]\r\n```\r\n\r\nAnd if you are feeling extra adventurous, you can try to use ArrowWriter to directly create a cache file:\r\n```python\r\nfrom datasets import Dataset\r\nfrom datasets.arrow_writer import ArrowWriter\r\n\r\nwriter = ArrowWriter(path=\"path/to/cache_file.arrow\", writer_batch_size=1000)\r\n\r\nwith writer:\r\n for ex in generator:\r\n writer.write(ex) \r\n writer.finalize()\r\n\r\ndset = Dataset.from_file(\"path/to/cache_file.arrow\")\r\n```\r\n\r\n"
] | 1,653,841,707,000 | 1,654,598,988,000 | null | NONE | null | ### Link
_No response_
### Description
Hey there, I have used seqio to get a well-distributed mixture of samples from multiple datasets. However, the resultant output from seqio is a Python generator of dicts, which I cannot turn back into a Hugging Face dataset.
The generator contains all the samples needed for training the model, but I cannot convert it into a Hugging Face dataset.
The code looks like this:
```
for ex in seqio_data:
    print(ex["text"])
```
I need to convert the seqio_data (generator) into a Hugging Face dataset.
The complete seqio code goes here:
```
import functools
import seqio
import tensorflow as tf
import t5.data
from datasets import load_dataset
from t5.data import postprocessors
from t5.data import preprocessors
from t5.evaluation import metrics
from seqio import FunctionDataSource, utils
TaskRegistry = seqio.TaskRegistry
def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None):
dataset = load_dataset(**dataset_params)
if shuffle:
if seed:
dataset = dataset.shuffle(seed=seed)
else:
dataset = dataset.shuffle()
while True:
for item in dataset[str(split)]:
yield item[column]
def dataset_fn(split, shuffle_files, seed=None, dataset_params=None):
return tf.data.Dataset.from_generator(
functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params),
output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name)
)
@utils.map_over_dataset
def target_to_key(x, key_map, target_key):
"""Assign the value from the dataset to target_key in key_map"""
return {**key_map, target_key: x}
dataset_name = 'oscar-corpus/OSCAR-2109'
subset= 'mr'
dataset_params = {"path": dataset_name, "language":subset, "use_auth_token":True}
dataset_shapes = None
TaskRegistry.add(
"oscar_marathi_corpus",
source=seqio.FunctionDataSource(
dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params),
splits=("train", "validation"),
caching_permitted=False,
num_input_examples=dataset_shapes,
),
preprocessors=[
functools.partial(
target_to_key, key_map={
"targets": None,
}, target_key="targets")],
output_features={"targets": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},
metric_fns=[]
)
dataset = seqio.get_mixture_or_task("oscar_marathi_corpus").get_dataset(
sequence_length=None,
split="train",
shuffle=True,
num_epochs=1,
shard_info=seqio.ShardInfo(index=0, num_shards=10),
use_cached=False,
seed=42
)
for _, ex in zip(range(5), dataset):
print(ex['targets'].numpy().decode())
```
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4417/timeline | null | null | null | null | false |
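Pulling together the suggestions from the thread above, a minimal loading-script sketch that wraps an arbitrary Python generator could look as follows; all names here (`my_generator`, `GeneratorWrapper`) are placeholders rather than part of the original scripts:

```python
import datasets


def my_generator():
    # Placeholder standing in for the seqio mixture generator in the thread.
    for i in range(10):
        yield {"text": f"example {i}"}


class GeneratorWrapper(datasets.GeneratorBasedBuilder):
    """Minimal sketch of a loading script that wraps a Python generator."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")}),
        )

    def _split_generators(self, dl_manager):
        # Nothing to download: expose a single train split.
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={})]

    def _generate_examples(self):
        for idx, example in enumerate(my_generator()):
            yield idx, example
```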
https://api.github.com/repos/huggingface/datasets/issues/4416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4416/comments | https://api.github.com/repos/huggingface/datasets/issues/4416/events | https://github.com/huggingface/datasets/pull/4416 | 1,251,875,763 | PR_kwDODunzps44o7sF | 4,416 | Add LCCC dataset | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much for your help @albertvillanova .\r\n\r\nI think I have fixed all the comments.\r\n\r\nPlease let me know if this PR need further modification ;)",
"@albertvillanova Thank you very much for your kind help.\r\nThese suggestions make the code looks more pythonic.\r\n\r\nI have commited these changes",
"Hi ! The dataset seems to be a duplicate of https://huggingface.co/datasets/silver/lccc - next time no need to add it on github if it's already available on huggingface.co ;)",
"> Hi ! The dataset seems to be a duplicate of https://huggingface.co/datasets/silver/lccc - next time no need to add it on github if it's already available on huggingface.co ;)\r\n\r\nOK, sorry for the inconvenience. I have closed another two PRs since these datasets are already available on huggingface.co",
"It's fine, thanks @silverriver for adding these datasets !"
] | 1,653,827,239,000 | 1,655,288,939,000 | 1,654,161,226,000 | CONTRIBUTOR | null | Hi, I am trying to add a new dataset lccc.
All tests pass. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4416/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4416",
"html_url": "https://github.com/huggingface/datasets/pull/4416",
"diff_url": "https://github.com/huggingface/datasets/pull/4416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4416.patch",
"merged_at": 1654161226000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4415/comments | https://api.github.com/repos/huggingface/datasets/issues/4415/events | https://github.com/huggingface/datasets/pull/4415 | 1,251,002,981 | PR_kwDODunzps44mIJk | 4,415 | Update `dataset_infos.json` with new split info in `Dataset.push_to_hub` to avoid verification error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,671,022,000 | 1,654,605,745,000 | 1,654,605,232,000 | CONTRIBUTOR | null | Update `dataset_infos.json` when pushing splits one by one via `Dataset.push_to_hub` to avoid the splits verification error.
TODO:
~~- [ ] handle token + `{Audio, Image}.embed_storage`~~
- [x] tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4415/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4415",
"html_url": "https://github.com/huggingface/datasets/pull/4415",
"diff_url": "https://github.com/huggingface/datasets/pull/4415.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4415.patch",
"merged_at": 1654605232000
} | true |
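The change above concerns pushing splits one at a time with `Dataset.push_to_hub`. A minimal sketch of that usage pattern (the repo id is a placeholder and an authenticated Hub token is assumed):

```python
from datasets import Dataset

# Two toy splits pushed one by one to the same Hub repository; with the fix
# above, dataset_infos.json is updated so split verification keeps passing.
train = Dataset.from_dict({"text": ["a", "b"]})
test = Dataset.from_dict({"text": ["c"]})

train.push_to_hub("username/my-dataset", split="train")
test.push_to_hub("username/my-dataset", split="test")
```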
https://api.github.com/repos/huggingface/datasets/issues/4414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4414/comments | https://api.github.com/repos/huggingface/datasets/issues/4414/events | https://github.com/huggingface/datasets/pull/4414 | 1,250,546,888 | PR_kwDODunzps44klhY | 4,414 | Rename DatasetBuilder config_name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,643,682,000 | 1,654,009,641,000 | 1,654,009,131,000 | MEMBER | null | This PR renames the DatasetBuilder keyword argument `name` to `config_name` so that:
- it avoids confusion with the attribute `DatasetBuilder.name`, which is different
- it aligns with the Dataset property name `config_name`, defined in `DatasetInfoMixin.config_name`
Other simpler possibility could be to rename it to just `config` instead.
Please note I have only renamed this argument of DatasetBuilder because I think this refactoring has a low impact on users: we can assume this is not a public facing parameter, but private or related to the inners of our library.
It would have a major impact to rename it also in:
- load_dataset
- load_dataset_builder: although this could also be assumed as inners...
- in our CLI commands
Besides the naming of `name`, I also find really confusing the naming of `path` in `load_dataset`. IMHO, they should have a more simpler and precise meaning (currently, they are too vague). I would propose (maybe for next major release):
```
load_dataset(dataset, config,...
```
instead of
```
load_dataset(path, name,...
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4414/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4414",
"html_url": "https://github.com/huggingface/datasets/pull/4414",
"diff_url": "https://github.com/huggingface/datasets/pull/4414.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4414.patch",
"merged_at": 1654009131000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4413/comments | https://api.github.com/repos/huggingface/datasets/issues/4413/events | https://github.com/huggingface/datasets/issues/4413 | 1,250,259,822 | I_kwDODunzps5KhXNu | 4,413 | Dataset Viewer issue for ett | {
"login": "dgcnz",
"id": 24966039,
"node_id": "MDQ6VXNlcjI0OTY2MDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/24966039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dgcnz",
"html_url": "https://github.com/dgcnz",
"followers_url": "https://api.github.com/users/dgcnz/followers",
"following_url": "https://api.github.com/users/dgcnz/following{/other_user}",
"gists_url": "https://api.github.com/users/dgcnz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dgcnz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dgcnz/subscriptions",
"organizations_url": "https://api.github.com/users/dgcnz/orgs",
"repos_url": "https://api.github.com/users/dgcnz/repos",
"events_url": "https://api.github.com/users/dgcnz/events{/privacy}",
"received_events_url": "https://api.github.com/users/dgcnz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @dgcnz.\r\n\r\nI have checked that the dataset works fine in streaming mode.\r\n\r\nAdditionally, other datasets containing timestamps are properly rendered by the viewer: https://huggingface.co/datasets/blbooks\r\n\r\nI have tried to force the refresh of the preview, but the endpoint is not responsive: Connection timed out\r\n\r\nCC: @severo ",
"I've just resent the refresh of the preview to the new endpoint, without success.\r\n\r\nCC: @severo ",
"Fixed!\r\n\r\nhttps://huggingface.co/datasets/ett/viewer/h1/test\r\n\r\n<img width=\"982\" alt=\"Capture d’écran 2022-06-15 à 09 30 22\" src=\"https://user-images.githubusercontent.com/1676121/173769035-a075d753-ecfc-4a43-b54b-973105d464d3.png\">\r\n"
] | 1,653,617,555,000 | 1,655,278,246,000 | 1,655,278,246,000 | NONE | null | ### Link
https://huggingface.co/datasets/ett
### Description
Timestamp is not JSON serializable.
```
Status code: 500
Exception: Status500Error
Message: Type is not JSON serializable: Timestamp
```
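For illustration, this is the kind of value that plain JSON encoding rejects (the timestamp below is made up, not taken from the dataset):
```python
import json

import pandas as pd

# pandas Timestamp values have no default JSON representation
json.dumps({"start": pd.Timestamp("2016-07-01 00:00:00")})
# TypeError: Object of type Timestamp is not JSON serializable
```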
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4413/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4412/comments | https://api.github.com/repos/huggingface/datasets/issues/4412/events | https://github.com/huggingface/datasets/pull/4412 | 1,249,490,179 | PR_kwDODunzps44hFvq | 4,412 | Skip hidden files/directories in data files resolution and `iter_files` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR (via new release) broke many transformers tests.\r\n\r\nI will try to post a summary shortly.\r\n\r\ncc: @ydshieh ",
"So now it can't handle a local path via: `--train_file tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/train.json` even though it's there. it works just fine if I change the path to not have `..`\r\n\r\nYou can reproduce the original problem with:\r\n\r\n```\r\n$ cd transformers \r\n$ python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --train_file tests/fixtures/tests_samples/wmt_en_ro/train.json --validation_file tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json --output_dir /tmp/tmp5o5to4k0 --overwrite_output_dir --max_source_length 32 --max_target_length 32 --val_max_target_length 32 --warmup_steps 8 --predict_with_generate --save_steps 0 --eval_steps 1 --group_by_length --label_smoothing_factor 0.1 --source_lang en --target_lang ro --report_to none --source_prefix \"translate English to Romanian: \" --fp16 --do_train --num_train_epochs 1 --max_train_samples 16 --per_device_train_batch_size 2 --learning_rate 3e-3\r\n[...]\r\nTraceback (most recent call last):\r\n File \"examples/pytorch/translation/run_translation.py\", line 656, in <module>\r\n main()\r\n File \"examples/pytorch/translation/run_translation.py\", line 346, in main\r\n raw_datasets = load_dataset(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 1656, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 1439, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 1097, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/load.py\", line 743, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 588, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 556, in from_local_or_remote\r\n data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 194, in resolve_patterns_locally_or_by_urls\r\n for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):\r\n File \"/home/stas/anaconda3/envs/py38-pt111/lib/python3.8/site-packages/datasets/data_files.py\", line 144, in _resolve_single_pattern_locally\r\n raise FileNotFoundError(error_msg)\r\nFileNotFoundError: Unable to find '/mnt/nvme0/code/huggingface/transformers-master/tests/deepspeed/../fixtures/tests_samples/wmt_en_ro/val.json' at /mnt/nvme0/code/huggingface/transformers-master\r\n```",
"will apply a workaround to `transformers` tests here https://github.com/huggingface/transformers/pull/17721\r\n",
"This has been fixed with https://github.com/huggingface/datasets/pull/4505, will do a patch release tomorrow for `datasets` ;)",
"Thank you for the quick fix, @lhoestq "
] | 1,653,567,028,000 | 1,655,313,085,000 | 1,654,088,656,000 | CONTRIBUTOR | null | Fix #4115 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4412/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4412",
"html_url": "https://github.com/huggingface/datasets/pull/4412",
"diff_url": "https://github.com/huggingface/datasets/pull/4412.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4412.patch",
"merged_at": 1654088656000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4411/comments | https://api.github.com/repos/huggingface/datasets/issues/4411/events | https://github.com/huggingface/datasets/pull/4411 | 1,249,462,390 | PR_kwDODunzps44g_yL | 4,411 | Update `_format_columns` in `remove_columns` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"🤗 This PR closes https://github.com/huggingface/datasets/issues/4398",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi! Thanks for reporting and providing a fix. I made a small change to make the fix easier to understand.",
"Hi, @mariosasko thanks! It makes sense, sorry I'm not that familiar with `datasets` code 😩 ",
"Sure @albertvillanova I'll do that later today and ping you once done, thanks! :hugs:",
"Hi again @albertvillanova! Let me know if those tests are fine 🤗 ",
"Hi @alvarobartt,\r\n\r\nI think your tests are failing. I don't know why previously, after your last commit, the CI tests were not triggered. \r\n\r\nIn order to force the re-running of the CI tests, I had to edit your file using the GitHub UI.\r\n\r\nFirst I tried to do it using my terminal, but I don't have push right to your PR branch: I would ask you next time you open a PR, please mark the checkbox \"Allow edits from maintainers\": https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests",
"Hi @albertvillanova, let me check those again! And regarding that checkbox I thought it was already checked so my bad there 😩 ",
"@albertvillanova again it seems that the tests were not automatically triggered, but I tested those locally and now they work, as previously those were failing as I created an assertion as `self.assertEqual` over an empty list that was compared as `None` while the value was `[]` so I updated it to be `self.assertListEqual` and changed the comparison value to `[]`.",
"@lhoestq any idea why the CI is not triggered?",
"@alvarobartt I have tested locally and the tests continue failing.\r\n\r\nI think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n",
"You're right @albertvillanova I was indeed running the tests with `datasets==2.2.0` rather than with the branch version, I'll check it again! Sorry for the inconvenience...",
"> @alvarobartt I have tested locally and the tests continue failing.\r\n> \r\n> I think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n\r\nIn order to have some regressions tests for the fixed scenario, I've manually updated the value of `_format_columns` in the `ArrowDataset` so as to check whether it's updated or not right after calling `remove_columns`, and it does behave as expected, so with the latest version of this branch the reported issue doesn't occur anymore.",
"Hi again @albertvillanova sorry I was on leave! I'll do that ASAP :hugs:",
"@albertvillanova, does it make sense to add regression tests for `DatasetDict`? As `DatasetDict` doesn't have the attribute `_format_columns`, when we call `remove_columns` over a `DatasetDict` it removes the columns and updates the attributes of each split which is an `ArrowDataset`.\r\n\r\nSo on, we can either:\r\n- Update first the `_format_columns` attribute of each split and then remove the columns over the `DatasetDict`\r\n- Loop over the splits of `DatasetDict` and remove the columns right after updating `_format_columns` of each `ArrowDataset`.\r\n\r\nI assume that the best regression test is the one implemented (mentioned first above), let me know if there's a better way to do that 👍🏻 ",
"I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`, have you tried adding this decorator to `remove_columns` ?",
"> I think there's already a decorator to support transmitting the right `_format_columns`: `@transmit_format`, have you tried adding this decorator to `remove_columns` ?\r\n\r\nHi @lhoestq I can check now!",
"It worked indeed @lhoestq, thanks for the proposal and the review! 🤗 ",
"Oops, I forgot about `@transmit_format`'s existence. From what I see, we should also use this decorator in `flatten`, `rename_column` and `rename_columns`. \r\n\r\n@alvarobartt Let me know if you'd like to work on this (in a subsequent PR).",
"Sure @mariosasko I can prepare another PR to add those too, thanks! "
] | 1,653,565,206,000 | 1,655,233,537,000 | 1,655,222,516,000 | CONTRIBUTOR | null | As explained at #4398, under certain conditions, calling `dataset.add_faiss_index` after the sequence of operations `cast_column`, `map`, and `remove_columns` fails, as it tries to look up columns that have already been removed.
After testing some possible fixes, setting the dataset format right after removing the columns works fine, so I added a call to `.set_format` in the `remove_columns` function.
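For reference, a minimal sketch of the kind of sequence that was affected (the column names are made up, it assumes `faiss` is installed, and it is not the exact reproduction from #4398):
```python
from datasets import Dataset, Sequence, Value

# toy dataset with an illustrative "embeddings" column
ds = Dataset.from_dict({"text": ["a", "b"], "embeddings": [[0.0, 1.0], [1.0, 0.0]]})
ds = ds.cast_column("embeddings", Sequence(Value("float32")))
ds.set_format("numpy", columns=["text", "embeddings"])

# before this fix, the removed column could linger in `_format_columns`,
# so a later call such as `add_faiss_index` would look up a dropped column
ds = ds.remove_columns("text")
ds.add_faiss_index(column="embeddings")
```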
Hope this helps! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4411/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4411",
"html_url": "https://github.com/huggingface/datasets/pull/4411",
"diff_url": "https://github.com/huggingface/datasets/pull/4411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4411.patch",
"merged_at": 1655222515000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4410/comments | https://api.github.com/repos/huggingface/datasets/issues/4410/events | https://github.com/huggingface/datasets/pull/4410 | 1,249,148,457 | PR_kwDODunzps44f_Td | 4,410 | Remove Google Drive URL in spider dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,545,855,000 | 1,653,547,722,000 | 1,653,547,212,000 | MEMBER | null | The `spider` dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
Fix #4401. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4410/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4410",
"html_url": "https://github.com/huggingface/datasets/pull/4410",
"diff_url": "https://github.com/huggingface/datasets/pull/4410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4410.patch",
"merged_at": 1653547212000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4409/comments | https://api.github.com/repos/huggingface/datasets/issues/4409/events | https://github.com/huggingface/datasets/pull/4409 | 1,249,083,179 | PR_kwDODunzps44fxiH | 4,409 | Update: add using pcm bytes (#4323) | {
"login": "YooSungHyun",
"id": 34292279,
"node_id": "MDQ6VXNlcjM0MjkyMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YooSungHyun",
"html_url": "https://github.com/YooSungHyun",
"followers_url": "https://api.github.com/users/YooSungHyun/followers",
"following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}",
"gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions",
"organizations_url": "https://api.github.com/users/YooSungHyun/orgs",
"repos_url": "https://api.github.com/users/YooSungHyun/repos",
"events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}",
"received_events_url": "https://api.github.com/users/YooSungHyun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@lhoestq Maybe I'm missing something, but what's the reason to read and encode PCM files to WAV in `Audio.encode_example`. Isn't the whole purpose of the decodable types to operate on raw files whenever possible? IMO this PR should only modify `Audio.decode_example` to support PCM files/bytes decoding.",
"Because the PCM file is not enough, we also need the `sampling_rate` associated to it. Therefore the two alternatives are either:\r\n- convert to WAV\r\n- add a `sampling_rate` field to the Audio arrow storage (not sure how it would behave for backward compatibility though)",
"But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.",
"How does it get the sampling rate of a PCM file then ? According to [SO](https://stackoverflow.com/a/57027667/17517845) it's not possible to infer it from the file alone",
"> Awesome thanks ! Could you also add tests in `tests/features/test_audio.py` ?\r\n> \r\n> Maybe add a small pcm file in `tests/features/data` and check that everything works as expected in tests cases like `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` for example.\r\n\r\n@lhoestq how can i test test_audio.py? where is \"__main__\" func?\r\ndo you have some example or guideline?",
"> But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.\r\n\r\n@mariosasko @lhoestq \r\nthanks for comment!\r\n\r\nFirst of all, \"PCM file\" can not read alone to any audio library.\r\n\"PCM file\" has not any audio META information header. (it just purely audio byte data. therefore, we don't have to encoding and decoding)\r\nbut, \"PCM file\" is audio extension, so we can use `datasets.Audio`\r\n\r\nif you want to read \"PCM file\" to audio file likely, it have to needs additional parameter. (channel, sampling_rate, else....)\r\nbut, in many situation, we only know sampling_rate for PCM\r\n\r\nand, if we want to use `datasets.Audio` for \"PCM file\", we must process encode_example.\r\ntherefore, i have to use sampling_rate for encoding for making wav-style byte. (we only know sampling_rate)\r\n\r\nIn my source code, I don't compare sampling rate(`datasets.Audio's self.sampling_rate` and `read pcm sampling_rate(value[\"sampling_rate\"])`) and checking mono\r\n@mariosasko ! do you want to process resampling and making mono? then i can modify my source\r\n",
"There is no \"main\" function in test scripts :) To run a test script you must use the `pytest` command:\r\n```\r\npytest tests/features/test_audio.py\r\n```\r\n\r\nto run only one function you can also do\r\n```\r\npytest tests/features/test_audio.py::test_audio_feature_type_to_arrow\r\n```\r\nfor example",
"@lhoestq\r\nmaybe, if i write test code, i have to commit test_audio.py and send pr?\r\nbecause, we need to keep `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` method after my pr merged?",
"You can add your tests in this PR with the other changes you did",
"@lhoestq \r\ntest complete & commit my test_audio.py\r\n\r\nAND, some change in my code.\r\n\r\naudio.py\r\ni think \"sampling_rate\" is already Audio object initial variable. so, we don`t have to use input parameter.\r\n\r\ntest_audio.py\r\nwe can check \"PCM\" file to path (exactly, extenstion)\r\nso, test case has to know `path`. if only have `bytes`, we don`t know that is \"PCM\" or not",
"@lhoestq\r\nand, why circleci raised exception?\r\nmaybe, [repo](https://huggingface.co/api/datasets/lhoestq/_dummy?full=true) url is not found!\r\nPLZ, CHK!"
] | 1,653,539,196,000 | 1,655,201,971,000 | null | NONE | null | first of all, please look #4323
why I cannot use {"path", "array", "sampling_rate"}:
because sf.write(format="wav") and sf.read(BytesIO) change my PCM data values.
I think that is because WAV has a header, but PCM does not.
Also, regarding variable naming: the PCM data is of "byte" type, so the name "array" does not seem right to me.
So I use the scipy library and numpy (which are already huggingface dependencies).
Following @lhoestq's answer:
1. encode -> use sampling_rate and the PCM bytes to build WAV-style bytes (scipy.io.wavfile.write to bytes)
2. convert the bytes the same way fairseq reads PCM audio: [FileAudioDataset](https://github.com/facebookresearch/fairseq/blob/main/fairseq/data/audio/raw_audio_dataset.py)
3. decode -> read with wavfile.read
This way my PCM bytes are not corrupted into wrong float values, and other audio types (wav) remain safe (see the sketch below).
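For illustration, a rough sketch of that encode/decode round trip (this is not the exact code in this PR; the sine test signal is made up):
```python
import io

import numpy as np
from scipy.io import wavfile

# made-up test data: one second of 16-bit PCM at a known sampling rate
sampling_rate = 16000
samples = (np.sin(2 * np.pi * 440 * np.arange(sampling_rate) / sampling_rate) * 32767).astype("<i2")
pcm_bytes = samples.tobytes()

# encode: PCM bytes + sampling_rate -> WAV-style bytes (adds the header that raw PCM lacks)
buffer = io.BytesIO()
wavfile.write(buffer, sampling_rate, np.frombuffer(pcm_bytes, dtype="<i2"))
wav_bytes = buffer.getvalue()

# decode: read the WAV bytes back and scale int16 to float32 in [-1, 1), similar to fairseq
rate, data = wavfile.read(io.BytesIO(wav_bytes))
array = data.astype(np.float32) / 32768.0
```
The WAV header carries the sampling rate, so the raw PCM samples themselves are written and read back unchanged.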
please check! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4409/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4409",
"html_url": "https://github.com/huggingface/datasets/pull/4409",
"diff_url": "https://github.com/huggingface/datasets/pull/4409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4409.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4408/comments | https://api.github.com/repos/huggingface/datasets/issues/4408/events | https://github.com/huggingface/datasets/pull/4408 | 1,248,687,574 | PR_kwDODunzps44ecNI | 4,408 | Update imagenet gate | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,510,739,000 | 1,653,511,511,000 | 1,653,511,007,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4408/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4408",
"html_url": "https://github.com/huggingface/datasets/pull/4408",
"diff_url": "https://github.com/huggingface/datasets/pull/4408.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4408.patch",
"merged_at": 1653511007000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4407/comments | https://api.github.com/repos/huggingface/datasets/issues/4407/events | https://github.com/huggingface/datasets/issues/4407 | 1,248,671,778 | I_kwDODunzps5KbTgi | 4,407 | Dataset Viewer issue for conll2012_ontonotesv5 | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @jiangwy99.\r\n\r\nI guess this could be addressed only once we fix our issue with irresponsive backend endpoint.\r\n\r\nCC: @severo ",
"I've just sent the forcing of the refresh of the preview to the new endpoint.",
"Fixed, thanks for the patience. The issue was the amount of RAM allowed to extract the first rows of the dataset was not sufficient."
] | 1,653,509,913,000 | 1,654,627,156,000 | 1,654,627,156,000 | NONE | null | ### Link
https://huggingface.co/datasets/conll2012_ontonotesv5
### Description
Dataset viewer outage.
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4407/timeline | null | completed | null | null | false |