url stringlengths 58-61 | repository_url stringclasses 1 value | labels_url stringlengths 72-75 | comments_url stringlengths 67-70 | events_url stringlengths 65-68 | html_url stringlengths 46-51 | id int64 599M-1.47B | node_id stringlengths 18-32 | number int64 1-5.33k | title stringlengths 1-276 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments sequence | created_at stringlengths 20 | updated_at stringlengths 20 | closed_at stringlengths 20 ⌀ | author_association stringclasses 3 values | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 0-228k ⌀ | reactions dict | timeline_url stringlengths 67-70 | performed_via_github_app null | state_reason stringclasses 3 values | is_pull_request bool 2 classes
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5331/comments | https://api.github.com/repos/huggingface/datasets/issues/5331/events | https://github.com/huggingface/datasets/pull/5331 | 1,473,146,738 | PR_kwDODunzps5EKDpr | 5,331 | Support for multiple configs in packaged modules via metadata yaml info | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5331). All of your documentation changes will be reflected on that endpoint."
] | 2022-12-02T16:43:44Z | 2022-12-02T18:01:31Z | null | CONTRIBUTOR | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5331.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5331",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5331.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5331"
} | will solve https://github.com/huggingface/datasets/issues/5209 and https://github.com/huggingface/datasets/issues/5151 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5331/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5329/comments | https://api.github.com/repos/huggingface/datasets/issues/5329/events | https://github.com/huggingface/datasets/pull/5329 | 1,471,999,125 | PR_kwDODunzps5EGK3y | 5,329 | Clarify imagefolder is for small datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5329). All of your documentation changes will be reflected on that endpoint.",
"I think it's also reasonable to add the same note to the AudioFolder decription",
"Thank you ! I think \"regular\" is more appropriate than \"small\". It can easily scale to a few thousands of images - just not millions x)",
"Replaced \"small\" with \"several thousand\" since what is considered \"regular\" and even \"small\" can be kind of vague!"
] | 2022-12-01T21:47:29Z | 2022-12-02T18:36:54Z | null | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5329.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5329",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5329.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5329"
} | Based on feedback from [here](https://github.com/huggingface/datasets/issues/5317#issuecomment-1334108824), this PR adds a note to the `imagefolder` loading and creating docs that `imagefolder` is designed for small scale image datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5329/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5328/comments | https://api.github.com/repos/huggingface/datasets/issues/5328/events | https://github.com/huggingface/datasets/pull/5328 | 1,471,661,437 | PR_kwDODunzps5EFAyT | 5,328 | Fix docs building for main | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"EDIT\r\nAt least the docs for ~~main~~ PR branch are now built:\r\n- https://github.com/huggingface/datasets/actions/runs/3594847760/jobs/6053620813",
"Build documentation for main branch was triggered after this PR being merged: https://github.com/huggingface/datasets/actions/runs/3603370082/jobs/6071482470"
] | 2022-12-01T17:07:45Z | 2022-12-02T16:29:00Z | 2022-12-02T16:26:00Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5328.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5328",
"merged_at": "2022-12-02T16:26:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5328.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5328"
} | This PR reverts the triggering event for building documentation introduced by:
- #5250
Fix #5326. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5328/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5327/comments | https://api.github.com/repos/huggingface/datasets/issues/5327/events | https://github.com/huggingface/datasets/pull/5327 | 1,471,657,247 | PR_kwDODunzps5EE_3Q | 5,327 | Avoid unwanted behaviour when splits from script and metadata are not matching because of outdated metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5327). All of your documentation changes will be reflected on that endpoint."
] | 2022-12-01T17:05:23Z | 2022-12-01T17:41:02Z | null | CONTRIBUTOR | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5327.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5327",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5327.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5327"
} | will fix #5315 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5327/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5327/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5326/comments | https://api.github.com/repos/huggingface/datasets/issues/5326/events | https://github.com/huggingface/datasets/issues/5326 | 1,471,634,168 | I_kwDODunzps5Xt1r4 | 5,326 | No documentation for main branch is built | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-12-01T16:50:58Z | 2022-12-02T16:26:01Z | 2022-12-02T16:26:01Z | MEMBER | null | null | null | Since:
- #5250
- Commit: 703b84311f4ead83c7f79639f2dfa739295f0be6
the docs for the main branch are no longer built.
The change introduced triggers the docs build only for releases. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5326/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5326/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5325/comments | https://api.github.com/repos/huggingface/datasets/issues/5325/events | https://github.com/huggingface/datasets/issues/5325 | 1,471,536,822 | I_kwDODunzps5Xtd62 | 5,325 | map(...batch_size=None) for IterableDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/frankier",
"id": 299380,
"login": "frankier",
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"repos_url": "https://api.github.com/users/frankier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/frankier"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | open | false | null | [] | null | [
"Hi! I agree it makes sense for `IterableDataset.map` to support the `batch_size=None` case. This should be super easy to fix."
] | 2022-12-01T15:43:42Z | 2022-12-01T17:37:03Z | null | CONTRIBUTOR | null | null | null | ### Feature request
Dataset.map(...) allows batch_size to be None. It would be nice if IterableDataset did too.
### Motivation
Although it may seem a bit of a spurious request given that `IterableDataset` is meant for larger-than-memory datasets, there are a couple of reasons why this might be nice.
One is that load_dataset(...) can return either IterableDataset or Dataset. mypy will then complain about batch_size=None even if we know it is a Dataset. Of course we can do:
assert isinstance(d, datasets.DatasetDict)
But it is a mild inconvenience. What's more annoying is that whenever we use something like `combine_datasets(...)`, we end up with the union again, and so have to do the assert again.
Another is that we could actually end up with an IterableDataset small enough for memory in normal/correct usage, e.g. by filtering a massive IterableDataset.
For practical usages, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this.
### Your contribution
Not this time. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5325/timeline | null | null | false |
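To make the typing annoyance described in that issue concrete, here is a minimal sketch (the file path `data.txt` is hypothetical, and the identity map is only for illustration): because `load_dataset` is annotated to return a union of dataset types, the union has to be narrowed before calling `Dataset.map` with `batch_size=None`.

```python
# Minimal sketch of the narrowing workaround described above.
# "data.txt" is a hypothetical local file; the identity map is illustrative.
from datasets import Dataset, load_dataset

d = load_dataset("text", data_files="data.txt", split="train")

# load_dataset is typed to return a union of dataset types, so type checkers
# flag batch_size=None, which only Dataset.map currently accepts.
assert isinstance(d, Dataset)
d = d.map(lambda batch: batch, batched=True, batch_size=None)  # one batch = whole dataset
```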
https://api.github.com/repos/huggingface/datasets/issues/5324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5324/comments | https://api.github.com/repos/huggingface/datasets/issues/5324/events | https://github.com/huggingface/datasets/issues/5324 | 1,471,524,512 | I_kwDODunzps5Xta6g | 5,324 | Fix docstrings and types in documentation that appears on the website | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | open | false | null | [] | null | [
"I agree we have a mess with docstrings..."
] | 2022-12-01T15:34:53Z | 2022-12-01T16:35:36Z | null | CONTRIBUTOR | null | null | null | While I was working on https://github.com/huggingface/datasets/pull/5313 I've noticed that we have a mess in how we annotate types and format args and return values in the code. And some of it is displayed in the [Reference section](https://huggingface.co/docs/datasets/package_reference/builder_classes) of the documentation on the website.
Would be nice someday, maybe before releasing datasets 3.0.0, to unify it...... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5324/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5324/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5323/comments | https://api.github.com/repos/huggingface/datasets/issues/5323/events | https://github.com/huggingface/datasets/issues/5323 | 1,471,518,803 | I_kwDODunzps5XtZhT | 5,323 | Duplicated Keys in Taskmaster-2 Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/52380283?v=4",
"events_url": "https://api.github.com/users/liaeh/events{/privacy}",
"followers_url": "https://api.github.com/users/liaeh/followers",
"following_url": "https://api.github.com/users/liaeh/following{/other_user}",
"gists_url": "https://api.github.com/users/liaeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liaeh",
"id": 52380283,
"login": "liaeh",
"node_id": "MDQ6VXNlcjUyMzgwMjgz",
"organizations_url": "https://api.github.com/users/liaeh/orgs",
"received_events_url": "https://api.github.com/users/liaeh/received_events",
"repos_url": "https://api.github.com/users/liaeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liaeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liaeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liaeh"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @liaeh.\r\n\r\nWe are having a look at it. ",
"I have transferred the discussion to the Community tab of the dataset: https://huggingface.co/datasets/taskmaster2/discussions/1"
] | 2022-12-01T15:31:06Z | 2022-12-01T16:26:06Z | 2022-12-01T16:26:06Z | NONE | null | null | null | ### Describe the bug
Loading certain splits of the taskmaster-2 dataset fails because of a DuplicatedKeysError. This occurs for the following domains: `'hotels', 'movies', 'music', 'sports'`. The domains `'flights', 'food-ordering', 'restaurant-search'` load fine.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("taskmaster2", "music")
```
Output:
```
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1532, in GeneratorBasedBuilder._prepare_split_single(self, arg)
[1531](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1530) example = self.info.features.encode_example(record) if self.info.features is not None else record
-> [1532](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1531) writer.write(example, key)
[1533](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1532) num_examples_progress_update += 1
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:475, in ArrowWriter.write(self, example, key, writer_batch_size)
[474](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=473) if self._check_duplicates:
--> [475](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=474) self.check_duplicate_keys()
[476](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=475) # Re-intializing to empty list for next batch
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:492, in ArrowWriter.check_duplicate_keys(self)
[486](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=485) duplicate_key_indices = [
[487](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=486) str(self._num_examples + index)
[488](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=487) for index, (duplicate_hash, _) in enumerate(self.hkey_record)
[489](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=488) if duplicate_hash == hash
[490](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=489) ]
--> [492](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=491) raise DuplicatedKeysError(key, duplicate_key_indices)
[493](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=492) else:
DuplicatedKeysError: Found multiple examples generated with the same key
The examples at index 858, 859 have the key dlg-89174425-d57a-4db7-a92b-165c3bff6735
During handling of the above exception, another exception occurred:
DuplicatedKeysError Traceback (most recent call last)
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1541, in GeneratorBasedBuilder._prepare_split_single(self, arg)
[1540](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1539) num_shards = shard_id + 1
-> [1541](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1540) num_examples, num_bytes = writer.finalize()
[1542](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1541) writer.close()
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:563, in ArrowWriter.finalize(self, close_stream)
[562](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=561) if self._check_duplicates:
--> [563](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=562) self.check_duplicate_keys()
[564](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=563) # Re-intializing to empty list for next batch
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py:492, in ArrowWriter.check_duplicate_keys(self)
[486](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=485) duplicate_key_indices = [
[487](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=486) str(self._num_examples + index)
[488](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=487) for index, (duplicate_hash, _) in enumerate(self.hkey_record)
[489](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=488) if duplicate_hash == hash
[490](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=489) ]
--> [492](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=491) raise DuplicatedKeysError(key, duplicate_key_indices)
[493](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/arrow_writer.py?line=492) else:
DuplicatedKeysError: Found multiple examples generated with the same key
The examples at index 858, 859 have the key dlg-89174425-d57a-4db7-a92b-165c3bff6735
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[23], line 1
----> 1 dataset = load_dataset("taskmaster2", "music")
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py:1741, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
[1738](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1737) try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
[1740](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1739) # Download and prepare data
-> [1741](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1740) builder_instance.download_and_prepare(
[1742](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1741) download_config=download_config,
[1743](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1742) download_mode=download_mode,
[1744](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1743) ignore_verifications=ignore_verifications,
[1745](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1744) try_from_hf_gcs=try_from_hf_gcs,
[1746](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1745) use_auth_token=use_auth_token,
[1747](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1746) num_proc=num_proc,
[1748](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1747) )
[1750](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1749) # Build dataset for splits
[1751](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1750) keep_in_memory = (
[1752](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1751) keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
[1753](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/load.py?line=1752) )
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:822, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
[820](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=819) if num_proc is not None:
[821](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=820) prepare_split_kwargs["num_proc"] = num_proc
--> [822](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=821) self._download_and_prepare(
[823](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=822) dl_manager=dl_manager,
[824](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=823) verify_infos=verify_infos,
[825](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=824) **prepare_split_kwargs,
[826](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=825) **download_and_prepare_kwargs,
[827](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=826) )
[828](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=827) # Sync info
[829](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=828) self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1555, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs)
[1554](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1553) def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs):
-> [1555](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1554) super()._download_and_prepare(
[1556](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1555) dl_manager, verify_infos, check_duplicate_keys=verify_infos, **prepare_splits_kwargs
[1557](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1556) )
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:913, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
[909](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=908) split_dict.add(split_generator.split_info)
[911](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=910) try:
[912](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=911) # Prepare split will record examples associated to the split
--> [913](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=912) self._prepare_split(split_generator, **prepare_split_kwargs)
[914](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=913) except OSError as e:
[915](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=914) raise OSError(
[916](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=915) "Cannot find data file. "
[917](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=916) + (self.manual_download_instructions or "")
[918](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=917) + "\nOriginal error:\n"
[919](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=918) + str(e)
[920](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=919) ) from None
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1396, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
[1394](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1393) gen_kwargs = split_generator.gen_kwargs
[1395](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1394) job_id = 0
-> [1396](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1395) for job_id, done, content in self._prepare_split_single(
[1397](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1396) {"gen_kwargs": gen_kwargs, "job_id": job_id, **_prepare_split_args}
[1398](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1397) ):
[1399](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1398) if done:
[1400](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1399) result = content
File ~/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py:1550, in GeneratorBasedBuilder._prepare_split_single(self, arg)
[1548](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1547) if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
[1549](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1548) e = e.__context__
-> [1550](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1549) raise DatasetGenerationError("An error occurred while generating the dataset") from e
[1552](file:///home/user/repos/tts-dataset/tts-dataset/venv/lib/python3.9/site-packages/datasets/builder.py?line=1551) yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
Loads the dataset
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5323/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5323/timeline | null | completed | false |
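For context, a hedged sketch of the usual remedy for a `DuplicatedKeysError` in a dataset script (illustrative only, not taskmaster2's actual loader): derive a unique key per example, for instance by combining the record id with a running index.

```python
# Illustrative remedy for DuplicatedKeysError (not taskmaster2's actual loader):
# make each example key unique by appending a running index to the record id.
records = [
    {"conversation_id": "dlg-89174425", "text": "a"},
    {"conversation_id": "dlg-89174425", "text": "b"},  # duplicate id in the raw data
]

def generate_examples(records):
    for idx, record in enumerate(records):
        yield f"{record['conversation_id']}-{idx}", record

for key, example in generate_examples(records):
    print(key, example["text"])
```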
https://api.github.com/repos/huggingface/datasets/issues/5322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5322/comments | https://api.github.com/repos/huggingface/datasets/issues/5322/events | https://github.com/huggingface/datasets/pull/5322 | 1,471,502,162 | PR_kwDODunzps5EEeQP | 5,322 | Raise error for simple `.tar` archives in the same way as for `.tar.gz` and `.gz` | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5322). All of your documentation changes will be reflected on that endpoint."
] | 2022-12-01T15:19:28Z | 2022-12-01T15:24:40Z | null | CONTRIBUTOR | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5322.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5322",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5322.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5322"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5322/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5322/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5321/comments | https://api.github.com/repos/huggingface/datasets/issues/5321/events | https://github.com/huggingface/datasets/pull/5321 | 1,471,430,667 | PR_kwDODunzps5EEOhE | 5,321 | Fix loading from HF GCP cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Do you know why this stopped working?\r\n\r\nIt comes from the changes in https://github.com/huggingface/datasets/pull/5107/files#diff-355ae5c229f95f86895404b72378ecd6e966c41cbeebb674af6fe6e9611bc126"
] | 2022-12-01T14:39:06Z | 2022-12-01T16:10:09Z | 2022-12-01T16:07:02Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5321.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5321",
"merged_at": "2022-12-01T16:07:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5321.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5321"
} | As reported in https://discuss.huggingface.co/t/error-loading-wikipedia-dataset/26599/4 it's not possible to download a cached version of Wikipedia from the HF GCP cache
I fixed it and added an integration test (runs in 10sec) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5321/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5321/timeline | null | null | true |
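For reference, a hedged example of the code path this PR repairs (the config name "20220301.en" is illustrative and availability may vary): loading Wikipedia so that `datasets` can fetch the already-processed copy from the HF GCP cache instead of rebuilding it locally.

```python
# Sketch of the usage this PR fixes: fetching a preprocessed dataset from the
# Hugging Face GCP cache. The config name "20220301.en" is illustrative.
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20220301.en", split="train")
print(wiki)
```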
https://api.github.com/repos/huggingface/datasets/issues/5320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5320/comments | https://api.github.com/repos/huggingface/datasets/issues/5320/events | https://github.com/huggingface/datasets/pull/5320 | 1,471,360,910 | PR_kwDODunzps5ED_UQ | 5,320 | [Extract] Place the lock file next to the destination directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-01T13:55:49Z | 2022-12-01T15:36:44Z | 2022-12-01T15:33:58Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5320.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5320",
"merged_at": "2022-12-01T15:33:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5320.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5320"
} | Previously it was placed next to the archive to extract, but the archive can be in a read-only directory as noticed in https://github.com/huggingface/datasets/issues/5295
Therefore I moved the lock location to be next to the destination directory, which is required to have write permissions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5320/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5320/timeline | null | null | true |
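A rough sketch of the locking pattern the PR describes, using the `filelock` package that `datasets` depends on; the `extract` helper is hypothetical, not the library's actual function. The point is that the lock file sits beside the writable destination directory rather than beside the possibly read-only archive.

```python
# Sketch of the idea: place the extraction lock next to the writable
# destination directory instead of next to the possibly read-only archive.
# The extract() helper is hypothetical, not the library's actual code.
import tarfile
from filelock import FileLock

def extract(archive_path: str, output_dir: str) -> None:
    lock_path = output_dir + ".lock"  # beside the destination, not the archive
    with FileLock(lock_path):
        with tarfile.open(archive_path) as tar:
            tar.extractall(output_dir)
```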
https://api.github.com/repos/huggingface/datasets/issues/5319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5319/comments | https://api.github.com/repos/huggingface/datasets/issues/5319/events | https://github.com/huggingface/datasets/pull/5319 | 1,470,945,515 | PR_kwDODunzps5ECkfc | 5,319 | Fix Text sample_by paragraph | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-01T09:08:09Z | 2022-12-01T15:21:44Z | 2022-12-01T15:19:00Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5319.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5319",
"merged_at": "2022-12-01T15:19:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5319.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5319"
} | Fix #5316. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5319/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5319/timeline | null | null | true |
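For reference, a small hedged example of the option this fix targets (the file path is hypothetical): the packaged `text` builder accepts `sample_by="paragraph"`, which yields one example per blank-line-separated paragraph instead of one per line.

```python
# Sketch of the text-builder option this PR fixes; "corpus.txt" is hypothetical.
from datasets import load_dataset

# One example per blank-line-separated paragraph instead of one per line.
ds = load_dataset("text", data_files="corpus.txt", sample_by="paragraph", split="train")
print(ds[0]["text"])
```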
https://api.github.com/repos/huggingface/datasets/issues/5318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5318/comments | https://api.github.com/repos/huggingface/datasets/issues/5318/events | https://github.com/huggingface/datasets/pull/5318 | 1,470,749,750 | PR_kwDODunzps5EB6RM | 5,318 | Origin/fix missing features error | {
"avatar_url": "https://avatars.githubusercontent.com/u/12104720?v=4",
"events_url": "https://api.github.com/users/eunseojo/events{/privacy}",
"followers_url": "https://api.github.com/users/eunseojo/followers",
"following_url": "https://api.github.com/users/eunseojo/following{/other_user}",
"gists_url": "https://api.github.com/users/eunseojo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eunseojo",
"id": 12104720,
"login": "eunseojo",
"node_id": "MDQ6VXNlcjEyMTA0NzIw",
"organizations_url": "https://api.github.com/users/eunseojo/orgs",
"received_events_url": "https://api.github.com/users/eunseojo/received_events",
"repos_url": "https://api.github.com/users/eunseojo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eunseojo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eunseojo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eunseojo"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"please review :) @lhoestq @ola13 thankoo",
"Thanks :) I just updated the test to make sure it works even when there's a column missing, and did a minor change to json.py to add the missing columns for the other kinds of JSON files as well (I moved the code to`self._cast_table`)",
"Thanks Unso! If @lhoestq is happy then I'm also happy :D"
] | 2022-12-01T06:18:39Z | 2022-12-04T05:52:07Z | 2022-12-04T05:49:39Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5318.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5318",
"merged_at": "2022-12-04T05:49:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5318.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5318"
This fixes the problem where the `load_dataset` function reads a dataset with "features" provided, but some read batches are missing columns that show up later. For instance, the provided "features" require columns A, B, C but a batch only contains columns B, C. This is fixed by adding column A filled with nulls. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5318/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5318/timeline | null | null | true |
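A hedged sketch of that padding idea in plain PyArrow (the helper name and schema are illustrative, not the PR's actual code): when a batch is missing a column required by the declared features, append it as an all-null array before casting to the target schema.

```python
# Illustrative sketch (not the PR's actual code): pad a batch with any missing
# schema columns as all-null arrays, then cast to the declared schema.
import pyarrow as pa

def pad_missing_columns(table: pa.Table, schema: pa.Schema) -> pa.Table:
    for field in schema:
        if field.name not in table.column_names:
            table = table.append_column(field.name, pa.nulls(len(table), type=field.type))
    return table.select(schema.names).cast(schema)

schema = pa.schema([("A", pa.string()), ("B", pa.int64()), ("C", pa.int64())])
batch = pa.table({"B": [1, 2], "C": [3, 4]})  # column "A" is missing
print(pad_missing_columns(batch, schema))
```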
https://api.github.com/repos/huggingface/datasets/issues/5317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5317/comments | https://api.github.com/repos/huggingface/datasets/issues/5317/events | https://github.com/huggingface/datasets/issues/5317 | 1,470,390,164 | I_kwDODunzps5XpF-U | 5,317 | `ImageFolder` performs poorly with large datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1086393?v=4",
"events_url": "https://api.github.com/users/salieri/events{/privacy}",
"followers_url": "https://api.github.com/users/salieri/followers",
"following_url": "https://api.github.com/users/salieri/following{/other_user}",
"gists_url": "https://api.github.com/users/salieri/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/salieri",
"id": 1086393,
"login": "salieri",
"node_id": "MDQ6VXNlcjEwODYzOTM=",
"organizations_url": "https://api.github.com/users/salieri/orgs",
"received_events_url": "https://api.github.com/users/salieri/received_events",
"repos_url": "https://api.github.com/users/salieri/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/salieri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/salieri/subscriptions",
"type": "User",
"url": "https://api.github.com/users/salieri"
} | [] | open | false | null | [] | null | [
"Hi ! ImageFolder is made for small scale datasets indeed. For large scale image datasets you better group your images in TAR archives or Arrow/Parquet files. This is true not just for ImageFolder loading performance, but also because having millions of files is not ideal for your filesystem or when moving the data around.\r\n\r\nOption 1. use TAR archives\r\n\r\nI'd suggest you to take a look at how we load [Imagenet](https://huggingface.co/datasets/imagenet-1k/tree/main) for example. The dataset is sharded in multiple TAR archives and there is a [script](https://huggingface.co/datasets/imagenet-1k/blob/main/imagenet-1k.py) that iterates over the archives to load the images.\r\n\r\nOption 2. use Arrow/Parquet\r\n\r\nYou can load your images as an Arrow Dataset with\r\n```python\r\nfrom datasets import Dataset, Image, load_from_disk, load_dataset\r\n\r\nds = Dataset.from_dict({\"image\": list(glob.glob(\"path/to/dir/**/*.jpg\"))})\r\n\r\ndef add_metadata(example):\r\n ...\r\n\r\nds = ds.map(add_metadata, num_proc=16) # num_proc for multiprocessing\r\nds = ds.cast_column(\"image\", Image())\r\n\r\n# save as Arrow locally\r\nds.save_to_disk(\"output_dir\")\r\nreloaded = load_from_disk(\"output_dir\")\r\n\r\n# OR save as Parquet on the HF Hub\r\nds.push_to_hub(\"username/dataset_name\")\r\nreloaded = load_dataset(\"username/dataset_name\")\r\n# reloaded = load_dataset(\"username/dataset_name\", num_proc=16) # to use multiprocessing\r\n```\r\n\r\nPS: maybe we can actually have something similar to ImageFolder but for image archives at one point ?",
"@lhoestq Thanks!\r\n\r\nPerhaps it'd be worth adding a note on the documentation that `ImageFolder` is not intended for large datasets? This limitation is not intuitively obvious to someone who has not used it before, I think.",
"Thanks for the feedback @salieri! I opened #5329 to make it clear `ImageFolder` is not intended for large datasets. Please feel free to comment if you have any other feedback! 🙂 "
] | 2022-12-01T00:04:21Z | 2022-12-01T21:49:26Z | null | NONE | null | null | null | ### Describe the bug
While testing image dataset creation, I'm seeing significant performance bottlenecks with imagefolders when scanning a directory structure with a large number of images.
## Setup
* Nested directories (5 levels deep)
* 3M+ images
* 1 `metadata.jsonl` file
## Performance Degradation Point 1
Degradation occurs because [`get_data_files_patterns`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L231-L243) runs the exact same scan for many different types of patterns, and there doesn't seem to be a way to easily limit this. It's controlled by the definition of [`ALL_DEFAULT_PATTERNS`](https://github.com/huggingface/datasets/blob/main/src/datasets/data_files.py#L82-L85).
One scan with 3M+ files takes about 10-15 minutes to complete on my setup, so having those extra scans really slows things down – from 10 minutes to 60+. Most of the scans return no matches, but they still take a significant amount of time to complete – hence the poor performance.
As a side effect, when this scan is run on 3M+ image files, Python also consumes up to 12 GB of RAM, which is not ideal.
## Performance Degradation Point 2
The second performance bottleneck is in [`PackagedDatasetModuleFactory.get_module`](https://github.com/huggingface/datasets/blob/d7dfbc83d68e87ba002c5eb2555f7a932e59038a/src/datasets/load.py#L707-L711), which calls `DataFilesDict.from_local_or_remote`.
It runs for a long time (60min+), consuming significant amounts of RAM – even more than point 1 above. Based on `iostat -d 2`, it performs **zero** disk operations, which suggests that there is a code-based bottleneck that could be sorted out.
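For anyone triaging this, a rough way to confirm where the time goes is to profile the call; a sketch (with `/some/path` standing in for the real directory):
```python
import cProfile

from datasets import load_dataset

# Sort by cumulative time to spot the hotspots in data_files.py / load.py
# mentioned above.
cProfile.run(
    "load_dataset('imagefolder', data_dir='/some/path', drop_labels=True)",
    sort="cumtime",
)
```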
### Steps to reproduce the bug
```python
from datasets import load_dataset
import os
import huggingface_hub
dataset = load_dataset(
'imagefolder',
data_dir='/some/path',
# just to spell it out:
split=None,
drop_labels=True,
keep_in_memory=False
)
dataset.push_to_hub('account/dataset', private=True)
```
### Expected behavior
While it's certainly possible to write a custom loader to replace `ImageFolder`, it'd be great if the off-the-shelf `ImageFolder` had a default setup that can scale to large datasets.
Or perhaps there could be a dedicated loader just for large datasets that trades off flexibility for performance? As in, maybe you have to define explicitly how you want it to work rather than it trying to guess your data structure like `_get_data_files_patterns()` does?
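For reference, something close to that explicit mode exists today by passing `data_files` instead of `data_dir`, which skips the pattern guessing described above; a sketch, assuming local JPEGs (the `metadata.jsonl` file would need to be listed explicitly as well):
```python
from datasets import load_dataset

# One explicit glob per split avoids the many speculative scans that
# _get_data_files_patterns() runs when it has to guess the layout.
dataset = load_dataset(
    "imagefolder",
    data_files={"train": "/some/path/**/*.jpg"},
    drop_labels=True,
)
```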
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-4.14.296-222.539.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 10.0.1
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5317/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5317/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5316/comments | https://api.github.com/repos/huggingface/datasets/issues/5316/events | https://github.com/huggingface/datasets/issues/5316 | 1,470,115,681 | I_kwDODunzps5XoC9h | 5,316 | Bug in sample_by="paragraph" | {
"avatar_url": "https://avatars.githubusercontent.com/u/1243668?v=4",
"events_url": "https://api.github.com/users/adampauls/events{/privacy}",
"followers_url": "https://api.github.com/users/adampauls/followers",
"following_url": "https://api.github.com/users/adampauls/following{/other_user}",
"gists_url": "https://api.github.com/users/adampauls/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adampauls",
"id": 1243668,
"login": "adampauls",
"node_id": "MDQ6VXNlcjEyNDM2Njg=",
"organizations_url": "https://api.github.com/users/adampauls/orgs",
"received_events_url": "https://api.github.com/users/adampauls/received_events",
"repos_url": "https://api.github.com/users/adampauls/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adampauls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adampauls/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adampauls"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @adampauls.\r\n\r\nWe are having a look at it. "
] | 2022-11-30T19:24:13Z | 2022-12-01T15:19:02Z | 2022-12-01T15:19:02Z | NONE | null | null | null | ### Describe the bug
I think [this line](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/text/text.py#L96) is wrong and should be `batch = f.read(self.config.chunksize)`. Otherwise it will never terminate because even when `f` is finished reading, `batch` will still be truthy from the last iteration.
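A minimal sketch of the loop shape being described (not the exact library code) shows why the accumulating version never terminates while plain assignment does:
```python
def iter_chunks_buggy(f, chunksize):
    batch = f.read(chunksize)
    while batch:
        # ... split `batch` into paragraphs and yield them ...
        batch += f.read(chunksize)  # at EOF this appends "", so `batch`
                                    # stays truthy and the loop never exits

def iter_chunks_fixed(f, chunksize):
    batch = f.read(chunksize)
    while batch:
        # ... split `batch` into paragraphs and yield them ...
        batch = f.read(chunksize)  # returns "" at EOF, so the loop terminates
```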
### Steps to reproduce the bug
```
> cat test.txt
a b c
d e f
```
```python
>>> import datasets
>>> datasets.load_dataset("text", data_files={"train":"test.txt"}, sample_by="paragraph")
```
This will go on forever.
### Expected behavior
Terminates very quickly.
### Environment info
`version = "2.6.1"` but I think the bug is still there on main. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5316/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5316/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5315/comments | https://api.github.com/repos/huggingface/datasets/issues/5315/events | https://github.com/huggingface/datasets/issues/5315 | 1,470,026,797 | I_kwDODunzps5XntQt | 5,315 | Adding new splits to a dataset script with existing old splits info in metadata's `dataset_info` fails | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
] | null | [
"EDIT:\r\nI think in this case, the metadata files (either README or JSON) should not be read (i.e. `self.info.splits` should be None).\r\n\r\nOne idea: \r\n- I think ideally we should set this behavior when we pass `--save_info` to the CLI `test`\r\n- However, currently, the builder is unaware of this: `save_info` arg is not passed to it",
"> I think in this case\r\n\r\n@albertvillanova You mean in cases when the script was changed? \r\n\r\nI suggest that we:\r\n* add a check on the slice (like 'split_name[n%]) kind of format here: https://github.com/huggingface/datasets/blob/main/src/datasets/splits.py#L523 to catch things like this. \r\n* Error here happens before splits verification, but in `_prepare_split`, and `_prepare_split` doesn't perform any verification and don't know about it. so we can pass this parameter and take splits from `split_generator`, not from `split.info` in case when `verify_infos` is False\r\n* we can check if split **names** from split_generators and self.info.splits are the same **before** preparing splits (if `verify_info=True`) so that we don't spend time on generating unwanted data. \r\n* provide some user-friendly warnings about `ignore_verifications` parameter so that users know that if something is not matching they can ignore it\r\n\r\nI started it here: https://github.com/huggingface/datasets/pull/5327/files\r\n\r\nWhat do you think @albertvillanova ?",
"I edited my previous comment:\r\n- First I proposed setting `self.info.splits` to None when `ignore_verifications=True`\r\n - I thought it was the easiest implementation because `ignore_verifications` is passed to `DatasetBuilder.download_and_prepare`\r\n - However, afterwards, I realized this might not be a good idea for this use case:\r\n - A user wants to optimize the loading of the dataset, and passes `ignore_verifications=False` to avoid all the verifications\r\n - In this case, we want `self.info.splits` to be read from metadata file\r\n- Then, I thought that it might be better to set `self.info.splits` to None when we pass `--save_info` to the CLI test: if we are going to save the info to the metadata file, it makes no sense to read the info from the metadata file\r\n - This implementation is not so easy because the Builder knows nothing about `--save_info`\r\n\r\nI agree with you there are 2 things to be addressed here:\r\n- One is what I have just commented: `self.info.splits` should be None in this case\r\n- The other, a validation should be implemented when calling `make_file_instructions` and/or `SplitDict.__getitem__`, so that when passing \"training\" to it, we get a more descriptive error other than `TypeError: expected str, bytes or os.PathLike object, not NoneType` "
] | 2022-11-30T18:02:15Z | 2022-12-02T07:02:53Z | null | CONTRIBUTOR | null | null | null | ### Describe the bug
If you first create a custom dataset with a specific set of splits, generate metadata with `datasets-cli test ... --save_info`, then change your script to include more splits, it fails.
That's what happened in https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/discussions/2#6385fd1269634850f8ddff48.
### Steps to reproduce the bug
1. create a dataset script that returns, for example, only the `"train"` split in `_split_generators`. Specifically, if you really want to reproduce, copy `https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py`
2. run `datasets-cli test dataset_script.py --save_info --all_configs` - this would generate metadata yaml in `README.md` that would contain info about splits, for example, like this:
```
splits:
- name: train
  num_bytes: 2973286
  num_examples: 19747
```
3. make changes to your script so that it returns another set of splits, for example, `"train"` and `"test"` (uncomment [these lines](https://huggingface.co/datasets/mrdbourke/food_vision_199_classes/blob/main/food_vision_199_classes.py#L271))
4. run `load_dataset` and get the following error:
```python
Traceback (most recent call last):
File "/home/daniel/code/pytorch/env/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/commands/test.py", line 141, in run
builder.download_and_prepare(
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare
self._download_and_prepare(
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare
super()._download_and_prepare(
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 913, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/builder.py", line 1356, in _prepare_split
split_info = self.info.splits[split_generator.name]
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/splits.py", line 525, in __getitem__
instructions = make_file_instructions(
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 111, in make_file_instructions
name2filenames = {
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/arrow_reader.py", line 112, in <dictcomp>
info.name: filenames_for_dataset_split(
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 78, in filenames_for_dataset_split
prefix = filename_prefix_for_split(dataset_name, split)
File "/home/daniel/code/pytorch/env/lib/python3.8/site-packages/datasets/naming.py", line 57, in filename_prefix_for_split
if os.path.basename(name) != name:
File "/home/daniel/code/pytorch/env/lib/python3.8/posixpath.py", line 143, in basename
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
5. bonus: try to regenerate metadata in `README.md` with `datasets-cli` as in step 2 and get the same error.
This is because `dataset.info.splits` contains only the `"train"` split, so when we do `self.info.splits[split_generator.name]` for a new split name, it tries to interpret it as a slicing instruction like `info.splits['train[50%]']`, which it is not, and it fails.
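One way to see the stale metadata directly, before hitting the `TypeError` (a sketch; `dataset_script.py` is the script from step 1):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("dataset_script.py")
# Only the splits recorded at step 2 show up here (e.g. just "train"),
# even though the script now also defines "test".
print(builder.info.splits)
```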
### Expected behavior
to be discussed?
This can be solved by removing the splits information from the metadata file first. But I wonder if there is a better way.
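Concretely, "removing the splits information" means deleting the stale block from the YAML metadata in `README.md` before rerunning step 2, i.e. dropping:
```yaml
splits:
- name: train
  num_bytes: 2973286
  num_examples: 19747
```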
### Environment info
- Datasets version: 2.7.1
- Python version: 3.8.13 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5315/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5315/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5314/comments | https://api.github.com/repos/huggingface/datasets/issues/5314/events | https://github.com/huggingface/datasets/issues/5314 | 1,469,685,118 | I_kwDODunzps5XmZ1- | 5,314 | Datasets: classification_report() got an unexpected keyword argument 'suffix' | {
"avatar_url": "https://avatars.githubusercontent.com/u/42126634?v=4",
"events_url": "https://api.github.com/users/JonathanAlis/events{/privacy}",
"followers_url": "https://api.github.com/users/JonathanAlis/followers",
"following_url": "https://api.github.com/users/JonathanAlis/following{/other_user}",
"gists_url": "https://api.github.com/users/JonathanAlis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JonathanAlis",
"id": 42126634,
"login": "JonathanAlis",
"node_id": "MDQ6VXNlcjQyMTI2NjM0",
"organizations_url": "https://api.github.com/users/JonathanAlis/orgs",
"received_events_url": "https://api.github.com/users/JonathanAlis/received_events",
"repos_url": "https://api.github.com/users/JonathanAlis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JonathanAlis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonathanAlis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JonathanAlis"
} | [] | open | false | null | [] | null | [
"This seems similar to https://github.com/huggingface/datasets/issues/2512 Can you try to update seqeval ? ",
"@JonathanAlis also note that the metrics are deprecated in our `datasets` library.\r\n\r\nPlease, use the new library 🤗 Evaluate instead: https://huggingface.co/docs/evaluate"
] | 2022-11-30T14:01:03Z | 2022-12-01T15:00:46Z | null | NONE | null | null | null | https://github.com/huggingface/datasets/blob/main/metrics/seqeval/seqeval.py
    import datasets
    predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
    references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
    seqeval = datasets.load_metric("seqeval")
    results = seqeval.compute(predictions=predictions, references=references)
    print(list(results.keys()))
    print(results["overall_f1"])
    print(results["PER"]["f1"])
It raises the error:
> TypeError: classification_report() got an unexpected keyword argument 'suffix'
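As the maintainer comments above note, updating `seqeval` and switching to the 🤗 Evaluate library sidesteps this; a sketch of the replacement:
```python
# pip install --upgrade seqeval evaluate
import evaluate

seqeval = evaluate.load("seqeval")
results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"])
```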
For context, versions on my pip list -v
> datasets 1.12.1
seqeval 1.2.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5314/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5314/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5313/comments | https://api.github.com/repos/huggingface/datasets/issues/5313/events | https://github.com/huggingface/datasets/pull/5313 | 1,468,484,136 | PR_kwDODunzps5D6Qfb | 5,313 | Fix description of streaming in the docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-29T18:00:28Z | 2022-12-01T14:55:30Z | 2022-12-01T14:00:34Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5313.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5313",
"merged_at": "2022-12-01T14:00:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5313.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5313"
} | We say that "the data is being downloaded progressively" which is not true, it's just streamed, so I fixed it. Probably I missed some other places where it is written?
Also changed docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the documentation cc @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5313/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5313/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5312/comments | https://api.github.com/repos/huggingface/datasets/issues/5312/events | https://github.com/huggingface/datasets/pull/5312 | 1,468,352,562 | PR_kwDODunzps5D5zxI | 5,312 | Add DatasetDict.to_pandas | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"The current implementation is what I had in mind, i.e. concatenate all splits by default.\r\n\r\nHowever, I think most tabular datasets would come as a single split. So for that usecase, it wouldn't change UX if we raise when there are more than one splits.\r\n\r\nAnd for multiple splits, the user either passes a list, or they can pass `splits=\"all\"` to have all splits concatenated.",
"I think it's better to raise an error in cases when there are multiple splits but no split is specified so that users know for sure with which data they are working. I imagine a case when a user loads a dataset that they don't know much about (like what splits it has), and if they get a concatenation of everything, it might lead to incorrect processing or interpretations and it would be hard to notice it.\r\n(\"explicit is better than implicit\")",
"I just changed to raise an error if there are multiple splits. The error shows an example of how to choose a split to convert.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5312). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-29T16:30:02Z | 2022-12-01T16:09:44Z | null | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5312.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5312",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5312.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5312"
} | From discussions in https://github.com/huggingface/datasets/issues/5189, for tabular data it doesn't really make sense to have to do
```python
df = load_dataset(...)["train"].to_pandas()
```
because many datasets are not split.
In this PR I added `to_pandas` to `DatasetDict`, which returns the DataFrame:
If there's only one split, you don't need to specify the split name:
```python
df = load_dataset(...).to_pandas()
```
EDIT: and if a dataset has multiple splits:
```python
df = load_dataset(...).to_pandas(splits=["train", "test"])
# or
df = load_dataset(...).to_pandas(splits="all")
# raises an error because you need to select the split(s) to convert
load_dataset(...).to_pandas()
```
I do have one question though @merveenoyan @adrinjalali @mariosasko:
Should we raise an error if there are multiple splits and ask the user to choose one explicitly?
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5312/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5312/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5311/comments | https://api.github.com/repos/huggingface/datasets/issues/5311/events | https://github.com/huggingface/datasets/pull/5311 | 1,467,875,153 | PR_kwDODunzps5D4Mm3 | 5,311 | Add `features` param to `IterableDataset.map` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5311). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-29T11:08:34Z | 2022-12-02T19:22:17Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5311.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5311",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5311.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5311"
} | ## Description
As suggested by @lhoestq in #3888, we should add the `features` param to `IterableDataset.map` so that the features can be preserved (not turned into `None`, which is the default behavior) whenever the user passes them. This keeps it consistent with `Dataset.map`, which provides the `features` param so that the features are not inferred by default but specified by the user, and later validated by `ArrowWriter`.
This is internally handled already by the functions relying on `IterableDataset.map` such as `rename_column`, `rename_columns`, and `remove_columns` as described in #5287.
## Usage Example
```python
from datasets import load_dataset, Features
ds = load_dataset("rotten_tomatoes", split="validation", streaming=True)
print(ds.info.features)
ds = ds.map(
lambda x: {"target": x["label"]},
features=Features(
{"target": ds.info.features["label"], "label": ds.info.features["label"], "text": ds.info.features["text"]}
),
)
print(ds.info.features)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5311/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5311/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5310/comments | https://api.github.com/repos/huggingface/datasets/issues/5310/events | https://github.com/huggingface/datasets/pull/5310 | 1,467,719,635 | PR_kwDODunzps5D3rGw | 5,310 | Support xPath for Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-29T09:20:47Z | 2022-11-30T12:00:09Z | 2022-11-30T11:57:16Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5310.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5310",
"merged_at": "2022-11-30T11:57:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5310.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5310"
} | This PR implements a string representation of `xPath`, which is valid for local paths (also windows) and remote URLs.
Additionally, some `os.path` methods are fixed for remote URLs on Windows machines.
Now, on Windows machines:
```python
In [2]: str(xPath("C:\\dir\\file.txt"))
Out[2]: 'C:\\dir\\file.txt'
In [3]: str(xPath("http://domain.com/file.txt"))
Out[3]: 'http://domain.com/file.txt'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5310/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5310/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5309/comments | https://api.github.com/repos/huggingface/datasets/issues/5309/events | https://github.com/huggingface/datasets/pull/5309 | 1,466,758,987 | PR_kwDODunzps5D0g1y | 5,309 | Close stream in `ArrowWriter.finalize` before inference error | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5309). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-28T16:59:39Z | 2022-11-28T17:05:59Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5309.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5309",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5309.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5309"
} | Ensure the file stream is closed in `ArrowWriter.finalize` before raising the `SchemaInferenceError` to avoid the `PermissionError` on Windows in `incomplete_dir`'s `shutil.rmtree`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5309/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5309/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5308/comments | https://api.github.com/repos/huggingface/datasets/issues/5308/events | https://github.com/huggingface/datasets/pull/5308 | 1,466,552,281 | PR_kwDODunzps5Dz0Tv | 5,308 | Support `topdown` parameter in `xwalk` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5308). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-28T14:42:41Z | 2022-11-30T12:44:35Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5308.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5308",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5308.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5308"
} | Add support for the `topdown` parameter in `xwalk` when `fsspec>=2022.11.0` is installed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5308/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5308/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5307/comments | https://api.github.com/repos/huggingface/datasets/issues/5307/events | https://github.com/huggingface/datasets/pull/5307 | 1,466,477,427 | PR_kwDODunzps5Dzj8r | 5,307 | Use correct dataset type in `from_generator` docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-28T13:59:10Z | 2022-11-28T15:30:37Z | 2022-11-28T15:27:26Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5307.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5307",
"merged_at": "2022-11-28T15:27:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5307.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5307"
} | Use the correct dataset type in the `from_generator` docs (example with sharding). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5307/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5307/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5306/comments | https://api.github.com/repos/huggingface/datasets/issues/5306/events | https://github.com/huggingface/datasets/issues/5306 | 1,465,968,639 | I_kwDODunzps5XYOf_ | 5,306 | Can't use custom feature description when loading a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/clefourrier",
"id": 22726840,
"login": "clefourrier",
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/clefourrier"
} | [] | closed | false | null | [] | null | [
"Forgot to actually convert the feature dict to a Feature object. Closing."
] | 2022-11-28T07:55:44Z | 2022-11-28T08:11:45Z | 2022-11-28T08:11:44Z | CONTRIBUTOR | null | null | null | ### Describe the bug
I have created a feature dictionary to describe my datasets' column types, to use when loading the dataset, following [the doc](https://huggingface.co/docs/datasets/main/en/about_dataset_features). It crashes at dataset load.
### Steps to reproduce the bug
```python
from datasets import Sequence, Value, load_dataset
# Creating features
task_list = [f"motif_G{i}" for i in range(19, 53)]
features = {t: Sequence(feature=Value(dtype="float64")) for t in task_list}
for col_name in ["class_label"]:
features[col_name] = Sequence(feature=Value(dtype="int64"))
for col_name in ["num_nodes"]:
features[col_name] = Value(dtype="int64")
for col_name in ["num_bridges", "num_cycles", "avg_shortest_path_len"]:
features[col_name] = Sequence(feature=Value(dtype="float64"))
for col_name in ["edge_attr", "node_feat", "edge_index"]:
features[col_name] = Sequence(feature=Sequence(feature=Value(dtype="int64")))
print(features)
dataset = load_dataset(path=f"graphs-datasets/unbalanced-motifs-500K", split="train", features=features)
```
Last line will crash and say 'TypeError: argument of type 'Sequence' is not iterable'.
Full stack:
```
Traceback (most recent call last):
File "pretrain_tokengt.py", line 131, in <module>
main(output_folder = "../workspace/pretraining",
File "pretrain_tokengt.py", line 52, in main
dataset = load_dataset(path=f"graphs-datasets/{dataset_name}", split="train", features=features)
File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1718, in load_dataset
builder_instance = load_dataset_builder(
File "huggingface_env/lib/python3.8/site-packages/datasets/load.py", line 1514, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "huggingface_env/lib/python3.8/site-packages/datasets/builder.py", line 321, in __init__
info.update(self._info())
File "huggingface_env/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 62, in _info
return datasets.DatasetInfo(features=self.config.features)
File "<string>", line 20, in __init__
File "huggingface_env/lib/python3.8/site-packages/datasets/info.py", line 155, in __post_init__
self.features = Features.from_dict(self.features)
File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1599, in from_dict
obj = generate_from_dict(dic)
File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1282, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "huggingface_env/lib/python3.8/site-packages/datasets/features/features.py", line 1281, in generate_from_dict
if "_type" not in obj or isinstance(obj["_type"], dict):
TypeError: argument of type 'Sequence' is not iterable
```
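Per the author's closing comment above, the crash comes from passing a plain dict rather than a `Features` object; a sketch of the likely fix:
```python
from datasets import Features, load_dataset

features = Features(features)  # wrap the plain dict in a Features object
dataset = load_dataset("graphs-datasets/unbalanced-motifs-500K", split="train", features=features)
```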
### Expected behavior
For it not to crash.
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5306/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5306/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5305/comments | https://api.github.com/repos/huggingface/datasets/issues/5305/events | https://github.com/huggingface/datasets/issues/5305 | 1,465,627,826 | I_kwDODunzps5XW7Sy | 5,305 | Dataset joelito/mc4_legal does not work with multiple files | {
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JoelNiklaus",
"id": 3775944,
"login": "JoelNiklaus",
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JoelNiklaus"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting @JoelNiklaus.\r\n\r\nPlease note that since we moved all dataset loading scripts to the Hub, the issues and pull requests relative to specific datasets are directly handled on the Hub, in their Community tab. I'm transferring this issue there: https://huggingface.co/datasets/joelito/mc4_legal/discussions\r\n\r\nI am also having a look at the bug in your script.",
"Issue transferred to: https://huggingface.co/datasets/joelito/mc4_legal/discussions/1"
] | 2022-11-28T00:16:16Z | 2022-11-28T07:22:42Z | 2022-11-28T07:22:42Z | CONTRIBUTOR | null | null | null | ### Describe the bug
The dataset https://huggingface.co/datasets/joelito/mc4_legal works for languages like bg with a single data file, but not for languages with multiple files like de. It shows zero rows for the de dataset.
joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main) [1]> python test_mc4_legal.py (debug)
Found cached dataset mc4_legal (/Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/de/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f)
Dataset({
features: ['index', 'url', 'timestamp', 'matches', 'text'],
num_rows: 0
})
joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main)> python test_mc4_legal.py (debug)
Downloading and preparing dataset mc4_legal/bg to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f...
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1240.55it/s]
Dataset mc4_legal downloaded and prepared to /Users/joelniklaus/.cache/huggingface/datasets/mc4_legal/bg/0.0.0/fb6952a097180f8c936e2a7605525ff670354a344fc1a2c70107684d3f7cb02f. Subsequent calls will reuse this data.
Dataset({
features: ['index', 'url', 'timestamp', 'matches', 'text'],
num_rows: 204
})
### Steps to reproduce the bug
    import datasets
    from datasets import load_dataset, get_dataset_config_names
    language = "bg"
    test = load_dataset("joelito/mc4_legal", language, split='train')
### Expected behavior
It should display the correct number of rows for the de dataset, which should be a large number (thousands or more).
### Environment info
Package Version
------------------------ --------------
absl-py 1.3.0
aiohttp 3.8.1
aiosignal 1.2.0
astunparse 1.6.3
async-timeout 4.0.2
attrs 22.1.0
beautifulsoup4 4.11.1
blinker 1.4
blis 0.7.8
Bottleneck 1.3.4
brotlipy 0.7.0
cachetools 5.2.0
catalogue 2.0.7
certifi 2022.5.18.1
cffi 1.15.1
chardet 4.0.0
charset-normalizer 2.1.0
click 8.0.4
conllu 4.5.2
cryptography 38.0.1
cymem 2.0.6
datasets 2.6.1
dill 0.3.5.1
docker-pycreds 0.4.0
fasttext 0.9.2
fasttext-langdetect 1.0.3
filelock 3.0.12
flatbuffers 20210226132247
frozenlist 1.3.0
fsspec 2022.5.0
gast 0.4.0
gcloud 0.18.3
gitdb 4.0.9
GitPython 3.1.27
google-auth 2.9.0
google-auth-oauthlib 0.4.6
google-pasta 0.2.0
googleapis-common-protos 1.57.0
grpcio 1.47.0
h5py 3.7.0
httplib2 0.21.0
huggingface-hub 0.8.1
idna 3.4
importlib-metadata 4.12.0
Jinja2 3.1.2
joblib 1.0.1
keras 2.9.0
Keras-Preprocessing 1.1.2
langcodes 3.3.0
lxml 4.9.1
Markdown 3.3.7
MarkupSafe 2.1.1
mkl-fft 1.3.1
mkl-random 1.2.2
mkl-service 2.4.0
multidict 6.0.2
multiprocess 0.70.13
murmurhash 1.0.7
numexpr 2.8.1
numpy 1.22.3
oauth2client 4.1.3
oauthlib 3.2.1
opt-einsum 3.3.0
packaging 21.3
pandas 1.4.2
pathtools 0.1.2
pathy 0.6.1
pip 21.1.2
preshed 3.0.6
promise 2.3
protobuf 4.21.9
psutil 5.9.1
pyarrow 8.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pybind11 2.9.2
pycountry 22.3.5
pycparser 2.21
pydantic 1.8.2
PyJWT 2.4.0
pylzma 0.5.0
pyOpenSSL 22.0.0
pyparsing 3.0.4
PySocks 1.7.1
python-dateutil 2.8.2
pytz 2021.3
PyYAML 6.0
regex 2021.4.4
requests 2.28.1
requests-oauthlib 1.3.1
responses 0.18.0
rsa 4.8
sacremoses 0.0.45
scikit-learn 1.1.1
scipy 1.8.1
sentencepiece 0.1.96
sentry-sdk 1.6.0
setproctitle 1.2.3
setuptools 65.5.0
shortuuid 1.0.9
six 1.16.0
smart-open 5.2.1
smmap 5.0.0
soupsieve 2.3.2.post1
spacy 3.3.1
spacy-legacy 3.0.9
spacy-loggers 1.0.2
srsly 2.4.3
tabulate 0.8.9
tensorboard 2.9.1
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.9.1
tensorflow-estimator 2.9.0
termcolor 2.1.0
thinc 8.0.17
threadpoolctl 3.1.0
tokenizers 0.12.1
torch 1.13.0
tqdm 4.64.0
transformers 4.20.1
typer 0.4.1
typing-extensions 4.3.0
Unidecode 1.3.6
urllib3 1.26.12
wandb 0.12.20
wasabi 0.9.1
web-anno-tsv 0.0.1
Werkzeug 2.1.2
wget 3.2
wheel 0.35.1
wrapt 1.14.1
xxhash 3.0.0
yarl 1.8.1
zipp 3.8.0
Python 3.8.10
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5305/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5305/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5304/comments | https://api.github.com/repos/huggingface/datasets/issues/5304/events | https://github.com/huggingface/datasets/issues/5304 | 1,465,110,367 | I_kwDODunzps5XU89f | 5,304 | timit_asr doesn't load the test split. | {
"avatar_url": "https://avatars.githubusercontent.com/u/17842800?v=4",
"events_url": "https://api.github.com/users/seyong92/events{/privacy}",
"followers_url": "https://api.github.com/users/seyong92/followers",
"following_url": "https://api.github.com/users/seyong92/following{/other_user}",
"gists_url": "https://api.github.com/users/seyong92/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/seyong92",
"id": 17842800,
"login": "seyong92",
"node_id": "MDQ6VXNlcjE3ODQyODAw",
"organizations_url": "https://api.github.com/users/seyong92/orgs",
"received_events_url": "https://api.github.com/users/seyong92/received_events",
"repos_url": "https://api.github.com/users/seyong92/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/seyong92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seyong92/subscriptions",
"type": "User",
"url": "https://api.github.com/users/seyong92"
} | [] | open | false | null | [] | null | [
"The [timit_asr.py](https://huggingface.co/datasets/timit_asr/blob/main/timit_asr.py) script iterates over the WAV files per split directory using this:\r\n```python\r\nwav_paths = sorted(Path(data_dir).glob(f\"**/{split}/**/*.wav\"))\r\nwav_paths = wav_paths if wav_paths else sorted(Path(data_dir).glob(f\"**/{split.upper()}/**/*.WAV\"))\r\n```\r\n\r\nCan you check that there is a directory named \"test\" somewhere in your timit data directory ?"
] | 2022-11-26T10:18:22Z | 2022-12-01T13:28:59Z | null | NONE | null | null | null | ### Describe the bug
When I use ```timit = load_dataset('timit_asr', data_dir=data_dir)```, it only loads the train split, not the test split.
I tried changing the directory and file names between lower case and upper case for the test split, but it does not work at all.
```python
DatasetDict({
train: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 4620
})
test: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 0
})
})
```
The directory structure of both splits is the same. (DIALECT_REGION / SPEAKER_CODE / DATA_FILES)
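Given the glob pattern quoted in the comment above, a quick sanity check is to run the same globs locally (a sketch; `data_dir` is a placeholder for your TIMIT root):
```python
from pathlib import Path

data_dir = "/path/to/timit"  # placeholder
for split in ("train", "test"):
    lower = sorted(Path(data_dir).glob(f"**/{split}/**/*.wav"))
    upper = sorted(Path(data_dir).glob(f"**/{split.upper()}/**/*.WAV"))
    # Both counts being 0 for "test" means no test/TEST directory is matched.
    print(split, len(lower), len(upper))
```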
### Steps to reproduce the bug
1. Just use ```timit = load_dataset('timit_asr', data_dir=data_dir)```
### Expected behavior
```python
DatasetDict({
train: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 4620
})
test: Dataset({
features: ['file', 'audio', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
num_rows: 1680
})
})
```
### Environment info
- ubuntu 20.04
- python 3.9.13
- datasets 2.7.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5304/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5304/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5303/comments | https://api.github.com/repos/huggingface/datasets/issues/5303/events | https://github.com/huggingface/datasets/pull/5303 | 1,464,837,251 | PR_kwDODunzps5DuVTa | 5,303 | Skip dataset verifications by default | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5303). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-25T18:39:09Z | 2022-11-25T18:44:23Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5303.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5303",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5303.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5303"
} | Skip the dataset verifications (split and checksum verifications, duplicate keys check) by default unless a dataset is being tested (`datasets-cli test/run_beam`). The main goal is to avoid running the checksum check in the default case due to how expensive it can be for large datasets.
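A hedged illustration of the resulting behavior, assuming the parameter keeps its current name:
```python
from datasets import load_dataset

# Default: checksum / split-size / duplicate-key verifications are skipped.
ds = load_dataset("squad")

# Opt back in explicitly when the full checks are wanted.
ds = load_dataset("squad", ignore_verifications=False)
```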
PS: Maybe we should deprecate `ignore_verifications`, which is now `True` by default, and give it a different name? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5303/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5303/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5302/comments | https://api.github.com/repos/huggingface/datasets/issues/5302/events | https://github.com/huggingface/datasets/pull/5302 | 1,464,778,901 | PR_kwDODunzps5DuJJp | 5,302 | Improve `use_auth_token` docstring and deprecate `use_auth_token` in `download_and_prepare` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5302). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-25T17:09:21Z | 2022-11-28T12:40:12Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5302.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5302",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5302.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5302"
} | Clarify in the docstrings what happens when `use_auth_token` is `None` and deprecate the `use_auth_token` param in `download_and_prepare`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5302/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5302/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5301/comments | https://api.github.com/repos/huggingface/datasets/issues/5301/events | https://github.com/huggingface/datasets/pull/5301 | 1,464,749,156 | PR_kwDODunzps5DuCzR | 5,301 | Return a split Dataset in load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5301). All of your documentation changes will be reflected on that endpoint.",
"Just noticed that now we have to deal with indexed & split datasets. The remaining tests are failing because one should be able to get an indexed dataset when accessing the split of a dataset made of indexed splits (right now the index is just trashed)"
] | 2022-11-25T16:35:54Z | 2022-11-30T16:53:34Z | null | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5301.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5301",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5301.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5301"
} | ...instead of a DatasetDict.
```python
# now supported
ds = load_dataset("squad")
ds[0]
for example in ds:
pass
# still works
ds["train"]
ds["validation"]
# new
ds.splits # Dict[str, Dataset] | None
# soon to be supported (not in this PR)
ds = load_dataset("dataset_with_no_splits")
ds[0]
for example in ds:
pass
```
I implemented `Dataset.__getitem__` and `IterableDataset.__getitem__` to be able to get a split from a dataset.
The splits are defined by the `ds.info.splits` dictionary.
Therefore a dataset is a table that optionally has some splits defined in the dataset info. And a split dataset is the concatenation of all its splits.
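A minimal sketch of how that split-aware `__getitem__` could dispatch (the helper names below are assumptions, not the PR's actual internals):
```python
class Dataset:
    def __getitem__(self, key):
        # A string key naming a known split returns that sub-dataset.
        if isinstance(key, str) and self.info.splits and key in self.info.splits:
            return self._select_split(key)  # hypothetical helper
        # Anything else falls back to the usual row/column access.
        # (Real code would also have to disambiguate split names from column names.)
        return self._getitem(key)  # hypothetical helper
```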
I made as few breaking changes as possible. Notable breaking changes:
- `load_dataset("potato").keys() / .items() / .values()` don't work anymore, since we don't return a dict
- same for `for split_name in load_dataset("potato")`, since we now iterate over the examples
- ..
TODO:
- [x] Update push_to_hub
- [x] Update save_to_disk/load_from_disk
- [ ] check for other breaking changes
- [ ] fix existing tests
- [ ] add new tests
- [ ] docs
This is related to https://github.com/huggingface/datasets/issues/5189, to extend `load_dataset` to return datasets without splits | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5301/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5301/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5300/comments | https://api.github.com/repos/huggingface/datasets/issues/5300/events | https://github.com/huggingface/datasets/pull/5300 | 1,464,697,136 | PR_kwDODunzps5Dt3uK | 5,300 | Use same `num_proc` for dataset download and generation | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5300). All of your documentation changes will be reflected on that endpoint.",
"I noticed this bug the other day and was going to look into it! \"Where are these processes coming from?\" ;-)"
] | 2022-11-25T15:37:42Z | 2022-11-25T15:52:04Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5300.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5300",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5300.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5300"
} | Use the same `num_proc` value for data download and generation. Additionally, do not set `num_proc` to 16 in `DownloadManager` by default (`num_proc` now has to be specified explicitly). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5300/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5300/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5299/comments | https://api.github.com/repos/huggingface/datasets/issues/5299/events | https://github.com/huggingface/datasets/pull/5299 | 1,464,695,091 | PR_kwDODunzps5Dt3Sk | 5,299 | Fix xopen for Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-25T15:35:28Z | 2022-11-29T08:23:58Z | 2022-11-29T08:21:24Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5299.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5299",
"merged_at": "2022-11-29T08:21:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5299.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5299"
} | This PR fixes a bug in `xopen` function for Windows pathnames.
Fix #5298. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5299/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5299/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5298/comments | https://api.github.com/repos/huggingface/datasets/issues/5298/events | https://github.com/huggingface/datasets/issues/5298 | 1,464,681,871 | I_kwDODunzps5XTUWP | 5,298 | Bug in xopen with Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-11-25T15:21:32Z | 2022-11-29T08:21:25Z | 2022-11-29T08:21:25Z | MEMBER | null | null | null | Currently, `xopen` function has a bug with local Windows pathnames:
From its implementation:
```python
def xopen(file: str, mode="r", *args, **kwargs):
file = _as_posix(PurePath(file))
main_hop, *rest_hops = file.split("::")
if is_local_path(main_hop):
return open(file, mode, *args, **kwargs)
```
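For context, one possible shape of a fix is to check for a local path before any POSIX normalization; this is an editorial sketch reusing the helpers from the excerpt above, not necessarily what was merged in #5299:
```python
def xopen(file: str, mode="r", *args, **kwargs):
    # Split the chained-URL hops first, so a local Windows path such as
    # "C:\\Users\\USERNAME\\filename.txt" is never rewritten.
    main_hop, *rest_hops = str(file).split("::")
    if is_local_path(main_hop):
        return open(main_hop, mode, *args, **kwargs)
    # Only remote URLs are normalized to POSIX form.
    file = _as_posix(PurePath(file))
    # ... (remote/streaming branch unchanged, omitted here)
```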
On a Windows machine, if we pass the argument:
```python
xopen("C:\\Users\\USERNAME\\filename.txt")
```
it effectively calls
```python
open("C:/Users/USERNAME/filename.txt")
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5298/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5298/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5297/comments | https://api.github.com/repos/huggingface/datasets/issues/5297/events | https://github.com/huggingface/datasets/pull/5297 | 1,464,554,491 | PR_kwDODunzps5DtZjg | 5,297 | Fix xjoin for Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-25T13:30:17Z | 2022-11-29T08:07:39Z | 2022-11-29T08:05:12Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5297.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5297",
"merged_at": "2022-11-29T08:05:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5297.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5297"
} | This PR fixes a bug in `xjoin` function with Windows pathnames.
Fix #5296. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5297/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5297/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5296/comments | https://api.github.com/repos/huggingface/datasets/issues/5296/events | https://github.com/huggingface/datasets/issues/5296 | 1,464,553,580 | I_kwDODunzps5XS1Bs | 5,296 | Bug in xjoin with Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-11-25T13:29:33Z | 2022-11-29T08:05:13Z | 2022-11-29T08:05:13Z | MEMBER | null | null | null | Currently, the `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent joined pathname, it always returns it in POSIX format.
```python
from datasets.download.streaming_download_manager import xjoin
path = xjoin("C:\\Users\\USERNAME", "filename.txt")
```
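For reference, a hedged sketch of an OS-aware variant (not necessarily the fix merged in #5297), assuming the `is_local_path` helper from `datasets.utils.file_utils`:
```python
import os
from datasets.utils.file_utils import is_local_path

def xjoin_sketch(path, *paths):
    # Local paths (including Windows ones) keep the OS-dependent separator;
    # only remote URLs are joined POSIX-style.
    if is_local_path(str(path)):
        return os.path.join(path, *paths)
    return "/".join([str(path).rstrip("/"), *map(str, paths)])
```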
The joined path should be:
```python
"C:\\Users\\USERNAME\\filename.txt"
```
However, it is:
```python
"C:/Users/USERNAME/filename.txt"
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5296/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5296/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5295/comments | https://api.github.com/repos/huggingface/datasets/issues/5295/events | https://github.com/huggingface/datasets/issues/5295 | 1,464,006,743 | I_kwDODunzps5XQvhX | 5,295 | Extractions failed when .zip file located on read-only path (e.g., SageMaker FastFile mode) | {
"avatar_url": "https://avatars.githubusercontent.com/u/2340781?v=4",
"events_url": "https://api.github.com/users/verdimrc/events{/privacy}",
"followers_url": "https://api.github.com/users/verdimrc/followers",
"following_url": "https://api.github.com/users/verdimrc/following{/other_user}",
"gists_url": "https://api.github.com/users/verdimrc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/verdimrc",
"id": 2340781,
"login": "verdimrc",
"node_id": "MDQ6VXNlcjIzNDA3ODE=",
"organizations_url": "https://api.github.com/users/verdimrc/orgs",
"received_events_url": "https://api.github.com/users/verdimrc/received_events",
"repos_url": "https://api.github.com/users/verdimrc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/verdimrc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/verdimrc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/verdimrc"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Hi ! Thanks for reporting. Indeed the lock file should be placed in a directory with write permission (e.g. in the directory where the archive is extracted).",
"I opened https://github.com/huggingface/datasets/pull/5320 to fix this - it places the lock file in the cache directory instead of trying to put in next to the ZIP where it's read-only"
] | 2022-11-25T03:59:43Z | 2022-12-01T13:56:40Z | null | NONE | null | null | null | ### Describe the bug
Hi,
`load_dataset()` does not work with .zip files located in a read-only directory. Looks like it's because Dataset creates a lock file in the [same directory](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/utils/extract.py) as the .zip file.
Encountered this when attempting `load_dataset()` on a datadir with SageMaker FastFile mode.
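A hedged sketch of the direction mentioned in the comments above: derive the lock path from a writable cache directory instead of the input directory (the names below are assumptions):
```python
import os
from filelock import FileLock  # datasets uses filelock for extraction locks

def extraction_lock(archive_path: str, cache_dir: str) -> FileLock:
    # Hypothetical helper: put the lock next to the cache, not next to the
    # (possibly read-only) archive.
    lock_name = os.path.basename(archive_path) + ".lock"
    return FileLock(os.path.join(cache_dir, lock_name))

lock = extraction_lock("/opt/ml/input/data/coco/image_info_test2017.zip",
                       "/tmp/huggingface-cache")
```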
### Steps to reproduce the bug
```python
# Showing relevant lines only.
hyperparameters = {
"dataset_name": "ydshieh/coco_dataset_script",
"dataset_config_name": 2017,
"data_dir": "/opt/ml/input/data/coco",
"cache_dir": "/tmp/huggingface-cache", # Fix dataset complains out-of-space.
...
}
estimator = PyTorch(
base_job_name="clip",
source_dir="../src/sm-entrypoint",
entry_point="run_clip.py", # Transformers/src/examples/pytorch/contrastive-image-text/run_clip.py
framework_version="1.12",
py_version="py38",
hyperparameters=hyperparameters,
instance_count=1,
instance_type="ml.p3.16xlarge",
volume_size=100,
distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
fast_file = lambda x: TrainingInput(x, input_mode='FastFile')
estimator.fit(
{
"pre-trained": fast_file("s3://vm-sagemakerr-us-east-1/clip/pre-trained-checkpoint/"),
"coco": fast_file("s3://vm-sagemakerr-us-east-1/clip/coco-zip-files/"),
}
)
```
Error message:
```text
ErrorMessage "OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock'
"""
The above exception was the direct cause of the following exception
Traceback (most recent call last)
File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.8/site-packages/mpi4py/__main__.py", line 7, in <module>
main()
File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 198, in main
run_command_line(args)
File "/opt/conda/lib/python3.8/site-packages/mpi4py/run.py", line 47, in run_command_line
run_path(sys.argv[0], run_name='__main__')
File "/opt/conda/lib/python3.8/runpy.py", line 265, in run_path
return _run_module_code(code, init_globals, run_name,
File "/opt/conda/lib/python3.8/runpy.py", line 97, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "run_clip_smddp.py", line 594, in <module>
File "run_clip_smddp.py", line 327, in main
dataset = load_dataset(
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare
super()._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 891, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/ydshieh--coco_dataset_script/e033205c0266a54c10be132f9264f2a39dcf893e798f6756d224b1ff5078998f/coco_dataset_script.py", line 123, in _split_generators
archive_path = dl_manager.download_and_extract(_DL_URLS)
File "/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 447, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py", line 419, in extract
extracted_paths = map_nested(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 472, in map_nested
mapped = pool.map(_single_map_nested, split_kwds)
File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 364, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/opt/conda/lib/python3.8/multiprocessing/pool.py", line 771, in get
raise self._value
OSError: [Errno 30] Read-only file system: '/opt/ml/input/data/coco/image_info_test2017.zip.lock'"
```
### Expected behavior
`load_dataset()` to succeed, just like when the .zip file is passed in SageMaker File mode.
### Environment info
* datasets-2.7.1
* transformers-4.24.0
* python-3.8
* torch-1.12
* SageMaker PyTorch DLC | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5295/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5295/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5294/comments | https://api.github.com/repos/huggingface/datasets/issues/5294/events | https://github.com/huggingface/datasets/pull/5294 | 1,463,679,582 | PR_kwDODunzps5DqgLW | 5,294 | Support streaming datasets with pathlib.Path.with_suffix | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-24T18:04:38Z | 2022-11-29T07:09:08Z | 2022-11-29T07:06:32Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5294.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5294",
"merged_at": "2022-11-29T07:06:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5294.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5294"
} | This PR extends the support in streaming mode for datasets that use `pathlib.Path.with_suffix`.
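A minimal sketch of what such support can look like, assuming the streaming layer wraps paths in an `xPath` class (the actual implementation may differ):
```python
from pathlib import PurePosixPath

class xPath(PurePosixPath):
    def with_suffix(self, suffix):
        # Return an xPath (not a plain PurePosixPath) so later streaming-aware
        # operations keep working on the result.
        return type(self)(str(super().with_suffix(suffix)))
```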
Fix #5293. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5294/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5293/comments | https://api.github.com/repos/huggingface/datasets/issues/5293/events | https://github.com/huggingface/datasets/issues/5293 | 1,463,669,201 | I_kwDODunzps5XPdHR | 5,293 | Support streaming datasets with pathlib.Path.with_suffix | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-11-24T17:52:08Z | 2022-11-29T07:06:33Z | 2022-11-29T07:06:33Z | MEMBER | null | null | null | Extend support for streaming datasets that use `pathlib.Path.with_suffix`.
This feature will be useful, e.g., for datasets containing text files and annotation files with the same name but a different extension. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5293/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5293/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5292/comments | https://api.github.com/repos/huggingface/datasets/issues/5292/events | https://github.com/huggingface/datasets/issues/5292 | 1,463,053,832 | I_kwDODunzps5XNG4I | 5,292 | Missing documentation build for versions 2.7.1 and 2.6.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"- Build docs for 2.6.2:\r\n - Commit: a6a5a1cf4cdf1e0be65168aed5a327f543001fe8\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539470622/jobs/5941404044\r\n- Build docs for 2.7.1:\r\n - Commit: 5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2\r\n - Build docs GH Action: https://github.com/huggingface/datasets/actions/runs/3539574442/jobs/5941636792"
] | 2022-11-24T09:42:10Z | 2022-11-24T10:10:02Z | 2022-11-24T10:10:02Z | MEMBER | null | null | null | After the patch releases [2.7.1](https://github.com/huggingface/datasets/releases/tag/2.7.1) and [2.6.2](https://github.com/huggingface/datasets/releases/tag/2.6.2), the online docs were not properly built (the build_documentation workflow was not triggered).
There was a fix by:
- #5291
However, both documentation builds were generated from the main branch, instead of their corresponding version branches.
We are rebuilding them. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5292/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5291/comments | https://api.github.com/repos/huggingface/datasets/issues/5291/events | https://github.com/huggingface/datasets/pull/5291 | 1,462,983,472 | PR_kwDODunzps5DoKNC | 5,291 | [build doc] for v2.7.1 & v2.6.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"doc versions are built https://huggingface.co/docs/datasets/index"
] | 2022-11-24T08:54:47Z | 2022-11-24T09:14:10Z | 2022-11-24T09:11:15Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5291.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5291",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5291.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5291"
} | Do NOT merge. Using this PR to build docs for [v2.7.1](https://github.com/huggingface/datasets/pull/5291/commits/f4914af20700f611b9331a9e3ba34743bbeff934) & [v2.6.2](https://github.com/huggingface/datasets/pull/5291/commits/025f85300a0874eeb90a20393c62f25ac0accaa0) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5291/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5290/comments | https://api.github.com/repos/huggingface/datasets/issues/5290/events | https://github.com/huggingface/datasets/pull/5290 | 1,462,716,766 | PR_kwDODunzps5DnQsS | 5,290 | fix error where reading breaks when batch missing an assigned column feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/12104720?v=4",
"events_url": "https://api.github.com/users/eunseojo/events{/privacy}",
"followers_url": "https://api.github.com/users/eunseojo/followers",
"following_url": "https://api.github.com/users/eunseojo/following{/other_user}",
"gists_url": "https://api.github.com/users/eunseojo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eunseojo",
"id": 12104720,
"login": "eunseojo",
"node_id": "MDQ6VXNlcjEyMTA0NzIw",
"organizations_url": "https://api.github.com/users/eunseojo/orgs",
"received_events_url": "https://api.github.com/users/eunseojo/received_events",
"repos_url": "https://api.github.com/users/eunseojo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eunseojo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eunseojo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eunseojo"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5290). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-24T03:53:46Z | 2022-11-25T03:21:54Z | null | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5290.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5290",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5290.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5290"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5290/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5289/comments | https://api.github.com/repos/huggingface/datasets/issues/5289/events | https://github.com/huggingface/datasets/pull/5289 | 1,462,543,139 | PR_kwDODunzps5Dmrk9 | 5,289 | Added support for JXL images. | {
"avatar_url": "https://avatars.githubusercontent.com/u/445208?v=4",
"events_url": "https://api.github.com/users/alexjc/events{/privacy}",
"followers_url": "https://api.github.com/users/alexjc/followers",
"following_url": "https://api.github.com/users/alexjc/following{/other_user}",
"gists_url": "https://api.github.com/users/alexjc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alexjc",
"id": 445208,
"login": "alexjc",
"node_id": "MDQ6VXNlcjQ0NTIwOA==",
"organizations_url": "https://api.github.com/users/alexjc/orgs",
"received_events_url": "https://api.github.com/users/alexjc/received_events",
"repos_url": "https://api.github.com/users/alexjc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alexjc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexjc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alexjc"
} | [] | open | false | null | [] | null | [
"I'm fine with the addition of jxl in the list of known image extensions, this way users that have the plugin can work with their JXL datasets. WDYT @mariosasko ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5289). All of your documentation changes will be reflected on that endpoint.",
"I think we should wait for official support from Pillow. Plus, the linked plugin doesn't support `Image.save`, which is one of the requirements for a format to be included in `IMAGE_EXTENSIONS`.\r\n\r\n@alexjc In the meantime, one option is to add these lines to the card:\r\n```python\r\nimport importlib\r\nimport datasets\r\n\r\nif \".jxl\" not in datasets.packaged_modules.imagefolder.IMAGE_EXTENSIONS:\r\n datasets.packaged_modules.imagefolder.IMAGE_EXTENSIONS.append(\".jxl\")\r\n\r\nif \"jxl\" not in datasets.packaged_modules._EXTENSION_TO_MODULE:\r\n datasets.packaged_modules._EXTENSION_TO_MODULE[\"jxl\"] = (\"imagefolder\", {})\r\n\r\nimportlib.reload(datasets.load)\r\nds = datasets.load_dataset(\"texturedesign/td01_natural-ground-textures\")\r\n```\r\nAnd you can add a note to the card that this dataset requires the \"jxlpy\" package to work. \r\n\r\nIn this case, you can also disable the viewer to avoid the discrepancy between the data displayed in the preview and the loaded data.\r\n\r\nAnother option is to define the loading script and add `jxlpy` to the list of dependencies [here](https://github.com/huggingface/datasets-server/blob/3012da62054a025467616abc14b0b46e1f11ea13/workers/first_rows/pyproject.toml#L8) to enable the viewer. This option requires more work, so let us know if you need help.",
"Thank you both for your thoughtful replies!\r\n\r\nOne questions and and update:\r\n* The jxlpy plugin does support saving, in the `_save` function of the JXLImagePlugin file. Did it not work? I'm working on the upgrade to the latest JXL, so it'd be good to know if it failed so I can fix it.\r\n* I wrote to the Pillow maintainer and the preferred solution would be to keep JXL as a separate plugin because they're a small team don't have the resources to maintain more code.\r\n\r\nWith that in mind, let me share the minimal set of features I'd need for this to work within the `datasets` library:\r\n1. Using `load_dataset()` with the HuggingFace dataset name correctly downloads the JXL files so they are available locally. Even if the `file_name` field is left intact and not loaded as a PIL image, this is the first step.\r\n2. With minimal monkey-patching, having the `load_dataset` correctly expand `file_name` into PIL `image` fields if JXL support is available.\r\n\r\nIf both of these work, then I can use HuggingFace's hub and the `datasets` library for an MVP even if not all features are there. I don't need automatic thumbnails or previews of the dataset on the server.\r\n\r\n\r\nGiven the reply from the Pillow maintainer, what solution can we come up with that works in a more permanent way than waiting for Pillow integration (which may not happen) — assuming users install the `jxlpy` plugin separately?",
"Link to my upgrade for the latest `libjxl`, pending review and merge. I tested load/save via Pillow extensively for this: https://github.com/olokelo/jxlpy/pull/13",
"After more research, here's my latest suggestion:\r\n* Depending on the build of pillow, the source (pip or conda), the platform even, certain formats may or may not be available — despite them being in the list. For example, webp support is not consistently available.\r\n* I'd suggest adding JXL to the list and simply catching the `PIL.UnidentifiedImageError` — printing a useful error message that sends them to a Wiki page to find out what to do.\r\n* On that page would be included instructions how to install support for the format and what to do for the dataset to load correctly on any platform, both with or without conda, etc.\r\n\r\nWhat do you think?",
"> The jxlpy plugin does support saving, in the _save function of the JXLImagePlugin file. Did it not work? I'm working on the upgrade to the latest JXL, so it'd be good to know if it failed so I can fix it.\r\n\r\nMy bad, I was referring to [this](https://github.com/google/brunsli/blob/2dd949e53ed05796eb44a31cc759fbf9e6c53e2f/contrib/py/jxl_library_patches/jxl_pillow.py) version of the plugin.\r\n\r\nI still think this involves too much work:\r\n* would require a new doc page\r\n* unofficial plugins have to be imported explicitly, leading to messier code on our side\r\n* etc.\r\n\r\nFor now, it seems more reasonable to create a loading script (faster than ImageFolder, as ImageFolder has to resolve the image files first) for this particular case and add `jxlpy` to the list of the `datasets-server`'s dependencies. Also, one additional advantage of this approach is that it reports if any of the modules imported in a script is missing, which is handy in your case for the plugin lib. WDYT?",
"OK, let me try it it and I'll report back.\r\n\r\nWill the JXL files (even if unknown format) be automatically downloaded if they are linked from the `.jsonl` file?\r\n\r\n(I had trouble getting that working before this patch.)",
"> Will the JXL files (even if unknown format) be automatically downloaded if they are linked from the .jsonl file?\r\n\r\nNo, they need to be downloaded explicitly.\r\n\r\nFeel free to use 🤗 Hub discussions in your dataset repo to ping us for help (our usernames are the same there)",
"Is it possible to add support for JXL files being downloaded without needing to add server-side rendering support?",
"In the loading script, data files are downloaded with `DownloadManager` (`dl_manager` in `_split_generators`), which doesn't have any requirements regarding the actual type of the downloaded files.\r\n\r\nPS: Let's use the forum or Hub discussions for further questions to avoid pinging other participants"
] | 2022-11-23T23:16:33Z | 2022-11-29T18:49:46Z | null | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5289.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5289",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5289.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5289"
} | JPEG-XL is the most advanced of the next-generation image codecs, supporting both lossless and lossy files, with better compression and quality than PNG and JPG respectively. It has reduced the disk sizes and bandwidth required for many of the datasets I use.
Pillow does not yet support JXL, but a separate Python library provides a plugin that does (`pip install jxlpy`), and I've tested that this change works as expected when the plugin is imported.
Dataset used for testing (you must `git pull` it, since loading it from Python won't work until `datasets-server` is also changed to support JXL files):
https://huggingface.co/datasets/texturedesign/td01_natural-ground-textures
The case where the plugin is not imported first raises an error:
```
PIL.UnidentifiedImageError: cannot identify image file 'td01/train/set01/01_145523.jxl'
```
In order to enable support for JXL even before pillow supports this, should this exception be handled with a better error message? I'd expect/hope JXL support to follow in one of the pillow quarterly releases in the next 6-9 months. | {
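A minimal sketch of the error handling suggested here, assuming the goal is an actionable message rather than a bare `UnidentifiedImageError` (the file path and wording are illustrative):

```python
# catch Pillow's generic decode failure and point the user at the plugin
from PIL import Image, UnidentifiedImageError

try:
    img = Image.open("td01/train/set01/01_145523.jxl")
except UnidentifiedImageError as err:
    raise RuntimeError(
        "Cannot decode this image: Pillow has no codec registered for it. "
        "For JPEG XL files, install and import a plugin such as jxlpy "
        "(`pip install jxlpy`) before opening the file."
    ) from err
```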
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5289/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5289/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5288/comments | https://api.github.com/repos/huggingface/datasets/issues/5288/events | https://github.com/huggingface/datasets/issues/5288 | 1,462,134,067 | I_kwDODunzps5XJmUz | 5,288 | Lossy json serialization - deserialization of dataset info | {
"avatar_url": "https://avatars.githubusercontent.com/u/57542204?v=4",
"events_url": "https://api.github.com/users/anuragprat1k/events{/privacy}",
"followers_url": "https://api.github.com/users/anuragprat1k/followers",
"following_url": "https://api.github.com/users/anuragprat1k/following{/other_user}",
"gists_url": "https://api.github.com/users/anuragprat1k/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anuragprat1k",
"id": 57542204,
"login": "anuragprat1k",
"node_id": "MDQ6VXNlcjU3NTQyMjA0",
"organizations_url": "https://api.github.com/users/anuragprat1k/orgs",
"received_events_url": "https://api.github.com/users/anuragprat1k/received_events",
"repos_url": "https://api.github.com/users/anuragprat1k/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anuragprat1k/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anuragprat1k/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anuragprat1k"
} | [] | open | false | null | [] | null | [
"Hi ! JSON is a lossy format indeed. If you want to keep the feature types or other metadata I'd encourage you to store them as well. For example you can use `dataset.info.write_to_directory` and `DatasetInfo.from_directory` to store the feature types, split info, description, license etc."
] | 2022-11-23T17:20:15Z | 2022-11-25T12:53:51Z | null | NONE | null | null | null | ### Describe the bug
Saving a dataset to disk as JSON (using `to_json`) and then loading it again (using `load_dataset`) results in features whose labels are not type-cast correctly. In the code snippet below, `features.label` should have type `ClassLabel` but has type `Value` instead.
### Steps to reproduce the bug
```
from datasets import load_dataset

def test_serdes_from_json(d):
    dataset = load_dataset(d, split="train")
    dataset.to_json('_test')
    dataset_loaded = load_dataset("json", data_files='_test', split='train')
    try:
        assert dataset_loaded.info.features == dataset.info.features, "features unequal!"
    except Exception as ex:
        print(f'{ex}')
        print(f'expected {dataset.info.features}, \nactual { dataset_loaded.info.features }')

test_serdes_from_json('rotten_tomatoes')
```
Output
```
features unequal!
expected {'text': Value(dtype='string', id=None), 'label': ClassLabel(names=['neg', 'pos'], id=None)},
actual {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}
```
### Expected behavior
The deserialized `features.label` should have type `ClassLabel`.
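A workaround sketch based on the suggestion in the comments above, persisting the `DatasetInfo` next to the JSON export and casting the reloaded dataset back (paths are illustrative):

```python
from datasets import DatasetInfo, load_dataset

dataset = load_dataset("rotten_tomatoes", split="train")
dataset.to_json("_test")
dataset.info.write_to_directory("_test_info")  # stores features, splits, license, ...

dataset_loaded = load_dataset("json", data_files="_test", split="train")
info = DatasetInfo.from_directory("_test_info")
dataset_loaded = dataset_loaded.cast(info.features)  # "label" becomes a ClassLabel again

assert dataset_loaded.features == dataset.features
```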
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.10.144-127.601.amzn2.x86_64-x86_64-with-glibc2.17
- Python version: 3.7.13
- PyArrow version: 7.0.0
- Pandas version: 1.2.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5288/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5288/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5287/comments | https://api.github.com/repos/huggingface/datasets/issues/5287/events | https://github.com/huggingface/datasets/pull/5287 | 1,461,971,889 | PR_kwDODunzps5Dkttf | 5,287 | Fix methods using `IterableDataset.map` that lead to `features=None` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._",
"Maybe other options are:\r\n* Keep the `info.features` to `None` if those were initially `None`\r\n* Infer the features with pre-fetching just if the `info.features` is `None`\r\n* If the `info.features` are there, make sure that after `map` features is not `None`",
"Hi @lhoestq something that's still not clear to me is: should we infer the features always when applying a `map` if those are initially `None`, or just assume that if the features are initially `None` those should be left that way unless the user specifically sets those (or during iter)?\r\n\r\nIn this PR I'm using `from datasets.iterable_dataset import _infer_features_from_batch` to infer the features when those are `None` using pre-fetch of `self._head()`, but I'm not sure if that's the expected behavior.\r\n\r\nThanks in advance for your help!",
"Also, the PR still has some more work to do, but probably the most relevant thing to fix right now is that the `features` are being set to `None` in the functions `IterableDataset.rename_column`, `IterableDataset.rename_columns`, and `IterableDataset.remove_columns` when the `features` originally had a value. So once that's fixed maybe we can focus on improving the current `map`'s behavior, so as to avoid this from happening also when the user uses `map` directly and not through the functions mentioned above.",
"> Cool thank you ! Resolving the features can be expensive sometimes, so maybe we don't resolve the features and we can just rename/remove columns if the features are known (i.e. if they're not None). What do you think ?\r\n\r\nThanks for the feedback! Makes sense to me 👍🏻 I'll commit the comments now!",
"Already done @lhoestq, feel free to merge whenever you want! Also before merging, can you please link the following issues https://github.com/huggingface/datasets/issues/3888, https://github.com/huggingface/datasets/issues/5245, and https://github.com/huggingface/datasets/issues/5284, so that those are closed upon merge? Thanks!"
] | 2022-11-23T15:33:25Z | 2022-11-28T15:43:14Z | 2022-11-28T12:53:22Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5287.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5287",
"merged_at": "2022-11-28T12:53:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5287.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5287"
} | Since `IterableDataset.map` currently sets `info.features` to `None` every time (we don't know the output of the dataset in advance), the `IterableDataset` methods that internally use `map`, such as `rename_column`, `rename_columns`, and `remove_columns`, also leave the features as `None`.
This PR is related to #3888, #5245, and #5284
## ✅ Current solution
The code in this PR is basically making sure that if the features were there since the beginning and a `rename_column`/`rename_columns` happens, those are kept and the rename is applied to the `Features` too. Also, if the features were not there before applying `rename_column`, `rename_columns` or `remove_columns`, a batch is prefetched and the features are inferred (this could potentially be part of `IterableDataset.__init__` in case the `info.features` value is `None`).
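A minimal sketch of that renaming logic, using a hypothetical helper rather than the exact code in this PR:

```python
from datasets import Features

def rename_feature(features: Features, original: str, new: str) -> Features:
    # keep every known feature type, only swapping the renamed key
    return Features({(new if name == original else name): feature for name, feature in features.items()})
```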
## 💡 Ideas
Some ideas were proposed in https://github.com/huggingface/datasets/issues/3888, but the most consistent solution, even though it may take some time, is to do the type inference during `IterableDataset.__init__` when the provided `info.features` is `None`; otherwise, we can just use the provided features.
Additionally, as mentioned at https://github.com/huggingface/datasets/issues/3888, we could also add a `features` parameter to the `map` function, but that's probably more tedious.
Also thanks to @lhoestq for sharing some ideas in both https://github.com/huggingface/datasets/issues/3888 and https://github.com/huggingface/datasets/issues/5245 :hugs: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5287/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5287/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5286/comments | https://api.github.com/repos/huggingface/datasets/issues/5286/events | https://github.com/huggingface/datasets/issues/5286 | 1,461,908,087 | I_kwDODunzps5XIvJ3 | 5,286 | FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json | {
"avatar_url": "https://avatars.githubusercontent.com/u/32490135?v=4",
"events_url": "https://api.github.com/users/roritol/events{/privacy}",
"followers_url": "https://api.github.com/users/roritol/followers",
"following_url": "https://api.github.com/users/roritol/following{/other_user}",
"gists_url": "https://api.github.com/users/roritol/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/roritol",
"id": 32490135,
"login": "roritol",
"node_id": "MDQ6VXNlcjMyNDkwMTM1",
"organizations_url": "https://api.github.com/users/roritol/orgs",
"received_events_url": "https://api.github.com/users/roritol/received_events",
"repos_url": "https://api.github.com/users/roritol/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/roritol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roritol/subscriptions",
"type": "User",
"url": "https://api.github.com/users/roritol"
} | [] | closed | false | null | [] | null | [
"I found a solution \r\n\r\nIf you specifically install datasets==1.18 and then run\r\n\r\nimport datasets\r\nwiki = datasets.load_dataset('wikipedia', '20200501.en')\r\nthen this should work (it worked for me.)"
] | 2022-11-23T14:54:15Z | 2022-11-25T11:33:14Z | 2022-11-25T11:33:14Z | NONE | null | null | null | ### Describe the bug
I follow the steps provided on the website [https://huggingface.co/datasets/wikipedia](https://huggingface.co/datasets/wikipedia)
$ pip install apache_beam mwparserfromhell
>>> from datasets import load_dataset
>>> load_dataset("wikipedia", "20220301.en")
however this results in the following error:
raise MissingBeamOptions(
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
If I then prompt the system with:
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
the following error occurs:
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
Here is the exact code:
Python 3.10.6 (main, Nov 2 2022, 18:53:38) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> load_dataset('wikipedia', '20220301.en')
Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 15.3k/15.3k [00:00<00:00, 22.2MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1879, in _download_and_prepare
raise MissingBeamOptions(
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
Downloading and preparing dataset wikipedia/20220301.en to /home/[EDITED]/.cache/huggingface/datasets/wikipedia/20220301.en/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 15.3k/15.3k [00:00<00:00, 18.8MB/s]
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 1741, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 822, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1909, in _download_and_prepare
super()._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 891, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/rorytol/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py", line 945, in _split_generators
downloaded_files = dl_manager.download_and_extract({"info": info_url})
File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 447, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 311, in download
downloaded_path_or_paths = map_nested(
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 444, in map_nested
mapped = [
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 445, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py", line 338, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/file_utils.py", line 530, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
### Steps to reproduce the bug
$ pip install apache_beam mwparserfromhell
>>> from datasets import load_dataset
>>> load_dataset("wikipedia", "20220301.en")
>>> load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
### Expected behavior
Download the dataset
### Environment info
Running linux on a remote workstation operated through a macbook terminal
Python 3.10.6
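A sketch of the workaround from the comment thread: pin the older release whose preprocessed config is still downloadable (per the comment, no Beam runner is needed there):

```python
# pip install "datasets==1.18.0" apache_beam mwparserfromhell
import datasets

wiki = datasets.load_dataset("wikipedia", "20200501.en")
```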
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5286/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5286/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5285/comments | https://api.github.com/repos/huggingface/datasets/issues/5285/events | https://github.com/huggingface/datasets/pull/5285 | 1,461,521,215 | PR_kwDODunzps5DjLgG | 5,285 | Save file name in embed_storage | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I updated the tests, met le know if it sounds good to you now :)"
] | 2022-11-23T10:55:54Z | 2022-11-24T14:11:41Z | 2022-11-24T14:08:37Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5285.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5285",
"merged_at": "2022-11-24T14:08:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5285.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5285"
} | Having the file name is useful in case we need to check the extension of the file (e.g. mp3), or in general in case it includes some metadata information (track id, image id etc.)
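A tiny illustration of why the stored name matters (the path below is hypothetical):

```python
import os

# the extension tells the decoder how to read the bytes, and the stem can
# carry metadata such as a track id
path = "audio/track_0001.mp3"
stem, ext = os.path.splitext(os.path.basename(path))
print(ext)   # ".mp3" -> pick the mp3 decoder
print(stem)  # "track_0001" -> recover the track id
```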
Related to https://github.com/huggingface/datasets/issues/5276 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5285/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5285/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5284/comments | https://api.github.com/repos/huggingface/datasets/issues/5284/events | https://github.com/huggingface/datasets/issues/5284 | 1,461,519,733 | I_kwDODunzps5XHQV1 | 5,284 | Features of IterableDataset set to None by remove column | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
}
] | null | [
"Related to https://github.com/huggingface/datasets/issues/5245",
"#self-assign",
"Thanks @lhoestq and @alvarobartt!\r\n\r\nThis would be extremely helpful to have working for the Whisper fine-tuning event - we're **only** training using streaming mode, so it'll be quite important to have this feature working to make training as easy as possible!\r\n\r\n_c.f._ https://twitter.com/sanchitgandhi99/status/1592188332171493377",
"> Thanks @lhoestq and @alvarobartt!\n> \n> \n> \n> This would be extremely helpful to have working for the Whisper fine-tuning event - we're **only** training using streaming mode, so it'll be quite important to have this feature working to make training as easy as possible!\n> \n> \n> \n> _c.f._ https://twitter.com/sanchitgandhi99/status/1592188332171493377\n\nI'm almost done with at least a temporary fix to `rename_column`, `rename_columns`, and `remove_columns`, just trying to figure out how to extend it to the `map` function itself!\n\nI'll probably open the PR for review either tomorrow or Sunday hopefully! Glad I can help you and HuggingFace 🤗 ",
"Awesome - thank you so much for this PR @alvarobartt! Is much appreciated!",
"@sanchit-gandhi PR is ready and open for review at #5287, but there's still one issue I may need @lhoestq's input :hugs:",
"Let us know @sanchit-gandhi if you need a new release of `datasets` soon with this fix included :)",
"Thanks for the fix guys! We can direct people to install `datasets` from main if that's easier!"
] | 2022-11-23T10:54:59Z | 2022-11-28T15:18:08Z | 2022-11-28T12:53:24Z | CONTRIBUTOR | null | null | null | ### Describe the bug
The `remove_columns` method of `IterableDataset` sets the dataset features to `None`.
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
# check original features
print("Original features: ", dataset.features.keys())
# define features to remove: we KEEP audio and text
COLUMNS_TO_REMOVE = ['chapter_id', 'speaker_id', 'file', 'id']
dataset = dataset.remove_columns(COLUMNS_TO_REMOVE)
# check processed features, uh-oh!
print("Processed features: ", dataset.features)
# streaming the first audio sample still works
print("First sample:", next(iter(ds)))
```
**Print Output:**
```
Original features: dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])
Processed features: None
First sample: {'audio': {'path': '2277-149896-0000.flac', 'array': array([ 0.00186157, 0.0005188 , 0.00024414, ..., -0.00097656,
-0.00109863, -0.00146484]), 'sampling_rate': 16000}, 'text': "HE WAS IN A FEVERED STATE OF MIND OWING TO THE BLIGHT HIS WIFE'S ACTION THREATENED TO CAST UPON HIS ENTIRE FUTURE"}
```
### Expected behavior
The features should be those **not** removed by the `remove_columns` method, i.e. audio and text.
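A minimal sketch of what a fix should do when the features are known, written as a hypothetical helper: drop the removed keys instead of resetting everything to `None`:

```python
from datasets import Features

def features_after_removal(features: Features, columns_to_remove) -> Features:
    # keep the known feature types for every column that survives the removal
    return Features({name: feature for name, feature in features.items() if name not in columns_to_remove})
```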
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
(Running on Google Colab for a blog post: https://colab.research.google.com/drive/1ySCQREPZEl4msLfxb79pYYOWjUZhkr9y#scrollTo=8pRDGiVmH2ml)
cc @polinaeterna @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5284/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5284/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5283/comments | https://api.github.com/repos/huggingface/datasets/issues/5283/events | https://github.com/huggingface/datasets/pull/5283 | 1,460,291,003 | PR_kwDODunzps5De5M1 | 5,283 | Release: 2.6.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-22T17:36:24Z | 2022-11-22T17:50:12Z | 2022-11-22T17:47:02Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5283.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5283",
"merged_at": "2022-11-22T17:47:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5283.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5283"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5283/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5283/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5282/comments | https://api.github.com/repos/huggingface/datasets/issues/5282/events | https://github.com/huggingface/datasets/pull/5282 | 1,460,238,928 | PR_kwDODunzps5Det2_ | 5,282 | Release: 2.7.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-11-22T16:58:54Z | 2022-11-22T17:21:28Z | 2022-11-22T17:21:27Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5282",
"merged_at": "2022-11-22T17:21:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5282"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5282/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5281/comments | https://api.github.com/repos/huggingface/datasets/issues/5281/events | https://github.com/huggingface/datasets/issues/5281 | 1,459,930,271 | I_kwDODunzps5XBMSf | 5,281 | Support cloud storage in load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Or for example an archive on GitHub releases! Before I added support for JXL (locally only, PR still pending) I was considering hosting my files on GitHub instead..."
] | 2022-11-22T14:00:10Z | 2022-11-25T15:54:18Z | null | MEMBER | null | null | null | Would be nice to be able to do
```python
data_files=["s3://..."]
storage_options = {...}
load_dataset(..., data_files=data_files, storage_options=storage_options)
```
or even
```python
load_dataset("gs://...")
```
The idea would be to use `fsspec` as in `download_and_prepare` and `save_to_disk`.
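For comparison, a sketch of the existing `fsspec`-backed flow this proposal would extend, assuming `s3fs` is installed and with placeholder credentials:

```python
from datasets import load_dataset_builder

storage_options = {"key": "<aws_access_key_id>", "secret": "<aws_secret_access_key>"}
builder = load_dataset_builder("imdb")
builder.download_and_prepare("s3://my-bucket/imdb", storage_options=storage_options, file_format="parquet")
```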
This has been requested several times already. Some users want to use their data from private cloud storage to train models.
Related:
https://github.com/huggingface/datasets/issues/3490
https://github.com/huggingface/datasets/issues/5244
[forum](https://discuss.huggingface.co/t/how-to-use-s3-path-with-load-dataset-with-streaming-true/25739/2) | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5281/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5281/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5280/comments | https://api.github.com/repos/huggingface/datasets/issues/5280/events | https://github.com/huggingface/datasets/issues/5280 | 1,459,823,179 | I_kwDODunzps5XAyJL | 5,280 | Import error | {
"avatar_url": "https://avatars.githubusercontent.com/u/40760055?v=4",
"events_url": "https://api.github.com/users/feketedavid1012/events{/privacy}",
"followers_url": "https://api.github.com/users/feketedavid1012/followers",
"following_url": "https://api.github.com/users/feketedavid1012/following{/other_user}",
"gists_url": "https://api.github.com/users/feketedavid1012/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/feketedavid1012",
"id": 40760055,
"login": "feketedavid1012",
"node_id": "MDQ6VXNlcjQwNzYwMDU1",
"organizations_url": "https://api.github.com/users/feketedavid1012/orgs",
"received_events_url": "https://api.github.com/users/feketedavid1012/received_events",
"repos_url": "https://api.github.com/users/feketedavid1012/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/feketedavid1012/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/feketedavid1012/subscriptions",
"type": "User",
"url": "https://api.github.com/users/feketedavid1012"
} | [] | open | false | null | [] | null | [
"Hi ! Can you \r\n```python\r\nimport platform\r\nprint(platform.python_version())\r\n```\r\nto see that it returns ?",
"Hi,\n\n3.8.13\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:37:02 PM\nTo: huggingface/datasets ***@***.***>\nCc: feketedavid1012 ***@***.***>; Author ***@***.***>\nSubject: Re: [huggingface/datasets] Import error (Issue #5280)\n\n\nHi ! Can you\n\nimport platform\nprint(platform.python_version())\n\nto see that it returns ?\n\n—\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5280#issuecomment-1323691385>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AJW7F5YGG32W6WABYC25NJTWJTD75ANCNFSM6AAAAAASHZJ2AU>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n",
"Then it should work as expected if you use the same python when using `datasets`\r\n\r\nPlease make sure you're running your code in the right environment",
"It's the right environment. But in if statement I have\n\"3.8.13\" < 3.7\nAnd in the error message is Python>=3.7 which is true in my case (3.8.13 is greater then 3.7), so I don't understand my python should be below the 3.7 which case the if statement is right, but the message is wrong, or above 3.7 which case if statement is wrong, error message is right.\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:41:43 PM\nTo: huggingface/datasets ***@***.***>\nCc: feketedavid1012 ***@***.***>; Author ***@***.***>\nSubject: Re: [huggingface/datasets] Import error (Issue #5280)\n\n\nThen it should work as expected if you use the same python when using datasets\n\nPlease make sure you're running your code in the right environment\n\n—\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5280#issuecomment-1323697094>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AJW7F54JURTAJJWWDO2QGI3WJTERPANCNFSM6AAAAAASHZJ2AU>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n",
"If you're having an error then you're not running your code in the right environment."
] | 2022-11-22T12:56:43Z | 2022-11-22T13:57:49Z | null | NONE | null | null | null | https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28
Hi,
I get an error at the above line. I have Python version 3.8.13; the message says I need Python >= 3.7, which is true, but I think the if statement is not working properly (or the message is wrong). | {
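A quick diagnostic sketch based on the comment thread (the resolution was a mismatched environment, not the version check itself):

```python
import platform
import sys

print(platform.python_version())  # should print a version >= 3.7
print(sys.executable)             # reveals which environment actually runs the import
```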
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5280/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5279/comments | https://api.github.com/repos/huggingface/datasets/issues/5279/events | https://github.com/huggingface/datasets/pull/5279 | 1,459,635,002 | PR_kwDODunzps5Dcoue | 5,279 | Warn about checksums | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm also in favor of disabling this by default - it's kinda impractical",
"Great, thanks for the quick turnaround on this!"
] | 2022-11-22T10:58:48Z | 2022-11-23T11:43:50Z | 2022-11-23T09:47:02Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5279.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5279",
"merged_at": "2022-11-23T09:47:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5279.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5279"
} | Computing the checksums takes a lot of time on big datasets, so we should at least add a warning to notify the user about this step. I also mentioned how to disable it, and added a tqdm bar (delay=5 seconds).
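For reference, a sketch of how a user can skip the verification step entirely; the kwarg name reflects the `datasets` API around the time of this PR:

```python
from datasets import load_dataset

# skips checksum computation/verification for large downloads
ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", ignore_verifications=True)
```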
cc @ola13 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5279/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5279/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5278/comments | https://api.github.com/repos/huggingface/datasets/issues/5278/events | https://github.com/huggingface/datasets/issues/5278 | 1,459,574,490 | I_kwDODunzps5W_1ba | 5,278 | load_dataset does not read jsonl metadata file properly | {
"avatar_url": "https://avatars.githubusercontent.com/u/81414263?v=4",
"events_url": "https://api.github.com/users/065294847/events{/privacy}",
"followers_url": "https://api.github.com/users/065294847/followers",
"following_url": "https://api.github.com/users/065294847/following{/other_user}",
"gists_url": "https://api.github.com/users/065294847/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/065294847",
"id": 81414263,
"login": "065294847",
"node_id": "MDQ6VXNlcjgxNDE0MjYz",
"organizations_url": "https://api.github.com/users/065294847/orgs",
"received_events_url": "https://api.github.com/users/065294847/received_events",
"repos_url": "https://api.github.com/users/065294847/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/065294847/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/065294847/subscriptions",
"type": "User",
"url": "https://api.github.com/users/065294847"
} | [] | closed | false | null | [] | null | [
"Can you try to remove \"drop_labels=false\" ? It may force the loader to infer the labels instead of reading the metadata",
"Hi, thanks for responding. I tried that, but it does not change anything.",
"Can you try updating `datasets` ? Metadata support was added in `datasets` 2.4",
"Probably the issue, will report back asap!",
"Okay, now it seems to actually load the metadata and create the train_split, but it still says only returns \"image\" and \"label\", which is always 0 since all images are from same folder",
"> Can you try updating `datasets` ? Metadata support was added in `datasets` 2.4\r\n\r\nUpdate: This was the issue."
] | 2022-11-22T10:24:46Z | 2022-11-23T11:38:35Z | 2022-11-23T11:38:35Z | NONE | null | null | null | ### Describe the bug
Hi, I'm following [this page](https://huggingface.co/docs/datasets/image_dataset) to create a dataset of images and captions via an image folder and a metadata.json file, but I can't seem to get the dataloader to recognize the "text" column. It just spits out "image" and "label" as features.
Below is code to reproduce my exact example/problem.
### Steps to reproduce the bug
```python
import gdown
from datasets import load_dataset

dataset_link = "19Unu89Ih_kP6zsE7f9Mkw8dy3NwHopRF"
id = dataset_link  # Google Drive file id
output = 'Godardv01.zip'
gdown.download(id=id, output=output, quiet=False)

ds = load_dataset("imagefolder", data_dir="/kaggle/working/Volumes/TOSHIBA/Godard_imgs/Volumes/TOSHIBA/Godard_imgs/Full/train", split="train", drop_labels=False)
print(ds)
```
### Expected behavior
I would expect it to return "image" and "text" columns from the code above.
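For reference, a sketch of what resolved this in the comment thread: metadata support landed in `datasets` 2.4, while the reported environment pinned 2.1.0 (the path below is illustrative):

```python
# pip install -U "datasets>=2.4.0"
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path/to/train", split="train")
print(ds.column_names)  # expected: ['image', 'text']
```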
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 5.0.0
- Pandas version: 1.3.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5278/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5278/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5277/comments | https://api.github.com/repos/huggingface/datasets/issues/5277/events | https://github.com/huggingface/datasets/pull/5277 | 1,459,388,551 | PR_kwDODunzps5Dbybu | 5,277 | Remove YAML integer keys from class_label metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Also note that this approach is valid when metadata keys are str, but also if they are int.\r\n- This will be helpful for any community dataset using old integer keys in their metadata",
"perfect !"
] | 2022-11-22T08:34:07Z | 2022-11-22T13:58:26Z | 2022-11-22T13:55:49Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5277.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5277",
"merged_at": "2022-11-22T13:55:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5277.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5277"
} | Fix partially #5275. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5277/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5277/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5276/comments | https://api.github.com/repos/huggingface/datasets/issues/5276/events | https://github.com/huggingface/datasets/issues/5276 | 1,459,363,442 | I_kwDODunzps5W_B5y | 5,276 | Bug in downloading common_voice data and small chunk of it to one's own hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/48530104?v=4",
"events_url": "https://api.github.com/users/capsabogdan/events{/privacy}",
"followers_url": "https://api.github.com/users/capsabogdan/followers",
"following_url": "https://api.github.com/users/capsabogdan/following{/other_user}",
"gists_url": "https://api.github.com/users/capsabogdan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/capsabogdan",
"id": 48530104,
"login": "capsabogdan",
"node_id": "MDQ6VXNlcjQ4NTMwMTA0",
"organizations_url": "https://api.github.com/users/capsabogdan/orgs",
"received_events_url": "https://api.github.com/users/capsabogdan/received_events",
"repos_url": "https://api.github.com/users/capsabogdan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/capsabogdan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capsabogdan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/capsabogdan"
} | [] | open | false | null | [] | null | [
"Sounds like one of the file is not a valid one, can you make sure you uploaded valid mp3 files ?",
"Well I just sharded the original commonVoice dataset and pushed a small chunk of it in a private rep\n\nWhat did go wrong?\n\nHolen Sie sich Outlook für iOS<https://aka.ms/o0ukef>\n________________________________\nVon: Quentin Lhoest ***@***.***>\nGesendet: Tuesday, November 22, 2022 3:03:40 PM\nAn: huggingface/datasets ***@***.***>\nCc: capsabogdan ***@***.***>; Author ***@***.***>\nBetreff: Re: [huggingface/datasets] Bug in downloading common_voice data and snall chunk of it to one's own hub (Issue #5276)\n\n\nSounds like one of the file is not a valid one, can you make sure you uploaded valid mp3 files ?\n\n—\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/5276#issuecomment-1323727434>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/ALSIFOAPAL2V4TBJTSPMAULWJTHDZANCNFSM6AAAAAASHQJ63U>.\nYou are receiving this because you authored the thread.Message ID: ***@***.***>\n",
"It should be all good then !\r\nCould you share a link to your repository for me to investigate what went wrong ?",
"https://huggingface.co/datasets/DTU54DL/common-voice-test16k\n\nAm Di., 22. Nov. 2022 um 16:43 Uhr schrieb Quentin Lhoest <\n***@***.***>:\n\n> It should be all good then !\n> Could you share a link to your repository for me to investigate what went\n> wrong ?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5276#issuecomment-1323876682>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOEUJRZWXAM7DYA5VJDWJTS3NANCNFSM6AAAAAASHQJ63U>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n",
"I see ! This is a bug with MP3 files.\r\n\r\nWhen we store audio data in parquet, we store the bytes and the file name. From the file name extension we know if it's a WAV, an MP3 or else. But here it looks like the paths are all None.\r\n\r\nIt looks like it comes from here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/features/audio.py#L212\r\n\r\nCc @polinaeterna maybe we should simply put the file name instead of None values ?",
"@lhoestq I remember we wanted to avoid storing redundant data but maybe it's not that crucial indeed to store one more string value. \r\nOr we can store paths only for mp3s, considering that for other formats we don't have such a problem with reading from bytes without format specified. ",
"It doesn't cost much to always store the file name IMO",
"thanks for the help!\n\ncan I do anything on my side? we are doing a DL project and we need the\ndata really quick.\n\nthanks\nbogdan\n\n> Message ID: ***@***.***>\n>\n",
"I opened a pull requests here: https://github.com/huggingface/datasets/pull/5285, we'll do a new release soon with this fix.\r\n\r\nOtherwise if you're really in a hurry you can install `datasets` from this PR",
"[image: image.png]\n\n> Message ID: ***@***.***>\n>\n",
"any idea on what's going wrong here?\n\nthanks\n\nAm So., 27. Nov. 2022 um 13:53 Uhr schrieb Bogdan Capsa <\n***@***.***>:\n\n> [image: image.png]\n>\n>> Message ID: ***@***.***>\n>>\n>\n",
"hi @capsabogdan! \r\ncould you please share more specifically what problem do you have now?",
"I have attached this screenshot above . can u pls help? So can not pip from pull request\r\n\r\n![image](https://user-images.githubusercontent.com/48530104/204354027-6173e6d1-e3d4-4085-a363-e924cfe1a7f4.png)\r\n",
"The pull request has been merged on `main`.\r\nYou can install `datasets` from `main` using\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```",
"I've tried to load this dataset DTU54DL/common-voice-test16k, but am\ngetting the same error.\n\nSo the bug fix will fix only if I upload a new dataset, or also loading\npreviously uploaded datasets?\n\nthanks\n\nAm Mo., 28. Nov. 2022 um 19:51 Uhr schrieb Quentin Lhoest <\n***@***.***>:\n\n> The pull request has been merged on main.\n> You can install datasets from main using\n>\n> pip install git+https://github.com/huggingface/datasets.git\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5276#issuecomment-1329587334>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOCNYYIGHM2EX3ZIO6DWKT5MXANCNFSM6AAAAAASHQJ63U>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"> So the bug fix will fix only if I upload a new dataset, or also loading\r\npreviously uploaded datasets?\r\n\r\nYou have to reupload the dataset, sorry for the inconvenience",
"thank you so much for the help! works like a charm!\n\nAm Di., 29. Nov. 2022 um 12:15 Uhr schrieb Quentin Lhoest <\n***@***.***>:\n\n> So the bug fix will fix only if I upload a new dataset, or also loading\n> previously uploaded datasets?\n>\n> You have to reupload the dataset, sorry for the inconvenience\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5276#issuecomment-1330468393>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOBKEFZO57BAKY4IGW3WKXQUZANCNFSM6AAAAAASHQJ63U>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n"
] | 2022-11-22T08:17:53Z | 2022-11-30T16:59:49Z | null | NONE | null | null | null | ### Describe the bug
I'm trying to load the common voice dataset. Currently there is no implementation to download just part of the data, and I need just one part of it, without downloading the entire dataset.
Help please?
![image](https://user-images.githubusercontent.com/48530104/203260511-26df766f-6013-4eaf-be26-8aa13794def2.png)
### Steps to reproduce the bug
So here is what I have done:
1. Download common_voice data
2. Trim part of it and publish it to my own repo.
3. Download data from my own repo, but I am getting this error (see the sketch after this list).
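For reference, a minimal sketch of the shard-and-push flow above — the config name, shard count, and repo id are illustrative assumptions, not the exact commands I ran:

```python
from datasets import load_dataset

# assumption: Common Voice 11, German config; any language would do
cv = load_dataset("mozilla-foundation/common_voice_11_0", "de", split="train", use_auth_token=True)

# keep a small chunk (1 of 100 shards) and push it to a private repo
chunk = cv.shard(num_shards=100, index=0)
chunk.push_to_hub("my-user/common-voice-small", private=True)  # hypothetical repo id
```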
### Expected behavior
There shouldn't be an error in downloading part of the data and publishing it to one's own repo
### Environment info
common_voice 11 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5276/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5276/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5275/comments | https://api.github.com/repos/huggingface/datasets/issues/5275/events | https://github.com/huggingface/datasets/issues/5275 | 1,459,358,919 | I_kwDODunzps5W_AzH | 5,275 | YAML integer keys are not preserved Hub server-side | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-11-22T08:14:47Z | 2022-11-23T08:44:16Z | null | MEMBER | null | null | null | After an internal discussion (https://github.com/huggingface/moon-landing/issues/4563):
- YAML integer keys are not preserved server-side: they are transformed to strings
- See for example this Hub PR: https://huggingface.co/datasets/acronym_identification/discussions/1/files
- Original:
```yaml
class_label:
names:
0: B-long
1: B-short
```
- Returned by the server:
```yaml
class_label:
names:
'0': B-long
'1': B-short
```
- They are planning to enforce only string keys
- Other projects already use integer-transformed-to-string keys: e.g. `transformers` models' `id2label`: https://huggingface.co/roberta-large-mnli/blob/main/config.json
```yaml
"id2label": {
"0": "CONTRADICTION",
"1": "NEUTRAL",
"2": "ENTAILMENT"
}
```
On the other hand, at `datasets` we are currently using YAML integer keys for `dataset_info` `class_label`.
Please note (thanks @lhoestq for pointing this out) that previous versions (2.6 and 2.7) of `datasets` need to be patched:
```python
In [18]: Features._from_yaml_list([{'dtype': {'class_label': {'names': {'0': 'neg', '1': 'pos'}}}, 'name': 'label'}])
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-18-974f07eea526> in <module>
----> 1 Features._from_yaml_list(ry)
~/Desktop/hf/nlp/src/datasets/features/features.py in _from_yaml_list(cls, yaml_data)
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
1744
-> 1745 return cls.from_dict(from_yaml_inner(yaml_data))
1746
1747 def encode_example(self, example):
~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj)
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
-> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
~/Desktop/hf/nlp/src/datasets/features/features.py in <dictcomp>(.0)
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
-> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj)
1734 return {"_type": snakecase_to_camelcase(obj["dtype"])}
1735 else:
-> 1736 return from_yaml_inner(obj["dtype"])
1737 else:
1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}
~/Desktop/hf/nlp/src/datasets/features/features.py in from_yaml_inner(obj)
1736 return from_yaml_inner(obj["dtype"])
1737 else:
-> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
~/Desktop/hf/nlp/src/datasets/features/features.py in unsimplify(feature)
1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict):
1705 label_ids = sorted(feature["class_label"]["names"])
-> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)):
1707 raise ValueError(
1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing."
TypeError: can only concatenate str (not "int") to str
```
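As a side note, a minimal sketch of the kind of key normalization that sidesteps the failure above — the helper is illustrative, not the actual patch:

```python
# hypothetical helper: undo the server-side int -> str key coercion
def normalize_class_label_names(names: dict) -> dict:
    return {int(label_id): label for label_id, label in names.items()}

normalize_class_label_names({"0": "neg", "1": "pos"})  # -> {0: 'neg', 1: 'pos'}
```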
TODO:
- [x] Remove YAML integer keys from `dataset_info` metadata
- [x] Make a patch release for affected `datasets` versions: 2.6 and 2.7
- [ ] Communicate on the fix
- [ ] Wait for adoption
- [ ] Bulk edit the Hub to fix this in all canonical datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5275/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5275/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5274/comments | https://api.github.com/repos/huggingface/datasets/issues/5274/events | https://github.com/huggingface/datasets/issues/5274 | 1,458,646,455 | I_kwDODunzps5W8S23 | 5,274 | load_dataset possibly broken for gated datasets? | {
"avatar_url": "https://avatars.githubusercontent.com/u/20826878?v=4",
"events_url": "https://api.github.com/users/TristanThrush/events{/privacy}",
"followers_url": "https://api.github.com/users/TristanThrush/followers",
"following_url": "https://api.github.com/users/TristanThrush/following{/other_user}",
"gists_url": "https://api.github.com/users/TristanThrush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TristanThrush",
"id": 20826878,
"login": "TristanThrush",
"node_id": "MDQ6VXNlcjIwODI2ODc4",
"organizations_url": "https://api.github.com/users/TristanThrush/orgs",
"received_events_url": "https://api.github.com/users/TristanThrush/received_events",
"repos_url": "https://api.github.com/users/TristanThrush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TristanThrush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TristanThrush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TristanThrush"
} | [] | closed | false | null | [] | null | [
"@BradleyHsu",
"Btw, thanks very much for finding the hub rollback temporary fix and bringing the issue to our attention @KhoomeiK!",
"I see the same issue when calling `load_dataset('poloclub/diffusiondb', 'large_random_1k')` with `datasets==2.7.1` and `huggingface-hub=0.11.0`. No issue with `datasets=2.6.1` and `huggingface_hub==0.10.1`.\r\n\r\nhttps://github.com/poloclub/diffusiondb/issues/7",
"I fixed my issue by specifying `repo_type` in `hf_hub_url()`. https://github.com/poloclub/diffusiondb/commit/9eb91c79aaca98b0515a0ce45778b8af65b84652\r\n\r\nI opened a PR on the Winoground's repo: https://huggingface.co/datasets/facebook/winoground/discussions/2",
"This is a bug in the script, indeed. The most robust fix is to use a relative path instead of `hf_hub_url`, which does not depend on `huggingface_hub`'s version 🙂. I've opened a PR here: https://huggingface.co/datasets/facebook/winoground/discussions/3.",
"Awesome, big thanks to both @xiaohk and @mariosasko!"
] | 2022-11-21T21:59:53Z | 2022-11-28T02:50:42Z | 2022-11-28T02:50:42Z | MEMBER | null | null | null | ### Describe the bug
When trying to download the [winoground dataset](https://huggingface.co/datasets/facebook/winoground), I get this error unless I roll back the version of huggingface-hub:
```
[/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in validate_repo_id(repo_id)
165 if repo_id.count("/") > 1:
166 raise HFValidationError(
--> 167 "Repo id must be in the form 'repo_name' or 'namespace/repo_name':"
168 f" '{repo_id}'. Use `repo_type` argument if needed."
169 )
HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'datasets/facebook/winoground'. Use `repo_type` argument if needed
```
### Steps to reproduce the bug
Install requirements:
```
pip install transformers
pip install datasets
# It works if you uncomment the following line, rolling back huggingface hub:
# pip install huggingface-hub==0.10.1
```
Then:
```
from datasets import load_dataset
auth_token = "" # Replace with an auth token, which you can get from your huggingface account: Profile -> Settings -> Access Tokens -> New Token
winoground = load_dataset("facebook/winoground", use_auth_token=auth_token)["test"]
```
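(Side note: the error message itself points at the fix. A sketch below, assuming the loading script builds file URLs with `hf_hub_url`; the filename is a placeholder:)

```python
from huggingface_hub import hf_hub_url

# with huggingface_hub >= 0.11, pass repo_type="dataset" explicitly
# instead of prefixing the repo id with "datasets/"
url = hf_hub_url(
    repo_id="facebook/winoground",
    filename="data/examples.jsonl",  # placeholder filename
    repo_type="dataset",
)
```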
### Expected behavior
Downloading of the dataset
### Environment info
Just a google colab; see here: https://colab.research.google.com/drive/15wwOSte2CjTazdnCWYUm2VPlFbk2NGc0?usp=sharing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5274/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5274/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5273/comments | https://api.github.com/repos/huggingface/datasets/issues/5273/events | https://github.com/huggingface/datasets/issues/5273 | 1,458,018,050 | I_kwDODunzps5W55cC | 5,273 | download_mode="force_redownload" does not refresh cached dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/28439912?v=4",
"events_url": "https://api.github.com/users/nomisto/events{/privacy}",
"followers_url": "https://api.github.com/users/nomisto/followers",
"following_url": "https://api.github.com/users/nomisto/following{/other_user}",
"gists_url": "https://api.github.com/users/nomisto/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nomisto",
"id": 28439912,
"login": "nomisto",
"node_id": "MDQ6VXNlcjI4NDM5OTEy",
"organizations_url": "https://api.github.com/users/nomisto/orgs",
"received_events_url": "https://api.github.com/users/nomisto/received_events",
"repos_url": "https://api.github.com/users/nomisto/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nomisto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nomisto/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nomisto"
} | [] | open | false | null | [] | null | [] | 2022-11-21T14:12:43Z | 2022-11-21T14:13:03Z | null | NONE | null | null | null | ### Describe the bug
`load_dataset` does not refresh the dataset when features are imported from an external file, even with `download_mode="force_redownload"`. The bug is not limited to nested fields; however, it is more likely to occur with them.
### Steps to reproduce the bug
To reproduce the bug 3 files are needed: `dataset.py` (contains dataset loading script), `schema.py` (contains features of dataset) and `main.py` (to run `load_datasets`)
`dataset.py`
```python
import datasets
from schema import features
class NewDataset(datasets.GeneratorBasedBuilder):
def _info(self):
return datasets.DatasetInfo(
features=features
)
def _split_generators(self, dl_manager):
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN
)
]
def _generate_examples(self):
data = [
{"id": 0, "nested": []},
{"id": 1, "nested": []}
]
for key, example in enumerate(data):
yield key, example
```
`schema.py`
```python
import datasets
features = datasets.Features(
{
"id": datasets.Value("int32"),
"nested": [
{"text": datasets.Value("string")}
]
}
)
```
`main.py`
```python
import datasets
a = datasets.load_dataset("dataset.py")
print(a["train"].info.features)
```
Now if `main.py` is run, it prints the following correct output: `{'id': Value(dtype='int32', id=None), 'nested': [{'text': Value(dtype='string', id=None)}]}`. However, if e.g. the label of the feature "text" is changed to something else, e.g. to
`schema.py`
```python
import datasets
features = datasets.Features(
{
"id": datasets.Value("int32"),
"nested": [
{"textfoo": datasets.Value("string")}
]
}
)
```
`main.py` still prints `{'id': Value(dtype='int32', id=None), 'nested': [{'text': Value(dtype='string', id=None)}]}`, even if run with `download_mode="force_redownload"`. The only fix is to delete the dataset's folder in the cache, as sketched below.
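A sketch of that manual workaround — the cache path below is the default location, and the folder name is an assumption derived from the loading script's name:

```python
import shutil
from pathlib import Path

# assumption: default HF cache; the folder is named after the loading script
cache_dir = Path.home() / ".cache" / "huggingface" / "datasets" / "dataset"
shutil.rmtree(cache_dir, ignore_errors=True)  # then re-run load_dataset
```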
### Expected behavior
The cached dataset should be deleted and refreshed when using `load_dataset` with `download_mode="force_redownload"`.
### Environment info
- `datasets` version: 2.7.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyArrow version: 10.0.0
- Pandas version: 1.3.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5273/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5273/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5272/comments | https://api.github.com/repos/huggingface/datasets/issues/5272/events | https://github.com/huggingface/datasets/issues/5272 | 1,456,940,021 | I_kwDODunzps5W1yP1 | 5,272 | Use pyarrow Tensor dtype | {
"avatar_url": "https://avatars.githubusercontent.com/u/18228395?v=4",
"events_url": "https://api.github.com/users/franz101/events{/privacy}",
"followers_url": "https://api.github.com/users/franz101/followers",
"following_url": "https://api.github.com/users/franz101/following{/other_user}",
"gists_url": "https://api.github.com/users/franz101/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/franz101",
"id": 18228395,
"login": "franz101",
"node_id": "MDQ6VXNlcjE4MjI4Mzk1",
"organizations_url": "https://api.github.com/users/franz101/orgs",
"received_events_url": "https://api.github.com/users/franz101/received_events",
"repos_url": "https://api.github.com/users/franz101/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/franz101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/franz101/subscriptions",
"type": "User",
"url": "https://api.github.com/users/franz101"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! We're using the Arrow format for the datasets, and PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694",
"@wesm @rok its been around three years. any updates, regarding dataset arrow tensor support? 🙏 I know you must be very busy, would appreciate to learn what is the state of art. I saw the PR is still open [#8510](https://github.com/apache/arrow/pull/8510)",
"Hey @franz101 & @lhoestq!\r\nThere is a plan and a PR to create an [ExtensionArray of Tensors](https://github.com/apache/arrow/pull/8510) of equal sizes as well as a plan to do the same for Tensors of different sizes [ARROW-8714](https://issues.apache.org/jira/browse/ARROW-8714).",
"The work stalled a little because it was not clear where TensorArray would live. However Arrow community recently agreed to make a [well-known-extension-type document](https://lists.apache.org/thread/sxd5fhc42hb6svs79t3fd79gkqj83pfh) and I would like https://github.com/apache/arrow/pull/8510 to land there and add an implementation to C++/Python + another language. Is that something you would find beneficial to you?",
"that is a great update, thank you.\r\nit looks like this feature would benefit datasets implementation of [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9f2ff14673cac1f1ad56d80221a793f5938b68c7/src/datasets/features/features.py#L585-L641). Is that correct @eladsegal @lhoestq?\r\n\r\n",
"TensorArray sounds great ! Looking forward to it :)\r\n\r\nWe've had our own ExtensionArray for fixed shape tensors for a while now, hoping to see something more standardized by the arrow community.\r\n\r\nAlso super interested in the extension array for tensors of different sizes cc @mariosasko "
] | 2022-11-20T15:18:41Z | 2022-11-21T17:57:55Z | null | NONE | null | null | null | ### Feature request
I was going through the discussion of converting tensors to lists.
Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings?
For example:
```python
import pyarrow as pa
import numpy as np
x = np.array([[2, 2, 4], [4, 5, 100]], np.int32)
pa.Tensor.from_numpy(x, dim_names=["dim1","dim2"])
```
[Apache docs](https://arrow.apache.org/docs/python/generated/pyarrow.Tensor.html)
Maybe this belongs in the pyarrow features / repo.
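For what it's worth, a sketch of what is already possible today with `datasets`' own fixed-shape extension type — `Array2D` here, not the requested pyarrow Tensor, and the shape just mirrors the numpy example above:

```python
from datasets import Array2D, Dataset, Features

# fixed-shape (2, 3) int32 tensors, matching np.array([[2, 2, 4], [4, 5, 100]])
features = Features({"tensor": Array2D(shape=(2, 3), dtype="int32")})
ds = Dataset.from_dict({"tensor": [[[2, 2, 4], [4, 5, 100]]]}, features=features)
print(ds.features["tensor"])
```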
### Motivation
Working with big data, we need to make sure we use the best data structures and IO out there.
### Your contribution
I can try to open a PR if code changes are necessary. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5272/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5272/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5271/comments | https://api.github.com/repos/huggingface/datasets/issues/5271/events | https://github.com/huggingface/datasets/pull/5271 | 1,456,807,738 | PR_kwDODunzps5DTDX1 | 5,271 | Fix #5269 | {
"avatar_url": "https://avatars.githubusercontent.com/u/32936898?v=4",
"events_url": "https://api.github.com/users/Freed-Wu/events{/privacy}",
"followers_url": "https://api.github.com/users/Freed-Wu/followers",
"following_url": "https://api.github.com/users/Freed-Wu/following{/other_user}",
"gists_url": "https://api.github.com/users/Freed-Wu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Freed-Wu",
"id": 32936898,
"login": "Freed-Wu",
"node_id": "MDQ6VXNlcjMyOTM2ODk4",
"organizations_url": "https://api.github.com/users/Freed-Wu/orgs",
"received_events_url": "https://api.github.com/users/Freed-Wu/received_events",
"repos_url": "https://api.github.com/users/Freed-Wu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Freed-Wu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Freed-Wu"
} | [] | closed | false | null | [] | null | [
"See <https://github.com/huggingface/datasets/issues/5269>"
] | 2022-11-20T07:50:49Z | 2022-11-21T15:07:19Z | 2022-11-21T15:06:38Z | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5271.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5271",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5271.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5271"
} | ```
$ datasets-cli convert --datasets_directory <TAB>
datasets_directory
benchmarks/ docs/ metrics/ notebooks/ src/ templates/ tests/ utils/
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5271/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5271/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5270/comments | https://api.github.com/repos/huggingface/datasets/issues/5270/events | https://github.com/huggingface/datasets/issues/5270 | 1,456,508,990 | I_kwDODunzps5W0JA- | 5,270 | When len(_URLS) > 16, download will hang | {
"avatar_url": "https://avatars.githubusercontent.com/u/32936898?v=4",
"events_url": "https://api.github.com/users/Freed-Wu/events{/privacy}",
"followers_url": "https://api.github.com/users/Freed-Wu/followers",
"following_url": "https://api.github.com/users/Freed-Wu/following{/other_user}",
"gists_url": "https://api.github.com/users/Freed-Wu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Freed-Wu",
"id": 32936898,
"login": "Freed-Wu",
"node_id": "MDQ6VXNlcjMyOTM2ODk4",
"organizations_url": "https://api.github.com/users/Freed-Wu/orgs",
"received_events_url": "https://api.github.com/users/Freed-Wu/received_events",
"repos_url": "https://api.github.com/users/Freed-Wu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Freed-Wu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Freed-Wu"
} | [] | open | false | null | [] | null | [
"It can fix the bug temporarily.\r\n```python\r\nfrom datasets import DownloadConfig\r\nconfig = DownloadConfig(num_proc=8)\r\nIn [5]: dataset = load_dataset('Freed-Wu/kodak', split='test', download_config=config)\r\nDownloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/6cf51f2b3d686d24a33fe86945f9e16802def212325f9345cf3cbb1b9f5f4a57...\r\nDownloading data files #4: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.39obj/s]\r\nDownloading data files #2: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.38obj/s]\r\nDownloading data files #3: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.13obj/s]\r\nDownloading data files #7: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.09obj/s]\r\nDownloading data files #5: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.08obj/s]\r\nDownloading data files #0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:02<00:00, 1.08obj/s]\r\nDownloading data files #1: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:10<00:00, 3.36s/obj]\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 492k/492k [00:01<00:00, 253kB/s]\r\nDownloading data files #6: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:13<00:00, 4.63s/obj]\r\nExtracting data files #0: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1407.17obj/s]\r\nExtracting data files #1: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1325.91obj/s]\r\nExtracting data files #3: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1524.46obj/s]\r\nExtracting data files #2: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1404.66obj/s]\r\nExtracting data files #4: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1538.63obj/s]\r\nExtracting data files #6: 
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1711.73obj/s]\r\nExtracting data files #7: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 2144.33obj/s]\r\nExtracting data files #5: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1964.85obj/s]\r\nDataset kodak downloaded and prepared to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/6cf51f2b3d686d24a33fe86945f9e16802def212325f9345cf3cbb1b9f5f4a57. Subsequent calls will reuse this data.\r\n```",
"Thanks for reporting ! This sounds like an issue with python multiprocessing. If we switch to multithreading for the downloads it should be much more robust - let me know if this is something you'd like to contribute, I'd be happy to help and give you some pointers",
"> an issue with python multiprocessing\r\n\r\nIf it is an issue with multiprocessing, should we report it to upstream?",
"Debugging this would require quite some work in my opinion, and I've often failed to make reproducible examples, since it's pretty correlated to one's environment + hardware. So I wouldn't spend too much time on this unless we manage to reproduce this on another machine consistently.\r\n\r\nInstead I'd encourage a more pragmatic fix that is: not create tons of processes (on regular machines it may slow things down anyway), and instead use multithreading by default.",
"I am not expert of python. I hear about python has GIL, which result in multi processing is worse than multi threading. So I am not sure if this change makes sense?\r\n\r\nAnd if this is a bug of multi processing, why not report to upstream and let them fix? And even if change it to multi threading, how can we make sure it can truly fix this problem?",
"Just my 2c. No offense.",
"> Just my 2c. No offense.\r\n\r\nsure np ^^\r\n\r\n> I hear about python has GIL, which result in multi processing is worse than multi threading. So I am not sure if this change makes sense?\r\n\r\nHere the bottleneck speed is the bandwidth used to download the files. When downloading, the GIL is released, so multithreading gives the same speed as multiprocessing.\r\n\r\n> And if this is a bug of multi processing, why not report to upstream and let them fix?\r\n\r\nUsually to fix a bug it's important to be able to reproduce it. This way you can share it, experiment with it, and then make sure it's fixed. Here I'm afraid it's not easy to reproduce. Though I think that spawning too many processes for your machine can lead to this kind of issues.\r\n\r\n> And even if change it to multi threading, how can we make sure it can truly fix this problem?\r\n\r\nMultithreading is more robust in python because IIRC there are less locks involved which are often the cause of code hanging for no reason."
] | 2022-11-19T14:27:41Z | 2022-11-21T15:27:16Z | null | NONE | null | null | null | ### Describe the bug
```python
In [9]: dataset = load_dataset('Freed-Wu/kodak', split='test')
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.53k/2.53k [00:00<00:00, 1.88MB/s]
[11/19/22 22:16:21] WARNING Using custom data configuration default builder.py:379
Downloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/bd1cc3434212e3e654f7e16ad618f8a1470b5982b086c91b1d6bc7187183c6e9...
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 531k/531k [00:02<00:00, 239kB/s]
#10: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.06s/obj]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 534k/534k [00:02<00:00, 193kB/s]
#14: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.37s/obj]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 692k/692k [00:02<00:00, 269kB/s]
#12: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.44s/obj]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 566k/566k [00:02<00:00, 210kB/s]
#5: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.53s/obj]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 613k/613k [00:02<00:00, 235kB/s]
#13: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.53s/obj]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 786k/786k [00:02<00:00, 342kB/s]
#3: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.60s/obj]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 619k/619k [00:02<00:00, 254kB/s]
#4: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.68s/obj]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 737k/737k [00:02<00:00, 271kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 788k/788k [00:02<00:00, 285kB/s]
#6: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:05<00:00, 5.04s/obj]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 618k/618k [00:04<00:00, 153kB/s]
#0: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:11<00:00, 5.69s/obj]
^CProcess ForkPoolWorker-47:
Process ForkPoolWorker-46:
Process ForkPoolWorker-36:
Process ForkPoolWorker-38:██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:05<00:00, 5.04s/obj]
Process ForkPoolWorker-37:
Process ForkPoolWorker-45:
Process ForkPoolWorker-39:
Process ForkPoolWorker-43:
Process ForkPoolWorker-33:
Process ForkPoolWorker-18:
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get
with self._rlock:
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get
with self._rlock:
File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get
with self._rlock:
KeyboardInterrupt
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
KeyboardInterrupt
File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get
with self._rlock:
File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get
with self._rlock:
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
KeyboardInterrupt
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get
with self._rlock:
File "/usr/lib/python3.10/multiprocessing/queues.py", line 365, in get
res = self._reader.recv_bytes()
File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get
with self._rlock:
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
File "/usr/lib/python3.10/multiprocessing/connection.py", line 221, in recv_bytes
buf = self._recv_bytes(maxlength)
KeyboardInterrupt
KeyboardInterrupt
File "/usr/lib/python3.10/multiprocessing/connection.py", line 419, in _recv_bytes
buf = self._recv(4)
File "/usr/lib/python3.10/multiprocessing/connection.py", line 384, in _recv
chunk = read(handle, remaining)
KeyboardInterrupt
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/pool.py", line 114, in worker
task = get()
File "/usr/lib/python3.10/multiprocessing/queues.py", line 364, in get
with self._rlock:
File "/usr/lib/python3.10/multiprocessing/synchronize.py", line 95, in __enter__
return self._semlock.__enter__()
KeyboardInterrupt
Process ForkPoolWorker-20:
Process ForkPoolWorker-44:
Process ForkPoolWorker-22:
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested
return function(data_struct)
File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache
response = http_head(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head
response = _request_with_retry(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
KeyboardInterrupt
#1: 0%| | 0/2 [03:00<?, ?obj/s]
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested
return function(data_struct)
File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 659, in get_from_cache
http_get(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 442, in http_get
response = _request_with_retry(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested
return function(data_struct)
File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
KeyboardInterrupt
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache
response = http_head(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head
response = _request_with_retry(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
KeyboardInterrupt
#3: 0%| | 0/2 [03:00<?, ?obj/s]
#11: 0%| | 0/1 [00:49<?, ?obj/s]
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested
return function(data_struct)
File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache
response = http_head(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head
response = _request_with_retry(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 723, in send
history = [resp for resp in gen]
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 723, in <listcomp>
history = [resp for resp in gen]
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 266, in resolve_redirects
resp = self.send(
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
KeyboardInterrupt
#5: 0%| | 0/1 [03:00<?, ?obj/s]
KeyboardInterrupt
Process ForkPoolWorker-42:
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar
return list(map(*args))
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 215, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/usr/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested
return function(data_struct)
File "/usr/lib/python3.10/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 561, in get_from_cache
response = http_head(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 476, in http_head
response = _request_with_retry(
File "/usr/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 405, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/usr/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "/usr/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/usr/lib/python3.10/site-packages/urllib3/util/connection.py", line 72, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib/python3.10/socket.py", line 955, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
KeyboardInterrupt
#9: 0%| | 0/1 [00:51<?, ?obj/s]
```
### Steps to reproduce the bug
```python
"""Kodak.
Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import datasets
NUMBER = 17
_DESCRIPTION = """\
The pictures below link to lossless, true color (24 bits per pixel, aka "full
color") images. It is my understanding they have been released by the Eastman
Kodak Company for unrestricted usage. Many sites use them as a standard test
suite for compression testing, etc. Prior to this site, they were only
available in the Sun Raster format via ftp. This meant that the images could
not be previewed before downloading. Since their release, however, the lossless
PNG format has been incorporated into all the major browsers. Since PNG
supports 24-bit lossless color (which GIF and JPEG do not), it is now possible
to offer this browser-friendly access to the images.
"""
_HOMEPAGE = "https://r0k.us/graphics/kodak/"
_LICENSE = "GPLv3"
_URLS = [
f"https://github.com/MohamedBakrAli/Kodak-Lossless-True-Color-Image-Suite/raw/master/PhotoCD_PCD0992/{i}.png"
for i in range(1, 1 + NUMBER)
]
class Kodak(datasets.GeneratorBasedBuilder):
"""Kodak datasets."""
VERSION = datasets.Version("0.0.1")
def _info(self):
features = datasets.Features(
{
"image": datasets.Image(),
}
)
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=features,
homepage=_HOMEPAGE,
license=_LICENSE,
)
def _split_generators(self, dl_manager):
"""Return SplitGenerators."""
file_paths = dl_manager.download_and_extract(_URLS)
return [
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"file_paths": file_paths,
},
),
]
def _generate_examples(self, file_paths):
"""Yield examples."""
for file_path in file_paths:
yield file_path, {"image": file_path}
```
### Expected behavior
When `len(_URLS) < 16`, it works.
```python
In [3]: dataset = load_dataset('Freed-Wu/kodak', split='test')
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.53k/2.53k [00:00<00:00, 3.02MB/s]
[11/19/22 22:04:28] WARNING Using custom data configuration default builder.py:379
Downloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/d26017602a592b5bfa7e008127cdf9dec5af220c9068005f1b4eda036031f475...
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 593k/593k [00:00<00:00, 2.88MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 621k/621k [00:03<00:00, 166kB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 531k/531k [00:01<00:00, 366kB/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:13<00:00, 1.18it/s]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 3832.38it/s]
Dataset kodak downloaded and prepared to /home/wzy/.cache/huggingface/datasets/Freed-Wu___kodak/default/0.0.1/d26017602a592b5bfa7e008127cdf9dec5af220c9068005f1b4eda036031f475. Subsequent calls will reuse this data.
```
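For reference, a trimmed sketch of the same call path without the full loading script (the `ForkPoolWorker` frames in the traceback suggest `datasets` switches to a multiprocessing pool once the file list reaches roughly 16 entries — an assumption based on these logs, not verified against the source):
```python
from datasets import DownloadManager

# Hypothetical standalone reproduction: 17 URLs, one above the apparent threshold.
urls = [
    f"https://github.com/MohamedBakrAli/Kodak-Lossless-True-Color-Image-Suite/raw/master/PhotoCD_PCD0992/{i}.png"
    for i in range(1, 18)
]
# Per the report above, this hangs for 17 files but completes for fewer than 16.
paths = DownloadManager().download(urls)
```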
### Environment info
- `datasets` version: 2.7.0
- Platform: Linux-6.0.8-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.8
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5270/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5270/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5269/comments | https://api.github.com/repos/huggingface/datasets/issues/5269/events | https://github.com/huggingface/datasets/issues/5269 | 1,456,485,799 | I_kwDODunzps5W0DWn | 5,269 | Shell completions | {
"avatar_url": "https://avatars.githubusercontent.com/u/32936898?v=4",
"events_url": "https://api.github.com/users/Freed-Wu/events{/privacy}",
"followers_url": "https://api.github.com/users/Freed-Wu/followers",
"following_url": "https://api.github.com/users/Freed-Wu/following{/other_user}",
"gists_url": "https://api.github.com/users/Freed-Wu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Freed-Wu",
"id": 32936898,
"login": "Freed-Wu",
"node_id": "MDQ6VXNlcjMyOTM2ODk4",
"organizations_url": "https://api.github.com/users/Freed-Wu/orgs",
"received_events_url": "https://api.github.com/users/Freed-Wu/received_events",
"repos_url": "https://api.github.com/users/Freed-Wu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Freed-Wu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Freed-Wu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Freed-Wu"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"I don't think we need completion on the datasets-cli, since we're mainly developing huggingface-cli",
"I see."
] | 2022-11-19T13:48:59Z | 2022-11-21T15:06:15Z | 2022-11-21T15:06:14Z | NONE | null | null | null | ### Feature request
Like <https://github.com/huggingface/huggingface_hub/issues/1197>, `datasets-cli` may need it, too.
### Motivation
See above.
### Your contribution
Maybe. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5269/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5269/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5268/comments | https://api.github.com/repos/huggingface/datasets/issues/5268/events | https://github.com/huggingface/datasets/pull/5268 | 1,455,633,978 | PR_kwDODunzps5DPIsp | 5,268 | Sharded save_to_disk + multiprocessing | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5268). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-18T18:50:01Z | 2022-11-30T13:06:13Z | null | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5268.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5268",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5268.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5268"
} | Added `num_shards=` and `num_proc=` to `save_to_disk()`
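A minimal usage sketch of the two new keywords (the dataset name, output path, and counts are placeholders):
```python
from datasets import load_dataset

ds = load_dataset("squad", split="train")
# Write the dataset as 8 Arrow shards, preparing them with 8 worker processes.
ds.save_to_disk("path/to/output_dir", num_shards=8, num_proc=8)
```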
I also:
- deprecated the fs parameter in favor of storage_options (for consistency with the rest of the lib) in save_to_disk and load_from_disk
- always embed the image/audio data in arrow when doing `save_to_disk`
- added a tqdm bar in `save_to_disk`
- Use the MockFileSystem in tests for `save_to_disk` and `load_from_disk`
- removed the unused integration tests with S3, since we can now test with `mockfs` instead of `s3fs`
TODO:
- [x] implem save_to_disk for dataset dict
- [x] save_to_disk for dataset dict tests
- [x] deprecate fs in dataset dict load_from_disk as well
- [x] update docs
Close #5263
Close https://github.com/huggingface/datasets/issues/4196
Close https://github.com/huggingface/datasets/issues/4351 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5268/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5268/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5267/comments | https://api.github.com/repos/huggingface/datasets/issues/5267/events | https://github.com/huggingface/datasets/pull/5267 | 1,455,466,464 | PR_kwDODunzps5DOlFR | 5,267 | Fix `max_shard_size` docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-18T16:55:22Z | 2022-11-18T17:28:58Z | 2022-11-18T17:25:27Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5267.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5267",
"merged_at": "2022-11-18T17:25:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5267.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5267"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5267/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5267/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5266/comments | https://api.github.com/repos/huggingface/datasets/issues/5266/events | https://github.com/huggingface/datasets/pull/5266 | 1,455,281,310 | PR_kwDODunzps5DN9BT | 5,266 | Specify arguments as keywords in librosa.resample to avoid future errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-18T14:58:47Z | 2022-11-21T15:45:02Z | 2022-11-21T15:41:57Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5266.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5266",
"merged_at": "2022-11-21T15:41:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5266.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5266"
} | Fixes a warning and future deprecation from `librosa.resample`:
```
FutureWarning: Pass orig_sr=16000, target_sr=48000 as keyword args. From version 0.10 passing these as positional arguments will result in an error
array = librosa.resample(array, sampling_rate, self.sampling_rate, res_type="kaiser_best")
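# The fix applied in this PR (sketch, reusing the variable names from the line above):
# array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate, res_type="kaiser_best")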
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5266/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5266/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5265/comments | https://api.github.com/repos/huggingface/datasets/issues/5265/events | https://github.com/huggingface/datasets/issues/5265 | 1,455,274,864 | I_kwDODunzps5Wvbtw | 5,265 | Get an IterableDataset from a map-style Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | open | false | null | [] | null | [
"I think `stream` could be misleading since the data is not being streamed from remote endpoints (one could think that's the case when they see `load_dataset` followed by `stream`). Hence, I prefer the second option.\r\n\r\nPS: When we resolve https://github.com/huggingface/datasets/issues/4542, we could add `as_tf_dataset` to the API for consistency and deprecate `to_tf_dataset`."
] | 2022-11-18T14:54:40Z | 2022-11-21T15:25:32Z | null | MEMBER | null | null | null | This is useful to leverage iterable-dataset-specific features like:
- fast approximate shuffling
- lazy map, filter etc.
Iterating over the resulting iterable dataset should be at least as fast as iterating over the map-style dataset.
Here are some ideas regarding the API:
```python
# 1.
# - consistency with load_dataset(..., streaming=True)
# - gives intuition that map/filter/etc. are done on-the-fly
ids = ds.stream()
# 2.
# - more explicit on the output type
# - but maybe sounds like a conversion tool rather than a step in a processing pipeline
ids = ds.as_iterable_dataset()
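# Either way, the returned IterableDataset would expose the lazy features listed
# above (sketch only; the method names are not final):
# ids = ids.shuffle(buffer_size=1000)  # fast approximate shuffling
# ids = ids.map(tokenize_fn)           # lazy map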
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5265/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5265/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5264/comments | https://api.github.com/repos/huggingface/datasets/issues/5264/events | https://github.com/huggingface/datasets/issues/5264 | 1,455,252,906 | I_kwDODunzps5WvWWq | 5,264 | `datasets` can't read a Parquet file in Python 3.9.13 | {
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Could you share the full stack trace please ?\r\n\r\n\r\nCan you also try running this code ? It can be useful to determine if the issue comes from `datasets` or `fsspec` (streaming) or `pyarrow` (parquet reading):\r\n```python\r\nds = load_dataset(\"parquet\", data_files=a_parquet_file_url, use_auth_token=True)\r\n```",
"Here's the full trace\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/loubna_huggingface_co/load.py\", line 15, in <module>\r\n ds_all = load_dataset(\"bigcode/the-stack-dedup-pjj\", data_dir=\"data/java\",use_auth_token=True, split=\"train\", revision=\"v1.1.a1\")\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1742, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 814, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 905, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 1502, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py\", line 67, in _generate_tables\r\n parquet_file = pq.ParquetFile(f)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/parquet/__init__.py\", line 286, in __init__\r\n self.reader.open(\r\n File \"pyarrow/_parquet.pyx\", line 1227, in pyarrow._parquet.ParquetReader.open\r\n File \"pyarrow/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\n```\r\n\r\nwhen running\r\n```python\r\nds = load_dataset(\"parquet\", data_files=\"https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/blob/v1.1.a1/data/java/data_0000.parquet\", use_auth_token=True)\r\n```\r\nI get 401 error, but that's the case for the python subset too which I can load properly\r\n```\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1719, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1497, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1134, in dataset_module_factory\r\n return PackagedDatasetModuleFactory(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 707, in get_module\r\n data_files = DataFilesDict.from_local_or_remote(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 795, in from_local_or_remote\r\n DataFilesList.from_local_or_remote(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 710, in _get_origin_metadata_locally_or_by_urls\r\n return thread_map(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py\", line 94, in thread_map\r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/contrib/concurrent.py\", line 76, in _executor_map\r\n return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs))\r\n File 
\"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py\", line 1183, in __iter__\r\n for obj in iterable:\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/_base.py\", line 609, in result_iterator\r\n yield fs.pop().result()\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/_base.py\", line 446, in result\r\n return self.__get_result()\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/_base.py\", line 391, in __get_result\r\n raise self._exception\r\n File \"/opt/conda/envs/venv/lib/python3.9/concurrent/futures/thread.py\", line 58, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/data_files.py\", line 701, in _get_single_origin_metadata_locally_or_by_urls\r\n return (request_etag(data_file, use_auth_token=use_auth_token),)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 411, in request_etag\r\n response.raise_for_status()\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/requests/models.py\", line 960, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/blob/v1.1.a1/data/python/data_0000.parquet```",
"Can you check you used the right token ? You shouldn't get a 401 using your token",
"I checked it’s the right token, when loading the full dataset I get the error after data extraction so I can access the files. \r\n```\r\nDownloading and preparing dataset parquet/bigcode--the-stack-dedup-pjj to /home/loubna_huggingface_co/.cache/huggingface/datasets/bigcode___parquet/bigcode--the-stack-dedup-pjj-872ffac7f4bb46ca/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...\r\nDownloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 22.38it/s]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 49.91it/s]\r\nTraceback (most recent call last):\r\n File \"/home/loubna_huggingface_co/load_ds.py\", line 5, in <module>\r\n ds = load_dataset(\"bigcode/the-stack-dedup-pjj\", data_dir=\"data/java\", use_auth_token=True,split=\"train\", revision=\"v1.1.a1\")\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py\", line 1742, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 814, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 905, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py\", line 1502, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py\", line 1195, in __iter__\r\n for obj in iterable:\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py\", line 67, in _generate_tables\r\n parquet_file = pq.ParquetFile(f)\r\n File \"/opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/parquet/__init__.py\", line 286, in __init__\r\n self.reader.open(\r\n File \"pyarrow/_parquet.pyx\", line 1227, in pyarrow._parquet.ParquetReader.open\r\n File \"pyarrow/error.pxi\", line 100, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\n```\r\nCould it be that I'm using a wrong url, I just copied it from the address bar",
"The URL is wrong indeed, the right one is the one with \"resolve\" (the one you get when clicking on \"download\")- otherwise you try to download an html page ;)\r\n```\r\nhttps://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/resolve/v1.1.a1/data/java/data_0000.parquet\r\n```",
"Ah thanks! So I tried it with the first parquet file and it works, is there a way to know which parquet file was causing the issue since there are a lot of shards?",
"I think you have to try them all :/\r\n\r\nAlternatively you can add a try/catch in `parquet.py` in `datasets` to raise the name of the file that fails at doing `parquet_file = pq.ParquetFile(f)` when you run your initial code\r\n```python\r\nload_dataset(\"bigcode/the-stack-dedup-pjj\", data_dir=\"data/java\", split=\"train\", revision=\"v1.1.a1\", use_auth_token=True)\r\n```\r\nbut it will still iterate on all the files until it fails",
"Ok I will do that",
"I did find the file, and I get the same error as before \r\n```\r\nDownloading data files: 100%|███████████████████| 1/1 [00:00<00:00, 8160.12it/s]\r\nExtracting data files: 100%|████████████████████| 1/1 [00:00<00:00, 1447.81it/s]\r\n \r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\nInput In [22], in <cell line: 7>()\r\n 4 data_features = (data[\"train\"].features)\r\n 6 url = \"/home/loubna_huggingface_co/.cache/huggingface/datasets/downloads/93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7\"\r\n----> 7 data = load_dataset(\"parquet\", \r\n 8 data_files=url,\r\n 9 split=\"train\",\r\n 10 features=data_features,\r\n 11 use_auth_token=True)\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/load.py:1742, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1739 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n 1741 # Download and prepare data\r\n-> 1742 builder_instance.download_and_prepare(\r\n 1743 download_config=download_config,\r\n 1744 download_mode=download_mode,\r\n 1745 ignore_verifications=ignore_verifications,\r\n 1746 try_from_hf_gcs=try_from_hf_gcs,\r\n 1747 use_auth_token=use_auth_token,\r\n 1748 )\r\n 1750 # Build dataset for splits\r\n 1751 keep_in_memory = (\r\n 1752 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 1753 )\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py:814, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)\r\n 808 if not downloaded_from_gcs:\r\n 809 prepare_split_kwargs = {\r\n 810 \"file_format\": file_format,\r\n 811 \"max_shard_size\": max_shard_size,\r\n 812 **download_and_prepare_kwargs,\r\n 813 }\r\n--> 814 self._download_and_prepare(\r\n 815 dl_manager=dl_manager,\r\n 816 verify_infos=verify_infos,\r\n 817 **prepare_split_kwargs,\r\n 818 **download_and_prepare_kwargs,\r\n 819 )\r\n 820 # Sync info\r\n 821 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py:905, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 901 split_dict.add(split_generator.split_info)\r\n 903 try:\r\n 904 # Prepare split will record examples associated to the split\r\n--> 905 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 906 except OSError as e:\r\n 907 raise OSError(\r\n 908 \"Cannot find data file. 
\"\r\n 909 + (self.manual_download_instructions or \"\")\r\n 910 + \"\\nOriginal error:\\n\"\r\n 911 + str(e)\r\n 912 ) from None\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/builder.py:1502, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, max_shard_size)\r\n 1500 total_num_examples, total_num_bytes = 0, 0\r\n 1501 try:\r\n-> 1502 for key, table in logging.tqdm(\r\n 1503 generator,\r\n 1504 unit=\" tables\",\r\n 1505 leave=False,\r\n 1506 disable=not logging.is_progress_bar_enabled(),\r\n 1507 ):\r\n 1508 if max_shard_size is not None and writer._num_bytes > max_shard_size:\r\n 1509 num_examples, num_bytes = writer.finalize()\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/tqdm/std.py:1195, in tqdm.__iter__(self)\r\n 1192 time = self._time\r\n 1194 try:\r\n-> 1195 for obj in iterable:\r\n 1196 yield obj\r\n 1197 # Update and possibly print the progressbar.\r\n 1198 # Note: does not call self.update(1) for speed optimisation.\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py:67, in Parquet._generate_tables(self, files)\r\n 65 for file_idx, file in enumerate(itertools.chain.from_iterable(files)):\r\n 66 with open(file, \"rb\") as f:\r\n---> 67 parquet_file = pq.ParquetFile(f)\r\n 68 try:\r\n 69 for batch_idx, record_batch in enumerate(\r\n 70 parquet_file.iter_batches(batch_size=self.config.batch_size, columns=self.config.columns)\r\n 71 ):\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/parquet/__init__.py:286, in ParquetFile.__init__(self, source, metadata, common_metadata, read_dictionary, memory_map, buffer_size, pre_buffer, coerce_int96_timestamp_unit, decryption_properties, thrift_string_size_limit, thrift_container_size_limit)\r\n 280 def __init__(self, source, *, metadata=None, common_metadata=None,\r\n 281 read_dictionary=None, memory_map=False, buffer_size=0,\r\n 282 pre_buffer=False, coerce_int96_timestamp_unit=None,\r\n 283 decryption_properties=None, thrift_string_size_limit=None,\r\n 284 thrift_container_size_limit=None):\r\n 285 self.reader = ParquetReader()\r\n--> 286 self.reader.open(\r\n 287 source, use_memory_map=memory_map,\r\n 288 buffer_size=buffer_size, pre_buffer=pre_buffer,\r\n 289 read_dictionary=read_dictionary, metadata=metadata,\r\n 290 coerce_int96_timestamp_unit=coerce_int96_timestamp_unit,\r\n 291 decryption_properties=decryption_properties,\r\n 292 thrift_string_size_limit=thrift_string_size_limit,\r\n 293 thrift_container_size_limit=thrift_container_size_limit,\r\n 294 )\r\n 295 self.common_metadata = common_metadata\r\n 296 self._nested_paths_by_prefix = self._build_nested_paths()\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/_parquet.pyx:1227, in pyarrow._parquet.ParquetReader.open()\r\n\r\nFile /opt/conda/envs/venv/lib/python3.9/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\n```",
"Can you check the JSON file associated to `/home/loubna_huggingface_co/.cache/huggingface/datasets/downloads/93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7` ? In the JSON file we can know from where it was downloaded\r\n\r\nYou can find it at `/home/loubna_huggingface_co/.cache/huggingface/datasets/downloads/93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7.json`",
"It's this file `https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj/resolve/f48656daa9f3a3607dacf8b57a65810a6a7a7f73/data/java/data_0022.parquet` loading it gives the same error",
"I'm able to load it properly using\r\n```python\r\nds = load_dataset(\"parquet\", data_files=a_parquet_file_url, use_auth_token=token)\r\n```\r\n\r\nMy guess is that your download was corrupted. Please delete `93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7` and `93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7.json` locally and try again",
"That worked, thanks! But I thought if something went wrong with a download `datasets` creates new cache for all the files, that's not the case? (at some point I even changed dataset versions so it was still using that cache?)",
"Cool !\r\n\r\n> But I thought if something went wrong with a download datasets creates new cache for all the files\r\n\r\nWe don't perform integrity verifications if we don't know in advance the hash of the file to download.\r\n\r\n> at some point I even changed dataset versions so it was still using that cache?\r\n\r\n`datasets` caches the files by URL and ETag. If the content of a file changes, then the ETag changes and so it redownloads the file",
"I see, thank you!\r\n"
] | 2022-11-18T14:44:01Z | 2022-11-22T11:18:08Z | 2022-11-22T11:18:08Z | NONE | null | null | null | ### Describe the bug
I have an error when trying to load this [dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj) (it's private but I can add you to the bigcode org). `datasets` can't read one of the parquet files in the Java subset
```python
from datasets import load_dataset
ds = load_dataset("bigcode/the-stack-dedup-pjj", data_dir="data/java", split="train", revision="v1.1.a1", use_auth_token=True)
```
```
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
It seems to be an issue with newer Python versions, because it works in these two environments:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
But not in this:
```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
### Steps to reproduce the bug
Load the dataset in python 3.9.13
### Expected behavior
Load the dataset without the pyarrow error.
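As a workaround found in the discussion above, deleting the corrupted cached download lets the load succeed (the hash below is the one from this thread; treat it as an example):
```python
import os

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets/downloads")
corrupted = "93431bc4380de07de8b0ab533666cb5a6120cbe266779e0a63c86bf7717475d7"
for path in (os.path.join(cache_dir, corrupted), os.path.join(cache_dir, corrupted + ".json")):
    if os.path.exists(path):
        os.remove(path)  # datasets re-downloads the file on the next load_dataset call
```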
### Environment info
```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5264/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5264/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5263/comments | https://api.github.com/repos/huggingface/datasets/issues/5263/events | https://github.com/huggingface/datasets/issues/5263 | 1,455,252,626 | I_kwDODunzps5WvWSS | 5,263 | Save a dataset in a determined number of shards | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2022-11-18T14:43:54Z | 2022-11-18T14:55:26Z | null | MEMBER | null | null | null | This is useful to distribute the shards to training nodes.
This can be implemented in `save_to_disk` and can also leverage multiprocessing to speed up the process | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5263/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5262/comments | https://api.github.com/repos/huggingface/datasets/issues/5262/events | https://github.com/huggingface/datasets/issues/5262 | 1,455,171,100 | I_kwDODunzps5WvCYc | 5,262 | AttributeError: 'Value' object has no attribute 'names' | {
"avatar_url": "https://avatars.githubusercontent.com/u/102913847?v=4",
"events_url": "https://api.github.com/users/emnaboughariou/events{/privacy}",
"followers_url": "https://api.github.com/users/emnaboughariou/followers",
"following_url": "https://api.github.com/users/emnaboughariou/following{/other_user}",
"gists_url": "https://api.github.com/users/emnaboughariou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emnaboughariou",
"id": 102913847,
"login": "emnaboughariou",
"node_id": "U_kgDOBiJXNw",
"organizations_url": "https://api.github.com/users/emnaboughariou/orgs",
"received_events_url": "https://api.github.com/users/emnaboughariou/received_events",
"repos_url": "https://api.github.com/users/emnaboughariou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emnaboughariou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emnaboughariou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emnaboughariou"
} | [] | closed | false | null | [] | null | [
"Hi ! It looks like your \"isDif\" column is a Sequence of Value(\"string\"), not a Sequence of ClassLabel.\r\n\r\nYou can convert your Value(\"string\") feature type to a ClassLabel feature type this way:\r\n```python\r\nfrom datasets import ClassLabel, Sequence\r\n\r\n# provide the label_names yourself\r\nlabel_names = [...]\r\n# OR get them from the dataset\r\nlabel_names = sorted(set(label for labels in raw_datasets[\"train\"][\"isDif\"] for label in labels))\r\n\r\n# Cast to ClassLabel\r\nraw_datasets = raw_datasets.cast_column(\"isDif\", Sequence(ClassLabel(names=label_names)))\r\n```\r\n",
"thank you \r\nit works 💯 "
] | 2022-11-18T13:58:42Z | 2022-11-22T10:09:24Z | 2022-11-22T10:09:23Z | NONE | null | null | null | Hello
I'm trying to build a model for custom token classification.
I already followed the token classification course on Hugging Face.
While adapting the code to my work, this message occurs:
'Value' object has no attribute 'names'
Here's my code:
`raw_datasets`
generates
DatasetDict({
train: Dataset({
features: ['isDisf', 'pos', 'tokens', 'id'],
num_rows: 14
})
})
`raw_datasets["train"][3]["isDisf"]`
generates
['B_RM', 'I_RM', 'I_RM', 'B_RP', 'I_RP', 'O', 'O']
`dis_feature = raw_datasets["train"].features["isDisf"]
dis_feature`
generates
Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)
and
`label_names = dis_feature.feature.names
label_names`
generates
AttributeError Traceback (most recent call last)
[<ipython-input-28-972fd54a869a>](https://localhost:8080/#) in <module>
----> 1 label_names = dis_feature.feature.names
2 label_names
AttributeError: 'Value' object has no attribute 'names'
Thank you for your help | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5262/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5262/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5261/comments | https://api.github.com/repos/huggingface/datasets/issues/5261/events | https://github.com/huggingface/datasets/issues/5261 | 1,454,647,861 | I_kwDODunzps5WtCo1 | 5,261 | Add PubTables-1M | {
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NielsRogge",
"id": 48327001,
"login": "NielsRogge",
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NielsRogge"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | [
"cc @albertvillanova the author would like to add this dataset to the hub: https://github.com/microsoft/table-transformer/issues/68#issuecomment-1319114621. Could you help him out?"
] | 2022-11-18T07:56:36Z | 2022-11-18T08:02:18Z | null | CONTRIBUTOR | null | null | null | ### Name
PubTables-1M
### Paper
https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html
### Data
https://github.com/microsoft/table-transformer
### Motivation
Table Transformer is now available in 🤗 Transformers, and it was trained on PubTables-1M. It's a large dataset for table extraction and structure recognition in unstructured documents.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5261/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5261/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5260/comments | https://api.github.com/repos/huggingface/datasets/issues/5260/events | https://github.com/huggingface/datasets/issues/5260 | 1,453,921,697 | I_kwDODunzps5WqRWh | 5,260 | consumer-finance-complaints dataset not loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/8098496?v=4",
"events_url": "https://api.github.com/users/adiprasad/events{/privacy}",
"followers_url": "https://api.github.com/users/adiprasad/followers",
"following_url": "https://api.github.com/users/adiprasad/following{/other_user}",
"gists_url": "https://api.github.com/users/adiprasad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adiprasad",
"id": 8098496,
"login": "adiprasad",
"node_id": "MDQ6VXNlcjgwOTg0OTY=",
"organizations_url": "https://api.github.com/users/adiprasad/orgs",
"received_events_url": "https://api.github.com/users/adiprasad/received_events",
"repos_url": "https://api.github.com/users/adiprasad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adiprasad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adiprasad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adiprasad"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @adiprasad.\r\n\r\nWe are having a look at it.",
"I have opened an issue in that dataset Community tab on the Hub: https://huggingface.co/datasets/consumer-finance-complaints/discussions/1\r\n\r\nPlease note that in the meantime, you can load the dataset by passing `ignore_verifications=True`:\r\n```python\r\n>>> ds = load_dataset(\"consumer-finance-complaints\", ignore_verifications=True)\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['Date Received', 'Product', 'Sub Product', 'Issue', 'Sub Issue', 'Complaint Text', 'Company Public Response', 'Company', 'State', 'Zip Code', 'Tags', 'Consumer Consent Provided', 'Submitted via', 'Date Sent To Company', 'Company Response To Consumer', 'Timely Response', 'Consumer Disputed', 'Complaint ID'],\r\n num_rows: 3079747\r\n })\r\n})\r\n```",
"PR fixing this issue: https://huggingface.co/datasets/consumer-finance-complaints/discussions/2"
] | 2022-11-17T20:10:26Z | 2022-11-18T10:16:53Z | null | NONE | null | null | null | ### Describe the bug
Error during dataset loading
### Steps to reproduce the bug
```
>>> import datasets
>>> cf_raw = datasets.load_dataset("consumer-finance-complaints")
Downloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8.42k/8.42k [00:00<00:00, 3.33MB/s]
Downloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.60k/5.60k [00:00<00:00, 2.90MB/s]
Downloading readme: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16.6k/16.6k [00:00<00:00, 510kB/s]
Downloading and preparing dataset consumer-finance-complaints/default to /root/.cache/huggingface/datasets/consumer-finance-complaints/default/0.0.0/30e483d37fb4b25bb98cad1bfd2dc48f6ed6d1f3371eb4568c625a61d1a79b69...
Downloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 511M/511M [00:04<00:00, 103MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/load.py", line 1741, in load_dataset
builder_instance.download_and_prepare(
File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 822, in download_and_prepare
self._download_and_prepare(
File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 1555, in _download_and_prepare
super()._download_and_prepare(
File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/builder.py", line 931, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/skunk-pod-storage-lee-2emartie-40ibm-2ecom-pvc/anaconda3/envs/datasets/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1605177353, num_examples=2455765, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=2043641693, num_examples=3079747, shard_lengths=[721000, 656000, 788000, 846000, 68747], dataset_name='consumer-finance-complaints')}]
```
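(For context: the check that fails at the bottom of this traceback is a plain comparison between the split sizes recorded in the dataset's metadata and the ones just regenerated. A simplified, illustrative sketch of that comparison, with the numbers copied from the error above; the real logic lives in `datasets.utils.info_utils.verify_splits`:)
```python
# Illustrative reduction of the failing verification step; not the library's
# actual code, just the shape of the comparison it performs.
expected = {"train": 2455765}  # num_examples recorded in the dataset metadata
recorded = {"train": 3079747}  # num_examples produced by this download

bad_splits = [
    {"split": name, "expected": expected[name], "recorded": recorded[name]}
    for name in expected
    if expected[name] != recorded[name]
]
if bad_splits:
    raise RuntimeError(f"NonMatchingSplitsSizesError: {bad_splits}")
```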
### Expected behavior
dataset should load
### Environment info
>>> datasets.__version__
'2.7.0'
Python 3.8.10
"Ubuntu 20.04.4 LTS" | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5260/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5260/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5259/comments | https://api.github.com/repos/huggingface/datasets/issues/5259/events | https://github.com/huggingface/datasets/issues/5259 | 1,453,555,923 | I_kwDODunzps5Wo4DT | 5,259 | datasets 2.7 introduces sharding error | {
"avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4",
"events_url": "https://api.github.com/users/DCNemesis/events{/privacy}",
"followers_url": "https://api.github.com/users/DCNemesis/followers",
"following_url": "https://api.github.com/users/DCNemesis/following{/other_user}",
"gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DCNemesis",
"id": 3616964,
"login": "DCNemesis",
"node_id": "MDQ6VXNlcjM2MTY5NjQ=",
"organizations_url": "https://api.github.com/users/DCNemesis/orgs",
"received_events_url": "https://api.github.com/users/DCNemesis/received_events",
"repos_url": "https://api.github.com/users/DCNemesis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DCNemesis"
} | [] | closed | false | null | [] | null | [
"I notice a comment in the code says:\r\n`Having lists of different sizes makes sharding ambigious, raise an error in this case until we decide how to define sharding without ambiguity for users` \r\n \r\n ... which suggests this update was pushed knowing that it might break some things. But, it didn't seem to have a useful error message of an argument that could be passed to avoid the error.",
"Sorry for the inconvenience, I opened a PR in your repo to fix this: https://huggingface.co/datasets/sil-ai/bloom-speech/discussions/2\r\n\r\nBasically we've always considered lists in `gen_kwargs` to be a shard list that we can split and pass into different workers to generate the dataset (e.g. if you pass `num_proc=` in `load_dataset()` to generate the dataset in parallel), but it was documented only recently",
"@lhoestq Thanks for the help. It looks like that took care of it."
] | 2022-11-17T15:36:52Z | 2022-11-18T12:52:05Z | 2022-11-18T12:52:05Z | NONE | null | null | null | ### Describe the bug
The dataset fails to load with a runtime error:
`RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize:
- key audio_files has length 46
- key data has length 0
To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length.`
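As an illustration of the convention the error message asks for, here is a minimal, hypothetical builder (not the actual `bloom-speech` script): only the shardable data source is a list; fixed auxiliary inputs go in a tuple so they are never split across workers.
```python
import datasets

class ExampleBuilder(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"path": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        audio_files = [f"audio_{i}.tar" for i in range(46)]  # list: split across workers
        extra = ("metadata.json",)  # tuple: passed as-is, never sharded
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"audio_files": audio_files, "data": extra},
            )
        ]

    def _generate_examples(self, audio_files, data):
        for idx, path in enumerate(audio_files):
            yield idx, {"path": path}
```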
### Steps to reproduce the bug
With datasets[audio] 2.7 loaded, and logged into hugging face,
`data = datasets.load_dataset('sil-ai/bloom-speech', 'bis', use_auth_token=True)`
creates the error.
Full stack trace:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-7-8cb9ca0f79f0>](https://localhost:8080/#) in <module>
----> 1 data = datasets.load_dataset('sil-ai/bloom-speech', 'bis', use_auth_token=True)
5 frames
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1745 try_from_hf_gcs=try_from_hf_gcs,
1746 use_auth_token=use_auth_token,
-> 1747 num_proc=num_proc,
1748 )
1749
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
824 verify_infos=verify_infos,
825 **prepare_split_kwargs,
--> 826 **download_and_prepare_kwargs,
827 )
828 # Sync info
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs)
1554 def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs):
1555 super()._download_and_prepare(
-> 1556 dl_manager, verify_infos, check_duplicate_keys=verify_infos, **prepare_splits_kwargs
1557 )
1558
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
911 try:
912 # Prepare split will record examples associated to the split
--> 913 self._prepare_split(split_generator, **prepare_split_kwargs)
914 except OSError as e:
915 raise OSError(
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
1362 fpath = path_join(self._output_dir, fname)
1363
-> 1364 num_input_shards = _number_of_shards_in_gen_kwargs(split_generator.gen_kwargs)
1365 if num_input_shards <= 1 and num_proc is not None:
1366 logger.warning(
[/usr/local/lib/python3.7/dist-packages/datasets/utils/sharding.py](https://localhost:8080/#) in _number_of_shards_in_gen_kwargs(gen_kwargs)
16 + "\n".join(f"\t- key {key} has length {length}" for key, length in lists_lengths.items())
17 + "\nTo fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, "
---> 18 + "and use tuples otherwise. In the end there should only be one single list, or several lists with the same length."
19 )
20 )
RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize:
- key audio_files has length 46
- key data has length 0
To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length.
```
### Expected behavior
the dataset loads in datasets version 2.6.1 and should load with datasets 2.7
### Environment info
- `datasets` version: 2.7.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- PyArrow version: 6.0.1
- Pandas version: 1.3.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5259/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5259/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5258/comments | https://api.github.com/repos/huggingface/datasets/issues/5258/events | https://github.com/huggingface/datasets/issues/5258 | 1,453,516,636 | I_kwDODunzps5Woudc | 5,258 | Restore order of split names in dataset_info for canonical datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"The bulk edit is running...\r\n\r\nSee for example: \r\n- A single config: https://huggingface.co/datasets/acronym_identification/discussions/2\r\n- Multiple configs: https://huggingface.co/datasets/babi_qa/discussions/1",
"TODO:\r\n- [x] \"chr_en\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n - Fixing PR: https://huggingface.co/datasets/chr_en/discussions/1 \r\n- [x] \"conll2000\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"crime_and_punish\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"dart\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [x] \"iwslt2017\" has no metadata JSON file, but it has \"dataset_info\" YAML tag in its card\r\n- [ ] \"mc4\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n- [ ] \"the_pile\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card\r\n- [ ] \"timit_asr\" has no metadata JSON file, nor \"dataset_info\" YAML tag in its card",
"The bulk edit is finished."
] | 2022-11-17T15:13:15Z | 2022-11-19T06:51:38Z | 2022-11-19T06:51:37Z | MEMBER | null | null | null | After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example:
- https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c
Note that this order is the one appearing in the dataset preview.
I'm making a bulk edit to align the order of the splits appearing in the metadata info with the order appearing in the loading script. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5258/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5258/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5257/comments | https://api.github.com/repos/huggingface/datasets/issues/5257/events | https://github.com/huggingface/datasets/pull/5257 | 1,452,656,891 | PR_kwDODunzps5DFENm | 5,257 | remove an unused statement | {
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WrRan",
"id": 7569098,
"login": "WrRan",
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"repos_url": "https://api.github.com/users/WrRan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WrRan"
} | [] | closed | false | null | [] | null | [] | 2022-11-17T04:00:50Z | 2022-11-18T11:04:08Z | 2022-11-18T11:04:08Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5257.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5257",
"merged_at": "2022-11-18T11:04:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5257.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5257"
} | remove the unused statement: `input_pairs = list(zip())` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5257/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5257/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5256/comments | https://api.github.com/repos/huggingface/datasets/issues/5256/events | https://github.com/huggingface/datasets/pull/5256 | 1,452,652,586 | PR_kwDODunzps5DFDY0 | 5,256 | fix wrong print | {
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WrRan",
"id": 7569098,
"login": "WrRan",
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"repos_url": "https://api.github.com/users/WrRan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WrRan"
} | [] | closed | false | null | [] | null | [] | 2022-11-17T03:54:26Z | 2022-11-18T11:05:32Z | 2022-11-18T11:05:32Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5256.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5256",
"merged_at": "2022-11-18T11:05:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5256.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5256"
} | print `encoded_dataset.column_names` not `dataset.column_names` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5256/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5256/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5255/comments | https://api.github.com/repos/huggingface/datasets/issues/5255/events | https://github.com/huggingface/datasets/issues/5255 | 1,452,631,517 | I_kwDODunzps5WlWXd | 5,255 | Add a Depth Estimation dataset - DIODE / NYUDepth / KITTI | {
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul"
}
] | null | [
"Also cc @mariosasko and @lhoestq ",
"Cool ! Let us know if you have questions or if we can help :)\r\n\r\nI guess we'll also have to create the NYU CS Department on the Hub ?",
"> I guess we'll also have to create the NYU CS Department on the Hub ?\r\n\r\nYes, you're right! Let me add it to my profile first, and then we can transfer. Meanwhile, if it's recommended to loop the dataset author in here, let me know. \r\n\r\nAlso, the NYU Depth dataset seems big. Any example scripts for creating image datasets that I could refer? ",
"You can check the imagenet-1k one.\r\n\r\nPS: If the licenses allows it, it'b be nice to host the dataset as sharded TAR archives (like imagenet-1k) instead of the ZIP format they use:\r\n- it will make streaming much faster\r\n- ZIP compression is not well suited for images\r\n- it will allow parallel processing of the dataset (you can pass a subset of shards to each worker)\r\n\r\n> if it's recommended to loop the dataset author in here, let me know.\r\n\r\nIt's recommended indeed, you can send them an email once you have the dataset ready and invite them to the org on the Hub",
"> You can check the imagenet-1k one.\r\n\r\nWhere can I find the script? Are you referring to https://huggingface.co/docs/datasets/image_process ? Or is there anything more specific? ",
"You can find it here: https://huggingface.co/datasets/imagenet-1k/blob/main/imagenet-1k.py",
"Update: started working on it here: https://huggingface.co/datasets/sayakpaul/nyu_depth_v2. \r\n\r\nI am facing an issue and I have detailed it here: https://huggingface.co/datasets/sayakpaul/nyu_depth_v2/discussions/1\r\n\r\nEdit: The issue is gone. \r\n\r\nHowever, since the dataset is distributed as a single TAR archive (following the [URL used in TensorFlow Datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py)) the loading is taking longer. How would suggest to shard the single TAR archive? \r\n\r\n@lhoestq \r\n\r\n",
"A Colab Notebook demonstrating the dataset loading part: \r\n\r\nhttps://colab.research.google.com/gist/sayakpaul/aa0958c8d4ad8518d52a78f28044d871/scratchpad.ipynb\r\n\r\n@osanseviero @lhoestq \r\n\r\nI will work on a notebook to work with the dataset including data visualization.",
"@osanseviero @lhoestq things seem to work fine with the current version of the dataset [here](https://huggingface.co/datasets/sayakpaul/nyu_depth_v2). Here's a notebook I developed to help with visualization: https://colab.research.google.com/drive/1K3ZU8XUPRDOYD38MQS9nreQXJYitlKSW?usp=sharing. \r\n\r\n@lhoestq I need your help with the following:\r\n\r\n> However, since the dataset is distributed as a single TAR archive (following the [URL used in TensorFlow Datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py)) the loading is taking longer. How would suggest to shard the single TAR archive?\r\n\r\n@osanseviero @lhoestq question for you:\r\n\r\nWhere should we host the dataset? I think hosting it under hf.co/datasets (that is HF is the org) is fine as we have ImageNet-1k hosted similarly. We could then reach out to Diana Wofk (author of [Fast Depth](https://github.com/dwofk/fast-depth) and the owner of the repo on which TFDS NYU Depth V2 is based) for a review. WDYT? ",
"> However, since the dataset is distributed as a single TAR archive (following the [URL used in TensorFlow Datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py)) the loading is taking longer. How would suggest to shard the single TAR archive?\r\n\r\nFirst you can separate the train data and the validation data.\r\n\r\nThen since the dataset is quite big, you can even shard the train split and the validation split in multiple TAR archives. Something around 16 archives for train and 4 for validation would be fine for example.\r\n\r\nAlso no need to gzip the TAR archives, the images are already compressed in png or jpeg.",
"> Then since the dataset is quite big, you can even shard the train split and the validation split in multiple TAR archives. Something around 16 archives for train and 4 for validation would be fine for example.\r\n\r\nYes, I got you. But this process seems to be manual and should be tailored for the given dataset. Do you have any script that you used to create the ImageNet-1k shards? \r\n\r\n> Also no need to gzip the TAR archives, the images are already compressed in png or jpeg.\r\n\r\nI was not going to do that. Not sure what brought it up. ",
"> Yes, I got you. But this process seems to be manual and should be tailored for the given dataset. Do you have any script that you used to create the ImageNet-1k shards?\r\n\r\nI don't, but I agree it'd be nice to have a script for that !\r\n\r\n> I was not going to do that. Not sure what brought it up.\r\n\r\nThe original dataset is gzipped for some reason",
"Oh, I am using this URL for the download: https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py#L24. ",
"> Where should we host the dataset? I think hosting it under hf.co/datasets (that is HF is the org) is fine as we have ImageNet-1k hosted similarly.\r\n\r\nMaybe you can create an org for NYU Courant (this is the institute of the lab of the main author of the dataset if I'm not mistaken), and invite the authors to join.\r\n\r\nWe don't add datasets without namespace anymore",
"Updates: https://huggingface.co/datasets/sayakpaul/nyu_depth_v2/discussions/5\r\n\r\nThe entire process (preparing multiple archives, preparing data loading script, etc.) was fun and engaging, thanks to the documentation. I believe we could work on a small blog post that would work as a reference for the future contributors following this path. What say? \r\n\r\nCc: @lhoestq @osanseviero ",
"> I believe we could work on a small blog post that would work as a reference for the future contributors following this path. What say?\r\n\r\n@polinaeterna already mentioned it would be nice to present this process for audio (it's exactly the same), I believe it can be useful to many people",
"Cool. Let's work on that after the NYU Depth Dataset is fully in on Hub (under the appropriate org). 🤗"
] | 2022-11-17T03:22:22Z | 2022-12-02T16:05:30Z | null | CONTRIBUTOR | null | null | null | ### Name
NYUDepth
### Paper
http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf
### Data
https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html
### Motivation
Depth estimation is an important problem in computer vision. We have a couple of depth estimation models on the Hub as well:
* [GLPN](https://huggingface.co/docs/transformers/model_doc/glpn)
* [DPT](https://huggingface.co/docs/transformers/model_doc/dpt)
Would be nice to have a dataset for depth estimation. These datasets usually have three things: input image, depth map image, and depth mask (validity mask to indicate if a reading for a pixel is valid or not). Since we already have [semantic segmentation datasets on the Hub](https://huggingface.co/datasets?task_categories=task_categories:image-segmentation&sort=downloads), I don't think we need any extended utilities to support this addition.
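As a rough sketch of what that could look like in `datasets` (the field names here are assumptions, not a proposed schema):
```python
from datasets import Features, Image

# Hypothetical feature layout for a depth-estimation dataset: the depth map
# and the validity mask can be stored as images, just like the input.
features = Features(
    {
        "image": Image(),       # input RGB image
        "depth_map": Image(),   # per-pixel depth readings
        "depth_mask": Image(),  # marks which pixel readings are valid
    }
)
```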
Having this dataset would also allow us to author data preprocessing guides for depth estimation, much like the ones we have for other tasks ([example](https://huggingface.co/docs/datasets/image_classification)).
Ccing @osanseviero @nateraw @NielsRogge
Happy to work on adding it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5255/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5255/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5254/comments | https://api.github.com/repos/huggingface/datasets/issues/5254/events | https://github.com/huggingface/datasets/pull/5254 | 1,452,600,088 | PR_kwDODunzps5DE47u | 5,254 | typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WrRan",
"id": 7569098,
"login": "WrRan",
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"repos_url": "https://api.github.com/users/WrRan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WrRan"
} | [] | closed | false | null | [] | null | [] | 2022-11-17T02:39:57Z | 2022-11-18T10:53:45Z | 2022-11-18T10:53:45Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5254.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5254",
"merged_at": "2022-11-18T10:53:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5254.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5254"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5254/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5253/comments | https://api.github.com/repos/huggingface/datasets/issues/5253/events | https://github.com/huggingface/datasets/pull/5253 | 1,452,588,206 | PR_kwDODunzps5DE2io | 5,253 | typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WrRan",
"id": 7569098,
"login": "WrRan",
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"repos_url": "https://api.github.com/users/WrRan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WrRan"
} | [] | closed | false | null | [] | null | [] | 2022-11-17T02:22:58Z | 2022-11-18T10:53:11Z | 2022-11-18T10:53:10Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5253",
"merged_at": "2022-11-18T10:53:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5253"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5253/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5252/comments | https://api.github.com/repos/huggingface/datasets/issues/5252/events | https://github.com/huggingface/datasets/pull/5252 | 1,451,765,838 | PR_kwDODunzps5DCI1U | 5,252 | Support for decoding Image/Audio types in map when format type is not default one | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5252). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5252). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-16T15:02:13Z | 2022-12-02T13:52:50Z | null | CONTRIBUTOR | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5252.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5252",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5252.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5252"
} | Add support for decoding (lazily) the `Image`/`Audio` types in `map` for the formats (Numpy, TF, Jax, PyTorch) other than the default one (Python).
Additional improvements:
* make `Dataset`'s "iter" API cleaner by removing `_iter` and replacing `_iter_batches` with `iter(batch_size)` (also implemented for `IterableDataset`; see the sketch after this list)
* iterate over arrow tables in `map` to avoid `_getitem` calls, which are much slower than `__iter__`/`iter(batch_size)` when the `format_type` is not Python
* fix `_iter_batches` (now named `iter`) when `drop_last_batch=True` and `pyarrow<=8.0.0` is installed
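A short usage sketch of the iteration API described above, assuming the signature lands exactly as proposed in this PR:
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))}).with_format("np")

# `iter(batch_size)` is the public replacement for the private `_iter_batches`;
# with this PR, Image/Audio columns also decode under non-default formats.
for batch in ds.iter(batch_size=4):
    print(batch["x"])
```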
TODO:
* [x] update the `iter` benchmark in the docs (the `BeamBuilder` cannot load the preprocessed datasets from our bucket, so wait for this to be fixed (cc @lhoestq)) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5252/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5251 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5251/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5251/comments | https://api.github.com/repos/huggingface/datasets/issues/5251/events | https://github.com/huggingface/datasets/issues/5251 | 1,451,761,321 | I_kwDODunzps5WiB6p | 5,251 | Docs are not generated after latest release | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | null | [] | null | [
"After a discussion with @mishig25:\r\n- He said that this action should be triggered if we call our release branch according to the regex `v*-release`, as transformers does\r\n- I said that our procedure is different: our release branch is *temporary* and it is deleted just after the release PR is merged to main\r\n - Indeed the release tag is not yet created when we make the release PR (not event when this is merged to main), but when we make the Release itself.\r\n\r\nI was thinking that maybe we could change the triggering event: use `release` instead of `push`.\r\n\r\nWhat do you think, @huggingface/datasets?",
"Why is it an issue if our branch is temporary ?",
"He says not; but the branch has no tag yet; does the doc building require the tag? Or just the version number in `__init__.py` or setup.py?",
"It uses `module.__version__` (i.e. the one defined in `__init__.py`) - no need to have a tag\r\n\r\nhttps://github.com/huggingface/doc-builder/blob/81575cf081964c30ea5fd39450f4820db963f18e/src/doc_builder/commands/build.py#L69",
"Thanks, @lhoestq.\r\n\r\n@mishig25 has manually forced the generation of the docs, that are live for 2.7.0 version: https://huggingface.co/docs/datasets/v2.7.0/en/index ",
"Cool ! this can be closed then ?",
"I was waiting for #5250 to be merged to close this.",
"just to confirm, is there anything I need to do from my side ? Or is everything good here ?"
] | 2022-11-16T14:59:31Z | 2022-11-22T16:27:50Z | 2022-11-22T16:27:50Z | MEMBER | null | null | null | After the latest `datasets` release version 2.7.0, the docs were not generated.
As we have changed the release procedure (so that now we do not push directly to main branch), maybe we should also change the corresponding GitHub action:
https://github.com/huggingface/datasets/blob/edf1902f954c5568daadebcd8754bdad44b02a85/.github/workflows/build_documentation.yml#L3-L8
Related to:
- #5250
CC: @mishig25 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5251/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5251/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5250/comments | https://api.github.com/repos/huggingface/datasets/issues/5250/events | https://github.com/huggingface/datasets/pull/5250 | 1,451,720,030 | PR_kwDODunzps5DB-1y | 5,250 | Change release procedure to use only pull requests | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.",
"Little recap:\r\n- The release-conda GH action was properly triggered by push-tag event: therefore I guess this event is also created when we publish a release and create a tag within it (as it is the case in the new procedure)\r\n - However, the package was only uploaded to huggingface channel and not to conda-forge channel\r\n - [x] Why? Need to address this.\r\n - Reply by @lhoestq: https://github.com/huggingface/datasets/pull/5250#discussion_r1025047531\r\n - We only maintain the huggingface channel\r\n - The conda-forge channel is maintained by the community; the 2.7.0 has been finally added as well to this channel \r\n- The generate-documentation GH action will be triggered by the push-to-branch event if we align the name of the release branch with the expected regex `v*-release`\r\n - [x] The naming has been aligned in the new procedure\r\n - [ ] Question: why do we have different triggering events for generate-doc and release-conda? Maybe we could set the same for both: either push-tag (when publishing the release), or push-to-branch\r\n - I think it will be better to use the push-tag event because in the new release procedure this happens later (when we publish the release), once we have already tested that everything works using the test-PyPI; on the contrary, the push-to-branch event happens before, even before opening the release PR: we could see afterwards that there is an issue, and cancel the Pull Request, but the docs and conda-package will already be published.\r\n- For the naming of the dev-version branch/PR, instead of having a complicated version naming, I'm proposing:\r\n - Using always the same branch name `dev-version`\r\n - Just include a step to delete this branch locally if it exists: `git branch -D dev-version`\r\n - The remote version will not exist because it is deleted once the PR is merged\r\n - This approach is approved by @lhoestq: https://github.com/huggingface/datasets/pull/5250#discussion_r1025048300",
"Just one question to be addressed: why do we have different triggering events for generate-doc and release-conda? Maybe we could set the same for both: either push-tag (when publishing the release), or push-to-branch\r\n\r\nI think it will be better to use the push-tag event because in the new release procedure this happens later (when we publish the release), once we have already tested that everything works using the test-PyPI; on the contrary, the push-to-branch event happens before, even before opening the release PR: we could see afterwards that there is an issue, and cancel the Pull Request, but the docs and conda-package will already be published.\r\n\r\nWe could even use the release-published event instead: [8694901](https://github.com/huggingface/datasets/pull/5250/commits/86949013c9dc59a07b55fad5b78104b8a03f60cd)\r\n",
"@lhoestq now that we have push-tag event for both build_documentation and release-conda, we have no constraint on the naming of the release branch:\r\n- we could name it simpler: maybe as you suggested above: https://github.com/huggingface/datasets/pull/5250#discussion_r1024119018\r\n `release-VERSION` instead of `vVERSION-release` (we do not use the prefix \"v\" anywhere in our repo)"
] | 2022-11-16T14:35:32Z | 2022-11-22T16:30:58Z | 2022-11-22T16:27:48Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5250.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5250",
"merged_at": "2022-11-22T16:27:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5250.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5250"
} | This PR changes the release procedure so that:
- it only makes changes to the main branch via pull requests
- it is no longer necessary to commit/push directly to the main branch
Close #5251.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5250/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5250/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5249/comments | https://api.github.com/repos/huggingface/datasets/issues/5249/events | https://github.com/huggingface/datasets/issues/5249 | 1,451,692,247 | I_kwDODunzps5WhxDX | 5,249 | Protect the main branch from inadvertent direct pushes | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-11-16T14:19:03Z | 2022-11-16T14:36:14Z | null | MEMBER | null | null | null | We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push accidentally directly to the main branch.
See context here:
- d7c942228b8dcf4de64b00a3053dce59b335f618
To do:
- [x] Protect main branch
- Settings > Branches > Branch protection rules > main > Edit
- [x] Check: Do not allow bypassing the above settings
- The above settings will apply to administrators and custom roles with the "bypass branch protections" permission.
- [x] Additionally, uncheck: Require approvals [under "Require a pull request before merging", which was already checked]
- Before, we could exceptionally merge a non-approved PR, using Administrator bypass
- Now that Administrator bypass is no longer possible, we would always need an approval to be able to merge; and pull request authors cannot approve their own pull requests. This could be an inconvenience in some exceptional circumstances when an urgent fix is needed
- Nevertheless, although it is no longer enforced, it is strongly recommended to merge PRs only if they have at least one approval
- [ ] #5250
- So that direct pushes to main branch are no longer necessary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5249/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5248/comments | https://api.github.com/repos/huggingface/datasets/issues/5248/events | https://github.com/huggingface/datasets/pull/5248 | 1,451,338,676 | PR_kwDODunzps5DAqwt | 5,248 | Complete doc migration | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5248). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the fix @mishig25.\r\n\r\nI guess this is the reason why the docs are not generated for the latest release version 2.7.0? https://huggingface.co/docs/datasets/index "
] | 2022-11-16T10:41:04Z | 2022-11-16T15:06:50Z | 2022-11-16T10:41:10Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5248.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5248",
"merged_at": "2022-11-16T10:41:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5248.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5248"
} | Reverts huggingface/datasets#5214
Everything is handled on the doc-builder side now 😊 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5248/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5248/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5247/comments | https://api.github.com/repos/huggingface/datasets/issues/5247/events | https://github.com/huggingface/datasets/pull/5247 | 1,451,297,749 | PR_kwDODunzps5DAhto | 5,247 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5247). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-16T10:17:31Z | 2022-11-16T10:22:20Z | 2022-11-16T10:17:50Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5247",
"merged_at": "2022-11-16T10:17:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5247"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5247/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5246/comments | https://api.github.com/repos/huggingface/datasets/issues/5246/events | https://github.com/huggingface/datasets/pull/5246 | 1,451,226,055 | PR_kwDODunzps5DASLI | 5,246 | Release: 2.7.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-16T09:32:44Z | 2022-11-16T09:39:42Z | 2022-11-16T09:37:03Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5246.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5246",
"merged_at": "2022-11-16T09:37:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5246.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5246"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5246/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5246/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5245/comments | https://api.github.com/repos/huggingface/datasets/issues/5245/events | https://github.com/huggingface/datasets/issues/5245 | 1,450,376,433 | I_kwDODunzps5Wcvzx | 5,245 | Unable to rename columns in streaming dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4",
"events_url": "https://api.github.com/users/peregilk/events{/privacy}",
"followers_url": "https://api.github.com/users/peregilk/followers",
"following_url": "https://api.github.com/users/peregilk/following{/other_user}",
"gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/peregilk",
"id": 9079808,
"login": "peregilk",
"node_id": "MDQ6VXNlcjkwNzk4MDg=",
"organizations_url": "https://api.github.com/users/peregilk/orgs",
"received_events_url": "https://api.github.com/users/peregilk/received_events",
"repos_url": "https://api.github.com/users/peregilk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/peregilk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/peregilk"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
}
] | null | [
"Hi @peregilk this bug is directly related to https://github.com/huggingface/datasets/issues/3888, and still not fixed... But I'll try to have a look!",
"Thanks @alvarobartt. It is great if you are able to fix it, but when reading the explanation it seems like it is possible to work around it.\r\n\r\nWe also tried keeping the 'info.features' and then adding a modified version back after the remove/rename. Unforutunately that leads to a dataset that is not possible to iterate over.",
"So if you iterate over the `IterableDataset` as `next(iter(ds))` and then run `rename_columns` when checking that data it will work, but in the end, it's just renaming the column one example/batch at a time, not renaming the column name for all the entries in the dataset, which is the ideal.",
"@alvarobartt Thanks. My use case was that I wanted to do multiple things, ie removing all unnecessary columns, renaming some valid columns, and then using cast (in my case checking if the audio is not 16K and casting it). It is just convenient to look into the info.features between each of these operations. Alternatively, I will just plan ahead...;) To me it seems like all the operations are working.\r\n\r\nThanks for the advice. It was very useful.",
"If we know the features before renaming, then we know the features after renaming, so we can pass the new features to the returned dataset in `rename_column` indeed ! If anyone is interested in contributing, feel free to open a PR and I'd be happy to help / give some pointers :)",
"Sure @lhoestq thanks! I’ll try to work on that",
"#self-assign"
] | 2022-11-15T21:04:41Z | 2022-11-28T12:53:24Z | 2022-11-28T12:53:24Z | NONE | null | null | null | ### Describe the bug
Trying to rename a column in a streaming dataset destroys the features object.
### Steps to reproduce the bug
The following code illustrates the error:
```
from datasets import load_dataset
dataset = load_dataset('mc4', 'en', streaming=True, split='train')
dataset.info.features
# {'text': Value(dtype='string', id=None), 'timestamp': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)}
dataset = dataset.rename_column("text", "content")
dataset.info.features
# This returned object is now None!
```
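A possible direction for a fix, per the maintainer suggestion in the comments above: the features are known before the rename, so the renamed features can be attached to the dataset that `rename_column` returns instead of being dropped to `None`. A minimal sketch (simplified, not the actual `datasets` source):

```python
from datasets import Features

def rename_features(features: Features, original: str, new: str) -> Features:
    # rebuild the mapping with the key renamed, keeping the feature types unchanged
    return Features(
        {(new if name == original else name): feature for name, feature in features.items()}
    )
```

With this helper, the streaming `rename_column` could compute `rename_features(dataset.info.features, "text", "content")` and set it on the returned dataset.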
### Expected behavior
This should just rename the column and leave the rest of the features object intact.
### Environment info
datasets 2.6.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5245/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5245/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5244/comments | https://api.github.com/repos/huggingface/datasets/issues/5244/events | https://github.com/huggingface/datasets/issues/5244 | 1,450,019,225 | I_kwDODunzps5WbYmZ | 5,244 | Allow dataset streaming from private a private source when loading a dataset with a dataset loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Hubert-Bonisseur",
"id": 48770768,
"login": "Hubert-Bonisseur",
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Hubert-Bonisseur"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! What kind of private source ? We're exploring adding support for cloud storage and URIs like s3://, gs:// etc. with authentication in the download manager",
"Hello! It's a google cloud storage, so gs://, but I'm using it with https.\r\nBeing able to provide a file system like [here](https://huggingface.co/docs/datasets/main/filesystems#load-serialized-datasets) would be even more practical indeed.\r\nI've found a quite complicated workaround which consists of monkey patching all of the functions in streaming_download_manager.py to use my own _get_authentication_headers_for_url_ . \r\n\r\nA support for this use case would be greatly appreciated!\r\n\r\nFor reference my _get_authentication_headers_for_url_ looks like this:\r\n```\r\nimport os\r\nfrom typing import Optional, Union\r\n\r\nfrom datasets import config\r\nfrom huggingface_hub import HfFolder\r\nfrom gcsfs.credentials import GoogleCredentials\r\n\r\nDEFAULT_PROJECT = os.environ.get(\"GCSFS_DEFAULT_PROJECT\", \"\")\r\naccess = \"full_control\"\r\ngcs_token = os.environ.get(\"GCS_TOKEN\")\r\n\r\n\r\ndef get_authentication_headers_for_url(url: str, use_auth_token: Optional[Union[str, bool]] = None) -> dict:\r\n \"\"\"Handle the HF authentication\"\"\"\r\n headers = {}\r\n if url.startswith(config.HF_ENDPOINT):\r\n if use_auth_token is False:\r\n token = None\r\n elif isinstance(use_auth_token, str):\r\n token = use_auth_token\r\n else:\r\n token = HfFolder.get_token()\r\n elif url.startswith(\"https://storage.googleapis.com\"):\r\n credentials = GoogleCredentials(DEFAULT_PROJECT, access, gcs_token)\r\n credentials.maybe_refresh()\r\n token = credentials.credentials.token\r\n else:\r\n token = None\r\n if token:\r\n headers[\"authorization\"] = f\"Bearer {token}\"\r\n return headers\r\n```",
"I would be a big fan of this feature! @Hubert-Bonisseur if this doesn't become a supported feature, would you mind sharing your code? Thanks!",
"> I would be a big fan of this feature! @Hubert-Bonisseur if this doesn't become a supported feature, would you mind sharing your code? Thanks!\r\n\r\nI published it here:\r\nhttps://github.com/Hubert-Bonisseur/private-dataset-hub\r\n\r\nI modified the names of a lot of functions for privacy and I don't have time to test it again so you may get import errors, but you have the code. The custom_load_dataset is the function you are interested in I think.\r\n\r\nIt relies a lot on patching, if you find a better way to do this, I'd be interested.",
"Given the amount of patching it does, this is likely to break at one point. I'd encourage you to wait for a proper support in `datasets` directly if you can wait."
] | 2022-11-15T16:02:10Z | 2022-11-23T14:02:30Z | null | NONE | null | null | null | ### Feature request
Add arguments such as custom_endpoint and custom_token to the function _get_authentication_headers_for_url_ to add flexibility when downloading files from a private source.
It should also be possible to provide these arguments from the dataset loading script, maybe by passing them to the dl_manager.
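A rough sketch of what the extended signature could look like (the custom_endpoint and custom_token names are assumptions for illustration, not existing `datasets` parameters):

```python
from typing import Optional, Union

def get_authentication_headers_for_url(
    url: str,
    use_auth_token: Optional[Union[str, bool]] = None,
    custom_endpoint: Optional[str] = None,  # e.g. "https://storage.googleapis.com"
    custom_token: Optional[str] = None,  # bearer token for URLs under custom_endpoint
) -> dict:
    headers = {}
    if custom_endpoint is not None and url.startswith(custom_endpoint) and custom_token:
        headers["authorization"] = f"Bearer {custom_token}"
    # ...otherwise fall back to the existing Hugging Face Hub token handling
    return headers
```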
### Motivation
It is possible to share a dataset hosted on another platform by writing a dataset loading script. It works perfectly for publicly available resources.
For resources that require authentication, you can provide a [download_custom](https://huggingface.co/docs/datasets/package_reference/builder_classes#datasets.DownloadManager) method to the download_manager.
Unfortunately, this function doesn't work with **dataset streaming**.
A solution so as to allow dataset streaming from private sources would be a more flexible _get_authentication_headers_for_url_ function.
### Your contribution
Would you be interested in this improvement?
If so, I could provide a PR. I've got something working locally, but it's not very clean; I'd need some guidance regarding integration. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5244/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5244/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5243/comments | https://api.github.com/repos/huggingface/datasets/issues/5243/events | https://github.com/huggingface/datasets/issues/5243 | 1,449,523,962 | I_kwDODunzps5WZfr6 | 5,243 | Download only split data | {
"avatar_url": "https://avatars.githubusercontent.com/u/48530104?v=4",
"events_url": "https://api.github.com/users/capsabogdan/events{/privacy}",
"followers_url": "https://api.github.com/users/capsabogdan/followers",
"following_url": "https://api.github.com/users/capsabogdan/following{/other_user}",
"gists_url": "https://api.github.com/users/capsabogdan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/capsabogdan",
"id": 48530104,
"login": "capsabogdan",
"node_id": "MDQ6VXNlcjQ4NTMwMTA0",
"organizations_url": "https://api.github.com/users/capsabogdan/orgs",
"received_events_url": "https://api.github.com/users/capsabogdan/received_events",
"repos_url": "https://api.github.com/users/capsabogdan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/capsabogdan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/capsabogdan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/capsabogdan"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi @capsabogdan! Unfortunately, it's hard to implement because quite often datasets data is being hosted in a single archive for all splits :( So we have to download the whole archive to split it into splits. This is the case for CommonVoice too. \r\n\r\nHowever, for cases when data is distributed in separate archives ащк different splits I suppose it can (and will) be implemented someday. \r\n\r\n\r\nBtw for quick check of the dataset you can use [streaming](https://huggingface.co/docs/datasets/stream):\r\n```python\r\ncv = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"en\", split=\"test\", streaming=True)\r\ncv = iter(cv)\r\nprint(next(cv))\r\n\r\n>> {'client_id': 'a07b17f8234ded5e847443ea6f423cef745cbbc7537fb637d58326000aa751e829a21c4fd0a35fc17fb833aa7e95ebafce5efd19beeb8d843887b85e4eb35f5b',\r\n>> 'path': None,\r\n>> 'audio': {'path': 'cv-corpus-11.0-2022-09-21/en/clips/common_voice_en_100363.mp3',\r\n>> 'array': array([ 0.0000000e+00, 1.1748125e-14, 1.5450088e-14, ...,\r\n>> 1.3011958e-06, -6.3548953e-08, -9.9098514e-08], dtype=float32),\r\n>> ...}\r\n\r\n```",
"thank you for the answer but am not sure if this will not be helpful, as we\nneed maybe just 10% of the datasets for some experiment\n\ncan we get just a portion of the dataset with stream?\n\n\nis there really no solution? :(\n\nAm Di., 15. Nov. 2022 um 16:55 Uhr schrieb Polina Kazakova <\n***@***.***>:\n\n> Hi @capsabogdan <https://github.com/capsabogdan>! Unfortunately, it's\n> hard to implement because quite often datasets data is being hosted in a\n> single archive for all splits :( So we have to download the whole archive\n> to split it into splits. This is the case for CommonVoice too.\n>\n> However, for cases when data is distributed in separate archives in\n> different splits I suppose it can be implemented someday.\n>\n> Btw for quick check of the dataset you can use streaming\n> <https://huggingface.co/docs/datasets/stream>:\n>\n> cv = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"en\", split=\"test\", streaming=True)cv = iter(cv)print(next(cv))\n> >> {'client_id': 'a07b17f8234ded5e847443ea6f423cef745cbbc7537fb637d58326000aa751e829a21c4fd0a35fc17fb833aa7e95ebafce5efd19beeb8d843887b85e4eb35f5b',>> 'path': None,>> 'audio': {'path': 'cv-corpus-11.0-2022-09-21/en/clips/common_voice_en_100363.mp3',>> 'array': array([ 0.0000000e+00, 1.1748125e-14, 1.5450088e-14, ...,>> 1.3011958e-06, -6.3548953e-08, -9.9098514e-08], dtype=float32),>> ...}\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5243#issuecomment-1315512887>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ALSIFOC3JYRCTH54OBRUJULWIOW6PANCNFSM6AAAAAASAYO2LY>\n> .\n> You are receiving this because you were mentioned.Message ID:\n> ***@***.***>\n>\n",
"maybe it would be nice if you guys ould do some sort of shard before\nloading the dataset, so users can download just chunks of data :)\n\nI think this would be very helpful\n\nAm Di., 15. Nov. 2022 um 19:24 Uhr schrieb Bogdan Capsa <\n***@***.***>:\n\n> thank you for the answer but am not sure if this will not be helpful, as\n> we need maybe just 10% of the datasets for some experiment\n>\n> can we get just a portion of the dataset with stream?\n>\n>\n> is there really no solution? :(\n>\n> Am Di., 15. Nov. 2022 um 16:55 Uhr schrieb Polina Kazakova <\n> ***@***.***>:\n>\n>> Hi @capsabogdan <https://github.com/capsabogdan>! Unfortunately, it's\n>> hard to implement because quite often datasets data is being hosted in a\n>> single archive for all splits :( So we have to download the whole archive\n>> to split it into splits. This is the case for CommonVoice too.\n>>\n>> However, for cases when data is distributed in separate archives in\n>> different splits I suppose it can be implemented someday.\n>>\n>> Btw for quick check of the dataset you can use streaming\n>> <https://huggingface.co/docs/datasets/stream>:\n>>\n>> cv = load_dataset(\"mozilla-foundation/common_voice_11_0\", \"en\", split=\"test\", streaming=True)cv = iter(cv)print(next(cv))\n>> >> {'client_id': 'a07b17f8234ded5e847443ea6f423cef745cbbc7537fb637d58326000aa751e829a21c4fd0a35fc17fb833aa7e95ebafce5efd19beeb8d843887b85e4eb35f5b',>> 'path': None,>> 'audio': {'path': 'cv-corpus-11.0-2022-09-21/en/clips/common_voice_en_100363.mp3',>> 'array': array([ 0.0000000e+00, 1.1748125e-14, 1.5450088e-14, ...,>> 1.3011958e-06, -6.3548953e-08, -9.9098514e-08], dtype=float32),>> ...}\n>>\n>> —\n>> Reply to this email directly, view it on GitHub\n>> <https://github.com/huggingface/datasets/issues/5243#issuecomment-1315512887>,\n>> or unsubscribe\n>> <https://github.com/notifications/unsubscribe-auth/ALSIFOC3JYRCTH54OBRUJULWIOW6PANCNFSM6AAAAAASAYO2LY>\n>> .\n>> You are receiving this because you were mentioned.Message ID:\n>> ***@***.***>\n>>\n>\n"
] | 2022-11-15T10:15:54Z | 2022-11-15T20:12:24Z | null | NONE | null | null | null | ### Feature request
Is it possible to download only the data that I am requesting and not the entire dataset? I run out of disk space, as it seems to download the entire dataset instead of only the requested part.
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test",
cache_dir="cache/path...",
use_auth_token=True,
download_config=DownloadConfig(delete_extracted='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')
)
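As a partial workaround until split-only downloads are supported, streaming plus `IterableDataset.take` can materialize just a slice (a sketch; the 1000 is arbitrary). Only the bytes needed for the iterated examples are fetched, so this should avoid both the full download and the disk usage:

```python
from datasets import load_dataset

cv_test = load_dataset(
    "mozilla-foundation/common_voice_11_0", "en",
    split="test", streaming=True, use_auth_token=True,
)
small_subset = list(cv_test.take(1000))  # keep only the first 1000 streamed examples
```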
### Motivation
efficiency improvement
### Your contribution
n/a | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5243/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5243/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5242/comments | https://api.github.com/repos/huggingface/datasets/issues/5242/events | https://github.com/huggingface/datasets/issues/5242 | 1,449,069,382 | I_kwDODunzps5WXwtG | 5,242 | Failed Data Processing upon upload with zip file full of images | {
"avatar_url": "https://avatars.githubusercontent.com/u/82735473?v=4",
"events_url": "https://api.github.com/users/scrambled2/events{/privacy}",
"followers_url": "https://api.github.com/users/scrambled2/followers",
"following_url": "https://api.github.com/users/scrambled2/following{/other_user}",
"gists_url": "https://api.github.com/users/scrambled2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/scrambled2",
"id": 82735473,
"login": "scrambled2",
"node_id": "MDQ6VXNlcjgyNzM1NDcz",
"organizations_url": "https://api.github.com/users/scrambled2/orgs",
"received_events_url": "https://api.github.com/users/scrambled2/received_events",
"repos_url": "https://api.github.com/users/scrambled2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/scrambled2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scrambled2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/scrambled2"
} | [] | open | false | null | [] | null | [
"cc @abhishekkrthakur @SBrandeis "
] | 2022-11-15T02:47:52Z | 2022-11-15T17:59:23Z | null | NONE | null | null | null | I went to AutoTrain and, under image classification, arrived at the dataset preparation step. Screenshot below
![image](https://user-images.githubusercontent.com/82735473/201814099-3cc5ff8a-88dc-4f5f-8140-f19560641d83.png)
I chose the method 2 option. I have a csv file with two columns. ~23,000 files.
I uploaded this and chose the image_relpath, and target columns.
The image uploader said that I could only upload 10,000 singular images at a time so the 2nd option was to zip the images up and upload a zip archive which I did.
That all uploaded.
Now I have the message below. It appears the zip archive does just uncompress on the Hugging Face end?
What am I missing here?
![image](https://user-images.githubusercontent.com/82735473/201813838-b50dbbbc-34e8-4d73-9c07-12f9e41c62eb.png)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5242/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5242/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5241/comments | https://api.github.com/repos/huggingface/datasets/issues/5241/events | https://github.com/huggingface/datasets/pull/5241 | 1,448,510,407 | PR_kwDODunzps5C3MTG | 5,241 | Support hfh rc version | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-14T18:05:47Z | 2022-11-15T16:11:30Z | 2022-11-15T16:09:31Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5241.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5241",
"merged_at": "2022-11-15T16:09:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5241.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5241"
} | Otherwise the code doesn't work with hfh 0.11.0rc0.
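A hedged sketch of the idea (illustrative only; the actual change lives in the Hub URL handling inside `datasets`):

```python
import huggingface_hub
from packaging import version
from urllib.parse import quote

def maybe_quote_path(path: str) -> str:
    # hfh >= 0.11 already URL-encodes the path itself, so only quote it on older versions
    if version.parse(huggingface_hub.__version__) < version.parse("0.11.0"):
        return quote(path)
    return path
```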
Follow-up to #5237. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5241/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5241/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5240/comments | https://api.github.com/repos/huggingface/datasets/issues/5240/events | https://github.com/huggingface/datasets/pull/5240 | 1,448,478,617 | PR_kwDODunzps5C3Fe6 | 5,240 | Cleaner error tracebacks for dataset script errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq Good catch! This currently leads to an AttributeError (due to `writer` being None) on this line:\r\nhttps://github.com/huggingface/datasets/blob/fed1628d49a91f9ae259ddf6edbb252c7972d9a3/src/datasets/builder.py#L1552\r\n"
] | 2022-11-14T17:42:02Z | 2022-11-15T18:26:48Z | 2022-11-15T18:24:38Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5240.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5240",
"merged_at": "2022-11-15T18:24:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5240.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5240"
} | Make the traceback of the errors raised in `_generate_examples` cleaner for easier debugging. Additionally, initialize the `writer` in the for-loop to avoid the `ValueError` from `ArrowWriter.finalize` raised in the `finally` block when no examples are yielded before the `_generate_examples` error.
<details>
<summary>
The full traceback of the "SQLAlchemy ImportError" error that gets printed with these changes:
</summary>
```bash
ImportError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split_single(self, arg)
1759 _time = time.time()
-> 1760 for _, table in generator:
1761 # Only initialize the writer when we have the first record (to avoid having to do the clean-up if an error occurs before that)
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/packaged_modules/sql/sql.py in _generate_tables(self)
112 sql_reader = pd.read_sql(
--> 113 self.config.sql, self.config.con, chunksize=chunksize, **self.config.pd_read_sql_kwargs
114 )
/usr/local/lib/python3.7/dist-packages/pandas/io/sql.py in read_sql(sql, con, index_col, coerce_float, params, parse_dates, columns, chunksize)
598 """
--> 599 pandas_sql = pandasSQL_builder(con)
600
/usr/local/lib/python3.7/dist-packages/pandas/io/sql.py in pandasSQL_builder(con, schema, meta, is_cursor)
789 elif isinstance(con, str):
--> 790 raise ImportError("Using URI string without sqlalchemy installed.")
791 else:
ImportError: Using URI string without sqlalchemy installed.
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
<ipython-input-4-5af11af4737b> in <module>
----> 1 ds = Dataset.from_sql('''SELECT * from states WHERE state=="New York";''', "sqlite:///us_covid_data.db")
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in from_sql(sql, con, features, cache_dir, keep_in_memory, **kwargs)
1152 cache_dir=cache_dir,
1153 keep_in_memory=keep_in_memory,
-> 1154 **kwargs,
1155 ).read()
1156
/usr/local/lib/python3.7/dist-packages/datasets/io/sql.py in read(self)
47 # try_from_hf_gcs=try_from_hf_gcs,
48 base_path=base_path,
---> 49 use_auth_token=use_auth_token,
50 )
51
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
825 verify_infos=verify_infos,
826 **prepare_split_kwargs,
--> 827 **download_and_prepare_kwargs,
828 )
829 # Sync info
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
912 try:
913 # Prepare split will record examples associated to the split
--> 914 self._prepare_split(split_generator, **prepare_split_kwargs)
915 except OSError as e:
916 raise OSError(
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1652 job_id = 0
1653 for job_id, done, content in self._prepare_split_single(
-> 1654 {"gen_kwargs": gen_kwargs, "job_id": job_id, **_prepare_split_args}
1655 ):
1656 if done:
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split_single(self, arg)
1789 raise DatasetGenerationError(
1790 f"An error occured while generating the dataset"
-> 1791 ) from e
1792 finally:
1793 yield job_id, False, num_examples_progress_update
DatasetGenerationError: An error occurred while generating the dataset
```
</details>
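An illustrative-only sketch of the two changes described above (names simplified; not the actual `builder.py` code):

```python
class DatasetGenerationError(Exception):
    pass

def prepare_split_single(generator, make_writer):
    writer = None
    try:
        for _, table in generator:
            if writer is None:  # initialize lazily, on the first record only
                writer = make_writer()
            writer.write_table(table)
    except Exception as e:
        # re-raise with a short dataset-level message; "from e" keeps the script's traceback
        raise DatasetGenerationError("An error occurred while generating the dataset") from e
    finally:
        if writer is not None:  # nothing to finalize when no examples were yielded
            writer.finalize()
```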
PS: I've also considered raising the error as follows:
```python
tb = sys.exc_info()[2]
raise DatasetGenerationError(f"An error occurred while generating the dataset: {type(e).__name__}: {e}").with_traceback(tb) from None # this raises the DatasetGenerationError with "e"'s traceback
```
But it seems like "from e" is now the [preferred](https://docs.python.org/3/library/exceptions.html#BaseException.with_traceback) way to chain exceptions.
Fix https://github.com/huggingface/datasets/issues/5186
cc @nateraw
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5240/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5240/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5239/comments | https://api.github.com/repos/huggingface/datasets/issues/5239/events | https://github.com/huggingface/datasets/pull/5239 | 1,448,211,373 | PR_kwDODunzps5C2L_P | 5,239 | Add num_proc to from_csv/generator/json/parquet/text | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5239). All of your documentation changes will be reflected on that endpoint.",
"I ended up moving `num_proc` to `AbstractDatasetReader.__init__` :)\r\n\r\nLet me know if it sounds good to you now"
] | 2022-11-14T14:53:00Z | 2022-11-29T16:50:47Z | null | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5239.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5239",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5239.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5239"
} | Allow multiprocessing in the `from_*` methods | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5239/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5239/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5238/comments | https://api.github.com/repos/huggingface/datasets/issues/5238/events | https://github.com/huggingface/datasets/pull/5238 | 1,448,211,251 | PR_kwDODunzps5C2L9h | 5,238 | Make `Version` hashable | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-14T14:52:55Z | 2022-11-14T15:30:02Z | 2022-11-14T15:27:35Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5238.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5238",
"merged_at": "2022-11-14T15:27:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5238.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5238"
} | Add `__hash__` to the `Version` class to make it hashable (and remove the unneeded methods), as `Version("0.0.0")` is the default value of `BuilderConfig.version` and the default fields of a dataclass need to be hashable in Python 3.11.
Fix https://github.com/huggingface/datasets/issues/5230 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5238/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5238/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5237/comments | https://api.github.com/repos/huggingface/datasets/issues/5237/events | https://github.com/huggingface/datasets/pull/5237 | 1,448,202,491 | PR_kwDODunzps5C2KGz | 5,237 | Encode path only for old versions of hfh | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-14T14:46:57Z | 2022-11-14T17:38:18Z | 2022-11-14T17:35:59Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5237.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5237",
"merged_at": "2022-11-14T17:35:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5237.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5237"
} | The next version of `huggingface-hub` (0.11) already encodes the `path`, and we don't want to encode it twice | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5237/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5237/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5236/comments | https://api.github.com/repos/huggingface/datasets/issues/5236/events | https://github.com/huggingface/datasets/pull/5236 | 1,448,190,801 | PR_kwDODunzps5C2Hnj | 5,236 | Handle ArrowNotImplementedError caused by try_type being Image or Audio in cast | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Not sure how we can have a test that is relevant for this though - feel free to add one if you have ideas\r\n\r\nYes, this was my reasoning for not adding a test. This change is pretty simple, so I think it's OK not to have a test for it."
] | 2022-11-14T14:38:59Z | 2022-11-14T16:04:29Z | 2022-11-14T16:01:48Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5236.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5236",
"merged_at": "2022-11-14T16:01:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5236.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5236"
} | Handle the `ArrowNotImplementedError` thrown when `try_type` is `Image` or `Audio` and the input array cannot be converted to their storage formats.
Reproducer:
```python
from datasets import Dataset
from PIL import Image
import requests
ds = Dataset.from_dict({"image": [Image.open(requests.get("https://upload.wikimedia.org/wikipedia/commons/e/e9/Felis_silvestris_silvestris_small_gradual_decrease_of_quality.png", stream=True).raw)]})
ds.map(lambda x: {"image": True}) # ArrowNotImplementedError
```
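A minimal sketch of the intended handling (simplified; the real logic lives in `datasets`' cast utilities, and the exact exception tuple is an assumption):

```python
import pyarrow as pa

def cast_with_try_type(array: pa.Array, try_type, do_cast):
    # try_type is best-effort: fall back to the inferred type when the storage
    # cast is unsupported for this input (e.g. booleans into Image/Audio storage)
    try:
        return do_cast(array, try_type)
    except (TypeError, pa.lib.ArrowInvalid, pa.lib.ArrowNotImplementedError):
        return array
```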
PS: This could also be fixed by raising `TypeError` in `{Image, Audio}.cast_storage` for unsupported types instead of passing the array to `array_cast`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5236/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5236/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5235/comments | https://api.github.com/repos/huggingface/datasets/issues/5235/events | https://github.com/huggingface/datasets/pull/5235 | 1,448,052,660 | PR_kwDODunzps5C1pjc | 5,235 | Pin `typer` version in tests to <0.5 to fix Windows CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [] | 2022-11-14T13:17:02Z | 2022-11-14T15:43:01Z | 2022-11-14T13:41:12Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5235.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5235",
"merged_at": "2022-11-14T13:41:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5235.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5235"
} | Otherwise `click` fails on Windows:
```
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\spacy\__main__.py", line 4, in <module>
setup_cli()
File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\spacy\cli\_util.py", line 71, in setup_cli
command(prog_name=COMMAND)
File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\click\core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\typer\core.py", line 785, in main
**extra,
File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\typer\core.py", line 190, in _main
args = click.utils._expand_args(args)
AttributeError: module 'click.utils' has no attribute '_expand_args'
```
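The pin itself is a one-line change to the test dependencies (illustrative; the exact extras list in `setup.py` may differ):

```python
TESTS_REQUIRE = [
    # ...
    "typer<0.5",  # newer typer calls click.utils._expand_args, which this click release lacks
]
```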
See https://github.com/tiangolo/typer/issues/427 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5235/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5235/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5234/comments | https://api.github.com/repos/huggingface/datasets/issues/5234/events | https://github.com/huggingface/datasets/pull/5234 | 1,447,999,062 | PR_kwDODunzps5C1diq | 5,234 | fix: dataset path should be absolute | {
"avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4",
"events_url": "https://api.github.com/users/vigsterkr/events{/privacy}",
"followers_url": "https://api.github.com/users/vigsterkr/followers",
"following_url": "https://api.github.com/users/vigsterkr/following{/other_user}",
"gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vigsterkr",
"id": 30353,
"login": "vigsterkr",
"node_id": "MDQ6VXNlcjMwMzUz",
"organizations_url": "https://api.github.com/users/vigsterkr/orgs",
"received_events_url": "https://api.github.com/users/vigsterkr/received_events",
"repos_url": "https://api.github.com/users/vigsterkr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vigsterkr"
} | [] | open | false | null | [] | null | [
"Good catch thanks ! Have you tried to use the absolue path in `MemoryMappedTable.__init__` in `table.py`?\r\n\r\nI think it can fix issues with relative paths at more levels than just fixing it `load_from_disk`. If it works I think it would be a more robust fix to this issue"
] | 2022-11-14T12:47:40Z | 2022-11-18T15:14:16Z | null | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5234.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5234",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5234.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5234"
} | `cache_file_name` depends on the dataset's path.
A simple way where this could cause a problem:
```
import os
import datasets
def add_prefix(example):
example["text"] = "Review: " + example["text"]
return example
ds = datasets.load_from_disk("a/relative/path")
os.chdir("/tmp")
ds_1 = ds.map(add_prefix)
```
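A minimal sketch of the direction this PR takes — resolving the path to an absolute one before loading, so later `chdir` calls can't break cache-file resolution (`load_robust` is a hypothetical wrapper for illustration, not the actual patch):
```
import os
import datasets

def load_robust(path):
    # Resolve the possibly-relative path once, up front; cache file
    # names derived from it then survive any later os.chdir().
    return datasets.load_from_disk(os.path.abspath(path))
```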
While the `chdir` above may feel quite contrived, there are many scenarios in which the current working directory can and will change... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5234/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5233/comments | https://api.github.com/repos/huggingface/datasets/issues/5233/events | https://github.com/huggingface/datasets/pull/5233 | 1,447,906,868 | PR_kwDODunzps5C1JVh | 5,233 | Fix shards in IterableDataset.from_generator | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-14T11:42:09Z | 2022-11-14T14:16:03Z | 2022-11-14T14:13:22Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5233.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5233",
"merged_at": "2022-11-14T14:13:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5233.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5233"
} | Allow defining a sharded iterable dataset (see the sketch after this entry) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5233/timeline | null | null | true |
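A minimal sketch of what the fix in PR 5233 enables — sharding an `IterableDataset` created via `from_generator` by passing a list through `gen_kwargs` (the file names here are hypothetical):
```
from datasets import IterableDataset

def gen(shards):
    # `shards` arrives as a list via gen_kwargs; each element defines
    # one shard, so workers can iterate over disjoint subsets in parallel.
    for shard in shards:
        with open(shard, encoding="utf-8") as f:
            for line in f:
                yield {"text": line.strip()}

shards = [f"data/shard-{i}.txt" for i in range(4)]  # hypothetical paths
ds = IterableDataset.from_generator(gen, gen_kwargs={"shards": shards})
```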
https://api.github.com/repos/huggingface/datasets/issues/5232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5232/comments | https://api.github.com/repos/huggingface/datasets/issues/5232/events | https://github.com/huggingface/datasets/issues/5232 | 1,446,294,165 | I_kwDODunzps5WNLKV | 5,232 | Incompatible dill versions in datasets 2.6.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10574123?v=4",
"events_url": "https://api.github.com/users/vinaykakade/events{/privacy}",
"followers_url": "https://api.github.com/users/vinaykakade/followers",
"following_url": "https://api.github.com/users/vinaykakade/following{/other_user}",
"gists_url": "https://api.github.com/users/vinaykakade/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vinaykakade",
"id": 10574123,
"login": "vinaykakade",
"node_id": "MDQ6VXNlcjEwNTc0MTIz",
"organizations_url": "https://api.github.com/users/vinaykakade/orgs",
"received_events_url": "https://api.github.com/users/vinaykakade/received_events",
"repos_url": "https://api.github.com/users/vinaykakade/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vinaykakade/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinaykakade/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vinaykakade"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @vinaykakade.\r\n\r\nWe are discussing about making a release early this week.\r\n\r\nPlease note that in the meantime, in your specific case (as we also pointed out here: https://github.com/huggingface/datasets/issues/5162#issuecomment-1291720293), you can circumvent the issue by pinning `multiprocess` to 0.70.13 version (instead of using latest 0.70.14).\r\n\r\nDuplicate of:\r\n- https://github.com/huggingface/datasets/issues/5162",
"You can also make `pip-compile` work by using the backtracking resolver (instead of the legacy one): https://pip-tools.readthedocs.io/en/latest/#a-note-on-resolvers\r\n```\r\npip-compile --resolver=backtracking requirements.in\r\n```\r\nThis resolver will automatically use `multiprocess` 0.70.13 version.\r\n"
] | 2022-11-12T06:46:23Z | 2022-11-14T08:24:43Z | 2022-11-14T08:07:59Z | NONE | null | null | null | ### Describe the bug
datasets version 2.6.1 has a dependency on dill<0.3.6. This causes a conflict with dill>=0.3.6, which the multiprocess dependency of datasets 2.6.1 requires.
This issue is already fixed in https://github.com/huggingface/datasets/pull/5166/files, but the fix has not yet been released. Please release a new version of the datasets library.
### Steps to reproduce the bug
1. Create a requirements.in with datasets (or datasets[s3]) as the only dependency
2. Run pip-compile
3. The output is as follows:
```
Could not find a version that matches dill<0.3.6,>=0.3.6 (from datasets[s3]==2.6.1->-r requirements.in (line 1))
Tried: 0.2, 0.2, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.2.3, 0.2.3, 0.2.4, 0.2.4, 0.2.5, 0.2.5, 0.2.6, 0.2.7, 0.2.7.1, 0.2.8, 0.2.8.1, 0.2.8.2, 0.2.9, 0.3.0, 0.3.1, 0.3.1.1, 0.3.2, 0.3.3, 0.3.3, 0.3.4, 0.3.4, 0.3.5, 0.3.5, 0.3.5.1, 0.3.5.1, 0.3.6, 0.3.6
Skipped pre-versions: 0.1a1, 0.2a1, 0.2a1, 0.2b1, 0.2b1
There are incompatible versions in the resolved dependencies:
dill<0.3.6 (from datasets[s3]==2.6.1->-r requirements.in (line 1))
dill>=0.3.6 (from multiprocess==0.70.14->datasets[s3]==2.6.1->-r requirements.in (line 1))
```
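As the maintainers note in the comments above, pinning `multiprocess` sidesteps the conflict until a release ships the fix; a sketch of the corresponding `requirements.in` (versions taken from the suggested workaround):
```
# requirements.in — pin multiprocess so its dill requirement stays
# compatible with datasets 2.6.1 (which needs dill<0.3.6)
datasets[s3]==2.6.1
multiprocess==0.70.13
```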
### Expected behavior
pip-compile produces requirements.txt without any conflicts
### Environment info
datasets version 2.6.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5232/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5232/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5231/comments | https://api.github.com/repos/huggingface/datasets/issues/5231/events | https://github.com/huggingface/datasets/issues/5231 | 1,445,883,267 | I_kwDODunzps5WLm2D | 5,231 | Using `set_format(type='torch', columns=columns)` makes Array2D/3D columns stop formatting correctly | {
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/plamb-viso",
"id": 99206017,
"login": "plamb-viso",
"node_id": "U_kgDOBenDgQ",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"type": "User",
"url": "https://api.github.com/users/plamb-viso"
} | [] | closed | false | null | [] | null | [
"In case others find this, the problem was not with set_format, but my usages of `to_pandas()` and `from_pandas()` which I was using during dataset splitting; somewhere in the chain of converting to and from pandas the `Array2D/Array3D` types get converted to series of `Sequence()` types"
] | 2022-11-11T18:54:36Z | 2022-11-11T20:42:29Z | 2022-11-11T18:59:50Z | NONE | null | null | null | I have a Dataset with two Features defined as follows:
```
'image': Array3D(dtype="int64", shape=(3, 224, 224)),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
```
On said dataset, if I call `dataset.set_format(type='torch')` and then use the dataset in a dataloader, these columns are correctly cast to Tensors of shape (batch_size, 3, 224, 224), for example.
However, if I call `dataset.set_format(type='torch', columns=['image', 'bbox'])`, these columns are cast to lists of tensors and lose the batch dimension completely (the dimension of size 3 becomes the list length).
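A minimal sketch of the two calls being compared (this restates the report, not a fix; `dataset` is the reporter's dataset from above):
```
# Formats all columns; Array3D batches come back as tensors of
# shape (batch_size, 3, 224, 224):
dataset.set_format(type="torch")

# Restricting the formatted columns reportedly returned lists of
# tensors instead, dropping the batch dimension:
dataset.set_format(type="torch", columns=["image", "bbox"])
```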
I'm currently digging through datasets formatting code to try and find out why, but was curious if someone knew an immediate solution for this. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5231/timeline | null | completed | false |