url stringlengths 58-61 | repository_url stringclasses 1 value | labels_url stringlengths 72-75 | comments_url stringlengths 67-70 | events_url stringlengths 65-68 | html_url stringlengths 46-51 | id int64 599M-1.47B | node_id stringlengths 18-32 | number int64 1-5.33k | title stringlengths 1-276 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments sequence | created_at stringlengths 20 | updated_at stringlengths 20 | closed_at stringlengths 20 | author_association stringclasses 3 values | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 0-228k | reactions dict | timeline_url stringlengths 67-70 | performed_via_github_app null | state_reason stringclasses 3 values | is_pull_request bool 2 classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5023/comments | https://api.github.com/repos/huggingface/datasets/issues/5023/events | https://github.com/huggingface/datasets/issues/5023 | 1,385,881,112 | I_kwDODunzps5Smt4Y | 5,023 | Text strings are split into lists of characters in xcsr dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-09-26T11:11:50Z | 2022-09-28T07:54:20Z | 2022-09-28T07:54:20Z | MEMBER | null | null | null | ## Describe the bug
Text strings are split into lists of characters.
Example for "X-CSQA-en":
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': ['T',
'h',
'e',
' ',
'd',
'e',
'n',
't',
'a',
'l',
' ',
'o',
'f',
'f',
'i',
'c',
'e',
' ',
'h',
'a',
'n',
'd',
'l',
'e',
'd',
' ',
'a',
' ',
'l',
'o',
't',
' ',
'o',
'f',
' ',
'p',
'a',
't',
'i',
'e',
'n',
't',
's',
' ',
'w',
'h',
'o',
' ',
'e',
'x',
'p',
'e',
'r',
'i',
'e',
'n',
'c',
'e',
'd',
' ',
't',
'r',
'a',
'u',
'm',
'a',
't',
'i',
'c',
' ',
'm',
'o',
'u',
't',
'h',
' ',
'i',
'n',
'j',
'u',
'r',
'y',
',',
' ',
'w',
'h',
'e',
'r',
'e',
' ',
'w',
'e',
'r',
'e',
' ',
't',
'h',
'e',
's',
'e',
' ',
'p',
'a',
't',
'i',
'e',
'n',
't',
's',
' ',
'c',
'o',
'm',
'i',
'n',
'g',
' ',
'f',
'r',
'o',
'm',
'?'],
'choices': [{'label': ['A'], 'text': ['t', 'o', 'w', 'n']},
{'label': ['B'], 'text': ['m', 'i', 'c', 'h', 'i', 'g', 'a', 'n']},
{'label': ['C'], 'text': ['h', 'o', 's', 'p', 'i', 't', 'a', 'l']},
{'label': ['D'], 'text': ['s', 'c', 'h', 'o', 'o', 'l', 's']},
{'label': ['E'],
'text': ['o',
'f',
'f',
'i',
'c',
'e',
' ',
'b',
'u',
'i',
'l',
'd',
'i',
'n',
'g']}]},
'answerKey': 'C'}
```
## Steps to reproduce the bug
```python
ds = load_dataset("datasets/xcsr", "X-CSQA-en", split="validation", streaming=True)
item = next(iter(ds))
item
```
## Expected results
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': 'The dental office handled a lot of patients who experienced traumatic mouth injury, where were these patients coming from?',
'choices': {'label': ['A', 'B', 'C', 'D', 'E'],
'text': ['town', 'michigan', 'hospital', 'schools', 'office building']}},
'answerKey': 'C'}
```
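In the meantime, a user-side workaround is to rejoin the split characters with `map`; a minimal sketch, assuming the nested layout printed above (if the `choices` feature is stored as a dict of lists rather than a list of dicts, the loop needs adjusting):
```python
from datasets import load_dataset

def join_chars(example):
    # Rejoin the character-split fields into plain strings
    q = example["question"]
    q["stem"] = "".join(q["stem"])
    for choice in q["choices"]:
        choice["label"] = "".join(choice["label"])
        choice["text"] = "".join(choice["text"])
    return example

ds = load_dataset("xcsr", "X-CSQA-en", split="validation")
fixed = ds.map(join_chars)
```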
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5023/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5023/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5022/comments | https://api.github.com/repos/huggingface/datasets/issues/5022/events | https://github.com/huggingface/datasets/pull/5022 | 1,385,432,859 | PR_kwDODunzps4_kxYe | 5,022 | Fix languages of X-CSQA configs in xcsr dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @lhoestq, I had missed that... ",
"thx for the super fast work @albertvillanova ! any estimate for when the relevant release will happen?\r\n\r\nThanks again ",
"@thesofakillers after a recent change in our library (see #4059), now fixes in all datasets are immediately accessible. You can try it:\r\n```python\r\nfrench = datasets.load_dataset(\"xcsr\", \"X-CSQA-fr\")\r\n```\r\n\r\nPlease note there is an additional fix to that dataset in progress (to be merged today):\r\n- #5024"
] | 2022-09-26T05:13:39Z | 2022-09-26T12:27:20Z | 2022-09-26T10:57:30Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5022.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5022",
"merged_at": "2022-09-26T10:57:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5022.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5022"
} | Fix #5017.
CC: @yangxqiao, @yuchenlin | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5022/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5021/comments | https://api.github.com/repos/huggingface/datasets/issues/5021/events | https://github.com/huggingface/datasets/issues/5021 | 1,385,351,250 | I_kwDODunzps5SkshS | 5,021 | Split is inferred from filename and overrides metadata.jsonl | {
"avatar_url": "https://avatars.githubusercontent.com/u/102226344?v=4",
"events_url": "https://api.github.com/users/float-trip/events{/privacy}",
"followers_url": "https://api.github.com/users/float-trip/followers",
"following_url": "https://api.github.com/users/float-trip/following{/other_user}",
"gists_url": "https://api.github.com/users/float-trip/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/float-trip",
"id": 102226344,
"login": "float-trip",
"node_id": "U_kgDOBhfZqA",
"organizations_url": "https://api.github.com/users/float-trip/orgs",
"received_events_url": "https://api.github.com/users/float-trip/received_events",
"repos_url": "https://api.github.com/users/float-trip/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/float-trip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/float-trip/subscriptions",
"type": "User",
"url": "https://api.github.com/users/float-trip"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | [] | null | [
"Hi! What's the structure of your image folder? `datasets` by default tries to infer to what split each file belongs based on directory/file names. If it's OK to load all the images inside the `dataset` folder in the `train` split, you can do the following:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files=\"dataset/**\")\r\n```",
"Thanks! Specifying `data_files` worked for that case.\r\n\r\nI'm new to the library, so let me try rephrasing the issue. If there's no actual bug here, sorry for the trouble.\r\n\r\nI've uploaded an example [here](https://files.catbox.moe/nfj2pd.zip) with the following files: \r\n\r\n```\r\n.\r\nβββ bug.py\r\nβββ imagefolder\r\n βββ test\r\n β βββ metadata.jsonl\r\n β βββ dog.jpg\r\n β βββ personal trainer.jpg\r\n βββ train\r\n βββ metadata.jsonl\r\n βββ cat.jpg\r\n βββ testing center.jpg\r\n```\r\n\r\n`bug.py`\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imagefolder\")\r\n\r\nprint(dataset)\r\n# DatasetDict({\r\n# test: Dataset({\r\n# features: ['image', 'text'],\r\n# num_rows: 1\r\n# })\r\n# })\r\n\r\nfor split in dataset:\r\n print(\"Split:\", split)\r\n for n in dataset[split]:\r\n print(n['text'])\r\n\r\n\r\n# Split: test\r\n# testing center\r\n```\r\n\r\nAs far as I can tell, this conforms with the example given here: https://huggingface.co/docs/datasets/image_dataset#imagefolder. It appears to me that, even though `metadata.jsonl` is present, the inferred labels from the path are taking precedent. Does this sound like a bug/undocumented behavior?",
"This looks like a duplicate of https://github.com/huggingface/datasets/issues/4895 (the problem is explained in this comment: https://github.com/huggingface/datasets/issues/4895#issuecomment-1248269550).\r\n\r\nIn the meantime, you can do the following to fetch all the splits:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", data_files={\"train\": \"imagefolder/train/**\", \"test\": \"imagefolder/test/**\"})\r\n```\r\n"
] | 2022-09-26T03:22:14Z | 2022-09-29T08:07:50Z | 2022-09-29T08:07:50Z | NONE | null | null | null | ## Describe the bug
Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files.
This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder
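For context, the split assignment comes from keyword matching on file paths; a rough illustrative sketch of the idea (not the actual `datasets` implementation, and the keyword lists here are assumptions):
```python
# Illustrative only: files are associated with splits by matching
# keywords like these anywhere in the path or filename.
SPLIT_KEYWORDS = {
    "train": ["train", "training"],
    "test": ["test", "testing"],
    "validation": ["validation", "valid", "dev"],
}

def infer_split(path: str):
    lowered = path.lower()
    for split, keywords in SPLIT_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return split
    return None  # unmatched files may be silently dropped

print(infer_split("photo of a train.jpg"))     # train
print(infer_split("photo of test tubes.jpg"))  # test
print(infer_split("photo of a cat.jpg"))       # None
```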
## Steps to reproduce the bug
`metadata.jsonl`
```json
{"file_name": "photo of a cat.jpg", "text": "a photo of a cat"}
{"file_name": "photo of a dog.jpg", "text": "a photo of a dog"}
{"file_name": "photo of a train.jpg", "text": "a photo of a train"}
{"file_name": "photo of test tubes.jpg", "text": "a photo of test tubes"}
```
`bug.py`
```python
from datasets import load_dataset
dataset = load_dataset("dataset")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['image', 'text'],
# num_rows: 1
# })
# test: Dataset({
# features: ['image', 'text'],
# num_rows: 1
# })
# })
for split in dataset:
for n in dataset[split]:
print(n['text'])
# a photo of a train
# a photo of test tubes
```
## Expected results
One single dataset with all four images / a warning for unused files / documentation of this behavior
## Actual results
Only the images with "test" or "train" in the name are loaded
## Environment info
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5021/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5021/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5020/comments | https://api.github.com/repos/huggingface/datasets/issues/5020/events | https://github.com/huggingface/datasets/pull/5020 | 1,384,684,078 | PR_kwDODunzps4_istJ | 5,020 | Fix URLs of sbu_captions dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1070872?v=4",
"events_url": "https://api.github.com/users/donglixp/events{/privacy}",
"followers_url": "https://api.github.com/users/donglixp/followers",
"following_url": "https://api.github.com/users/donglixp/following{/other_user}",
"gists_url": "https://api.github.com/users/donglixp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/donglixp",
"id": 1070872,
"login": "donglixp",
"node_id": "MDQ6VXNlcjEwNzA4NzI=",
"organizations_url": "https://api.github.com/users/donglixp/orgs",
"received_events_url": "https://api.github.com/users/donglixp/received_events",
"repos_url": "https://api.github.com/users/donglixp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/donglixp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donglixp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/donglixp"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-24T14:00:33Z | 2022-09-28T07:20:20Z | 2022-09-28T07:18:23Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5020.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5020",
"merged_at": "2022-09-28T07:18:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5020.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5020"
} | The current download URL returns a 403 error page:
> Forbidden
> You don't have permission to access /~vicente/sbucaptions/sbu-captions-all.tar.gz on this server.
> Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
> Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips PHP/5.4.16 mod_fcgid/2.3.9 mod_wsgi/3.4 Python/2.7.5 mod_perl/2.0.11 Perl/v5.16.3 Server at [www.cs.virginia.edu](mailto:csroot@virginia.edu) Port 443 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5020/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5020/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5019/comments | https://api.github.com/repos/huggingface/datasets/issues/5019/events | https://github.com/huggingface/datasets/pull/5019 | 1,384,673,718 | PR_kwDODunzps4_iq9b | 5,019 | Update swiss judgment prediction | {
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JoelNiklaus",
"id": 3775944,
"login": "JoelNiklaus",
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JoelNiklaus"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"Thank you very much for the detailed review @albertvillanova!\r\n\r\nI updated the PR with the requested changes. ",
"At the end, I had to manually fix the conflict, so that CI tests are launched.\r\n\r\nPLEASE NOTE: you should first pull to incorporate the previous commit\r\n```shell\r\ngit pull\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much for the detailed feedback and your time @albertvillanova! \r\nYes, thanks. My other datasets are already on the hub: https://huggingface.co/joelito\r\n"
] | 2022-09-24T13:28:57Z | 2022-09-28T07:13:39Z | 2022-09-28T05:48:50Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5019.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5019",
"merged_at": "2022-09-28T05:48:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5019.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5019"
} | Hi,
I updated the dataset to include additional data made available recently. When I test it locally, it seems to work. However, I get the following error with the dummy data creation:
`Dummy data generation done but dummy data test failed since splits ['train', 'validation', 'test'] have 0 examples for config 'fr'`. Do you know why this could be the case?
Cheers,
Joel | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5019/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5019/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5018/comments | https://api.github.com/repos/huggingface/datasets/issues/5018/events | https://github.com/huggingface/datasets/pull/5018 | 1,384,146,585 | PR_kwDODunzps4_hA0V | 5,018 | Create all YAML dataset_info | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5018). All of your documentation changes will be reflected on that endpoint.",
"Closing since https://github.com/huggingface/datasets/pull/4974 removed all the datasets scripts.\r\n\r\nIndividual PRs must be opened on the Hugging face Hub to add the YAML metadata"
] | 2022-09-23T18:08:15Z | 2022-10-03T17:08:05Z | 2022-10-03T17:08:05Z | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5018.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5018",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5018.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5018"
} | Following https://github.com/huggingface/datasets/pull/4926
Creates all the `dataset_info` YAML fields in the dataset cards
The JSON are also updated using the simplified backward compatible format added in https://github.com/huggingface/datasets/pull/4926
Needs https://github.com/huggingface/datasets/pull/4926 to be merged first | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5018/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5018/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5017/comments | https://api.github.com/repos/huggingface/datasets/issues/5017/events | https://github.com/huggingface/datasets/issues/5017 | 1,384,022,463 | I_kwDODunzps5SfoG_ | 5,017 | xcsr: X-CSQA simply uses english for all alleged non-english data | {
"avatar_url": "https://avatars.githubusercontent.com/u/26286291?v=4",
"events_url": "https://api.github.com/users/thesofakillers/events{/privacy}",
"followers_url": "https://api.github.com/users/thesofakillers/followers",
"following_url": "https://api.github.com/users/thesofakillers/following{/other_user}",
"gists_url": "https://api.github.com/users/thesofakillers/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thesofakillers",
"id": 26286291,
"login": "thesofakillers",
"node_id": "MDQ6VXNlcjI2Mjg2Mjkx",
"organizations_url": "https://api.github.com/users/thesofakillers/orgs",
"received_events_url": "https://api.github.com/users/thesofakillers/received_events",
"repos_url": "https://api.github.com/users/thesofakillers/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thesofakillers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thesofakillers/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thesofakillers"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @thesofakillers. Good catch. We are fixing this. "
] | 2022-09-23T16:11:54Z | 2022-09-26T10:57:31Z | 2022-09-26T10:57:31Z | NONE | null | null | null | ## Describe the bug
All the alleged non-English subcollections for the X-CSQA task in the [xcsr benchmark dataset](https://huggingface.co/datasets/xcsr) seem to be copies of the English subcollection rather than translations. This is in contrast to the data description:
> we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR
## Steps to reproduce the bug
```python
# let's say you want to load the french X-CSQA subcollection
french = datasets.load_dataset("xcsr", "X-CSQA-fr")
# for good measure, let's load english too
english = datasets.load_dataset("xcsr", "X-CSQA-en")
# let's inspect
"".join(english['test'][0]['question']['stem'])
# output: 'The people wanted to stop the parade, so what did they set up to thwart it?'
"".join(french['test'][0]['question']['stem'])
# output: 'The people wanted to stop the parade, so what did they set up to thwart it?'
# what? Why are they both in english?
# I've checked this for validation and train splits too, across many datapoints. It's all the same english dataset
# maybe i need to look better?
french['test'].unique('lang')
# output: ['en']
# no, it's all english
```
## Expected results
Accessing a subcollection in language X should return a subcollection containing samples in language X
## Actual results
Accessing a subcollection in language X returns a subcollection containing samples in English.
## Environment info
- `datasets` version: 2.5.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5017/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5017/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5016/comments | https://api.github.com/repos/huggingface/datasets/issues/5016/events | https://github.com/huggingface/datasets/pull/5016 | 1,383,883,058 | PR_kwDODunzps4_gKny | 5,016 | Fix tar extraction vuln | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-23T14:22:21Z | 2022-09-29T12:42:26Z | 2022-09-29T12:40:28Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5016.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5016",
"merged_at": "2022-09-29T12:40:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5016.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5016"
} | Fix for CVE-2007-4559
Description:
Directory traversal vulnerability in the (1) extract and (2) extractall functions in the tarfile
module in Python allows user-assisted remote attackers to overwrite arbitrary files via a .. (dot dot)
sequence in filenames in a TAR archive, a related issue to CVE-2001-1267.
I fixed it by using the solution proposed in https://stackoverflow.com/questions/10060069/safely-extract-zip-or-tar-using-python
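A sketch of that approach (a membership check before `extractall`; the merged patch also rejects symlinks, which this sketch does not handle):
```python
import os
import tarfile

def is_within_directory(directory: str, target: str) -> bool:
    # Resolve both paths and require the target to stay inside the directory
    abs_directory = os.path.abspath(directory)
    abs_target = os.path.abspath(target)
    return os.path.commonpath([abs_directory, abs_target]) == abs_directory

def safe_extract(tar: tarfile.TarFile, path: str = ".") -> None:
    for member in tar.getmembers():
        member_path = os.path.join(path, member.name)
        if not is_within_directory(path, member_path):
            raise ValueError(f"Blocked path traversal in tar member: {member.name}")
    tar.extractall(path)

with tarfile.open("archive.tar.gz") as tar:  # placeholder archive name
    safe_extract(tar, path="output_dir")
```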
It blocks extraction of files with an absolute path or double dots and symlinks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5016/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5016/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5015/comments | https://api.github.com/repos/huggingface/datasets/issues/5015/events | https://github.com/huggingface/datasets/issues/5015 | 1,383,485,558 | I_kwDODunzps5SdlB2 | 5,015 | Transfer dataset scripts to Hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Sounds good ! Can I help with anything ?"
] | 2022-09-23T08:48:10Z | 2022-10-05T07:15:57Z | 2022-10-05T07:15:57Z | MEMBER | null | null | null | Before merging:
- #4974
TODO:
- [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22)
- [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/)
- [x] PRs:
- [x] Add dataset: we should recommend transferring all additions of datasets to the Hub, under the appropriate namespace; no more additions of datasets on GitHub
- [x] Update dataset: in general, we should merge bug fixes; enhancements should be considered on a case-by-case basis, depending on whether there is a more suitable namespace on the Hub
- [ ] Issues
Finally:
- [x] #4974
Let me know what you think! :hugs: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5015/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5015/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5014/comments | https://api.github.com/repos/huggingface/datasets/issues/5014/events | https://github.com/huggingface/datasets/issues/5014 | 1,383,422,639 | I_kwDODunzps5SdVqv | 5,014 | I need to read the custom dataset in conll format | {
"avatar_url": "https://avatars.githubusercontent.com/u/39985245?v=4",
"events_url": "https://api.github.com/users/506610466/events{/privacy}",
"followers_url": "https://api.github.com/users/506610466/followers",
"following_url": "https://api.github.com/users/506610466/following{/other_user}",
"gists_url": "https://api.github.com/users/506610466/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/506610466",
"id": 39985245,
"login": "506610466",
"node_id": "MDQ6VXNlcjM5OTg1MjQ1",
"organizations_url": "https://api.github.com/users/506610466/orgs",
"received_events_url": "https://api.github.com/users/506610466/received_events",
"repos_url": "https://api.github.com/users/506610466/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/506610466/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/506610466/subscriptions",
"type": "User",
"url": "https://api.github.com/users/506610466"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi! We don't currently have a builder for parsing custom `conll` datasets, but I guess we could add one as a packaged module (similarly to what [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conll_dataset_builder.py) did). @lhoestq @albertvillanova WDYT?\r\n\r\nIn the meantime, you can use `Dataset.from_generator` to create a dataset as follows:\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# 2009 version\r\nINPUT_COLUMNS = \"ID FORM LEMMA PLEMMA POS PPOS FEAT PFEAT HEAD PHEAD DEPREL PDEPREL\".split()\r\n\r\ndef read_conll(file):\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n idx = 0\r\n with open(file) as f:\r\n for line in f:\r\n if line.startswith(\"-DOCSTART-\") or line == \"\\n\" or not line:\r\n if example[next(iter(example))]:\r\n yield idx, example\r\n idx += 1\r\n example = {col: [] for col in INPUT_COLUMNS}\r\n else:\r\n row_cols = line.split()\r\n for i, col in enumerate(example):\r\n example[col] = row_cols[i].rstrip()\r\n\r\n# (optional) pass custom features with `features=Features(...)`\r\ndset = Dataset.from_generator(read_conll, gen_kwargs={\"file\": \"path/to/conll/file\"}) \r\n``` ",
"I think we could add a dedicated builder if you think this format is general enough.",
"\r\n\r\n\r\n> I think we could add a dedicated builder if you think this format is general enough.\r\n\r\nI think its functions are incomplete. It should have to_ Conll and from_ There are two methods of conll."
] | 2022-09-23T07:49:42Z | 2022-11-02T11:57:15Z | null | NONE | null | null | null | I need to read a custom dataset in CoNLL format.
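For reference, a corrected variant of the generator sketched in the comments above: token values are appended instead of overwritten, the final sentence is flushed, and plain dicts are yielded as `Dataset.from_generator` expects. Column names follow the CoNLL-2009 layout and the file path is a placeholder:
```python
from datasets import Dataset

# CoNLL-2009 column layout (adapt to your own format)
INPUT_COLUMNS = "ID FORM LEMMA PLEMMA POS PPOS FEAT PFEAT HEAD PHEAD DEPREL PDEPREL".split()

def read_conll(file):
    example = {col: [] for col in INPUT_COLUMNS}
    with open(file, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("-DOCSTART-"):
                if example[INPUT_COLUMNS[0]]:  # a blank line ends a sentence
                    yield example
                    example = {col: [] for col in INPUT_COLUMNS}
            else:
                for col, value in zip(INPUT_COLUMNS, line.split()):
                    example[col].append(value)
    if example[INPUT_COLUMNS[0]]:  # flush the last sentence
        yield example

dset = Dataset.from_generator(read_conll, gen_kwargs={"file": "path/to/data.conll"})
```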
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5014/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5014/timeline | null | reopened | false |
https://api.github.com/repos/huggingface/datasets/issues/5013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5013/comments | https://api.github.com/repos/huggingface/datasets/issues/5013/events | https://github.com/huggingface/datasets/issues/5013 | 1,383,415,971 | I_kwDODunzps5SdUCj | 5,013 | would huggingface like publish cpp binding for datasets package ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/6143404?v=4",
"events_url": "https://api.github.com/users/mullerhai/events{/privacy}",
"followers_url": "https://api.github.com/users/mullerhai/followers",
"following_url": "https://api.github.com/users/mullerhai/following{/other_user}",
"gists_url": "https://api.github.com/users/mullerhai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mullerhai",
"id": 6143404,
"login": "mullerhai",
"node_id": "MDQ6VXNlcjYxNDM0MDQ=",
"organizations_url": "https://api.github.com/users/mullerhai/orgs",
"received_events_url": "https://api.github.com/users/mullerhai/received_events",
"repos_url": "https://api.github.com/users/mullerhai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mullerhai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mullerhai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mullerhai"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?",
"> Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?\r\n\r\nfor example ,the huggingface load_model() and load_dataset() can execute in cpp env",
"If it's a viable option for you, you can check [tch-rs](https://github.com/LaurentMazare/tch-rs) to load models in Rust. Regarding datasets, you can first download them in python and then use Arrow C++ or Rust to load them",
"If you are more adventurous, another option is to embed python calls inside c++ e.g. with `pybind11`.",
"> pybind11\r\n\r\nI think it is not the best solution"
] | 2022-09-23T07:42:49Z | 2022-09-27T03:40:30Z | null | NONE | null | null | null | Hi:
I work in a C++ environment with libtorch and would like to use Hugging Face, but there are no C++ bindings for the datasets package. Would you consider publishing C++ bindings for it?
Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5013/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5013/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5012/comments | https://api.github.com/repos/huggingface/datasets/issues/5012/events | https://github.com/huggingface/datasets/issues/5012 | 1,382,851,096 | I_kwDODunzps5SbKIY | 5,012 | Force JSON format regardless of file naming on S3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4",
"events_url": "https://api.github.com/users/junwang-wish/events{/privacy}",
"followers_url": "https://api.github.com/users/junwang-wish/followers",
"following_url": "https://api.github.com/users/junwang-wish/following{/other_user}",
"gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/junwang-wish",
"id": 112650299,
"login": "junwang-wish",
"node_id": "U_kgDOBrboOw",
"organizations_url": "https://api.github.com/users/junwang-wish/orgs",
"received_events_url": "https://api.github.com/users/junwang-wish/received_events",
"repos_url": "https://api.github.com/users/junwang-wish/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions",
"type": "User",
"url": "https://api.github.com/users/junwang-wish"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! Support for URIs like `s3://...` is not implemented yet in `data_files=`. You can use the HTTP URL instead if your data is public in the meantime"
] | 2022-09-22T18:28:15Z | 2022-09-26T09:31:38Z | null | NONE | null | null | null | I have a file on S3 created by Data Version Control; it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a JSON file. If I run
```python
dataset = load_dataset(
"json",
data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
)
```
It gives me
```
InvalidSchema: No connection adapters were found for 's3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
```
However, I cannot go ahead and change the name of the S3 file. Is there a way to "force" loading an S3 URL with a given decoder (JSON, CSV, etc.) regardless of how the S3 URL is named?
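One workaround in the meantime is to fetch the object yourself and point the JSON loader at the local copy; a minimal sketch, assuming `s3fs` is installed and AWS credentials are configured:
```python
import fsspec  # with s3fs installed, fsspec resolves s3:// URIs
from datasets import load_dataset

s3_uri = "s3://dvc/ac/badff5b134382a0f25248f1b45d7b2"
local_path = "data.json"  # give the local copy a proper extension

with fsspec.open(s3_uri, "rb") as src, open(local_path, "wb") as dst:
    dst.write(src.read())

dataset = load_dataset("json", data_files=local_path)
```
| {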
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5012/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5012/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5011/comments | https://api.github.com/repos/huggingface/datasets/issues/5011/events | https://github.com/huggingface/datasets/issues/5011 | 1,382,609,587 | I_kwDODunzps5SaPKz | 5,011 | Audio: `encode_example` fails with IndexError | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Sorry bug on my part π
Closing "
] | 2022-09-22T15:07:27Z | 2022-09-23T09:05:18Z | 2022-09-23T09:05:18Z | CONTRIBUTOR | null | null | null | ## Describe the bug
Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an IndexError. I created this dataset locally and then pushed it to the Hub at the specified URL. Thus, I expect the dataset to work out-of-the-box! Indeed, the dataset viewer functions correctly, and there were no issues when I had the dataset locally.
I don't think it's a soundfile bug, as the version matches what worked previously.
Update: the bug appeared for me on a GPU; mysteriously, on a TPU I can't reproduce it and the dataset downloads correctly...
## Steps to reproduce the bug
```python
from datasets import load_dataset
earnings22 = load_dataset("sanchit-gandhi/earnings22_split")
```
## Expected results
```
>>> earnings22
DatasetDict({
validation: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 2650
})
train: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 52006
})
test: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 2735
})
})
```
## Actual results
```
Traceback (most recent call last):
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single
writer.write(example)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 451, in write
self.write_examples_on_file()
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 409, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 508, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 231, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 197, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1795, in cast_array_to_feature
return feature.cast_storage(array)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in cast_storage
storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in <listcomp>
storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 92, in encode_example
sf.write(buffer, value["array"], value["sampling_rate"], format="wav")
File "/opt/conda/envs/hf/lib/python3.8/site-packages/soundfile.py", line 313, in write
channels = data.shape[1]
IndexError: tuple index out of range
```
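For anyone hitting a similar trace: the failure at `channels = data.shape[1]` means an audio array reached `soundfile.write` with an unexpected shape, so a quick sanity check over the raw arrays can locate the offending example. A sketch reusing the `earnings22` object loaded above (decoding every file is slow on large splits):
```python
import numpy as np

# Mono audio should come through as a non-empty 1-D array
for i, ex in enumerate(earnings22["train"]):
    arr = np.asarray(ex["audio"]["array"])
    if arr.ndim != 1 or arr.size == 0:
        print(f"row {i}: shape={arr.shape}")
```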
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
Plus:
- SoundFile version: 0.10.3.post1
cc @lhoestq @polinaeterna | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5011/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5011/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5010/comments | https://api.github.com/repos/huggingface/datasets/issues/5010/events | https://github.com/huggingface/datasets/pull/5010 | 1,382,308,799 | PR_kwDODunzps4_bB3q | 5,010 | Add deprecation warning to multilingual_librispeech dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-22T11:41:59Z | 2022-09-23T12:04:37Z | 2022-09-23T12:02:45Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5010.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5010",
"merged_at": "2022-09-23T12:02:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5010.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5010"
} | Besides the current deprecation warning in the script of `multilingual_librispeech`, this PR adds a deprecation warning to its dataset card as well.
The format of the deprecation warning is aligned with the one in the library documentation when docstrings contain the `<Deprecated/>` tag.
Related to:
- #4060 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5010/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5010/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5009/comments | https://api.github.com/repos/huggingface/datasets/issues/5009/events | https://github.com/huggingface/datasets/issues/5009 | 1,381,194,067 | I_kwDODunzps5SU1lT | 5,009 | Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly | {
"avatar_url": "https://avatars.githubusercontent.com/u/4996184?v=4",
"events_url": "https://api.github.com/users/ykl7/events{/privacy}",
"followers_url": "https://api.github.com/users/ykl7/followers",
"following_url": "https://api.github.com/users/ykl7/following{/other_user}",
"gists_url": "https://api.github.com/users/ykl7/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ykl7",
"id": 4996184,
"login": "ykl7",
"node_id": "MDQ6VXNlcjQ5OTYxODQ=",
"organizations_url": "https://api.github.com/users/ykl7/orgs",
"received_events_url": "https://api.github.com/users/ykl7/received_events",
"repos_url": "https://api.github.com/users/ykl7/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ykl7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ykl7/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ykl7"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"I think this is because some columns are mostly empty lists. In particular the train and validation splits only have empty lists for `val_ann`. Therefore the type inference doesn't know which type is inside (or it would have to scan the other splits first before knowing).\r\n\r\nYou can fix that by specifying the features types explicitly.\r\nThen you can save the feature types inside the dataset repository, so that you won't need to specify the features in subsequent calls:\r\n```python\r\nfrom datasets import load_dataset, Features, Sequence, Value\r\nfrom datasets.info import DatasetInfosDict\r\n\r\nfeatures = Features({\r\n 'narrative': Value('string'),\r\n 'question': Value('string'),\r\n 'original_sentence_for_question': Value('string'),\r\n 'narrative_lexical_overlap': Value('float64'),\r\n 'is_ques_answerable': Value('string'),\r\n 'answer': Value('string'),\r\n 'is_ques_answerable_annotator': Value('string'),\r\n 'original_narrative_form': Sequence(Value('string')),\r\n 'question_meta': Value('string'),\r\n 'helpful_sentences': Sequence(Value('int64')),\r\n 'human_eval': Value('bool'),\r\n 'val_ann': Sequence(Value('int64')),\r\n 'gram_ann': Sequence(Value('int64'))\r\n})\r\nds = load_dataset('StonyBrookNLP/tellmewhy', features=features)\r\nDatasetInfosDict({\"default\": ds[\"train\"].info}).write_to_directory(\"path/to/local/tellmewhy\")\r\n```\r\nand then after pushing the change to the dataset repository on the Hub, `load_dataset(\"StonyBrookNLP/tellmewhy\")` will work directly`",
"(Note that specifying explicit types will be made easier with https://github.com/huggingface/datasets/pull/4926)",
"`gram_ann` and `val_ann` are annotations that only exist for part of the test set. I wanted to keep all the columns consistent across all files, so I added them to train and validation as well. I'll check if removing them from those files is still compliant with this repo. Otherwise, I will do as you suggested. Thanks @lhoestq !",
"@lhoestq I followed the exact steps you described but it seems like I'm getting the same error unfortunately. Any other ideas? Thanks in advance",
"Hi ! If you move `dataset_infos.json` from `data/` to the root of your dataset repository if should work :)",
"I tried that and pushed to the [hub](https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/tree/main). Now, there is a new error.\r\n```\r\n File \"/home/yklal95/tellmewhy/src/prepare_data.py\", line 67, in main\r\n dataset = load_dataset('StonyBrookNLP/tellmewhy')\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py\", line 1746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py\", line 704, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py\", line 775, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/utils/info_utils.py\", line 33, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'/home/yklal95/tellmewhy/data/test.json', '/home/yklal95/tellmewhy/data/validation.json', '/home/yklal95/tellmewhy/data/train.json'}\r\n```\r\nNo changes were made to any of the other files and they are still on the hub. Let me know if you have any ideas @lhoestq Thanks!",
"Oh I see - the code I gave you returns local paths instead of URLs to store metadata about files to download.\r\nI opened a PR in your repo here to remove this: https://huggingface.co/datasets/StonyBrookNLP/tellmewhy/discussions/1\r\nsorry for the inconvenience !",
"It works now! Thanks a lot @lhoestq "
] | 2022-09-21T16:23:06Z | 2022-09-29T13:07:29Z | 2022-09-29T13:07:29Z | NONE | null | null | null | ## Describe the bug
I have added a new dataset with the identifier `StonyBrookNLP/tellmewhy` to the hub. When I load the individual files from my local copy using `dataset = datasets.load_dataset("json", data_files="data/train.jsonl")`, it loads the dataset correctly. However, when I try to load it from the hub, I get an error (pasted below). Additionally, `dataset = datasets.load_dataset("json", data_dir="data/")` throws the same error.
## Steps to reproduce the bug
```python
dataset = datasets.load_dataset('StonyBrookNLP/tellmewhy')
```
## Expected results
Successfully load the `StonyBrookNLP/tellmewhy` dataset.
## Actual results
```
Using custom data configuration StonyBrookNLP--tellmewhy-82712924092694ff
Downloading and preparing dataset json/StonyBrookNLP--tellmewhy to /home/yklal95/.cache/huggingface/datasets/StonyBrookNLP___json/StonyBrookNLP--tellmewhy-82712924092694ff/0.0.0/a3e658c4731e59120d44081ac10bf85dc7e1388126b92338344ce9661907f253...
Downloading data files: 100%|ββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 957.46it/s]
Extracting data files: 100%|βββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 299.14it/s]
Traceback (most recent call last):
File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 17, in <module>
main(args)
File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 11, in main
dataset = datasets.load_dataset(args.dataset_name)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 1277, in _prepare_split
writer.write_table(table)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/arrow_writer.py", line 524, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 2005, in table_cast
return cast_table_to_schema(table, schema)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1822, in cast_array_to_feature
casted_values = _c(array.values, feature.feature)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1853, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1761, in array_cast
raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
TypeError: Couldn't cast array of type int64 to null
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.15.0-121-generic-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
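For background, here is the type-inference behaviour described in the comments above, reproduced directly in PyArrow as a minimal sketch (the data is hypothetical):
```python
import pyarrow as pa

# A column containing only empty lists is inferred as list<null> ...
empty_lists = pa.array([[], []])
print(empty_lists.type)  # list<item: null>

# ... so a split whose rows do hold integers cannot be cast to that schema,
# which surfaces as "Couldn't cast array of type int64 to null".
int_lists = pa.array([[1, 2], [3]])
int_lists.cast(empty_lists.type)  # fails: integers can't be cast to the null type
```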
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5009/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5009/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5008/comments | https://api.github.com/repos/huggingface/datasets/issues/5008/events | https://github.com/huggingface/datasets/pull/5008 | 1,381,090,903 | PR_kwDODunzps4_XAc5 | 5,008 | Re-apply input columns change | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T15:09:01Z | 2022-09-22T13:57:36Z | 2022-09-22T13:55:23Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5008.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5008",
"merged_at": "2022-09-22T13:55:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5008.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5008"
} | Fixes the `filter` + `input_columns` combination, which is used in the `transformers` examples for instance.
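For context, a minimal sketch of the pattern this restores (the column name and predicate are hypothetical, not taken from the `transformers` scripts):
```python
from datasets import Dataset

ds = Dataset.from_dict({"length": [1.2, 7.5, 3.3], "text": ["a", "b", "c"]})

# With input_columns, the predicate receives only the listed column's value
# for each example instead of the full example dict.
short = ds.filter(lambda length: length < 5.0, input_columns=["length"])
print(short["length"])  # [1.2, 3.3]
```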
Revert #5006 (which in turn reverts #4971)
Fix https://github.com/huggingface/datasets/issues/4858 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5008/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5008/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5007/comments | https://api.github.com/repos/huggingface/datasets/issues/5007/events | https://github.com/huggingface/datasets/pull/5007 | 1,381,007,607 | PR_kwDODunzps4_WvFQ | 5,007 | Add some note about running the transformers ci before a release | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T14:14:25Z | 2022-09-22T10:16:14Z | 2022-09-22T10:14:06Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5007.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5007",
"merged_at": "2022-09-22T10:14:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5007.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5007"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5007/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5007/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5006/comments | https://api.github.com/repos/huggingface/datasets/issues/5006/events | https://github.com/huggingface/datasets/pull/5006 | 1,380,968,395 | PR_kwDODunzps4_Wm8z | 5,006 | Revert input_columns change | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging this one and I'll check if it fixes the `transformers` CI before doing a patch release"
] | 2022-09-21T13:49:20Z | 2022-09-21T14:14:33Z | 2022-09-21T14:11:57Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5006.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5006",
"merged_at": "2022-09-21T14:11:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5006.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5006"
} | Revert https://github.com/huggingface/datasets/pull/4971
Fix https://github.com/huggingface/datasets/issues/5005 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5006/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5006/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5005/comments | https://api.github.com/repos/huggingface/datasets/issues/5005/events | https://github.com/huggingface/datasets/issues/5005 | 1,380,952,960 | I_kwDODunzps5ST6uA | 5,005 | Release 2.5.0 breaks transformers CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Shall we revert https://github.com/huggingface/datasets/pull/4971 @mariosasko ?\r\n\r\nAnd for consistency we can update IterableDataset.map later"
] | 2022-09-21T13:39:19Z | 2022-09-21T14:11:57Z | 2022-09-21T14:11:57Z | MEMBER | null | null | null | ## Describe the bug
As reported by @lhoestq:
> see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563
this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[…]torch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55250e7da/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L482-L488)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5005/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5005/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5004/comments | https://api.github.com/repos/huggingface/datasets/issues/5004/events | https://github.com/huggingface/datasets/pull/5004 | 1,380,860,606 | PR_kwDODunzps4_WQck | 5,004 | Remove license tag file and validation | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T12:35:14Z | 2022-09-22T11:47:41Z | 2022-09-22T11:45:46Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5004.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5004",
"merged_at": "2022-09-22T11:45:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5004.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5004"
} | As requested, we are removing the validation of the licenses from `datasets` because this is done on the Hub.
Fix #4994.
Related to:
- #4926, which is removing all the validation from `datasets` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5004/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5004/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5003/comments | https://api.github.com/repos/huggingface/datasets/issues/5003/events | https://github.com/huggingface/datasets/pull/5003 | 1,380,617,353 | PR_kwDODunzps4_Vdko | 5,003 | Fix missing use_auth_token in streaming docstrings | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-21T09:27:03Z | 2022-09-21T16:24:01Z | 2022-09-21T16:20:59Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5003.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5003",
"merged_at": "2022-09-21T16:20:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5003.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5003"
} | This PR fixes docstrings:
- adds the missing `use_auth_token` param
- updates syntax of param types
- adds params to docstrings without them
- fixes return/yield types
- fixes syntax | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5003/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5003/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5002/comments | https://api.github.com/repos/huggingface/datasets/issues/5002/events | https://github.com/huggingface/datasets/issues/5002 | 1,380,589,402 | I_kwDODunzps5SSh9a | 5,002 | Dataset Viewer issue for loubnabnl/humaneval-x | {
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null | [
"It's a bug! Thanks for reporting, I'm looking at it",
"Fixed."
] | 2022-09-21T09:06:17Z | 2022-09-21T11:49:49Z | 2022-09-21T11:49:49Z | NONE | null | null | null | ### Link
https://huggingface.co/datasets/loubnabnl/humaneval-x/viewer/
### Description
The dataset has subsets, but the viewer gets stuck in the default subset even when I select another one (the data loading of the subsets works fine).
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5002/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5002/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5001/comments | https://api.github.com/repos/huggingface/datasets/issues/5001/events | https://github.com/huggingface/datasets/pull/5001 | 1,379,844,820 | PR_kwDODunzps4_TBWa | 5,001 | Support loading XML datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5001). All of your documentation changes will be reflected on that endpoint.",
"> CC: @davanstrien\r\n\r\nI should have some time to look at this on Friday :) ",
"@albertvillanova I've tried this with a few different XML datasets. One issue I've run into is getting a `KeyError` when the attributes of a field differ from the first parsed row. Unfortunately, this can come up in the ALTO XML format, for example, if you want to parse the 'string' field, which contains the text in the ALTO XML files. \r\n\r\nWhen parsing a file, this instance has no 'STYLE' attribute: \r\n\r\n```xml\r\n<TextLine HEIGHT=\"39\" WIDTH=\"295\" VPOS=\"926\" HPOS=\"247\"><String WC=\"0.4600000083\" CONTENT=\"jufquβen\" HEIGHT=\"39\" WIDTH=\"117\" VPOS=\"926\" HPOS=\"247\"/><SP WIDTH=\"14\" VPOS=\"928\" HPOS=\"365\"/><String WC=\"0.6075000167\" CONTENT=\"lβan\" HEIGHT=\"26\" WIDTH=\"50\" VPOS=\"928\" HPOS=\"380\"/><SP WIDTH=\"24\" VPOS=\"936\" HPOS=\"431\"/><String WC=\"0.4300000072\" CONTENT=\"1\" HEIGHT=\"16\" WIDTH=\"9\" VPOS=\"936\" HPOS=\"456\"/><String STYLE=\"italics\" WC=\"0.5774999857\" CONTENT=\"361.\" HEIGHT=\"25\" WIDTH=\"68\" VPOS=\"933\" HPOS=\"474\"/></TextLine>\r\n```\r\n\r\nWhereas this one which appears later in the file, does have this field: \r\n\r\n```xml\r\n<TextLine HEIGHT=\"39\" WIDTH=\"712\" VPOS=\"966\" HPOS=\"297\"><String STYLE=\"italics\" WC=\"0.6999999881\" CONTENT=\"I\" HEIGHT=\"17\" WIDTH=\"9\" VPOS=\"977\" HPOS=\"297\"/><String WC=\"0.5\" CONTENT=\"I.\" HEIGHT=\"18\" WIDTH=\"25\" VPOS=\"976\" HPOS=\"318\"/><SP WIDTH=\"24\" VPOS=\"971\" HPOS=\"344\"/><String STYLE=\"italics\" WC=\"0.3359999955\" CONTENT=\"Crade\" HEIGHT=\"26\" WIDTH=\"91\" VPOS=\"967\" HPOS=\"369\"/><SP WIDTH=\"31\" VPOS=\"971\" HPOS=\"461\"/><String STYLE=\"italics\" WC=\"0.6060000062\" CONTENT=\"PΓ©tri\" HEIGHT=\"26\" WIDTH=\"71\" VPOS=\"968\" HPOS=\"493\"/><SP WIDTH=\"23\" VPOS=\"968\" HPOS=\"565\"/><String STYLE=\"italics\" WC=\"0.612857163\" CONTENT=\"Candidi\" HEIGHT=\"27\" WIDTH=\"111\" VPOS=\"967\" HPOS=\"589\"/><SP WIDTH=\"19\" VPOS=\"967\" HPOS=\"701\"/><String STYLE=\"italics\" WC=\"0.4088888764\" CONTENT=\"Decembrii\" HEIGHT=\"28\" WIDTH=\"144\" VPOS=\"966\" HPOS=\"721\"/><SP WIDTH=\"10\" VPOS=\"968\" HPOS=\"866\"/><String STYLE=\"italics\" WC=\"0.4600000083\" CONTENT=\"in\" HEIGHT=\"25\" WIDTH=\"27\" VPOS=\"968\" HPOS=\"877\"/><SP WIDTH=\"9\" VPOS=\"967\" HPOS=\"905\"/><String STYLE=\"italics\" WC=\"0.5099999905\" CONTENT=\"funere\" HEIGHT=\"38\" WIDTH=\"94\" VPOS=\"967\" HPOS=\"915\"/></TextLine>\r\n```\r\n\r\nSince the first-seen fields define what is passed to `arrow_writer`, this causes a KeyError when the version with the extra attributes is encountered because it doesn't expect this column. \r\n\r\nSince it's important to support streaming, I'm not sure there is a nice way to detect attributes for the whole file easily in an automatic way. The two potential ways I can see of doing it.\r\n\r\n- Do an initial pass on a batch of data to have a higher chance of encountering variations in attributes before doing the arrow write. \r\n- Do a full pass on one file (and assume that this won't change across files) \r\n\r\nI think the other way of doing this would be to allow users to define expected/wanted attributes as another loading argument. This could then be used to extract the described attributes (and make them None if not found). This requires a bit more work from the user but could be helpful. For example, in the XML above, likely, most users will only want the `WC` and `CONTENT` attributes. So they could specify this upfront and avoid loading extra data they don't need or want. 
I suspect this option would make more sense than making this operation automatic for the case where attributes might change. WDYT? \r\n\r\n\r\n\r\n\r\n\r\n\r\n"
] | 2022-09-20T18:42:58Z | 2022-11-01T12:44:42Z | null | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5001.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5001",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5001.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5001"
} | CC: @davanstrien | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5001/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5001/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5000/comments | https://api.github.com/repos/huggingface/datasets/issues/5000/events | https://github.com/huggingface/datasets/issues/5000 | 1,379,709,398 | I_kwDODunzps5SPLHW | 5,000 | Dataset Viewer issue for asapp/slue | {
"avatar_url": "https://avatars.githubusercontent.com/u/56092571?v=4",
"events_url": "https://api.github.com/users/fwu-asapp/events{/privacy}",
"followers_url": "https://api.github.com/users/fwu-asapp/followers",
"following_url": "https://api.github.com/users/fwu-asapp/following{/other_user}",
"gists_url": "https://api.github.com/users/fwu-asapp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fwu-asapp",
"id": 56092571,
"login": "fwu-asapp",
"node_id": "MDQ6VXNlcjU2MDkyNTcx",
"organizations_url": "https://api.github.com/users/fwu-asapp/orgs",
"received_events_url": "https://api.github.com/users/fwu-asapp/received_events",
"repos_url": "https://api.github.com/users/fwu-asapp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fwu-asapp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fwu-asapp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fwu-asapp"
} | [] | closed | false | null | [] | null | [
"<img width=\"519\" alt=\"Capture dβeΜcran 2022-09-20 aΜ 22 33 47\" src=\"https://user-images.githubusercontent.com/1676121/191358952-1220cb7d-745a-4203-a66b-3c707b25038f.png\">\r\n\r\n```\r\nNot found.\r\n\r\nError code: SplitsResponseNotFound\r\n```\r\n\r\nhttps://datasets-server.huggingface.co/splits?dataset=asapp/slue\r\n\r\n```json\r\n{\"error\":\"Not found.\"}\r\n```",
"I just launched a refresh. It's weird, I don't see any entry for this dataset in the cache, it's a bug on our side. In order to try to understand what happened, did you change the visibility status from private to public, by any chance?",
"The dataset is being refreshed, please retry later.\r\n\r\n<img width=\"802\" alt=\"Capture dβeΜcran 2022-09-20 aΜ 22 39 46\" src=\"https://user-images.githubusercontent.com/1676121/191360072-7cc86486-4e84-4b47-8f9a-4a69fe84a5ac.png\">\r\n",
"OK. We now have an issue because the dataset cannot be streamed, and the dataset viewer relies on it.\r\n\r\nMaybe @huggingface/datasets can help:\r\n\r\n```\r\nError code: StreamingRowsError\r\nException: NotImplementedError\r\nMessage: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 337, in get_first_rows_response\r\n rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)\r\n File \"/src/services/worker/src/worker/utils.py\", line 123, in decorator\r\n return func(*args, **kwargs)\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 65, in get_rows\r\n ds = load_dataset(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1739, in load_dataset\r\n return builder_instance.as_streaming_dataset(split=split)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1025, in as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/tmp/modules-cache/datasets_modules/datasets/asapp--slue/adaa0c78233e1a1df9c2f054e690ec5fc3eaf453bd76b80fe5cbe5728e55d9b1/slue.py\", line 189, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(_DL_URLS[config_name])\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 944, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 907, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 385, in map_nested\r\n return function(data_struct)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 912, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 390, in _get_extraction_protocol\r\n raise NotImplementedError(\r\n NotImplementedError: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\n```",
"Thanks @severo, \r\n\r\nDo I have to modify the python script to support streaming so that it can be previewed?\r\nIs there a document somewhere that I can follow?\r\n",
"Hi @fwu-asapp thanks for reporting, and thanks @severo for the investigation.\r\n\r\nAs explained by @severo, the preview requires that your dataset loading script supports streaming.\r\n\r\nThere are several options here:\r\n- the easiest would be to replace the source files, archived using ZIP instead TAR: the TAR format does not allow random access while streaming, but only sequential access; the ZIP files support streaming out of the box.\r\n- alternatively, to stream TAR archives you can use `dl_manager.iter_archive`: the only prerequisite is that your \"index\" files (.tsv) should have been archived before their corresponding audio files, so while iterating the content of the TAR archive, the metadata files appear first. I think this is the case for voxpopuli tar but not for voxceleb.\r\n- if your .tsv files were not archived before their corresponding audio files (I think this is the case for voxceleb), then you should extract the .tsv files and host them separately (you can host them on the same Hugging Face Hub).\r\n - you can take as example, e.g.: https://huggingface.co/datasets/vivos/blob/main/vivos.py\r\n\r\nAs an advanced approach, you can handle both streaming and non-streaming cases separately.\r\n- as for example: https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py or https://huggingface.co/datasets/google/fleurs/blob/main/fleurs.py\r\n\r\nSee related discussion:\r\n- https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492",
"Thanks @albertvillanova for your clarification. I'll talk to my collaborators to see if we can replace those files. Let me just close this issue for now.",
"FYI, after replacing the source files with the ZIP ones, the dataset viewer works well. Thanks again to @severo and @albertvillanova for your help!",
"Great! And thank you for sharing that interesting dataset!"
] | 2022-09-20T16:45:45Z | 2022-09-27T07:04:03Z | 2022-09-21T07:24:07Z | NONE | null | null | null | ### Link
https://huggingface.co/datasets/asapp/slue/viewer/
### Description
Hi,
I wonder how to get the dataset viewer of our slue dataset to work.
Best,
Felix
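For reference, a rough sketch of the `dl_manager.iter_archive` pattern suggested in the comments above, shown as builder-method fragments (assuming the usual `import datasets` and a `_DL_URLS` mapping at module level; the file layout is hypothetical):
```python
def _split_generators(self, dl_manager):
    # download() without extract(): iter_archive then streams the TAR
    # contents sequentially, which is what the dataset viewer needs.
    archive = dl_manager.download(_DL_URLS[self.config.name])
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"files": dl_manager.iter_archive(archive)},
        )
    ]

def _generate_examples(self, files):
    # (path, file-object) pairs arrive in archive order, so the .tsv
    # metadata must be stored in the archive before the audio it describes.
    for path, f in files:
        ...
```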
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5000/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5000/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4999/comments | https://api.github.com/repos/huggingface/datasets/issues/4999/events | https://github.com/huggingface/datasets/pull/4999 | 1,379,610,030 | PR_kwDODunzps4_SQxL | 4,999 | Add EmptyDatasetError | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T15:28:05Z | 2022-09-21T12:23:43Z | 2022-09-21T12:21:24Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4999.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4999",
"merged_at": "2022-09-21T12:21:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4999.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4999"
} | examples:
from the hub:
```python
Traceback (most recent call last):
File "playground/ttest.py", line 3, in <module>
print(load_dataset("lhoestq/empty"))
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset
**config_kwargs,
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1458, in load_dataset_builder
data_files=data_files,
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1171, in dataset_module_factory
raise e1 from None
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1162, in dataset_module_factory
download_mode=download_mode,
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 760, in get_module
else get_data_patterns_in_dataset_repository(hfh_dataset_info, self.data_dir)
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/data_files.py", line 678, in get_data_patterns_in_dataset_repository
) from None
datasets.data_files.EmptyDatasetError: The dataset repository at 'lhoestq/empty' doesn't contain any data file.
```
from local directory:
```python
Traceback (most recent call last):
File "playground/ttest.py", line 3, in <module>
print(load_dataset("playground/empty"))
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset
**config_kwargs,
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1458, in load_dataset_builder
data_files=data_files,
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1107, in dataset_module_factory
path, data_dir=data_dir, data_files=data_files, download_mode=download_mode
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 625, in get_module
else get_data_patterns_locally(base_path)
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/data_files.py", line 460, in get_data_patterns_locally
raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data file") from None
datasets.data_files.EmptyDatasetError: The directory at playground/empty doesn't contain any data file
```
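A possible usage sketch for the new exception; the import path matches the tracebacks above, while the repository id is hypothetical:
```python
from datasets import load_dataset
from datasets.data_files import EmptyDatasetError

try:
    ds = load_dataset("username/empty-repo")
except EmptyDatasetError:
    print("The repository doesn't contain any data file yet.")
```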
Close https://github.com/huggingface/datasets/issues/4995 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4999/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4999/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4998/comments | https://api.github.com/repos/huggingface/datasets/issues/4998/events | https://github.com/huggingface/datasets/pull/4998 | 1,379,466,717 | PR_kwDODunzps4_Ryp3 | 4,998 | Don't add a tag on the Hub on release | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T13:54:57Z | 2022-09-20T14:11:46Z | 2022-09-20T14:08:54Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4998.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4998",
"merged_at": "2022-09-20T14:08:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4998.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4998"
} | Datasets with no namespace on the Hub have tags to redirect to the version of datasets where they come from.
I'm about to remove them all because I think it looks bad/unexpected in the UI and it's not actually useful.
Therefore I'm also disabling tagging.
Note that the CI job will be completely removed in https://github.com/huggingface/datasets/pull/4974 anyway | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4998/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4998/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4997/comments | https://api.github.com/repos/huggingface/datasets/issues/4997/events | https://github.com/huggingface/datasets/pull/4997 | 1,379,430,711 | PR_kwDODunzps4_RrBU | 4,997 | Add support for parsing JSON files in array form | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T13:31:26Z | 2022-09-20T15:42:40Z | 2022-09-20T15:40:06Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4997.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4997",
"merged_at": "2022-09-20T15:40:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4997.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4997"
} | Support parsing JSON files in the array form (top-level object is an array). For simplicity, `json.load` is used for decoding. This means the entire file is loaded into memory. If requested, we can optimize this by introducing a param similar to `lines` in [`pandas.read_json`](https://pandas.pydata.org/docs/reference/api/pandas.read_json.html), which, if set to `True`, would allow us to read in chunks.
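As a rough illustration of the two layouts involved (the file contents below are hypothetical):
```python
import json

# JSON Lines: one object per line, so rows can be decoded incrementally.
jsonl_text = '{"a": 1}\n{"a": 2}'
rows = [json.loads(line) for line in jsonl_text.splitlines()]

# Array form: the top-level value is a list, so the whole document has to
# be decoded at once -- hence json.load and the in-memory cost noted above.
array_text = '[{"a": 1}, {"a": 2}]'
rows = json.loads(array_text)
```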
Fixes https://github.com/huggingface/datasets/issues/4963
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4997/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4997/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4996/comments | https://api.github.com/repos/huggingface/datasets/issues/4996/events | https://github.com/huggingface/datasets/issues/4996 | 1,379,345,161 | I_kwDODunzps5SNyMJ | 4,996 | Dataset Viewer issue for Jean-Baptiste/wikiner_fr | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | closed | false | null | [] | null | [
"The script uses `Dataset.load_from_disk`, which as you can expect, doesn't work in streaming mode.\r\n\r\nIt would probably be more practical to load the dataset locally using `Dataset.load_from_disk` first and then `push_to_hub` to upload it in Parquet on the Hub",
"I've transferred this issue to the Hub repo: https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/discussions/3\r\n\r\nI'm closing this."
] | 2022-09-20T12:32:07Z | 2022-09-27T12:35:44Z | 2022-09-27T12:35:44Z | CONTRIBUTOR | null | null | null | ### Link
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr
### Description
```
Error code: StreamingRowsError
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response
rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)
File "/src/services/worker/src/worker/utils.py", line 123, in decorator
return func(*args, **kwargs)
File "/src/services/worker/src/worker/responses/first_rows.py", line 77, in get_rows
rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 718, in __iter__
for key, example in self._iter():
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 708, in _iter
yield from ex_iterable
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 112, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/tmp/modules-cache/datasets_modules/datasets/Jean-Baptiste--wikiner_fr/683a580ba6ec769d508f7dfc603a651667b0ed3817b1ae5bfd45f97cc024923f/wikiner_fr.py", line 165, in _generate_examples
dataset = Dataset.load_from_disk(filepath)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1210, in load_from_disk
with open(Path(dataset_path, config.DATASET_STATE_JSON_FILENAME).as_posix(), encoding="utf-8") as state_file:
FileNotFoundError: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
```
Is it an error with the dataset script, or the data itself, @huggingface/datasets?
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/tree/main
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4996/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4996/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4995/comments | https://api.github.com/repos/huggingface/datasets/issues/4995/events | https://github.com/huggingface/datasets/issues/4995 | 1,379,108,482 | I_kwDODunzps5SM4aC | 4,995 | Get a specific Exception when the dataset has no data | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | 2022-09-20T09:31:59Z | 2022-09-21T12:21:25Z | 2022-09-21T12:21:25Z | CONTRIBUTOR | null | null | null | In the dataset viewer on the Hub (https://huggingface.co/datasets/glue/viewer), we would like (https://github.com/huggingface/moon-landing/issues/3882) to show a specific message when the repository lacks any data files.
In that case, instead of showing a complex traceback, we want to show a call to action to help the user upload data.
To do that, it would be very helpful to know for sure that the repository contains no (supported) data files. It could be done by raising a custom exception, for example, `NoDataError`.
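A minimal sketch of what that could look like (only the exception name comes from this issue; the surrounding logic is assumed):
```python
class NoDataError(FileNotFoundError):
    """Raised when a dataset repository contains no supported data files."""


def check_data_files(repo_id: str, data_files: list) -> None:
    # `data_files` is whatever the file-resolution step found in the repo.
    if not data_files:
        raise NoDataError(f"No (supported) data files found in {repo_id}")
```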
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4995/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4995/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4994/comments | https://api.github.com/repos/huggingface/datasets/issues/4994/events | https://github.com/huggingface/datasets/issues/4994 | 1,379,084,015 | I_kwDODunzps5SMybv | 4,994 | delete the hardcoded license list in `datasets` | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | 2022-09-20T09:14:41Z | 2022-09-22T11:45:47Z | 2022-09-22T11:45:47Z | MEMBER | null | null | null | > Feel free to delete the license list in `datasets` [...]
>
> Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.)
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_
> [...], in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now?
_Originally posted by @julien-c in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238390659_ | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4994/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4994/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4993/comments | https://api.github.com/repos/huggingface/datasets/issues/4993/events | https://github.com/huggingface/datasets/pull/4993 | 1,379,044,435 | PR_kwDODunzps4_QYas | 4,993 | fix: avoid casting tuples after Dataset.map | {
"avatar_url": "https://avatars.githubusercontent.com/u/5697926?v=4",
"events_url": "https://api.github.com/users/szmoro/events{/privacy}",
"followers_url": "https://api.github.com/users/szmoro/followers",
"following_url": "https://api.github.com/users/szmoro/following{/other_user}",
"gists_url": "https://api.github.com/users/szmoro/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/szmoro",
"id": 5697926,
"login": "szmoro",
"node_id": "MDQ6VXNlcjU2OTc5MjY=",
"organizations_url": "https://api.github.com/users/szmoro/orgs",
"received_events_url": "https://api.github.com/users/szmoro/received_events",
"repos_url": "https://api.github.com/users/szmoro/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/szmoro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szmoro/subscriptions",
"type": "User",
"url": "https://api.github.com/users/szmoro"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T08:45:16Z | 2022-09-20T16:11:27Z | 2022-09-20T13:08:29Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4993.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4993",
"merged_at": "2022-09-20T13:08:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4993.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4993"
} | This PR updates features.py to avoid casting tuples to lists when reading the results of Dataset.map as suggested by @lhoestq [here](https://github.com/huggingface/datasets/issues/4676#issuecomment-1187371367) in https://github.com/huggingface/datasets/issues/4676.
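For context, a minimal sketch of the pattern this change touches (an assumed example, not code from the PR):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

# A map function whose output contains tuples; previously, features.py
# walked such outputs and cast the tuples to lists while reading them.
out = ds.map(lambda ex: {"pair": (ex["a"], ex["a"] + 1)})
```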
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4993/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4992/comments | https://api.github.com/repos/huggingface/datasets/issues/4992/events | https://github.com/huggingface/datasets/pull/4992 | 1,379,031,842 | PR_kwDODunzps4_QVw4 | 4,992 | Support streaming iwslt2017 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T08:35:41Z | 2022-09-20T09:27:55Z | 2022-09-20T09:15:24Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4992.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4992",
"merged_at": "2022-09-20T09:15:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4992.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4992"
} | Support streaming iwslt2017 dataset.
Once this PR is merged:
- [x] Remove old ".tgz" data files from the Hub. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4992/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4992/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4991/comments | https://api.github.com/repos/huggingface/datasets/issues/4991/events | https://github.com/huggingface/datasets/pull/4991 | 1,378,898,752 | PR_kwDODunzps4_P5hI | 4,991 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-20T06:42:07Z | 2022-09-22T12:25:32Z | 2022-09-20T07:37:30Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4991.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4991",
"merged_at": "2022-09-20T07:37:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4991.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4991"
} | Fix missing tags in dataset cards:
- aeslc
- empathetic_dialogues
- event2Mind
- gap
- iwslt2017
- newsgroup
- qa4mre
- scicite
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921
- #4931
- #4979 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4991/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4991/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4990/comments | https://api.github.com/repos/huggingface/datasets/issues/4990/events | https://github.com/huggingface/datasets/issues/4990 | 1,378,120,806 | I_kwDODunzps5SJHRm | 4,990 | "no-token" is passed to `huggingface_hub` when token is `None` | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Hi @Wauplin, thanks for raising this potential issue.\r\n\r\nThe choice of passing `\"no-token\"` instead of `None` was made in this PR:\r\n- #4536 \r\n\r\nAccording to the PR description, the reason why it is passed is to avoid that `HfApi.dataset_info` uses the local token when no token should be used.",
"Hi @albertvillanova , thanks for finding the original issue :+1: \r\n\r\nAs of next release of `huggingface_hub`, the `token` argument will be deprecated in favor of the `use_auth_token` argument in `dataset_info` method. This change as been done by @SBrandeis in https://github.com/huggingface/huggingface_hub/pull/928. `use_auth_token` is a bit different and allow the case \"don't sent the cached token by default\".\r\n\r\nIf you want to strictly avoid sending the cached token from `datasets`, you can use:\r\n```py\r\n# token=token if token else \"no-token\", <- will fail because token is not valid\r\n\r\nuse_auth_token=token if token else False, # using the new `use_auth_token` parameter\r\n```\r\n\r\nAnd as a note, I am currently updating the \"don't send the cached token by default\"-rule to \"don't send the cached token on public repos by default but use it in private ones\" in https://github.com/huggingface/huggingface_hub/pull/1064. This will not change the fact that `use_auth_token=False` doesn't send the token at all.\r\n",
"What is current strategy in term of updating `huggingface_hub` version in `datasets` ? I don't want to break stuff in the next release so let's find a proper solution :) ",
"As soon as `token` is deprecated and hfh has a new release, we'll update `datasets` to use the new argument instead. Does it sound good to you ?",
"Perfect :ok_hand: ",
"Hi @Wauplin, thanks for the warning about the deprecation of `token` in favor of `use_auth_token`.\r\n\r\nIndeed, in datasets we use internally `use_auth_token`, which in this case was transformed to `token` to call `HfApi.dataset_info`:\r\nhttps://github.com/huggingface/datasets/blob/1a9385d7cc8a3241b44015145ef56a230fdadc51/src/datasets/load.py#L747\r\n\r\nTherefore, for the new hfh release, the fix will be trivial: we will pass directly `use_auth_token`.\r\n\r\nAs discussed during our meeting yesterday, due to the fact that at datasets we support multiple hfh versions, I think we should handle passing `token` or `use_auth_token` depending on the hfh version."
] | 2022-09-19T15:14:40Z | 2022-09-30T09:16:00Z | 2022-09-30T09:16:00Z | CONTRIBUTOR | null | null | null | ## Describe the bug
In the two lines listed below, a token is passed to `huggingface_hub` to get information about a dataset. If no token is provided, the string `"no-token"` is passed instead. What is the purpose of this? If there is no real one, I would prefer that the `None` value be sent directly and handled by `huggingface_hub`. I feel this only works today because we assume the token will never be validated.
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121
## Expected results
Pass `token=None` to `huggingface_hub`.
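For illustration, following the `use_auth_token` suggestion from the comments above, the call could look roughly like this (a hypothetical sketch, not current `datasets` code):
```python
from huggingface_hub import HfApi

api = HfApi()
token = None  # no token provided by the caller

# Let huggingface_hub decide what to do when no token is given,
# instead of sending the placeholder string "no-token".
info = api.dataset_info("glue", use_auth_token=token if token else False)
```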
## Actual results
`token="no-token"` is passed.
## Environment info
`huggingface_hub v0.10.0dev` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4990/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4990/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4989/comments | https://api.github.com/repos/huggingface/datasets/issues/4989/events | https://github.com/huggingface/datasets/issues/4989 | 1,376,832,233 | I_kwDODunzps5SEMrp | 4,989 | Running add_column() seems to corrupt existing sequence-type column info | {
"avatar_url": "https://avatars.githubusercontent.com/u/93728165?v=4",
"events_url": "https://api.github.com/users/derek-rocheleau/events{/privacy}",
"followers_url": "https://api.github.com/users/derek-rocheleau/followers",
"following_url": "https://api.github.com/users/derek-rocheleau/following{/other_user}",
"gists_url": "https://api.github.com/users/derek-rocheleau/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/derek-rocheleau",
"id": 93728165,
"login": "derek-rocheleau",
"node_id": "U_kgDOBZYtpQ",
"organizations_url": "https://api.github.com/users/derek-rocheleau/orgs",
"received_events_url": "https://api.github.com/users/derek-rocheleau/received_events",
"repos_url": "https://api.github.com/users/derek-rocheleau/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/derek-rocheleau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/derek-rocheleau/subscriptions",
"type": "User",
"url": "https://api.github.com/users/derek-rocheleau"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Nevermind, I was incorrect."
] | 2022-09-17T17:42:05Z | 2022-09-19T12:54:54Z | 2022-09-19T12:54:54Z | NONE | null | null | null | I have a dataset that contains a column ("foo") that is a sequence type of length 4. So when I run .to_pandas() on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. So the 1st row of the dataframe might look like:
    ds = load_dataset(...)
    df = ds.to_pandas()

    df:
    foo_0 | foo_1 | foo_2 | foo_3
    0.0   | 1.0   | 2.0   | 3.0
If I run `.add_column("new_col", data)` on the dataset and then `.to_pandas()` on the new dataset, the resulting dataframe contains only 2 columns - foo, new_col. The values in column foo are lists of length 4 - the 4 elements that should have been split into separate columns. The dataframe's 1st row would be:
    ds = load_dataset(...)
    new_ds = ds.add_column("new_col", data)
    df = new_ds.to_pandas()

    df:
    foo                  | new_col
    [0.0, 1.0, 2.0, 3.0] | new_val
I've explored the 2 datasets in a debugger and haven't noticed any changes to any attributes related to the foo column, but I can't determine why the dataframes are so different. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4989/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4989/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4988/comments | https://api.github.com/repos/huggingface/datasets/issues/4988/events | https://github.com/huggingface/datasets/issues/4988 | 1,376,096,584 | I_kwDODunzps5SBZFI | 4,988 | Add `IterableDataset.from_generator` to the API | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hamid-vakilzadeh",
"id": 56002455,
"login": "hamid-vakilzadeh",
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hamid-vakilzadeh"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/56002455?v=4",
"events_url": "https://api.github.com/users/hamid-vakilzadeh/events{/privacy}",
"followers_url": "https://api.github.com/users/hamid-vakilzadeh/followers",
"following_url": "https://api.github.com/users/hamid-vakilzadeh/following{/other_user}",
"gists_url": "https://api.github.com/users/hamid-vakilzadeh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hamid-vakilzadeh",
"id": 56002455,
"login": "hamid-vakilzadeh",
"node_id": "MDQ6VXNlcjU2MDAyNDU1",
"organizations_url": "https://api.github.com/users/hamid-vakilzadeh/orgs",
"received_events_url": "https://api.github.com/users/hamid-vakilzadeh/received_events",
"repos_url": "https://api.github.com/users/hamid-vakilzadeh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hamid-vakilzadeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamid-vakilzadeh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hamid-vakilzadeh"
}
] | null | [
"#take",
"Thanks @hamid-vakilzadeh ! Let us know if you have some questions or if we can help",
"Thank you! I certainly will reach out if I need any help."
] | 2022-09-16T15:19:41Z | 2022-10-05T12:10:49Z | 2022-10-05T12:10:49Z | CONTRIBUTOR | null | null | null | We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator.
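A sketch of what the new method could look like from the user's side (hypothetical, mirroring the `Dataset.from_generator` signature):
```python
from datasets import IterableDataset

def gen():
    for i in range(3):
        yield {"text": f"example {i}"}

ids = IterableDataset.from_generator(gen)
for example in ids:
    print(example)
```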
cc @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4988/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4988/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4987/comments | https://api.github.com/repos/huggingface/datasets/issues/4987/events | https://github.com/huggingface/datasets/pull/4987 | 1,376,006,477 | PR_kwDODunzps4_GlIu | 4,987 | Embed image/audio data in dl_and_prepare parquet | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-16T14:09:27Z | 2022-09-16T16:24:47Z | 2022-09-16T16:22:35Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4987.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4987",
"merged_at": "2022-09-16T16:22:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4987.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4987"
} | Embed the bytes of the image or audio files in the Parquet files directly, instead of having a "path" that points to a local file.
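For context, "embedding" here means roughly the following (an assumed illustration, not the PR's actual code):
```python
# Replace the local-path reference with the file's raw bytes so that
# each Parquet row is self-contained.
def embed_media(example, column="image"):
    with open(example[column]["path"], "rb") as f:
        example[column] = {"bytes": f.read(), "path": None}
    return example

# e.g. ds = ds.map(embed_media) before writing the Parquet shards
```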
Indeed Parquet files are often used to share data or to be used by workers that may not have access to the local files. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4987/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4986/comments | https://api.github.com/repos/huggingface/datasets/issues/4986/events | https://github.com/huggingface/datasets/pull/4986 | 1,375,895,035 | PR_kwDODunzps4_GNSd | 4,986 | [doc] Fix broken snippet that had too many quotes | {
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tomaarsen",
"id": 37621491,
"login": "tomaarsen",
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tomaarsen"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Spent the day familiarising myself with the huggingface line of products, and happened to run into some small issues here and there. Magically, I've found exactly one small issue in `transformers`, one in `accelerate` and now one in `datasets`, hah!\r\n\r\nAs for this PR, the issue seems solved according to the [new PR documentation](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4986/en/process#map):\r\n![image](https://user-images.githubusercontent.com/37621491/190646405-6afa06fa-9eac-48f6-ab30-2677944fb7b6.png)\r\n"
] | 2022-09-16T12:41:07Z | 2022-09-16T22:12:21Z | 2022-09-16T17:32:14Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4986.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4986",
"merged_at": "2022-09-16T17:32:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4986.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4986"
} | Hello!
### Pull request overview
* Fix broken snippet in https://huggingface.co/docs/datasets/main/en/process that has too many quotes
### Details
The snippet in question can be found here: https://huggingface.co/docs/datasets/main/en/process#map
This screenshot shows the issue: there is one quote too many, causing the snippet to be colored incorrectly:
![image](https://user-images.githubusercontent.com/37621491/190640627-f7587362-0e44-4464-a5d1-a0b98df6986f.png)
The change speaks for itself.
Thank you for the detailed documentation, by the way.
- Tom Aarsen
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4986/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4986/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4985/comments | https://api.github.com/repos/huggingface/datasets/issues/4985/events | https://github.com/huggingface/datasets/pull/4985 | 1,375,807,768 | PR_kwDODunzps4_F6kU | 4,985 | Prefer split patterns from directories over split patterns from filenames | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Can we merge this one since the issue this PR fixes was reported for the second time? I also think we don't need a test for this simple change.",
"@mariosasko sure! could you please approve it? ",
"Hi there @polinaeterna @mariosasko! I have installed 5.2.3.dev0, which should have this fix. Unfortunately, I am still getting the error:\r\n`ValueError: Unknown split \"validation\". Should be one of ['train'].` When I call `load_dataset(\"csv\", data_files=files, split=split)`\r\n\r\nAny help would be greatly appreciated!"
] | 2022-09-16T11:20:40Z | 2022-11-02T11:54:28Z | 2022-09-29T08:07:49Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4985.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4985",
"merged_at": "2022-09-29T08:07:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4985.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4985"
} | related to https://github.com/huggingface/datasets/issues/4895
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4985/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4985/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4984/comments | https://api.github.com/repos/huggingface/datasets/issues/4984/events | https://github.com/huggingface/datasets/pull/4984 | 1,375,690,330 | PR_kwDODunzps4_FhTm | 4,984 | docs: βοΈ add links to the Datasets API | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, thanks @lhoestq. I'll close this PR, and come back to it with @stevhliu once we work on https://github.com/huggingface/datasets-server/issues/568"
] | 2022-09-16T09:34:12Z | 2022-09-16T13:10:14Z | 2022-09-16T13:07:33Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4984.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4984",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4984.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4984"
} | I added some links to the Datasets API in the docs. See https://github.com/huggingface/datasets-server/pull/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs.
I'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm interested in ideas for integrating the API into these docs more thoroughly without overdoing it. cc @lhoestq @julien-c @albertvillanova @stevhliu. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4984/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4984/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4983/comments | https://api.github.com/repos/huggingface/datasets/issues/4983/events | https://github.com/huggingface/datasets/issues/4983 | 1,375,667,654 | I_kwDODunzps5R_wXG | 4,983 | How to convert torch.utils.data.Dataset to huggingface dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/77595952?v=4",
"events_url": "https://api.github.com/users/DEROOCE/events{/privacy}",
"followers_url": "https://api.github.com/users/DEROOCE/followers",
"following_url": "https://api.github.com/users/DEROOCE/following{/other_user}",
"gists_url": "https://api.github.com/users/DEROOCE/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DEROOCE",
"id": 77595952,
"login": "DEROOCE",
"node_id": "MDQ6VXNlcjc3NTk1OTUy",
"organizations_url": "https://api.github.com/users/DEROOCE/orgs",
"received_events_url": "https://api.github.com/users/DEROOCE/received_events",
"repos_url": "https://api.github.com/users/DEROOCE/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DEROOCE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DEROOCE/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DEROOCE"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi! I think you can use the newly-added `from_generator` method for that:\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n ## or if it's an IterableDataset\r\n # for ex in torch_dataset:\r\n # yield ex\r\n\r\ndset = Dataset.from_generator(gen)\r\n```",
"Maybe `Dataset.from_list` can work as well no ?\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndset = Dataset.from_list(torch_dataset)\r\n```",
"> ```python\r\n> from datasets import Dataset\r\n> \r\n> def gen():\r\n> for idx in len(torch_dataset):\r\n> yield torch_dataset[idx] # this has to be a dictionary\r\n> ## or if it's an IterableDataset\r\n> # for ex in torch_dataset:\r\n> # yield ex\r\n> \r\n> dset = Dataset.from_generator(gen)\r\n> ```\r\n\r\nI try to use `Dataset.from_generator()` method, and it returns an error:\r\n```bash\r\nAttributeError: type object 'Dataset' has no attribute 'from_generator'\r\n```\r\nAnd I think it maybe the version of my datasets package is out-of-date, so I update it\r\n```bash\r\npip install --upgrade datasets\r\n```\r\nBut after that, the code still return the above Error. ",
"> ```python\r\n> dset = Dataset.from_list(torch_dataset)\r\n> ```\r\n\r\nIt seems that Dataset also has no `from_list` method π\r\n```bash\r\nAttributeError: type object 'Dataset' has no attribute 'from_list'\r\n```",
"> I look through the huggingface dataset docs, and it seems that there is no offical support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:\r\n> \r\n> ```python\r\n> from datasets import Dataset\r\n> data = [[1, 2],[3, 4]]\r\n> ds = Dataset.from_dict({\"data\": data})\r\n> ds = ds.with_format(\"torch\")\r\n> ds[0]\r\n> ds[:2]\r\n> ```\r\n> \r\n> So is there something I miss, or there IS no function to convert `torch.utils.data.Dataset` to huggingface dataset. If so, is there any way to do this convert? Thanks.\r\n\r\nMy dummy code is like:\r\n```python\r\nimport os\r\nimport json\r\nfrom torch.utils import data\r\nimport datasets\r\n\r\ndef gen(torch_dataset):\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n\r\nclass MyDataset(data.Dataset):\r\n def __init__(self, path):\r\n self.dict = []\r\n for line in open(path, 'r', encoding='utf-8'):\r\n j_dict = json.loads(line)\r\n self.dict.append(j_dict['context'])\r\n \r\n def __getitem__(self, idx):\r\n return self.dict[idx]\r\n\r\n def __len__(self):\r\n return len(self.dict)\r\n\r\nroot_path = os.path.dirname(os.path.abspath(__file__))\r\npath = os.path.join(root_path, 'dataset', 'train.json')\r\ntorch_dataset = MyDataset(path)\r\n\r\ndit = []\r\nfor line in open(path, 'r', encoding='utf-8'):\r\n j_dict = json.loads(line)\r\n dit.append(j_dict['context'])\r\ndset1 = datasets.Dataset.from_list(dit)\r\nprint(dset1)\r\ndset2 = datasets.Dataset.from_generator(gen)\r\nprint(dset2)\r\n```",
"We're releasing `from_generator` and `from_list` today :)\r\nIn the meantime you can play with them by installing `datasets` from source",
"> We're releasing `from_generator` and `from_list` today :) In the meantime you can play with them by installing `datasets` from source\r\n\r\nThanks a lot for your work!"
] | 2022-09-16T09:15:10Z | 2022-09-20T11:23:43Z | 2022-09-20T11:23:43Z | NONE | null | null | null | I looked through the huggingface dataset docs, and it seems that there is no officially supported function to convert a `torch.utils.data.Dataset` to a huggingface dataset. However, there is a way to convert a huggingface dataset to a `torch.utils.data.Dataset`, like below:
```python
from datasets import Dataset

data = [[1, 2], [3, 4]]
ds = Dataset.from_dict({"data": data})
ds = ds.with_format("torch")  # index results are returned as torch tensors
print(ds[0])   # first example
print(ds[:2])  # first two examples
```
So is there something I'm missing, or is there really no function to convert a `torch.utils.data.Dataset` to a huggingface dataset? If so, is there any way to do this conversion?
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4983/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4983/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4982/comments | https://api.github.com/repos/huggingface/datasets/issues/4982/events | https://github.com/huggingface/datasets/issues/4982 | 1,375,604,693 | I_kwDODunzps5R_g_V | 4,982 | Create dataset_infos.json with VALIDATION and TEST splits | {
"avatar_url": "https://avatars.githubusercontent.com/u/26695348?v=4",
"events_url": "https://api.github.com/users/skalinin/events{/privacy}",
"followers_url": "https://api.github.com/users/skalinin/followers",
"following_url": "https://api.github.com/users/skalinin/following{/other_user}",
"gists_url": "https://api.github.com/users/skalinin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/skalinin",
"id": 26695348,
"login": "skalinin",
"node_id": "MDQ6VXNlcjI2Njk1MzQ4",
"organizations_url": "https://api.github.com/users/skalinin/orgs",
"received_events_url": "https://api.github.com/users/skalinin/received_events",
"repos_url": "https://api.github.com/users/skalinin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/skalinin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skalinin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/skalinin"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"@mariosasko could you help me with this issue? we've started the discussion from [here](https://github.com/huggingface/datasets/issues/4895#issuecomment-1248227130)",
"Hi again! Can you please pass the directory name containing the dataset script instead of the script name to `datasets-cli test`?",
"Yes, it worked! thanks a lot"
] | 2022-09-16T08:21:19Z | 2022-09-28T07:59:39Z | 2022-09-28T07:59:39Z | NONE | null | null | null | The problem is described in that [issue](https://github.com/huggingface/datasets/issues/4895#issuecomment-1247975569).
> When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error:
> ValueError: Unknown split "test". Should be one of ['train'].
>
> The data_infos.json is created perfectly fine when I use only one split - datasets.Split.TRAIN
>
> You can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch)
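For reference, a loading script that declares all three splits usually has a `_split_generators` shaped roughly like this (a hypothetical sketch; `_URLS` and its keys are assumptions):
```python
import datasets

# A method of the GeneratorBasedBuilder subclass in the loading script.
def _split_generators(self, dl_manager):
    data_files = dl_manager.download_and_extract(_URLS)  # assumed: split name -> URL
    return [
        datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_files["train"]}),
        datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_files["validation"]}),
        datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_files["test"]}),
    ]
```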
I tried to clear the cache folder; then I got another error. I run:
```
git clone https://huggingface.co/datasets/sberbank-ai/Peter
cd Peter
git checkout add_splits # switch to the add_splits branch
rm dataset_infos.json # remove local dataset_infos.json
rm -r ~/.cache/huggingface # remove cached dataset_infos.json
datasets-cli test Peter.py --save_infos --all_configs # trying to create new dataset_infos.json
```
The error message:
```
Using custom data configuration default
Testing builder 'default' (1/1)
Downloading and preparing dataset peter/default to /Users/kalinin/.cache/huggingface/datasets/peter/default/0.0.0/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d...
Downloading data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 4/4 [00:00<00:00, 5160.63it/s]
Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last):
File "/usr/local/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/usr/local/lib/python3.9/site-packages/datasets/commands/test.py", line 137, in run
builder.download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 771, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/kalinin/.cache/huggingface/modules/datasets_modules/datasets/Peter/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d/Peter.py", line 23, in _split_generators
data_files = dl_manager.download_and_extract(_URLS)
File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 431, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 403, in extract
extracted_paths = map_nested(
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 393, in map_nested
mapped = [
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 394, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 330, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 213, in cached_path
output_path = ExtractManager(cache_dir=download_config.cache_dir).extract(
File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 46, in extract
self.extractor.extract(input_path, output_path, extractor_format)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 263, in extract
with FileLock(lock_path):
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 399, in __init__
max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax
FileNotFoundError: [Errno 2] No such file or directory: ''
Exception ignored in: <function BaseFileLock.__del__ at 0x11caeec10>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 328, in __del__
self.release(force=True)
File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 303, in release
with self._thread_lock:
AttributeError: 'UnixFileLock' object has no attribute '_thread_lock'
Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]
```
Can you help me please?
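For reference, the resolution from the comment thread was to pass the directory containing the dataset script to `datasets-cli test`, not the script file itself — e.g., run from the parent directory of the clone:

```
datasets-cli test Peter --save_infos --all_configs
```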
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.5
- PyArrow version: 9.0.0
- Pandas version: 1.2.4
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4982/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4981/comments | https://api.github.com/repos/huggingface/datasets/issues/4981/events | https://github.com/huggingface/datasets/issues/4981 | 1,375,086,773 | I_kwDODunzps5R9ii1 | 4,981 | Can't create a dataset with `float16` features | {
"avatar_url": "https://avatars.githubusercontent.com/u/15098095?v=4",
"events_url": "https://api.github.com/users/dconathan/events{/privacy}",
"followers_url": "https://api.github.com/users/dconathan/followers",
"following_url": "https://api.github.com/users/dconathan/following{/other_user}",
"gists_url": "https://api.github.com/users/dconathan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dconathan",
"id": 15098095,
"login": "dconathan",
"node_id": "MDQ6VXNlcjE1MDk4MDk1",
"organizations_url": "https://api.github.com/users/dconathan/orgs",
"received_events_url": "https://api.github.com/users/dconathan/received_events",
"repos_url": "https://api.github.com/users/dconathan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dconathan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dconathan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dconathan"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Hi @dconathan, thanks for reporting.\r\n\r\nWe rely on Arrow as a backend, and as far as I know currently support for `float16` in Arrow is not fully implemented in Python (C++), hence the `ArrowNotImplementedError` you get.\r\n\r\nSee, e.g.: https://arrow.apache.org/docs/status.html?highlight=float16#data-types",
"Thanks for the linkβ¦. didnβt realize arrow didnβt support it yet. Should it be removed from https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/main_classes#datasets.Value until Arrow supports it?",
"Yes, you are right: maybe we should either remove it from our docs or add a comment explaining the issue.\r\n\r\nThe thing is that in Arrow it is partially supported: you can create `float16` values, but you can't cast them from/to other types. And current implementation of `Value` always tries to perform a cast from `float64` to `float16`.",
"Maybe we can just add a note in the `Value` documentation ?"
] | 2022-09-15T21:03:24Z | 2022-09-26T09:34:50Z | null | NONE | null | null | null | ## Describe the bug
I can't create a dataset with `float16` features.
I understand from the traceback that this is a `pyarrow` error, but I can't find anything in the `datasets` documentation about how to do this successfully. Is it actually supported? I've tried older versions of `pyarrow` as well, with the same exact error.
The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this since it's not necessary in the `numpy` and `torch` cases?
Thanks!
## Steps to reproduce the bug
All of the following raise the following error with the same exact (as far as I can tell) traceback:
```python
ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```
```python
from datasets import Dataset, Features, Value

# from plain Python floats
Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16")))

# from a NumPy float16 array
import numpy as np
Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16")))

# from a torch float16 tensor
import torch
Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16")))
```
## Expected results
A dataset with `float16` features is successfully created.
## Actual results
```python
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
Cell In [14], line 1
----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16")))
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split)
865 mapping = features.encode_batch(mapping)
866 mapping = {
867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col)
868 for col, data in mapping.items()
869 }
--> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping)
871 if info.features is None:
872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()})
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs)
734 @classmethod
735 def from_pydict(cls, *args, **kwargs):
736 """
737 Construct a Table from Arrow arrays or columns
738
(...)
748 :class:`datasets.table.Table`:
749 """
--> 750 return cls(pa.Table.from_pydict(*args, **kwargs))
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type)
192 # otherwise we can finally use the user's type
193 elif type is not None:
194 # We use cast_array_to_feature to support casting to custom types like Audio and Image
195 # Also, when trying type "string", we don't want to convert integers or floats to "string".
196 # We only do it if trying_type is False - since this is what the user asks for.
--> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
198 return out
199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1682 else:
-> 1683 return func(array, *args, **kwargs)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str)
1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
1852 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs)
1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1682 else:
-> 1683 return func(array, *args, **kwargs)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1762, in array_cast(array, pa_type, allow_number_to_str)
1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type):
1761 raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}")
-> 1762 return array.cast(pa_type)
1763 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:919, in pyarrow.lib.Array.cast()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/compute.py:389, in cast(arr, target_type, safe, options)
387 else:
388 options = CastOptions.safe(target_type)
--> 389 return call_function("cast", [arr], options)
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:560, in pyarrow._compute.call_function()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:355, in pyarrow._compute.Function.call()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float
```
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4981/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4981/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4980/comments | https://api.github.com/repos/huggingface/datasets/issues/4980/events | https://github.com/huggingface/datasets/issues/4980 | 1,374,868,083 | I_kwDODunzps5R8tJz | 4,980 | Make `pyarrow` optional | {
"avatar_url": "https://avatars.githubusercontent.com/u/240344?v=4",
"events_url": "https://api.github.com/users/KOLANICH/events{/privacy}",
"followers_url": "https://api.github.com/users/KOLANICH/followers",
"following_url": "https://api.github.com/users/KOLANICH/following{/other_user}",
"gists_url": "https://api.github.com/users/KOLANICH/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KOLANICH",
"id": 240344,
"login": "KOLANICH",
"node_id": "MDQ6VXNlcjI0MDM0NA==",
"organizations_url": "https://api.github.com/users/KOLANICH/orgs",
"received_events_url": "https://api.github.com/users/KOLANICH/received_events",
"repos_url": "https://api.github.com/users/KOLANICH/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KOLANICH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOLANICH/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KOLANICH"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"The whole datasets library is pretty much a wrapper to pyarrow (just take a look at some of the source for a Dataset) https://github.com/huggingface/datasets/blob/51aef08ad7053c0bfe8f9a961207b26df15850d3/src/datasets/arrow_dataset.py#L639 \r\n\r\nI think removing the pyarrow dependency would involve a complete rewrite / a different library with minimal functionality (datasets-lite ?)",
"Thanks for the proposal, @KOLANICH. And also thanks for your answer, @dconathan.\r\n\r\nIndeed, we are using `pyarrow` as the backend for our datasets, in order to cache them and also allow memory-mapping (using datasets larger than your RAM memory).\r\n\r\nOne way to avoid using `pyarrow` could be loading the datasets in streaming mode, by passing `streaming=True` to `load_dataset`. This way you basically get a generator for the dataset; nothing is downloaded, nor cached. ",
"Thanks for the info. Could `datasets` then be made optional for `transformers` instead? I used `transformers` only to deal with pretrained models to deploy them (convert to ONNX, and then I use TVM), so I don't really need `pyarrow` and `datasets` by now.\r\n"
] | 2022-09-15T17:38:03Z | 2022-09-16T17:23:47Z | 2022-09-16T17:23:47Z | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
Is `pyarrow` really needed for every dataset?
**Describe the solution you'd like**
It is made optional.
**Describe alternatives you've considered**
Likely, no.
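A minimal sketch of the streaming mode suggested in the comments — note that installing `datasets` still pulls in `pyarrow` today; streaming only avoids materializing the Arrow cache:

```python
from datasets import load_dataset

# Nothing is downloaded up front and no Arrow cache is written;
# examples are yielded lazily as you iterate.
ds = load_dataset("squad", split="train", streaming=True)
print(next(iter(ds)))
```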
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4980/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4980/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4979/comments | https://api.github.com/repos/huggingface/datasets/issues/4979/events | https://github.com/huggingface/datasets/pull/4979 | 1,374,820,758 | PR_kwDODunzps4_CouM | 4,979 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-15T16:51:03Z | 2022-09-22T12:37:55Z | 2022-09-15T17:12:09Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4979.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4979",
"merged_at": "2022-09-15T17:12:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4979.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4979"
} | Fix missing tags in dataset cards:
- amazon_us_reviews
- art
- discofuse
- indic_glue
- ubuntu_dialogs_corpus
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921
- #4931 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4979/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4979/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4978/comments | https://api.github.com/repos/huggingface/datasets/issues/4978/events | https://github.com/huggingface/datasets/pull/4978 | 1,374,271,504 | PR_kwDODunzps4_Axnh | 4,978 | Update IndicGLUE download links | {
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sumanthd17",
"id": 28291870,
"login": "sumanthd17",
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sumanthd17"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-15T10:05:57Z | 2022-09-15T22:00:20Z | 2022-09-15T21:57:34Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4978",
"merged_at": "2022-09-15T21:57:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4978"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4978/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4977/comments | https://api.github.com/repos/huggingface/datasets/issues/4977/events | https://github.com/huggingface/datasets/issues/4977 | 1,372,962,157 | I_kwDODunzps5R1b1t | 4,977 | Providing dataset size | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi @sashavor, thanks for your suggestion.\r\n\r\nUntil now we have the CLI command \r\n```\r\ndatasets-cli test datasets/<your-dataset-folder> --save_infos --all_configs\r\n```\r\nthat generates the `dataset_infos.json` with the size of the downloaded dataset, among other information.\r\n\r\nWe are currently in the middle of removing those JSON files and putting their information directly in the header of the `README.md` (as YAML tags). Normally, the CLI command should continue working but saving its output to the dataset card instead. See:\r\n- #4926",
"Additionally, the download size can be inferred by doing HEAD requests to the files to be downloaded. And for files hosted on the hub you can even get the file sizes using the Hub API",
"Amazing @albertvillanova ! I think just having that information visible in the dataset info (without having to do any requests/additional coding) would be really useful :hugs: "
] | 2022-09-14T13:09:27Z | 2022-09-15T16:03:58Z | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Especially for big datasets like [LAION](https://huggingface.co/datasets/laion/laion2B-en/), it's hard to know the exact download size (there are many files, and you don't see their sizes until they are downloaded).
**Describe the solution you'd like**
Auto-populating the downloaded dataset size on the dataset page would be really useful, including that of each split (when there are some).
**Describe alternatives you've considered**
People should be adding this to dataset cards, but I don't think that is systematically the case :slightly_smiling_face:
**Additional context**
Mentioned to @lhoestq
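For reference, a sketch of the HEAD-request approach mentioned in the comments — the `Content-Length` header gives a file's size without downloading it (the URL below is illustrative, and this assumes the endpoint reports the header):

```python
import requests

# hypothetical file URL on the Hub; any hosted file works the same way
url = "https://huggingface.co/datasets/laion/laion2B-en/resolve/main/README.md"
resp = requests.head(url, allow_redirects=True)
size_bytes = int(resp.headers["Content-Length"])
print(f"{size_bytes / 1e6:.2f} MB")
```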
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4977/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4977/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4976/comments | https://api.github.com/repos/huggingface/datasets/issues/4976/events | https://github.com/huggingface/datasets/issues/4976 | 1,372,322,382 | I_kwDODunzps5Ry_pO | 4,976 | Hope to adapt Python3.9 as soon as possible | {
"avatar_url": "https://avatars.githubusercontent.com/u/74012141?v=4",
"events_url": "https://api.github.com/users/RedHeartSecretMan/events{/privacy}",
"followers_url": "https://api.github.com/users/RedHeartSecretMan/followers",
"following_url": "https://api.github.com/users/RedHeartSecretMan/following{/other_user}",
"gists_url": "https://api.github.com/users/RedHeartSecretMan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RedHeartSecretMan",
"id": 74012141,
"login": "RedHeartSecretMan",
"node_id": "MDQ6VXNlcjc0MDEyMTQx",
"organizations_url": "https://api.github.com/users/RedHeartSecretMan/orgs",
"received_events_url": "https://api.github.com/users/RedHeartSecretMan/received_events",
"repos_url": "https://api.github.com/users/RedHeartSecretMan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RedHeartSecretMan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RedHeartSecretMan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RedHeartSecretMan"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi! `datasets` should work in Python 3.9. What kind of issue have you encountered?",
"There is this related issue already: https://github.com/huggingface/datasets/issues/4113\r\nAnd I guess we need a CI job for 3.9 ^^",
"Perhaps we should report this issue in the `filelock` repo?"
] | 2022-09-14T04:42:22Z | 2022-09-26T16:32:35Z | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context about the feature request here.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4976/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4976/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4975/comments | https://api.github.com/repos/huggingface/datasets/issues/4975/events | https://github.com/huggingface/datasets/pull/4975 | 1,371,703,691 | PR_kwDODunzps4-4NXX | 4,975 | Add `fn_kwargs` param to `IterableDataset.map` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-13T16:19:05Z | 2022-09-13T16:47:47Z | 2022-09-13T16:45:34Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4975.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4975",
"merged_at": "2022-09-13T16:45:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4975.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4975"
} | Add the `fn_kwargs` parameter to `IterableDataset.map`.
("Resolves" https://discuss.huggingface.co/t/how-to-use-large-image-text-datasets-in-hugging-face-hub-without-downloading-for-free/22780/3) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4975/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4975/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4974/comments | https://api.github.com/repos/huggingface/datasets/issues/4974/events | https://github.com/huggingface/datasets/pull/4974 | 1,371,682,020 | PR_kwDODunzps4-4Iri | 4,974 | [GH->HF] Part 2: Remove all dataset scripts from github | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"So this means metrics will be deleted from this repo in favor of the \"evaluate\" library? Maybe you guys could just redirect metrics to that library.",
"We are deprecating the metrics in `datasets` indeed and suggest users to switch to `evaluate` (via a warning message)\r\n\r\nWe'll keep the current metrics as they are for now, but they'll be completely removed at one point",
"I guess this is ready to merge ?\r\n\r\nIt should break nothing except one rare case:\r\n\r\nIf someone is using an old version of `datasets` to try to load a recent dataset. Indeed in that case it fetches the `main` branch on github to see if it exists. But since we're removing all the datasets, forward fetching won't work anymore.\r\n\r\ne.g. if someone uses \"imagenet-1k\" with a version of `datasets` that didn't have it at that time. I checked on kibana and one single user would be affected with 4k downloads/months. It should still work for them though thanks to the `datasets` cache\r\n\r\nBut if they delete their cache, the workaround is... π₯ update `datasets` π
",
"Let's merge this on monday if we can, to make sure contributors who wanted to merge their dataset PRs here could do it",
"Alright, merging !"
] | 2022-09-13T16:01:12Z | 2022-10-03T17:09:39Z | 2022-10-03T17:07:32Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4974.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4974",
"merged_at": "2022-10-03T17:07:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4974.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4974"
Now that all the datasets live on the Hub, we can remove the /datasets directory that contains all the dataset scripts of this repository.
- [x] Needs https://github.com/huggingface/datasets/pull/4973 to be merged first
- [x] and PR to be enabled on the Hub for non-namespaced datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4974/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4974/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4973/comments | https://api.github.com/repos/huggingface/datasets/issues/4973/events | https://github.com/huggingface/datasets/pull/4973 | 1,371,600,074 | PR_kwDODunzps4-33JW | 4,973 | [GH->HF] Load datasets from the Hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Duplicate of:\r\n- #4059"
] | 2022-09-13T15:01:41Z | 2022-09-15T15:26:51Z | 2022-09-15T15:24:26Z | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/4973.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4973",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4973.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4973"
} | Currently datasets with no namespace (e.g. squad, glue) are loaded from github.
In this PR I changed this logic to use the Hugging Face Hub instead.
This is the first step in removing all the dataset scripts from this repository.
Related to the discussion in https://github.com/huggingface/datasets/pull/4059 (I should have continued from that PR, actually).
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4973/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4973/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4972/comments | https://api.github.com/repos/huggingface/datasets/issues/4972/events | https://github.com/huggingface/datasets/pull/4972 | 1,371,443,306 | PR_kwDODunzps4-3VVF | 4,972 | Fix map batched with torch output | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-13T13:16:34Z | 2022-09-20T09:42:02Z | 2022-09-20T09:39:33Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4972.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4972",
"merged_at": "2022-09-20T09:39:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4972.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4972"
} | Reported in https://discuss.huggingface.co/t/typeerror-when-applying-map-after-set-format-type-torch/23067/2
Currently it fails if one uses batched `map` and the map function returns a torch tensor.
I fixed it for torch, tf, jax and pandas series. | {
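A minimal sketch of the previously failing pattern (illustrative, not the exact repro from the forum thread):

```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"x": [0.0, 1.0, 2.0]})

# A batched map whose function returns a torch.Tensor used to fail during
# the conversion back to Arrow; with this fix the tensor output is handled.
ds = ds.map(lambda batch: {"x2": torch.tensor(batch["x"]) ** 2}, batched=True)
print(ds["x2"])
```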
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4972/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4972/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4971/comments | https://api.github.com/repos/huggingface/datasets/issues/4971/events | https://github.com/huggingface/datasets/pull/4971 | 1,370,319,516 | PR_kwDODunzps4-zk3g | 4,971 | Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-12T18:08:24Z | 2022-09-13T13:51:08Z | 2022-09-13T13:48:45Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4971.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4971",
"merged_at": "2022-09-13T13:48:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4971.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4971"
} | Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform.
This makes the behavior inconsistent with `IterableDataset.map`.
(It seems this issue was introduced by mistake in https://github.com/huggingface/datasets/pull/2246)
Fix https://github.com/huggingface/datasets/issues/4858 | {
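A sketch of the behavior after this fix (column and function names are illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": ["x", "y", "z"]})

# Only column "a" is passed to the function, but column "b" is preserved
# in the output instead of being dropped.
ds = ds.map(lambda a: {"a_plus_one": a + 1}, input_columns=["a"])
print(ds.column_names)  # ['a', 'b', 'a_plus_one']
```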
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4971/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4971/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4970/comments | https://api.github.com/repos/huggingface/datasets/issues/4970/events | https://github.com/huggingface/datasets/pull/4970 | 1,369,433,074 | PR_kwDODunzps4-wkY2 | 4,970 | Support streaming nli_tr dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-12T07:48:45Z | 2022-09-12T08:45:04Z | 2022-09-12T08:43:08Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4970.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4970",
"merged_at": "2022-09-12T08:43:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4970.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4970"
} | Support streaming nli_tr dataset.
This PR removes the legacy `codecs.open` call and replaces it with the built-in `open`, which supports passing an encoding.
Fix #3186. | {
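The change itself is roughly the following (illustrative, not the exact diff) — `datasets`' streaming machinery patches the built-in `open`, which `codecs.open` bypasses:

```python
import codecs

# before: codecs.open cannot be patched for streaming
with codecs.open(filepath, encoding="utf-8") as f:  # filepath is a placeholder
    data = f.read()

# after: the built-in open accepts an encoding and works in streaming mode
with open(filepath, encoding="utf-8") as f:
    data = f.read()
```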
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4970/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4970/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4969/comments | https://api.github.com/repos/huggingface/datasets/issues/4969/events | https://github.com/huggingface/datasets/pull/4969 | 1,369,334,740 | PR_kwDODunzps4-wPOk | 4,969 | Fix data URL and metadata of vivos dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-12T06:12:34Z | 2022-09-12T07:16:15Z | 2022-09-12T07:14:19Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4969.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4969",
"merged_at": "2022-09-12T07:14:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4969.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4969"
} | After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https://doi.org/10.5281/zenodo.7068130
This PR updates their data URL and some metadata (homepage, citation and license).
Fix #4936. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4969/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4969/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4968/comments | https://api.github.com/repos/huggingface/datasets/issues/4968/events | https://github.com/huggingface/datasets/pull/4968 | 1,369,312,877 | PR_kwDODunzps4-wKkw | 4,968 | Support streaming compguesswhat dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-12T05:42:24Z | 2022-09-12T08:00:06Z | 2022-09-12T07:58:06Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4968.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4968",
"merged_at": "2022-09-12T07:58:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4968.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4968"
} | Support streaming `compguesswhat` dataset.
Fix #3191. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4968/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4968/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4967/comments | https://api.github.com/repos/huggingface/datasets/issues/4967/events | https://github.com/huggingface/datasets/pull/4967 | 1,369,092,452 | PR_kwDODunzps4-vbS- | 4,967 | Strip "/" in local dataset path to avoid empty dataset name error | {
"avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4",
"events_url": "https://api.github.com/users/apohllo/events{/privacy}",
"followers_url": "https://api.github.com/users/apohllo/followers",
"following_url": "https://api.github.com/users/apohllo/following{/other_user}",
"gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/apohllo",
"id": 40543,
"login": "apohllo",
"node_id": "MDQ6VXNlcjQwNTQz",
"organizations_url": "https://api.github.com/users/apohllo/orgs",
"received_events_url": "https://api.github.com/users/apohllo/received_events",
"repos_url": "https://api.github.com/users/apohllo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apohllo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/apohllo"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool :-)"
] | 2022-09-11T23:09:16Z | 2022-09-29T10:46:21Z | 2022-09-12T15:30:38Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4967.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4967",
"merged_at": "2022-09-12T15:30:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4967.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4967"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4967/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4967/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4965/comments | https://api.github.com/repos/huggingface/datasets/issues/4965/events | https://github.com/huggingface/datasets/issues/4965 | 1,368,661,002 | I_kwDODunzps5RlBwK | 4,965 | [Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback() | {
"avatar_url": "https://avatars.githubusercontent.com/u/35718590?v=4",
"events_url": "https://api.github.com/users/hoangtnm/events{/privacy}",
"followers_url": "https://api.github.com/users/hoangtnm/followers",
"following_url": "https://api.github.com/users/hoangtnm/following{/other_user}",
"gists_url": "https://api.github.com/users/hoangtnm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hoangtnm",
"id": 35718590,
"login": "hoangtnm",
"node_id": "MDQ6VXNlcjM1NzE4NTkw",
"organizations_url": "https://api.github.com/users/hoangtnm/orgs",
"received_events_url": "https://api.github.com/users/hoangtnm/received_events",
"repos_url": "https://api.github.com/users/hoangtnm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hoangtnm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoangtnm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hoangtnm"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Hi! This seems like a bug in `soundfile`. Could you please open an issue in their repo? `soundfile` works without any issues on my M1, so I'm not sure we can help.",
"Hi @mariosasko, can you share how you installed `soundfile` on your mac M1?",
"Hi @hoangtnm - I upgraded to python 3.10 and it fixed the problem for me. I was also running 3.8 on an M1 mac."
] | 2022-09-10T15:55:49Z | 2022-11-18T23:45:02Z | null | NONE | null | null | null | ## Describe the bug
I'm trying to run `cast_column("audio", Audio())` on an Apple M1 Pro, but it doesn't seem to work.
## Steps to reproduce the bug
```python
from pathlib import Path

from datasets import Audio, load_dataset

DATA_DIR = Path(".")  # placeholder: the actual data directory was not shown
dataset = load_dataset("csv", data_files="./train.csv")["train"]
dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / x["audio"])})
dataset = dataset.cast_column("audio", Audio())
dataset[0]
```
## Expected results
```
{'audio': {'bytes': None,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'},
'english_transcription': 'I would like to set up a joint account with my partner',
'intent_class': 11,
'lang_id': 4,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav',
'transcription': 'I would like to set up a joint account with my partner'}
```
## Actual results
```
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 dataset[0]
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2165, in Dataset.__getitem__(self, key)
2163 def __getitem__(self, key): # noqa: F811
2164 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2165 return self._getitem(
2166 key,
2167 )
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs)
2148 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2150 formatted_output = format_table(
2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2152 )
2153 return formatted_output
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row)
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1647, in Features.decode_example(self, example, token_per_repo_id)
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1635 """Decode example with custom feature decoding.
1636
1637 Args:
(...)
1644 :obj:`dict[str, Any]`
1645 """
-> 1647 return {
1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1649 if self._column_requires_decoding[column_name]
1650 else value
1651 for column_name, (feature, value) in zip_dict(
1652 {key: value for key, value in self.items() if key in example}, example
1653 )
1654 }
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1648, in <dictcomp>(.0)
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
1635 """Decode example with custom feature decoding.
1636
1637 Args:
(...)
1644 :obj:`dict[str, Any]`
1645 """
1647 return {
-> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1649 if self._column_requires_decoding[column_name]
1650 else value
1651 for column_name, (feature, value) in zip_dict(
1652 {key: value for key, value in self.items() if key in example}, example
1653 )
1654 }
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id)
1257 # Object with special decoding:
1258 elif isinstance(schema, (Audio, Image)):
1259 # we pass the token to read and decode files from private repositories in streaming mode
-> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
1261 return obj
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id)
154 array, sampling_rate = self._decode_non_mp3_file_like(file)
155 else:
--> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)
157 return {"path": path, "array": array, "sampling_rate": sampling_rate}
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id)
254 use_auth_token = None
256 with xopen(path, "rb", use_auth_token=use_auth_token) as f:
--> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
258 return array, sampling_rate
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/util/decorators.py:88, in deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
86 extra_args = len(args) - len(all_args)
87 if extra_args <= 0:
---> 88 return f(*args, **kwargs)
90 # extra_args > 0
91 args_msg = [
92 "{}={}".format(name, arg)
93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:])
94 ]
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type)
161 else:
162 # Otherwise try soundfile first, and then fall back if necessary
163 try:
--> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype)
166 except RuntimeError as exc:
167 # If soundfile failed, try audioread instead
168 if isinstance(path, (str, pathlib.PurePath)):
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:195, in __soundfile_load(path, offset, duration, dtype)
192 context = path
193 else:
194 # Otherwise, create the soundfile object
--> 195 context = sf.SoundFile(path)
197 with context as sf_desc:
198 sr_native = sf_desc.samplerate
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
626 self._mode = mode
627 self._info = _create_info_struct(file, mode, samplerate, channels,
628 format, subtype, endian)
--> 629 self._file = self._open(file, mode_int, closefd)
630 if set(mode).issuperset('r+') and self.seekable():
631 # Move write position to 0 (like in Python file objects)
632 self.seek(0)
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd)
1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd)
1178 elif _has_virtual_io_attrs(file, mode_int):
-> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file),
1180 mode_int, self._info, _ffi.NULL)
1181 else:
1182 raise TypeError("Invalid file: {0!r}".format(self.name))
File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1197, in SoundFile._init_virtual_io(self, file)
1194 def _init_virtual_io(self, file):
1195 """Initialize callback functions for sf_open_virtual()."""
1196 @_ffi.callback("sf_vio_get_filelen")
-> 1197 def vio_get_filelen(user_data):
1198 curr = file.tell()
1199 file.seek(0, SEEK_END)
MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks
```
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4965/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4964/comments | https://api.github.com/repos/huggingface/datasets/issues/4964/events | https://github.com/huggingface/datasets/issues/4964 | 1,368,617,322 | I_kwDODunzps5Rk3Fq | 4,964 | Column of arrays (2D+) are using unreasonably high memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4",
"events_url": "https://api.github.com/users/vigsterkr/events{/privacy}",
"followers_url": "https://api.github.com/users/vigsterkr/followers",
"following_url": "https://api.github.com/users/vigsterkr/following{/other_user}",
"gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vigsterkr",
"id": 30353,
"login": "vigsterkr",
"node_id": "MDQ6VXNlcjMwMzUz",
"organizations_url": "https://api.github.com/users/vigsterkr/orgs",
"received_events_url": "https://api.github.com/users/vigsterkr/received_events",
"repos_url": "https://api.github.com/users/vigsterkr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vigsterkr"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"note i have tried the same code with `datasets` version 2.4.0, the outcome is the very same as described above.",
"Seems related to issues #4623 and #4802 so it would appear this issue has been around for a few months.",
"Hi ! `Dataset.from_dict` keeps the data in memory. You can write on disk and reload them with\r\n```python\r\ndataset.save_to_disk(\"path/to/local\")\r\ndataset = load_from_disk(\"path/to/local\")\r\n```\r\nthis way you'll end up with a dataset loaded from your disk using memory mapping, and it won't fill up your RAM :)\r\n\r\nrelated to https://github.com/huggingface/datasets/issues/4861",
"@lhoestq thnx for getting back to me! i've tested the suggested method, but unfortunately the memory consumption is the very same:\r\n\r\n```\r\nfrom datasets import Dataset, Features, Array2D, Array3D, load_from_disk\r\nimport numpy as np\r\n\r\ncolumn_name = \"a\"\r\narray_shape = (64, 64, 3)\r\n\r\ndata = np.random.random((10000,) + array_shape)\r\ndataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype=\"float64\")}))\r\ndataset.save_to_disk(\"foo\")\r\n\r\nfoo_db = load_from_disk(\"foo\")\r\ncolum_value = foo_db[column_name]\r\n```\r\n\r\nthe very same happens when you create the dataset, but dont specify the feature type.\r\n\r\ni've tried running this on different envs (macOS, linux) and it's behaving the very same way.",
"When you call `colum_value = foo_db[column_name]`, you load the full column in memory.\r\n\r\nIf you want to avoid filling up your memory, you can access chunks of data instead\r\n```python\r\nembeddings = dataset[i:i + chunk_size][\"embeddings\"]\r\n```",
"@lhoestq yeah that's intentional, i.e. i really want to load the whole column into the memory. but as said above there's an unreasonable amount of overhead for the memory. the np array itself is using about 1G of memory:\r\n```\r\n>>> getsizeof(data)/1024/1024\r\n937.5001525878906\r\n```\r\nthat accessing of column above is using 10x memory compared to the original numpy array.",
"The dataset must be twice as big because we use regular arrow ListArray under the hood and not FixedSizeListArray. Basically we store unnecessary offsets.\r\n\r\nAnd this should affect performance as well. When we developed this, FixedSizeListArray still had some issues but they should be resolved on the PyArrow side now",
"A doubling would be fine. My very basic understanding of PyArrow is that using ListArray is probably related to the issue though. Using a multi-dimensional array in datasets is storing everything as strange nested 1d object arrays, which I imagine is creating the massive overhead.\r\n\r\nI think it should be a PyArrow Tensor, no?",
"PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694",
"That's... unfortunate. I didn't realize that."
] | 2022-09-10T13:07:22Z | 2022-09-22T18:29:22Z | null | NONE | null | null | null | ## Describe the bug
When trying to store `Array2D`, `Array3D`, etc. as column values in a dataset, accessing that column (or creating the dataset, depending on how you construct it; see code below) will cause more than a 10-fold increase in memory usage.
## Steps to reproduce the bug
```python
from datasets import Dataset, Features, Array2D, Array3D
import numpy as np
column_name = "a"
array_shape = (64, 64, 3)
data = np.random.random((10000,) + array_shape)
dataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype="float64")}))
```
The code above uses about 10 GB of RAM while constructing the `dataset` object.
The code below uses roughly the same amount of memory (and time) when actually accessing the data of that column.
```python
from datasets import Dataset
import numpy as np
column_name = "a"
array_shape = (64, 64, 3)
data = np.random.random((10000,) + array_shape)
dataset = Dataset.from_dict({column_name: data})
dataset[column_name]
```
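One mitigation suggested in the comments above is to avoid materializing the whole column at once and instead read it in slices (a sketch; `chunk_size` is an illustrative value, not from the issue):
```python
import numpy as np
from datasets import Dataset

column_name = "a"
data = np.random.random((10000, 64, 64, 3))
dataset = Dataset.from_dict({column_name: data})

chunk_size = 256  # illustrative value
means = []
for i in range(0, len(dataset), chunk_size):
    chunk = dataset[i : i + chunk_size][column_name]  # only this slice is decoded into memory
    means.append(np.mean(chunk))
```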
## Expected results
Some memory overhead is expected, but not of the magnitude seen now, and certainly not the runtime overhead that currently occurs.
## Actual results
Enormous memory and runtime overhead.
## Environment info
- `datasets` version: 2.3.2
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4964/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4964/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4963/comments | https://api.github.com/repos/huggingface/datasets/issues/4963/events | https://github.com/huggingface/datasets/issues/4963 | 1,368,201,188 | I_kwDODunzps5RjRfk | 4,963 | Dataset without script does not support regular JSON data file | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c"
} | [] | closed | false | null | [] | null | [
"Hi @julien-c,\r\n\r\nOut of the box, we only support JSON lines (NDJSON) data files, but your data file is a regular JSON file. The reason is we use `pyarrow.json.read_json` and this only supports line-delimited JSON. "
] | 2022-09-09T18:45:33Z | 2022-09-20T15:40:07Z | 2022-09-20T15:40:07Z | MEMBER | null | null | null | ### Link
https://huggingface.co/datasets/julien-c/label-studio-my-dogs
### Description
<img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png">
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4963/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4963/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4962/comments | https://api.github.com/repos/huggingface/datasets/issues/4962/events | https://github.com/huggingface/datasets/pull/4962 | 1,368,155,365 | PR_kwDODunzps4-sh-o | 4,962 | Update setup.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4",
"events_url": "https://api.github.com/users/DCNemesis/events{/privacy}",
"followers_url": "https://api.github.com/users/DCNemesis/followers",
"following_url": "https://api.github.com/users/DCNemesis/following{/other_user}",
"gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DCNemesis",
"id": 3616964,
"login": "DCNemesis",
"node_id": "MDQ6VXNlcjM2MTY5NjQ=",
"organizations_url": "https://api.github.com/users/DCNemesis/orgs",
"received_events_url": "https://api.github.com/users/DCNemesis/received_events",
"repos_url": "https://api.github.com/users/DCNemesis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DCNemesis"
} | [] | closed | false | null | [] | null | [
"Before addressing this PR, we should be sure about the issue. See my comment in:\r\n- https://github.com/huggingface/datasets/issues/4961#issuecomment-1243376247",
"Once we know 2022.8.2 works, I'm closing this PR, as the corresponding issue."
] | 2022-09-09T17:57:56Z | 2022-09-12T14:33:04Z | 2022-09-12T14:33:04Z | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4962.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4962",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4962.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4962"
} | exclude broken version of fsspec. See the [related issue](https://github.com/huggingface/datasets/issues/4961) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4962/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4962/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4961/comments | https://api.github.com/repos/huggingface/datasets/issues/4961/events | https://github.com/huggingface/datasets/issues/4961 | 1,368,124,033 | I_kwDODunzps5Ri-qB | 4,961 | fsspec 2022.8.2 breaks xopen in streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4",
"events_url": "https://api.github.com/users/DCNemesis/events{/privacy}",
"followers_url": "https://api.github.com/users/DCNemesis/followers",
"following_url": "https://api.github.com/users/DCNemesis/following{/other_user}",
"gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DCNemesis",
"id": 3616964,
"login": "DCNemesis",
"node_id": "MDQ6VXNlcjM2MTY5NjQ=",
"organizations_url": "https://api.github.com/users/DCNemesis/orgs",
"received_events_url": "https://api.github.com/users/DCNemesis/received_events",
"repos_url": "https://api.github.com/users/DCNemesis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DCNemesis"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.",
"Opened [PR](https://github.com/huggingface/datasets/pull/4962) to address this.",
"Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` releases 2022.8.0 and 2022.8.1. But they fixed it in their patch release 2022.8.2 (and yanked both previous versions). See:\r\n- https://github.com/huggingface/transformers/pull/18846\r\n\r\nAre you sure you have version 2022.8.2 installed?\r\n```shell\r\npip install -U fsspec\r\n```\r\n",
"@albertvillanova I was using a temporary Google Colab instance, but checking it again today it seems it was loading 2022.8.1 rather than 2022.8.2. It's surprising that colab is using the version that was replaced the same day it was released. Testing with 2022.8.2 did work. It appears Colab [will be fixing it](https://github.com/googlecolab/colabtools/issues/3055) on their end too. ",
"Thanks for the additional information.\r\n\r\nOnce we know 2022.8.2 works, I'm closing this issue. Feel free to reopen it if necessary.",
"Colab just upgraded their default `fsspec` version to 2022.8.2:\r\n- https://github.com/googlecolab/colabtools/issues/3055#issuecomment-1244019010"
] | 2022-09-09T17:26:55Z | 2022-09-12T17:45:50Z | 2022-09-12T14:32:05Z | NONE | null | null | null | ## Describe the bug
When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable.
## Steps to reproduce the bug
```python
import datasets
data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True)
```
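Per the resolution in the comments above, the fix is to make sure the patched fsspec release is installed (2022.8.0 and 2022.8.1 were yanked), after which the same call works again. A sketch:
```python
# First: pip install -U "fsspec>=2022.8.2"
import datasets

data = datasets.load_dataset(
    "MLCommons/ml_spoken_words", "id_wav", split="train", streaming=True
)
first = next(iter(data))  # streaming iteration works with the patched release
```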
## Expected results
The dataset should load as an iterator.
## Actual results
```
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1737 # Return iterable dataset in case of streaming
1738 if streaming:
-> 1739 return builder_instance.as_streaming_dataset(split=split)
1740
1741 # Some datasets are already processed on the HF google storage
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
1023 )
1024 self._check_manual_download(dl_manager)
-> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
1026 # By default, return all splits
1027 if split is None:
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _split_generators(self, dl_manager)
182 name=datasets.Split.TRAIN,
183 gen_kwargs={
--> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages],
185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in
186 self.config.languages] if not dl_manager.is_streaming else None,
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in <listcomp>(.0)
182 name=datasets.Split.TRAIN,
183 gen_kwargs={
--> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages],
185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in
186 self.config.languages] if not dl_manager.is_streaming else None,
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives(dl_manager, lang, format, split)
267 # for streaming case
268 def _download_audio_archives(dl_manager, lang, format, split):
--> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split)
270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths]
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives_paths(dl_manager, lang, format, split)
251 n_files_path = dl_manager.download(n_files_url)
252
--> 253 with open(n_files_path, "r", encoding="utf-8") as file:
254 n_files = int(file.read().strip()) # the file contains a number of archives
255
ValueError: I/O operation on closed file.
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4961/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4961/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4960/comments | https://api.github.com/repos/huggingface/datasets/issues/4960/events | https://github.com/huggingface/datasets/issues/4960 | 1,368,035,159 | I_kwDODunzps5Rio9X | 4,960 | BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema' | {
"avatar_url": "https://avatars.githubusercontent.com/u/8426290?v=4",
"events_url": "https://api.github.com/users/DSLituiev/events{/privacy}",
"followers_url": "https://api.github.com/users/DSLituiev/followers",
"following_url": "https://api.github.com/users/DSLituiev/following{/other_user}",
"gists_url": "https://api.github.com/users/DSLituiev/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DSLituiev",
"id": 8426290,
"login": "DSLituiev",
"node_id": "MDQ6VXNlcjg0MjYyOTA=",
"organizations_url": "https://api.github.com/users/DSLituiev/orgs",
"received_events_url": "https://api.github.com/users/DSLituiev/received_events",
"repos_url": "https://api.github.com/users/DSLituiev/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DSLituiev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DSLituiev/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DSLituiev"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | [] | null | [
"Following worked:\r\n\r\n```\r\ndata_dir = \"/Users/dlituiev/repos/datasets/bioasq/\"\r\nbioasq_task_b = load_dataset(\"aps/bioasq_task_b\", data_dir=data_dir, name=\"bioasq_9b_source\")\r\n```\r\n\r\nWould maintainers be open to one of the following:\r\n- automating this with a latest default config (e.g. `bioasq_9b_source`); how can this be generalized to other datasets?\r\n- providing an actionable error message that lists available `name` values? I only got available `name` values once I've provided something there (`name=\"aps/bioasq_task_b\"`), before it would not even mention that it requires `name` argument",
"Hi ! In general the list of available configurations is prompted. I think this is an issue with this specific dataset.\r\n\r\nFeel free to open a new discussions at https://huggingface.co/datasets/aps/bioasq_task_b/discussions\r\n\r\ncc @apsdehal\r\n\r\nIn particular it sounds like the `BUILDER_CONFIG_CLASS= BigBioConfig ` class attribute is missing and the _info should account for schema being None and raise an error"
] | 2022-09-09T16:06:43Z | 2022-09-13T08:51:03Z | null | NONE | null | null | null | ## Describe the bug
I am trying to load a dataset from a local drive and am running into an error.
## Steps to reproduce the bug
```python
data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
```
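The workaround reported in the comments below is to select a configuration explicitly via `name` (e.g. `"bioasq_9b_source"`), so that the script's own config class with a `schema` attribute is used instead of the generic `BuilderConfig`:
```python
from datasets import load_dataset

data_dir = "/Users/dlituiev/repos/datasets/bioasq/"
bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir, name="bioasq_9b_source")
```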
## Actual results
`AttributeError: 'BuilderConfig' object has no attribute 'schema'`
<details>
```
Using custom data configuration default-a1ca3e05be5abf2f
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [8], in <cell line: 2>()
1 data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
----> 2 bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1720 ignore_verifications = ignore_verifications or save_infos
1722 # Create a dataset builder
-> 1723 builder_instance = load_dataset_builder(
1724 path=path,
1725 name=name,
1726 data_dir=data_dir,
1727 data_files=data_files,
1728 cache_dir=cache_dir,
1729 features=features,
1730 download_config=download_config,
1731 download_mode=download_mode,
1732 revision=revision,
1733 use_auth_token=use_auth_token,
1734 **config_kwargs,
1735 )
1737 # Return iterable dataset in case of streaming
1738 if streaming:
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1523 raise ValueError(error_msg)
1525 # Instantiate the dataset builder
-> 1526 builder_instance: DatasetBuilder = builder_cls(
1527 cache_dir=cache_dir,
1528 config_name=config_name,
1529 data_dir=data_dir,
1530 data_files=data_files,
1531 hash=hash,
1532 features=features,
1533 use_auth_token=use_auth_token,
1534 **builder_kwargs,
1535 **config_kwargs,
1536 )
1538 return builder_instance
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs)
1153 def __init__(self, *args, writer_batch_size=None, **kwargs):
-> 1154 super().__init__(*args, **kwargs)
1155 # Batch size used by the ArrowWriter
1156 # It defines the number of samples that are kept in memory before writing them
1157 # and also the length of the arrow chunks
1158 # None means that the ArrowWriter will use its default value
1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)
305 if info is None:
306 info = self.get_exported_dataset_info()
--> 307 info.update(self._info())
308 info.builder_name = self.name
309 info.config_name = self.config.name
File ~/.cache/huggingface/modules/datasets_modules/datasets/aps--bioasq_task_b/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self)
474 def _info(self):
475
476 # BioASQ Task B source schema
--> 477 if self.config.schema == "source":
478 features = datasets.Features(
479 {
480 "id": datasets.Value("string"),
(...)
504 }
505 )
506 # simplified schema for QA tasks
AttributeError: 'BuilderConfig' object has no attribute 'schema'
```
</details>
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4960/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4960/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4959/comments | https://api.github.com/repos/huggingface/datasets/issues/4959/events | https://github.com/huggingface/datasets/pull/4959 | 1,367,924,429 | PR_kwDODunzps4-rx6l | 4,959 | Fix data URLs of compguesswhat dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-09T14:36:10Z | 2022-09-09T16:01:34Z | 2022-09-09T15:59:04Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4959.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4959",
"merged_at": "2022-09-09T15:59:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4959.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4959"
} | After we informed the `compguesswhat` dataset authors about an error with their data URLs, they have updated them:
- https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1
This PR updates their data URLs in our loading script.
Related to:
- #3191 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4959/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4959/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4958/comments | https://api.github.com/repos/huggingface/datasets/issues/4958/events | https://github.com/huggingface/datasets/issues/4958 | 1,367,695,376 | I_kwDODunzps5RhWAQ | 4,958 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/66322047?v=4",
"events_url": "https://api.github.com/users/hasakikiki/events{/privacy}",
"followers_url": "https://api.github.com/users/hasakikiki/followers",
"following_url": "https://api.github.com/users/hasakikiki/following{/other_user}",
"gists_url": "https://api.github.com/users/hasakikiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hasakikiki",
"id": 66322047,
"login": "hasakikiki",
"node_id": "MDQ6VXNlcjY2MzIyMDQ3",
"organizations_url": "https://api.github.com/users/hasakikiki/orgs",
"received_events_url": "https://api.github.com/users/hasakikiki/received_events",
"repos_url": "https://api.github.com/users/hasakikiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hasakikiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasakikiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hasakikiki"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"I have solved this problem... The extension of the file should be `.json` not `.jsonl`"
] | 2022-09-09T11:29:55Z | 2022-09-09T11:38:44Z | 2022-09-09T11:38:44Z | NONE | null | null | null | Hi,
When I use `load_dataset` with local jsonl files, the error below occurs, and typing the link into the browser returns `404: Not Found`. I downloaded the other `.py` files using the same method and it worked. It seems that the server is missing the appropriate file, or it is a problem with the code version.
```
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2b08342004c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))")))
```
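The reporter's own fix (see the comment above) was to use the `.json` extension. Alternatively, local JSON Lines files can be loaded with the packaged `"json"` builder, which needs no remote script; `"my_data.jsonl"` below is a placeholder path:
```python
from datasets import load_dataset

data = load_dataset("json", data_files="my_data.jsonl")
```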
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4958/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4958/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4957/comments | https://api.github.com/repos/huggingface/datasets/issues/4957/events | https://github.com/huggingface/datasets/pull/4957 | 1,366,532,849 | PR_kwDODunzps4-nGIk | 4,957 | Add `Dataset.from_generator` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"I restarted the builder PR job just in case",
"_The documentation is not available anymore as the PR was closed or merged._",
"CI is now green. https://github.com/huggingface/doc-builder/pull/296 explains why it failed."
] | 2022-09-08T15:08:25Z | 2022-09-16T14:46:35Z | 2022-09-16T14:44:18Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4957.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4957",
"merged_at": "2022-09-16T14:44:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4957.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4957"
} | Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism.
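A minimal usage sketch of the method this PR adds (the generator and its fields are illustrative):
```python
from datasets import Dataset

def gen():
    for i in range(10):
        yield {"id": i, "text": f"example {i}"}

ds = Dataset.from_generator(gen)
print(ds[0])  # {'id': 0, 'text': 'example 0'}
```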
Closes https://github.com/huggingface/datasets/issues/4417 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4957/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4957/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4956/comments | https://api.github.com/repos/huggingface/datasets/issues/4956/events | https://github.com/huggingface/datasets/pull/4956 | 1,366,475,160 | PR_kwDODunzps4-m5NU | 4,956 | Fix TF tests for 2.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-08T14:39:10Z | 2022-09-08T15:16:51Z | 2022-09-08T15:14:44Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4956.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4956",
"merged_at": "2022-09-08T15:14:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4956.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4956"
} | Fixes #4953 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4956/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4956/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4955/comments | https://api.github.com/repos/huggingface/datasets/issues/4955/events | https://github.com/huggingface/datasets/issues/4955 | 1,366,382,314 | I_kwDODunzps5RcVbq | 4,955 | Raise a more precise error when the URL is unreachable in streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2022-09-08T13:52:37Z | 2022-09-08T13:53:36Z | null | CONTRIBUTOR | null | null | null | See for example:
- https://github.com/huggingface/datasets/issues/3191
- https://github.com/huggingface/datasets/issues/3186
It would help provide clearer information on the Hub and help dataset maintainers solve the issue on their own more quickly. Currently:
- https://huggingface.co/datasets/compguesswhat
<img width="1029" alt="Capture dβeΜcran 2022-09-08 aΜ 15 51 37" src="https://user-images.githubusercontent.com/1676121/189139946-6deffb91-f21b-4281-8825-a98026c69740.png">
- https://huggingface.co/datasets/nli_tr
<img width="1032" alt="Capture dβeΜcran 2022-09-08 aΜ 15 51 44" src="https://user-images.githubusercontent.com/1676121/189139963-d26490ed-ad23-48ea-9cfc-1ab9c4d08d0c.png">
cc @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4955/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4955/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4954/comments | https://api.github.com/repos/huggingface/datasets/issues/4954/events | https://github.com/huggingface/datasets/pull/4954 | 1,366,369,682 | PR_kwDODunzps4-mhl5 | 4,954 | Pin TensorFlow temporarily | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-08T13:46:15Z | 2022-09-08T14:12:33Z | 2022-09-08T14:10:03Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4954.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4954",
"merged_at": "2022-09-08T14:10:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4954.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4954"
} | Temporarily fix TensorFlow until a permanent solution is found.
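For illustration only, the kind of pin meant here; the exact version specifier is an assumption and is not copied from this PR's diff:
```python
# setup.py (sketch): cap TensorFlow below the release that broke the CI test
TESTS_REQUIRE = ["tensorflow>=2.3,<2.10"]
```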
Related to:
- #4953 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4954/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4954/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4953/comments | https://api.github.com/repos/huggingface/datasets/issues/4953/events | https://github.com/huggingface/datasets/issues/4953 | 1,366,356,514 | I_kwDODunzps5RcPIi | 4,953 | CI test of TensorFlow is failing | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-09-08T13:39:29Z | 2022-09-08T15:14:45Z | 2022-09-08T15:14:45Z | MEMBER | null | null | null | ## Describe the bug
The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true
```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError:
```
Details:
```
_________________________ TempSeedTest.test_tensorflow _________________________
[gw0] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python
self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow>
@require_tf
def test_tensorflow(self):
import tensorflow as tf
from tensorflow.keras import layers
def gen_random_output():
model = layers.Dense(2)
x = tf.random.uniform((1, 3))
return model(x).numpy()
with temp_seed(42, set_tensorflow=True):
out1 = gen_random_output()
with temp_seed(42, set_tensorflow=True):
out2 = gen_random_output()
out3 = gen_random_output()
> np.testing.assert_equal(out1, out2)
E AssertionError:
E Arrays are not equal
E
E Mismatched elements: 2 / 2 (100%)
E Max absolute difference: 0.84619296
E Max relative difference: 16.083529
E x: array([[-0.793581, 0.333286]], dtype=float32)
E y: array([[0.052612, 0.539708]], dtype=float32)
tests/test_py_utils.py:149: AssertionError
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4953/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4953/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4952/comments | https://api.github.com/repos/huggingface/datasets/issues/4952/events | https://github.com/huggingface/datasets/pull/4952 | 1,366,354,604 | PR_kwDODunzps4-meM0 | 4,952 | Add test-datasets CI job | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Closing this one since the dataset scripts will be removed in https://github.com/huggingface/datasets/pull/4974"
] | 2022-09-08T13:38:30Z | 2022-09-16T13:28:02Z | 2022-09-16T13:25:48Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4952.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4952",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4952.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4952"
} | To avoid having too many conflicts in the datasets and metrics dependencies, I split the CI into `test` and `test-catalog`.
`test` runs the tests for the core of the `datasets` lib, while `test-catalog` tests the dataset scripts and metric scripts.
This also makes `pip install -e .[dev]` much smaller for developers.
WDYT @albertvillanova ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4952/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4952/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4951/comments | https://api.github.com/repos/huggingface/datasets/issues/4951/events | https://github.com/huggingface/datasets/pull/4951 | 1,365,954,814 | PR_kwDODunzps4-lDqd | 4,951 | Fix license information in qasc dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-08T10:04:39Z | 2022-09-08T14:54:47Z | 2022-09-08T14:52:05Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4951.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4951",
"merged_at": "2022-09-08T14:52:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4951.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4951"
} | This PR adds the license information to `qasc` dataset, once reported via GitHub by Tushar Khot, the dataset is licensed under CC BY 4.0:
- https://github.com/allenai/qasc/issues/5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4951/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4951/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4950/comments | https://api.github.com/repos/huggingface/datasets/issues/4950/events | https://github.com/huggingface/datasets/pull/4950 | 1,365,458,633 | PR_kwDODunzps4-jWZ1 | 4,950 | Update Enwik8 broken link and information | {
"avatar_url": "https://avatars.githubusercontent.com/u/54819091?v=4",
"events_url": "https://api.github.com/users/mtanghu/events{/privacy}",
"followers_url": "https://api.github.com/users/mtanghu/followers",
"following_url": "https://api.github.com/users/mtanghu/following{/other_user}",
"gists_url": "https://api.github.com/users/mtanghu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mtanghu",
"id": 54819091,
"login": "mtanghu",
"node_id": "MDQ6VXNlcjU0ODE5MDkx",
"organizations_url": "https://api.github.com/users/mtanghu/orgs",
"received_events_url": "https://api.github.com/users/mtanghu/received_events",
"repos_url": "https://api.github.com/users/mtanghu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mtanghu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtanghu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mtanghu"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-08T03:15:00Z | 2022-09-24T22:14:35Z | 2022-09-08T14:51:00Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4950.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4950",
"merged_at": "2022-09-08T14:51:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4950.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4950"
} | The current enwik8 dataset link gives a 502 Bad Gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This PR corrects the links and JSON metadata, and adds a bit more information about enwik8. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4950/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4950/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4949/comments | https://api.github.com/repos/huggingface/datasets/issues/4949/events | https://github.com/huggingface/datasets/pull/4949 | 1,365,251,916 | PR_kwDODunzps4-iqzI | 4,949 | Update enwik8 fixing the broken link | {
"avatar_url": "https://avatars.githubusercontent.com/u/54819091?v=4",
"events_url": "https://api.github.com/users/mtanghu/events{/privacy}",
"followers_url": "https://api.github.com/users/mtanghu/followers",
"following_url": "https://api.github.com/users/mtanghu/following{/other_user}",
"gists_url": "https://api.github.com/users/mtanghu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mtanghu",
"id": 54819091,
"login": "mtanghu",
"node_id": "MDQ6VXNlcjU0ODE5MDkx",
"organizations_url": "https://api.github.com/users/mtanghu/orgs",
"received_events_url": "https://api.github.com/users/mtanghu/received_events",
"repos_url": "https://api.github.com/users/mtanghu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mtanghu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtanghu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mtanghu"
} | [] | closed | false | null | [] | null | [
"Closing pull request to following contributing guidelines of making a new branch and will make a new pull request"
] | 2022-09-07T22:17:14Z | 2022-09-08T03:14:04Z | 2022-09-08T03:14:04Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4949.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4949",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4949.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4949"
} | The current enwik8 dataset link gives a 502 Bad Gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This PR corrects the links and JSON metadata, and adds a bit more information about enwik8. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4949/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4949/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4948/comments | https://api.github.com/repos/huggingface/datasets/issues/4948/events | https://github.com/huggingface/datasets/pull/4948 | 1,364,973,778 | PR_kwDODunzps4-hwsl | 4,948 | Fix minor typo in error message for missing imports | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-07T17:20:51Z | 2022-09-08T14:59:31Z | 2022-09-08T14:57:15Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4948.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4948",
"merged_at": "2022-09-08T14:57:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4948.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4948"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4948/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4947/comments | https://api.github.com/repos/huggingface/datasets/issues/4947/events | https://github.com/huggingface/datasets/pull/4947 | 1,364,967,957 | PR_kwDODunzps4-hvbq | 4,947 | Try to fix the Windows CI after TF update 2.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4947). All of your documentation changes will be reflected on that endpoint."
] | 2022-09-07T17:14:49Z | 2022-09-08T09:13:10Z | 2022-09-08T09:13:10Z | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/4947.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4947",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4947.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4947"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4947/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4947/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4946/comments | https://api.github.com/repos/huggingface/datasets/issues/4946/events | https://github.com/huggingface/datasets/pull/4946 | 1,364,692,069 | PR_kwDODunzps4-g0Hz | 4,946 | Introduce regex check when pushing as well | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Let me take over this PR if you don't mind"
] | 2022-09-07T13:45:58Z | 2022-09-13T10:19:01Z | 2022-09-13T10:16:34Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4946.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4946",
"merged_at": "2022-09-13T10:16:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4946.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4946"
} | Closes https://github.com/huggingface/datasets/issues/4945 by adding a regex check when pushing to the Hub.
Let me know if this is helpful and if it's the fix you had in mind for the issue; I'm happy to contribute tests. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4946/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4946/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4945/comments | https://api.github.com/repos/huggingface/datasets/issues/4945/events | https://github.com/huggingface/datasets/issues/4945 | 1,364,691,096 | I_kwDODunzps5RV4iY | 4,945 | Push to hub can push splits that do not respect the regex | {
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LysandreJik",
"id": 30755778,
"login": "LysandreJik",
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LysandreJik"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-09-07T13:45:17Z | 2022-09-13T10:16:35Z | 2022-09-13T10:16:35Z | MEMBER | null | null | null | ## Describe the bug
The `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing.
## Steps to reproduce the bug
```python
>>> from datasets import Dataset, DatasetDict, load_dataset
>>> d = Dataset.from_dict({'x': [1,2,3], 'y': [1,2,3]})
>>> di = DatasetDict()
>>> di['identifier-with-column'] = d
>>> di.push_to_hub('open-source-metrics/test')
Pushing split identifier-with-column to the Hub.
Pushing dataset shards to the dataset hub: 100%|ββββββββββ| 1/1 [00:04<00:00, 4.40s/it]
```
Loading it afterwards:
```python
>>> load_dataset('open-source-metrics/test')
Downloading: 100%|ββββββββββ| 610/610 [00:00<00:00, 432kB/s]
Using custom data configuration open-source-metrics--test-28b63ec7cde80488
Downloading and preparing dataset None/None (download: 950 bytes, generated: 48 bytes, post-processed: Unknown size, total: 998 bytes) to /home/lysandre/.cache/huggingface/datasets/open-source-metrics___parquet/open-source-metrics--test-28b63ec7cde80488/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 100%|ββββββββββ| 950/950 [00:00<00:00, 1.01MB/s]
Downloading data files: 100%|ββββββββββ| 1/1 [00:01<00:00, 1.48s/it]
Extracting data files: 100%|ββββββββββ| 1/1 [00:00<00:00, 2291.97it/s]
Traceback (most recent call last):
File "/home/lysandre/.pyenv/versions/3.10.6/lib/python3.10/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/load.py", line 1746, in load_dataset
builder_instance.download_and_prepare(
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 771, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 48, in _split_generators
splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files}))
File "<string>", line 5, in __init__
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 599, in __post_init__
NamedSplit(self.name) # check that it's a valid split name
File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 346, in __init__
raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.")
ValueError: Split name should match '^\w+(\.\w+)*$' but got 'identifier-with-column'.
```
## Expected results
I would expect `push_to_hub` to stop me in my tracks when trying to upload a split that will not work afterwards.
## Actual results
See above
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.64-1-lts-x86_64-with-glibc2.36
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
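
For reference, a minimal sketch of the kind of pre-upload check that would catch this (the regex is copied from the traceback above; `check_split_name` is a hypothetical helper, not part of the `datasets` API):

```python
import re

_split_re = r"^\w+(\.\w+)*$"  # the pattern from the ValueError above

def check_split_name(name: str) -> None:
    # Reject split names that would later fail NamedSplit validation at load time.
    if re.match(_split_re, name) is None:
        raise ValueError(f"Split name should match '{_split_re}' but got '{name}'.")

check_split_name("identifier_with_column")  # passes
check_split_name("identifier-with-column")  # raises ValueError, mirroring the traceback
```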
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4945/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4945/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4944/comments | https://api.github.com/repos/huggingface/datasets/issues/4944/events | https://github.com/huggingface/datasets/issues/4944 | 1,364,313,569 | I_kwDODunzps5RUcXh | 4,944 | larger dataset, larger GPU memory in the training phase? Is that correct? | {
"avatar_url": "https://avatars.githubusercontent.com/u/38886373?v=4",
"events_url": "https://api.github.com/users/debby1103/events{/privacy}",
"followers_url": "https://api.github.com/users/debby1103/followers",
"following_url": "https://api.github.com/users/debby1103/following{/other_user}",
"gists_url": "https://api.github.com/users/debby1103/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/debby1103",
"id": 38886373,
"login": "debby1103",
"node_id": "MDQ6VXNlcjM4ODg2Mzcz",
"organizations_url": "https://api.github.com/users/debby1103/orgs",
"received_events_url": "https://api.github.com/users/debby1103/received_events",
"repos_url": "https://api.github.com/users/debby1103/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/debby1103/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/debby1103/subscriptions",
"type": "User",
"url": "https://api.github.com/users/debby1103"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"does the trainer save it in GPU? sooo curious... how to fix it",
"It's my bad. didn't limit the input length"
] | 2022-09-07T08:46:30Z | 2022-09-07T12:34:58Z | 2022-09-07T12:34:58Z | NONE | null | null | null | from datasets import set_caching_enabled, load_from_disk, concatenate_datasets

set_caching_enabled(False)

for ds_name in ["squad", "newsqa", "nqopen", "narrativeqa"]:
    train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name))
    break  # only the first dataset is loaded

train_ds = concatenate_datasets([train_ds, train_ds, train_ds, train_ds])  # operation 1

trainer = QuestionAnsweringTrainer(  # huggingface trainer
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=None,
    eval_examples=None,
    answer_column_name=answer_column,
    dataset_name="squad",
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics if training_args.predict_with_generate else None,
)

With operation 1, the GPU memory usage increases from 16 GB to 23 GB. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4944/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4944/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4943/comments | https://api.github.com/repos/huggingface/datasets/issues/4943/events | https://github.com/huggingface/datasets/pull/4943 | 1,363,967,650 | PR_kwDODunzps4-eZd_ | 4,943 | Add splits to MBPP dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/2788526?v=4",
"events_url": "https://api.github.com/users/cwarny/events{/privacy}",
"followers_url": "https://api.github.com/users/cwarny/followers",
"following_url": "https://api.github.com/users/cwarny/following{/other_user}",
"gists_url": "https://api.github.com/users/cwarny/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cwarny",
"id": 2788526,
"login": "cwarny",
"node_id": "MDQ6VXNlcjI3ODg1MjY=",
"organizations_url": "https://api.github.com/users/cwarny/orgs",
"received_events_url": "https://api.github.com/users/cwarny/received_events",
"repos_url": "https://api.github.com/users/cwarny/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cwarny/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cwarny/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cwarny"
} | [] | closed | false | null | [] | null | [
"```\r\n(env) cwarny@Cedrics-Air datasets % RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mbpp\r\n================================================================================================ test session starts =================================================================================================\r\nplatform darwin -- Python 3.8.13, pytest-7.1.3, pluggy-1.0.0\r\nrootdir: /Users/cwarny/datasets, configfile: setup.cfg\r\ncollected 1 item \r\n\r\ntests/test_dataset_common.py . [100%]\r\n\r\n================================================================================================= 1 passed in 1.12s ==================================================================================================\r\n(env) cwarny@Cedrics-Air datasets % RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_mbpp \r\n================================================================================================ test session starts =================================================================================================\r\nplatform darwin -- Python 3.8.13, pytest-7.1.3, pluggy-1.0.0\r\nrootdir: /Users/cwarny/datasets, configfile: setup.cfg\r\ncollected 1 item \r\n\r\ntests/test_dataset_common.py . [100%]\r\n\r\n================================================================================================= 1 passed in 0.35s ==================================================================================================\r\n\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @cwarny ! Thanks for adding the correct splits :)\r\n\r\nYou can fix the CI error by running `make style` - this should reformat the dataset script",
"done"
] | 2022-09-07T01:18:31Z | 2022-09-13T12:29:19Z | 2022-09-13T12:27:21Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4943.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4943",
"merged_at": "2022-09-13T12:27:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4943.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4943"
} | This PR addresses https://github.com/huggingface/datasets/issues/4795 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4943/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4943/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4942/comments | https://api.github.com/repos/huggingface/datasets/issues/4942/events | https://github.com/huggingface/datasets/issues/4942 | 1,363,869,421 | I_kwDODunzps5RSv7t | 4,942 | Trec Dataset has incorrect labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/6539145?v=4",
"events_url": "https://api.github.com/users/wmpauli/events{/privacy}",
"followers_url": "https://api.github.com/users/wmpauli/followers",
"following_url": "https://api.github.com/users/wmpauli/following{/other_user}",
"gists_url": "https://api.github.com/users/wmpauli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wmpauli",
"id": 6539145,
"login": "wmpauli",
"node_id": "MDQ6VXNlcjY1MzkxNDU=",
"organizations_url": "https://api.github.com/users/wmpauli/orgs",
"received_events_url": "https://api.github.com/users/wmpauli/received_events",
"repos_url": "https://api.github.com/users/wmpauli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wmpauli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wmpauli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wmpauli"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @wmpauli. \r\n\r\nIndeed we recently fixed this issue:\r\n- #4801 \r\n\r\nThe fix will be accessible after our next library release. In the meantime, you can have it by passing `revision=\"main\"` to `load_dataset`."
] | 2022-09-06T22:13:40Z | 2022-09-08T11:12:03Z | 2022-09-08T11:12:03Z | NONE | null | null | null | ## Describe the bug
Both the coarse and fine labels seem to be incorrect.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import pandas as pd

dataset = "trec"
raw_datasets = load_dataset(dataset)
df = pd.DataFrame(raw_datasets["test"])
df.head()
```
## Expected results
text (string) | coarse_label (class label) | fine_label (class label)
-- | -- | --
How far is it from Denver to Aspen ? | 5 (NUM) | 40 (NUM:dist)
What county is Modesto , California in ? | 4 (LOC) | 32 (LOC:city)
Who was Galileo ? | 3 (HUM) | 31 (HUM:desc)
What is an atom ? | 2 (DESC) | 24 (DESC:def)
When did Hawaii become a state ? | 5 (NUM) | 39 (NUM:date)
## Actual results
index | label-coarse | label-fine | text
-- | -- | -- | --
0 | 4 | 40 | How far is it from Denver to Aspen ?
1 | 5 | 21 | What county is Modesto , California in ?
2 | 3 | 12 | Who was Galileo ?
3 | 0 | 7 | What is an atom ?
4 | 4 | 8 | When did Hawaii become a state ?
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
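
As a sketch of the workaround mentioned in the maintainer comment above (until the next release, the fixed script lives on the repo's `main` revision):

```python
from datasets import load_dataset

# Load the dataset script from the main revision, where the label fix landed
raw_datasets = load_dataset("trec", revision="main")
```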
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4942/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4942/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4941/comments | https://api.github.com/repos/huggingface/datasets/issues/4941/events | https://github.com/huggingface/datasets/pull/4941 | 1,363,622,861 | PR_kwDODunzps4-dQ9F | 4,941 | Add Papers with Code ID to scifact dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-06T17:46:37Z | 2022-09-06T18:28:17Z | 2022-09-06T18:26:01Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4941.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4941",
"merged_at": "2022-09-06T18:26:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4941.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4941"
} | This PR:
- adds the Papers with Code ID
- forces a sync between GitHub and the Hub, which previously failed due to a Hub validation error on the license tag: https://github.com/huggingface/datasets/runs/8200223631?check_suite_focus=true | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4941/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4941/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4940/comments | https://api.github.com/repos/huggingface/datasets/issues/4940/events | https://github.com/huggingface/datasets/pull/4940 | 1,363,513,058 | PR_kwDODunzps4-c6WY | 4,940 | Fix multilinguality tag and missing sections in xquad_r dataset card | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-06T16:05:35Z | 2022-09-12T10:11:07Z | 2022-09-12T10:08:48Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4940.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4940",
"merged_at": "2022-09-12T10:08:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4940.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4940"
} | This PR fixes an issue reported on the Hub:
- Label as multilingual: https://huggingface.co/datasets/xquad_r/discussions/1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4940/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4940/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4939/comments | https://api.github.com/repos/huggingface/datasets/issues/4939/events | https://github.com/huggingface/datasets/pull/4939 | 1,363,468,679 | PR_kwDODunzps4-cw4A | 4,939 | Fix NonMatchingChecksumError in adv_glue dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-06T15:31:16Z | 2022-09-06T17:42:10Z | 2022-09-06T17:39:16Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4939.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4939",
"merged_at": "2022-09-06T17:39:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4939.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4939"
} | Fix an issue reported on the Hub: https://huggingface.co/datasets/adv_glue/discussions/1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4939/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4938/comments | https://api.github.com/repos/huggingface/datasets/issues/4938/events | https://github.com/huggingface/datasets/pull/4938 | 1,363,429,228 | PR_kwDODunzps4-coaB | 4,938 | Remove main branch rename notice | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-06T15:03:05Z | 2022-09-06T16:46:11Z | 2022-09-06T16:43:53Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4938",
"merged_at": "2022-09-06T16:43:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4938"
} | We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months).
I also unpinned the GitHub issue about the branch renaming. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4938/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4938/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4937/comments | https://api.github.com/repos/huggingface/datasets/issues/4937/events | https://github.com/huggingface/datasets/pull/4937 | 1,363,426,946 | PR_kwDODunzps4-cn6W | 4,937 | Remove deprecated identical_ok | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-06T15:01:24Z | 2022-09-06T22:24:09Z | 2022-09-06T22:21:57Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4937.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4937",
"merged_at": "2022-09-06T22:21:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4937.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4937"
} | `huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated and will be removed soon. It currently has no effect even when it's passed:
```python
Args:
...
identical_ok (`bool`, *optional*, defaults to `True`):
Deprecated: will be removed in 0.11.0.
Changing this value has no effect.
...
```
There was only one occurrence of `identical_ok=False`, but it's probably not worth adding a check to verify whether the files were the same.
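
A minimal before/after sketch of the removal (the call site below is illustrative, not the actual one in the codebase, and running it would require a valid token and an existing repo):

```python
from huggingface_hub import HfApi

api = HfApi()
# before: passed the deprecated, no-op argument
# api.upload_file(path_or_fileobj="data.txt", path_in_repo="data.txt", repo_id="user/repo", identical_ok=False)
# after: simply drop the argument
api.upload_file(path_or_fileobj="data.txt", path_in_repo="data.txt", repo_id="user/repo")
```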
cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4937/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4936/comments | https://api.github.com/repos/huggingface/datasets/issues/4936/events | https://github.com/huggingface/datasets/issues/4936 | 1,363,274,907 | I_kwDODunzps5RQeyb | 4,936 | vivos (Vietnamese speech corpus) dataset not accessible | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"If you need an example of a small audio datasets, I just created few hours ago a speech dataset with only 300MB of compressed audio files https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia. It works also with streaming (@albertvillanova helped me adding this functionality) :-)",
"@cahya-wirawan omg this is awesome!! thank you! ",
"We have contacted the authors to ask them."
] | 2022-09-06T13:17:55Z | 2022-09-21T06:06:02Z | 2022-09-12T07:14:20Z | CONTRIBUTOR | null | null | null | ## Describe the bug
VIVOS data is not accessible anymore; neither of these links works (at least from France):
* https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data)
* https://ailab.hcmus.edu.vn/vivos (dataset page)
Therefore `load_dataset` doesn't work.
## Steps to reproduce the bug
```python
ds = load_dataset("vivos")
```
## Expected results
dataset loaded
## Actual results
```
ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))")))
```
We will try to contact the authors, as we wanted to use Vivos as an example in the documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small and straightforward and uses tar archives. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4936/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4935/comments | https://api.github.com/repos/huggingface/datasets/issues/4935/events | https://github.com/huggingface/datasets/issues/4935 | 1,363,226,736 | I_kwDODunzps5RQTBw | 4,935 | Dataset Viewer issue for ubuntu_dialogs_corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/87330568?v=4",
"events_url": "https://api.github.com/users/CibinQuadance/events{/privacy}",
"followers_url": "https://api.github.com/users/CibinQuadance/followers",
"following_url": "https://api.github.com/users/CibinQuadance/following{/other_user}",
"gists_url": "https://api.github.com/users/CibinQuadance/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CibinQuadance",
"id": 87330568,
"login": "CibinQuadance",
"node_id": "MDQ6VXNlcjg3MzMwNTY4",
"organizations_url": "https://api.github.com/users/CibinQuadance/orgs",
"received_events_url": "https://api.github.com/users/CibinQuadance/received_events",
"repos_url": "https://api.github.com/users/CibinQuadance/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CibinQuadance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CibinQuadance/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CibinQuadance"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null | [
"The dataset maintainers (https://huggingface.co/datasets/ubuntu_dialogs_corpus) decided to forbid the dataset from being downloaded automatically (https://huggingface.co/docs/datasets/v2.4.0/en/loading#manual-download), and the dataset viewer respects this.\r\nWe will try to improve the error display though. Thanks for reporting."
] | 2022-09-06T12:41:50Z | 2022-09-06T12:51:25Z | 2022-09-06T12:51:25Z | NONE | null | null | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4935/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4934/comments | https://api.github.com/repos/huggingface/datasets/issues/4934/events | https://github.com/huggingface/datasets/issues/4934 | 1,363,034,253 | I_kwDODunzps5RPkCN | 4,934 | Dataset Viewer issue for indonesian-nlp/librivox-indonesia | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"The error is not related to the dataset viewer. I'm having a look...",
"Thanks @albertvillanova for checking the issue. Actually, I can use the dataset like following:\r\n```\r\n>>> from datasets import load_dataset\r\n>>> ds=load_dataset(\"indonesian-nlp/librivox-indonesia\")\r\nNo config specified, defaulting to: librivox-indonesia/all\r\nReusing dataset librivox-indonesia (/root/.cache/huggingface/datasets/indonesian-nlp___librivox-indonesia/all/1.0.0/9a934a42bfb53dc103003d191618443b8a786bea2bd7bb0bc2d9454b8494521e)\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 500.87it/s]\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['path', 'language', 'reader', 'sentence', 'audio'],\r\n num_rows: 7815\r\n })\r\n})\r\n>>> ds[\"train\"][0]\r\n{'path': '/root/.cache/huggingface/datasets/downloads/extracted/c8ead52370fa28feb64643ea9d05cd7d820192dc8a1700d665ec45ec7624f5a3/librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'language': 'sun', 'reader': '3174', 'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa', 'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/c8ead52370fa28feb64643ea9d05cd7d820192dc8a1700d665ec45ec7624f5a3/librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'array': array([ 0. , 0. , 0. , ..., -0.02419001,\r\n -0.01957154, -0.01502833], dtype=float32), 'sampling_rate': 44100}}\r\n\r\n```\r\nIt would be just nice if I also can see it using dataset viewer.",
"Yes, the issue arises when streaming (that is used by the viewer): your script does not support streaming and to support it in this case there are some subtleties that we are explaining better in our docs in a work-in progress pull request:\r\n- #4872\r\n\r\nJust note that when streaming, `local_extracted_archive` is None, and this code line generates the error:\r\n```python\r\nfilepath = local_extracted_archive + \"/librivox-indonesia/audio_transcription.csv\"\r\n```\r\n\r\nFor a proper implementation, you could have a look at: https://huggingface.co/datasets/common_voice/blob/main/common_voice.py\r\n\r\nYou can test your script locally by passing `streaming=True` to `load_dataset`:\r\n```python\r\nds = load_dataset(\"indonesian-nlp/librivox-indonesia\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\n```",
"Great, I will have a look and update the script. Thanks.",
"Hi @albertvillanova , I just add the streaming functionality and it works in the first try :-) Thanks a lot!",
"Awesome!!! :hugs: "
] | 2022-09-06T10:03:23Z | 2022-09-06T12:46:40Z | 2022-09-06T12:46:40Z | CONTRIBUTOR | null | null | null | ### Link
https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
### Description
I created a new speech dataset https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia, but the dataset preview doesn't work, with the following error message:
```
Server error
Status code: 400
Exception: TypeError
Message: unsupported operand type(s) for +: 'NoneType' and 'str'
```
Please help, I am not sure what the problem here is. Thanks a lot.
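For reference, the root cause identified in the comments above is that `local_extracted_archive` is `None` in streaming mode, so the string concatenation fails. A minimal sketch of a streaming-safe guard (the variable and path names are taken from the script as quoted in the comments; this is not the exact fix that was merged):
```python
def resolve_transcription_path(local_extracted_archive):
    # local_extracted_archive is None when streaming, so only prepend it if present
    csv_path = "librivox-indonesia/audio_transcription.csv"
    if local_extracted_archive:
        return local_extracted_archive + "/" + csv_path
    return csv_path
```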
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4934/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4934/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4933/comments | https://api.github.com/repos/huggingface/datasets/issues/4933/events | https://github.com/huggingface/datasets/issues/4933 | 1,363,013,023 | I_kwDODunzps5RPe2f | 4,933 | Dataset/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) being non-iterable. | {
"avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4",
"events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}",
"followers_url": "https://api.github.com/users/tianjianjiang/followers",
"following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}",
"gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tianjianjiang",
"id": 4812544,
"login": "tianjianjiang",
"node_id": "MDQ6VXNlcjQ4MTI1NDQ=",
"organizations_url": "https://api.github.com/users/tianjianjiang/orgs",
"received_events_url": "https://api.github.com/users/tianjianjiang/received_events",
"repos_url": "https://api.github.com/users/tianjianjiang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tianjianjiang"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n\r\nIn your case, something like\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n\r\nds_mc4_ja = load_dataset(\"mc4\", \"ja\") # This will take 6+ hours... perhaps test it with a toy dataset instead?\r\nds_mc4_ja_2020 = ds_mc4_ja.filter(\r\n lambda batch: [timestamp[:4] == \"2020\" for timestamp in batch[\"timestamp\"]],\r\n batched=True,\r\n)\r\n```\r\n\r\nLet me know if it helps !",
"> Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n> [...]\r\n> Let me know if it helps !\r\n\r\nHi @lhoestq,\r\n\r\nAh, my bad, I totally forgot that part...\r\nSorry for the trouble and thank you for the kind help!"
] | 2022-09-06T09:47:48Z | 2022-09-06T11:44:27Z | 2022-09-06T11:44:27Z | CONTRIBUTOR | null | null | null | ## Describe the bug
`Dataset/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
## Steps to reproduce the bug
(In a Python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.)
```python
from datasets import load_dataset
ds_mc4_ja = load_dataset("mc4", "ja") # This will take 6+ hours... perhaps test it with a toy dataset instead?
ds_mc4_ja_2020 = ds_mc4_ja.filter(
lambda example: example["timestamp"][:4] == "2020",
batched=True,
)
```
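For reference, the resolution from the comments above: with `batched=True`, the filter function receives a batch (a dict mapping column names to lists of values) and must return a list of booleans, e.g.:
```python
from datasets import load_dataset

ds_mc4_ja = load_dataset("mc4", "ja")
ds_mc4_ja_2020 = ds_mc4_ja.filter(
    lambda batch: [timestamp[:4] == "2020" for timestamp in batch["timestamp"]],
    batched=True,
)
```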
## Expected results
No error
## Actual results
```python
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2779, in _map_single
offset=offset,
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated
result = f(decorated_item, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 4946, in get_indices_from_mask_function
indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]
TypeError: zip argument #2 must support iteration
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
/tmp/ipykernel_51348/2345782281.py in <module>
7 batched=True,
8 # batch_size=10_000,
----> 9 num_proc=111,
10 )
11 # ds_mc4_ja_clean_2020 = ds_mc4_ja.filter(
/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)
878 desc=desc,
879 )
--> 880 for k, dataset in self.items()
881 }
882 )
/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
878 desc=desc,
879 )
--> 880 for k, dataset in self.items()
881 }
882 )
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
522 }
523 # apply actual function
--> 524 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
525 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
526 # re-apply format to the output
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
478 # Call actual function
479
--> 480 out = func(self, *args, **kwargs)
481
482 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2920 new_fingerprint=new_fingerprint,
2921 input_columns=input_columns,
-> 2922 desc=desc,
2923 )
2924 new_dataset = copy.deepcopy(self)
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2498
2499 for index, async_result in results.items():
-> 2500 transformed_shards[index] = async_result.get()
2501
2502 assert (
/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
TypeError: zip argument #2 must support iteration
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
(I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4933/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4933/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4932/comments | https://api.github.com/repos/huggingface/datasets/issues/4932/events | https://github.com/huggingface/datasets/issues/4932 | 1,362,522,423 | I_kwDODunzps5RNnE3 | 4,932 | Dataset Viewer issue for bigscience-biomedical/biosses | {
"avatar_url": "https://avatars.githubusercontent.com/u/663051?v=4",
"events_url": "https://api.github.com/users/galtay/events{/privacy}",
"followers_url": "https://api.github.com/users/galtay/followers",
"following_url": "https://api.github.com/users/galtay/following{/other_user}",
"gists_url": "https://api.github.com/users/galtay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/galtay",
"id": 663051,
"login": "galtay",
"node_id": "MDQ6VXNlcjY2MzA1MQ==",
"organizations_url": "https://api.github.com/users/galtay/orgs",
"received_events_url": "https://api.github.com/users/galtay/received_events",
"repos_url": "https://api.github.com/users/galtay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/galtay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galtay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/galtay"
} | [] | closed | false | null | [] | null | [
"Possibly not related to the dataset viewer in itself. cc @huggingface/datasets.\r\n\r\nIn particular, I think that the import of bigbiohub is not working here: https://huggingface.co/datasets/bigscience-biomedical/biosses/blob/main/biosses.py#L29 (requires a relative path?)\r\n\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n>>> get_dataset_config_names('bigscience-biomedical/biosses')\r\nDownloading builder script: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 8.00k/8.00k [00:00<00:00, 7.47MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 289, in get_dataset_config_names\r\n dataset_module = dataset_module_factory(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1247, in dataset_module_factory\r\n raise e1 from None\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1220, in dataset_module_factory\r\n return HubDatasetModuleFactoryWithScript(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 931, in get_module\r\n local_imports = _download_additional_modules(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 215, in _download_additional_modules\r\n raise ImportError(\r\nImportError: To be able to use bigscience-biomedical/biosses, you need to install the following dependency: bigbiohub.\r\nPlease install it using 'pip install bigbiohub' for instance'\r\n```",
"Opened a PR here to (hopefully) fix the dataset script: https://huggingface.co/datasets/bigscience-biomedical/biosses/discussions/1/files",
"thanks for taking a look @severo . agree this isn't related to dataset viewer (sorry just clicked on the auto issue creator). also thanks @lhoestq , I see the format to use for relative imports. was a bit confused b/c it seems to be working here \r\n\r\nhttps://huggingface.co/datasets/bigscience-biomedical/scitail/blob/main/scitail.py#L31\r\n\r\nI'll try this PR a see what happens. ",
"closing as I think the issue is relative imports and attempting to read json files directly in the repo (thanks again @lhoestq ) "
] | 2022-09-05T22:40:32Z | 2022-09-06T14:24:56Z | 2022-09-06T14:24:56Z | NONE | null | null | null | ### Link
https://huggingface.co/datasets/bigscience-biomedical/biosses
### Description
I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (shown where the dataset preview used to be).
```
Status code: 400
Exception: ModuleNotFoundError
Message: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub'
```
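As the comments above point out, the fix is to make the helper import relative, so that `datasets` resolves `bigbiohub.py` from the dataset repository alongside the script. A hedged sketch (the imported name is illustrative):
```python
# in biosses.py: use a relative import for the local helper module
# before: import bigbiohub
from .bigbiohub import Tasks  # the imported name is illustrative
```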
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4932/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4932/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4931/comments | https://api.github.com/repos/huggingface/datasets/issues/4931/events | https://github.com/huggingface/datasets/pull/4931 | 1,362,298,764 | PR_kwDODunzps4-Y3L6 | 4,931 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-05T17:03:04Z | 2022-09-22T12:40:15Z | 2022-09-06T05:39:29Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4931.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4931",
"merged_at": "2022-09-06T05:39:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4931.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4931"
} | Fix missing tags in dataset cards:
- coqa
- hyperpartisan_news_detection
- opinosis
- scientific_papers
- scifact
- search_qa
- wiki_qa
- wiki_split
- wikisql
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4931/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4930/comments | https://api.github.com/repos/huggingface/datasets/issues/4930/events | https://github.com/huggingface/datasets/pull/4930 | 1,362,193,587 | PR_kwDODunzps4-Yflc | 4,930 | Add cc-by-nc-2.0 to list of licenses | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"this list needs to be kept in sync with the ones in moon-landing and hub-docs :)",
"@julien-c don't you think it might be better to a have a single file (source of truth) in one of the repos and then use it in every other repo, instead of having 3 copies of the same file that must be kept in sync?\r\n\r\nAlso note that the licenses we are adding were all already present in our previous `licenses.json` file: are we regenerating it, step by step? Why don't we use a file with ALL the licenses we previously had in the list?\r\n\r\nLicenses added:\r\n- #4887\r\n- #4930 \r\n\r\nPrevious `licenses.json` file:\r\n- https://github.com/huggingface/datasets/blob/b7612754928e0fd43b9e3c3becb906ec280ff5d4/src/datasets/utils/resources/licenses.json\r\n- removed in this commit: https://github.com/huggingface/datasets/pull/4613/commits/9f7725412dac1089b3e057f9e3fcf39cc222bc26\r\n\r\nLet me know what you think and I can take care of this.",
"> Let me know what you think and I can take care of this.\r\n\r\nWhat I think is that we shouldn't add licenses that are just used in a couple of datasets, and just use `license_details` for this.\r\n\r\n> don't you think it might be better to a have a single file (source of truth) in one of the repos and then use it in every other repo, instead of having 3 copies of the same file that must be kept in sync?\r\n\r\nYes, in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now? \r\n",
"Feel free to delete the license list in `datasets` @albertvillanova ;)\r\n\r\nAlso FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.)"
] | 2022-09-05T15:37:32Z | 2022-09-06T16:43:32Z | 2022-09-05T17:01:04Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4930.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4930",
"merged_at": "2022-09-05T17:01:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4930.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4930"
} | This PR adds the `cc-by-nc-2.0` to the list of licenses because it is required by `scifact` dataset: https://github.com/allenai/scifact/blob/master/LICENSE.md | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4930/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4930/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4929/comments | https://api.github.com/repos/huggingface/datasets/issues/4929/events | https://github.com/huggingface/datasets/pull/4929 | 1,361,508,366 | PR_kwDODunzps4-WK2w | 4,929 | Fixes a typo in loading documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/7144772?v=4",
"events_url": "https://api.github.com/users/sighingnow/events{/privacy}",
"followers_url": "https://api.github.com/users/sighingnow/followers",
"following_url": "https://api.github.com/users/sighingnow/following{/other_user}",
"gists_url": "https://api.github.com/users/sighingnow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sighingnow",
"id": 7144772,
"login": "sighingnow",
"node_id": "MDQ6VXNlcjcxNDQ3NzI=",
"organizations_url": "https://api.github.com/users/sighingnow/orgs",
"received_events_url": "https://api.github.com/users/sighingnow/received_events",
"repos_url": "https://api.github.com/users/sighingnow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sighingnow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sighingnow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sighingnow"
} | [] | closed | false | null | [] | null | [] | 2022-09-05T07:18:54Z | 2022-09-06T02:11:03Z | 2022-09-05T13:06:38Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4929.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4929",
"merged_at": "2022-09-05T13:06:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4929.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4929"
} | As shown in the [documentation page](https://huggingface.co/docs/datasets/loading), the `"tr"in` here should be `"train"`.
![image](https://user-images.githubusercontent.com/7144772/188390445-e1f04d54-e3e3-4762-8686-63ecbe4087e5.png)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4929/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4929/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4928/comments | https://api.github.com/repos/huggingface/datasets/issues/4928/events | https://github.com/huggingface/datasets/pull/4928 | 1,360,941,172 | PR_kwDODunzps4-Ubi4 | 4,928 | Add ability to read-write to SQL databases. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Ah CI runs with `pandas=1.3.5` which doesn't return the number of row inserted.",
"wow this is super cool!",
"@lhoestq I'm getting error in integration tests, not sure if it's related to my PR. Any help would be appreciated :) \r\n\r\n```\r\nif not self._is_valid_token(token):\r\n> raise ValueError(\"Invalid token passed!\")\r\nE ValueError: Invalid token passed!\r\n```",
"I just relaunched the tests, it should be fixed now",
"Thanks a lot for working on this!\r\n\r\nI have some concerns with the current design:\r\n* Besides SQLite, the loader should also work with the other engines supported by SQLAlchemy. (A better name for it in the current state would be `sqlite` :))\r\n* It should support arbitrary queries/table names - only the latter currently works.\r\n* Exposing this loader as a packaged builder (`load_dataset(\"sql\", ...)`) is not a good idea for the following reasons:\r\n * Considering the scenario where a table with the same name is present in multiple files is very unlikely, the data files resolution is not needed here. And if we remove that, what the name of the default split should be? \"train\"?\r\n * `load_dataset(\"sql\", ...)` also implies that streaming should work, but that's not the case. And I don't think we can change that, considering how hard it is to make SQLite files streamable.\r\n\r\nAll this makes me think we shouldn't expose this builder as a packaged module and, instead, limit the API to `Dataset.from_sql`/`Dataset.to_sql` (with the signatures matching the ones in pandas as much as possible; regarding this, note that SQLAlchemy connections are not hashable/picklable, which is required for caching, but I think it's OK only to allow URI strings as connections to bypass that (Dask has the same limitation).\r\n\r\nWDYT?",
"Hi @mariosasko thank you for your review.\r\n\r\nI agree that `load_dataset('sql',...)` is a bit weird and I would be happy to remove it. To be honest, I only added it when I saw that it was the preferred way in `loading.mdx`. \r\n\r\nI agree that the `SELECT` should be a parameters as well. I'll add it.\r\n\r\nSo far, only `Dataset.to_sql` explicitly supports any SQLAlchemy Connexion, I'm pretty sure that `Dataset.from_sql` would work with a Connexion as well, but it would break the typing from the parent class which is `path_or_paths: NestedDataStructureLike[PathLike]`. I would prefer not to break this API Contract.\r\n\r\n\r\nI will have time to work on this over the weekend. Please let me know what you think if I do the following:\r\n* Remove `load_dataset('sql', ...)` and edit the documentation to use `to_sql, from_sql`.\r\n* Tentatively make `Dataset.from_sql` typing work with SQLAlchemy Connexion.\r\n* Add support for custom queries (Default would be `SELECT * FROM {table_name}`).\r\n\r\nCheers!",
"Perhaps after we merge https://github.com/huggingface/datasets/pull/4957 (**Done!**), you can subclass `AbstractDatasetInputStream` instead of `AbstractDatasetReader` to not break the contract with the connection object. Also, let's avoid having the default value for the query/table (you can set it to `None` in the builder and raise an error in the builder config's `__post_init__` if it's not provided). Other than that, sounds good!",
"@Dref360 I've made final changes/refinements to align the SQL API with Pandas/Dask. Let me know what you think.\r\n",
"Thank you so much! I was missing a lot of things sorry about that.\r\nLGTM",
"I think we can merge if the tests pass. \r\n\r\nOne last thing I would like to get your opinion on - currently, if SQLAlchemy is not installed, the missing dependency error will be thrown inside `pandas.read_sql`. Do you think we should be the ones throwing this error, e.g. after the imports in `packaged_modules/sql/sql.py` if `SQLALCHEMY_AVAILABLE` is `False` (note that this would mean making `sqlalchemy` a required dependency for the docs to be able to add `SqlConfig` to the package reference)?",
"> One last thing I would like to get your opinion on - currently, if SQLAlchemy is not installed, the missing dependency error will be thrown inside pandas.read_sql\r\n\r\nIs sqlalchemy always required for pd.read_sql ? If so, I think we can raise the error on our side.\r\nBut sqlalchemy should still be an optional dependency for `datasets` IMO",
"@lhoestq \r\n> Is sqlalchemy always required for pd.read_sql ? If so, I think we can raise the error on our side.\r\n\r\nIn our case, it's always required as we only support database URIs.\r\n\r\n> But sqlalchemy should still be an optional dependency for datasets IMO\r\n\r\nYes, it will remain optional for datasets but will be required for building the docs (as is`s3fs`, for instance). ",
"Ok I see ! Sounds good :)"
] | 2022-09-03T19:09:08Z | 2022-10-03T16:34:36Z | 2022-10-03T16:32:28Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4928",
"merged_at": "2022-10-03T16:32:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4928"
} | Fixes #3094
Add ability to read/write to SQLite files and also read from any SQL database supported by SQLAlchemy.
I didn't add SQLAlchemy as a dependency, as it is fairly big and it remains optional.
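A minimal usage sketch under the final API discussed in the review above (`Dataset.from_sql` / `Dataset.to_sql` with a database URI string; the database and table names are illustrative):
```python
from datasets import Dataset

# read: pass a table name or an SQL query plus a database URI
ds = Dataset.from_sql("SELECT * FROM states", "sqlite:///us_covid_data.db")

# write: store the dataset into a (possibly new) table
ds.to_sql("states_copy", "sqlite:///us_covid_data.db")
```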
I also recorded a Loom to showcase the feature.
https://www.loom.com/share/f0e602c2de8a46f58bca4b43333d541f | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 4,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4928/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4928/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4927/comments | https://api.github.com/repos/huggingface/datasets/issues/4927/events | https://github.com/huggingface/datasets/pull/4927 | 1,360,428,139 | PR_kwDODunzps4-S0we | 4,927 | fix BLEU metric card | {
"avatar_url": "https://avatars.githubusercontent.com/u/40452030?v=4",
"events_url": "https://api.github.com/users/antoniolanza1996/events{/privacy}",
"followers_url": "https://api.github.com/users/antoniolanza1996/followers",
"following_url": "https://api.github.com/users/antoniolanza1996/following{/other_user}",
"gists_url": "https://api.github.com/users/antoniolanza1996/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/antoniolanza1996",
"id": 40452030,
"login": "antoniolanza1996",
"node_id": "MDQ6VXNlcjQwNDUyMDMw",
"organizations_url": "https://api.github.com/users/antoniolanza1996/orgs",
"received_events_url": "https://api.github.com/users/antoniolanza1996/received_events",
"repos_url": "https://api.github.com/users/antoniolanza1996/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/antoniolanza1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antoniolanza1996/subscriptions",
"type": "User",
"url": "https://api.github.com/users/antoniolanza1996"
} | [] | closed | false | null | [] | null | [] | 2022-09-02T17:00:56Z | 2022-09-09T16:28:15Z | 2022-09-09T16:28:15Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4927.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4927",
"merged_at": "2022-09-09T16:28:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4927.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4927"
} | I've fixed some typos in BLEU metric card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4927/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4927/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4926/comments | https://api.github.com/repos/huggingface/datasets/issues/4926/events | https://github.com/huggingface/datasets/pull/4926 | 1,360,384,484 | PR_kwDODunzps4-Srm1 | 4,926 | Dataset infos in yaml | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright this is ready for review :)\r\nI mostly would like your opinion on the YAML structure and what we can do in the docs (IMO we can add the docs about those fields in the Hub docs). Other than that let me know if the changes in info.py and features.py look good to you",
"LGTM and looking forward to having this merged!! β€οΈ ",
"We plan to do a release today, we'll merge this after the release :)\r\n\r\nEDIT: actually tomorrow",
"Created https://github.com/huggingface/datasets/pull/5018 where I added the YAML `dataset_info` of every single dataset in this repo\r\n\r\nsee other dataset cards: [imagenet-1k](https://github.com/huggingface/datasets/blob/040102f100964a33fd334e2695f1c493fa6b92db/datasets/imagenet-1k/README.md), [glue](https://github.com/huggingface/datasets/blob/040102f100964a33fd334e2695f1c493fa6b92db/datasets/glue/README.md), [flores](https://github.com/huggingface/datasets/blob/040102f100964a33fd334e2695f1c493fa6b92db/datasets/flores/README.md), [gem](https://github.com/huggingface/datasets/blob/040102f100964a33fd334e2695f1c493fa6b92db/datasets/gem/README.md)",
"Took your comments into account and updated `push_to_hub` to push the dataset_info to the README.md instead of json :) Let me know if it sounds good to you now !"
] | 2022-09-02T16:10:05Z | 2022-10-03T09:13:07Z | 2022-10-03T09:11:12Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4926.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4926",
"merged_at": "2022-10-03T09:11:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4926.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4926"
} | To simplify the addition of new datasets, we'd like to have the dataset infos in the YAML and deprecate the dataset_infos.json file. YAML is readable and easy to edit, and the YAML metadata of the README already contains dataset metadata, so we would have everything in one place.
To be more specific, I moved these fields from DatasetInfo to the YAML:
- config_name (if there are several configs)
- download_size
- dataset_size
- features
- splits
Here is what I ended up with for `squad`:
```yaml
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 79346360
num_examples: 87599
- name: validation
num_bytes: 10473040
num_examples: 10570
config_name: plain_text
download_size: 35142551
dataset_size: 89819400
```
It can be a list if there are several configs.
I already did the change for `conll2000` and `crime_and_punish` as an example.
## Implementation details
### Load/Read
This is done via `DatasetInfosDict.write_to_directory`/`from_directory`.
I had to implement custom YAML export logic for `SplitDict`, `Version` and `Features`.
The first two are trivial, but the logic for `Features` is more complicated, because I added a simplification step (or the YAML would be too long and less readable): it's just a formatting step to remove unnecessary nesting of YAML data.
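A sketch of the resulting round-trip (method names as stated above; the path is illustrative, and legacy `dataset_infos.json` files are still read, per the backward-compatibility note below):
```python
from datasets.info import DatasetInfosDict

# read the dataset infos from the README.md YAML (or a legacy dataset_infos.json)
infos = DatasetInfosDict.from_directory("path/to/dataset_repo")

# write them back as a `dataset_info` YAML block in README.md
infos.write_to_directory("path/to/dataset_repo")
```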
### Other changes
I had to update the `DatasetModule` factories to also download the README.md alongside the dataset scripts/data files, and not just the dataset_infos.json.
## YAML validation
I removed the old validation code that was in `metadata.py`; now we can just use the Hub YAML validation.
## Datasets-cli
The `datasets-cli test --save_infos` command now creates a README.md file with the dataset infos in it, instead of a `dataset_infos.json` file.
## Backward compatibility
`dataset_infos.json` files are still supported and loaded if they exist to have full backward compatibility.
Though I removed the unnecessary keys when the value is the default (like all the `id: null` from the Value feature types) to make them easier to read.
## TODO
- [x] add comments
- [x] tests
- [x] document the new YAML fields
- [x] try to reload the new dataset_infos.json file content with an old version of `datasets`
## EDITS
- removed "config_name" when there's only one config
- removed "version" for now (?), because it's not useful in general
- renamed the yaml field "dataset_info" instead of "dataset_infos", since it only has one by default (and because "infos" is not english)
Fix https://github.com/huggingface/datasets/issues/4876 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4926/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4926/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4925/comments | https://api.github.com/repos/huggingface/datasets/issues/4925/events | https://github.com/huggingface/datasets/pull/4925 | 1,360,007,616 | PR_kwDODunzps4-RbP5 | 4,925 | Add note about loading image / audio files to docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4925). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the feedback @polinaeterna ! I've reworded the docs a bit to integrate your comments and this should be ready for another review :)",
"> I've just realized that there is another PR about audio documentation open: #4872\r\n> and there the more detailed description on how to use `audiofolder` is moved to another section (\"Create an audio dataset\")\r\n\r\nAh yes, let's add a comment to #4872 - that will be simpler than the alternatives :)",
"@polinaeterna @lhoestq What do you think about adding support for the metadata format from Kaggle (one metadata file for each split with the name equal to the split name) to ImageFolder/AudioFolder? I also think we can relax some requirements a bit by:\r\n* allowing `filename` as the name of the main metadata column (currently, only `file_path` is allowed)\r\n* not requiring that the features of all the given metadata files are equal. Instead, we can have a soft check by using `_check_if_features_can_be_aligned` + `_align_features`. The rationale is that train/val metadata often has extra columns compared to test metadata.\r\n\r\nThese changes would allow us to load the Kaggle dataset linked in the forum thread without any \"interventions\".\r\n\r\nPS: this metadata format for ImageFolder was also proposed by @abhishekkrthakur initially.\r\n",
"Can you give more details about the Kaggle format ? I'm down to discuss it in a separate issue if you don't mind.\r\n\r\n> allowing filename as the name of the main metadata column (currently, only file_path is allowed)\r\n\r\n`filename` refers to the name of the file, so there's no logic about relative path or directories. If I recall correctly this is what we're doing right now so why not\r\n\r\n> not requiring that the features of all the given metadata files are equal. Instead, we can have a soft check by using _check_if_features_can_be_aligned + _align_features. The rationale is that train/val metadata often has extra columns compared to test metadata.\r\n\r\n+1 and we can set to None the missing features",
"I'm not sure if this is worth opening a new issue :).\r\n\r\nWhat I mean by the Kaggle format is the structure like this one (the name of a metadata file is equal to the directory it \"references\"):\r\n```\r\n- train\r\n - img1.jpeg\r\n - img2.jpeg\r\n - ...\r\n- test\r\n - img1.jpeg\r\n - img2.jpeg\r\n - ... \r\n- train.csv\r\n- test.csv\r\n```\r\n\r\n\r\n",
"Sounds nice !",
"@mariosasko +1 to allowing different features set and metadata filenames corresponding to split names\r\n\r\nConsidering filename column - right now it's even called `file_name` now, which is not nice because in fact it's a relative file path indeed, so I think it should be `file_path` (and I don't know why I haven't thought about it before the release...)",
"@lewtun don't you mind if I close this pull request as I've integrated your changes in https://github.com/huggingface/datasets/pull/4872 ? (it doesn't have a link to a kaggle example though)"
] | 2022-09-02T10:31:58Z | 2022-09-26T12:21:30Z | 2022-09-23T13:59:07Z | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4925",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4925"
} | This PR adds a small note about how to load image / audio datasets that have multiple splits in their dataset structure.
Related forum thread: https://discuss.huggingface.co/t/loading-train-and-test-splits-with-audiofolder/22447
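A hedged sketch of the pattern the note documents, assuming the standard folder-based loader behavior (the directory name is illustrative; the splits are inferred from the top-level `train`/`test` folders):
```python
from datasets import load_dataset

# data_dir/
# ├── train/   <- audio files for the train split
# └── test/    <- audio files for the test split
ds = load_dataset("audiofolder", data_dir="data_dir")
# ds is a DatasetDict with "train" and "test" splits
```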
cc @NielsRogge | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4925/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4924/comments | https://api.github.com/repos/huggingface/datasets/issues/4924/events | https://github.com/huggingface/datasets/issues/4924 | 1,358,611,513 | I_kwDODunzps5Q-sQ5 | 4,924 | Concatenate_datasets loads everything into RAM | {
"avatar_url": "https://avatars.githubusercontent.com/u/39416047?v=4",
"events_url": "https://api.github.com/users/louisdeneve/events{/privacy}",
"followers_url": "https://api.github.com/users/louisdeneve/followers",
"following_url": "https://api.github.com/users/louisdeneve/following{/other_user}",
"gists_url": "https://api.github.com/users/louisdeneve/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/louisdeneve",
"id": 39416047,
"login": "louisdeneve",
"node_id": "MDQ6VXNlcjM5NDE2MDQ3",
"organizations_url": "https://api.github.com/users/louisdeneve/orgs",
"received_events_url": "https://api.github.com/users/louisdeneve/received_events",
"repos_url": "https://api.github.com/users/louisdeneve/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/louisdeneve/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louisdeneve/subscriptions",
"type": "User",
"url": "https://api.github.com/users/louisdeneve"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-09-01T10:25:17Z | 2022-09-01T11:50:54Z | 2022-09-01T11:50:54Z | NONE | null | null | null | ## Describe the bug
When loading the datasets separately and saving them on disk, I want to concatenate them, but `concatenate_datasets` fills up my RAM and the process gets killed. Is there a way to prevent this from happening, or is this intended behaviour? Thanks in advance.
## Steps to reproduce the bug
```python
gcs = gcsfs.GCSFileSystem(project='project')
datasets = [load_from_disk(f'path/to/slice/of/data/{i}', fs=gcs, keep_in_memory=False) for i in range(10)]
dataset = concatenate_datasets(datasets)
```
## Expected results
A concatenated dataset which is stored on my disk.
## Actual results
The concatenated dataset gets loaded into RAM and overflows it, which gets the process killed.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 8.0.1
- Pandas version: 1.4.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4924/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4924/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4923/comments | https://api.github.com/repos/huggingface/datasets/issues/4923/events | https://github.com/huggingface/datasets/pull/4923 | 1,357,735,287 | PR_kwDODunzps4-Jv7C | 4,923 | decode mp3 with librosa if torchaudio is > 0.12 as a temporary workaround | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! Should we still support torchaudio>0.12 if it works ? And if it doesn't we can explain that downgrading is the right solution, or alternatively use librosa",
"@lhoestq \r\n\r\n> Should we still support torchaudio>0.12 if it works ? And if it doesn't we can explain that downgrading is the right solution, or alternatively use librosa\r\n\r\nI'm not sure here, because from the one hand, if `torchaudio` works - it works 60 times faster then `librosa`.\r\nBut from the other hand, we will get inconsistent behavior (=different results of decoding) for users of `torchaudio>=0.12`. \r\nI'd better go for using `librosa` only to avoid inconsistency then. wdyt?",
"It seems a bit too constraining to not allow users who have a working torchaudio 0.12 setup to not use it. \r\n\r\nIf the issue is about avoiding silent errors if the decoding changes, maybe we can log which back-end is used ? It can even be a warning with performance suggestions (\"you're using librosa but torchaudio 0.xx is recommended\").\r\n\r\nNote that users can still have a requirements.txt or whatever in their projects if they really want full reproducibility (and it's the bare minimum imo)\r\n\r\nThere are multiple possible back-ends so it's maybe not reasonable to only allow one back-end, especially since each back-end has installation constrains and there's no \"best\" back-end.",
"Woohoo all green ! Feel free to merge if it's all good for you :)"
] | 2022-08-31T18:57:59Z | 2022-11-02T11:54:33Z | 2022-09-20T13:12:52Z | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4923.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4923",
"merged_at": "2022-09-20T13:12:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4923.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4923"
`torchaudio>0.12` fails to decode mp3 files if `ffmpeg<4`. Currently we ask users to downgrade `torchaudio`, but sometimes that's not possible, as the `torchaudio` version is bound to the `torch` version. As a temporary workaround we can decode mp3 with `librosa` (though it is about 60 times slower, at least it works).
Another option would be to ask users to install the required version of `ffmpeg`, but that is non-trivial on Colab: it's not among the apt packages of Ubuntu 18, and `conda` is not preinstalled (with `conda` it would be easily installable).
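A minimal sketch of the `librosa` fallback described above (not the PR's exact code; the function name and return format are illustrative):
```python
import librosa

def decode_mp3_with_librosa(path, sampling_rate=None):
    # ~60x slower than torchaudio, but doesn't require ffmpeg>=4
    array, sr = librosa.load(path, sr=sampling_rate)  # sr=None keeps the native sampling rate
    return {"path": path, "array": array, "sampling_rate": sr}
```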
- [x] decode with torchaudio anyway if the version of ffmpeg is correct? it's 60 times faster
- [x] tests
- [x] DO NOT FORGET to get back all the tests
see https://github.com/huggingface/datasets/issues/4776 and https://github.com/huggingface/datasets/issues/3663#issuecomment-1225797165 (there is a Colab notebook to reproduce the error) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4923/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4923/timeline | null | null | true |