url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 46
51
| id
int64 599M
2.14B
| node_id
stringlengths 18
32
| number
int64 1
6.68k
| title
stringlengths 1
290
| user
dict | labels
listlengths 0
4
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
4
| milestone
dict | num_comments
int64 0
70
| created_at
unknown | updated_at
unknown | closed_at
unknown | author_association
stringclasses 3
values | active_lock_reason
float64 | draft
float64 0
1
⌀ | pull_request
dict | body
stringlengths 0
228k
⌀ | reactions
dict | timeline_url
stringlengths 67
70
| performed_via_github_app
float64 | state_reason
stringclasses 3
values | __index_level_0__
int64 0
6.65k
| is_pr
bool 2
classes | comments
sequencelengths 0
30
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6683/comments | https://api.github.com/repos/huggingface/datasets/issues/6683/events | https://github.com/huggingface/datasets/pull/6683 | 2,142,751,955 | PR_kwDODunzps5nTxGu | 6,683 | Fix imagefolder dataset url | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | 2 | "2024-02-19T16:26:51" | "2024-02-19T17:24:25" | "2024-02-19T17:18:10" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6683.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6683",
"merged_at": "2024-02-19T17:18:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6683.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6683"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6683/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6683/timeline | null | null | 0 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6683). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005501 / 0.011353 (-0.005851) | 0.003907 / 0.011008 (-0.007101) | 0.063524 / 0.038508 (0.025016) | 0.031773 / 0.023109 (0.008664) | 0.244672 / 0.275898 (-0.031226) | 0.293342 / 0.323480 (-0.030138) | 0.004091 / 0.007986 (-0.003895) | 0.002837 / 0.004328 (-0.001491) | 0.049181 / 0.004250 (0.044930) | 0.044515 / 0.037052 (0.007462) | 0.263932 / 0.258489 (0.005443) | 0.288412 / 0.293841 (-0.005429) | 0.028338 / 0.128546 (-0.100208) | 0.010865 / 0.075646 (-0.064781) | 0.207979 / 0.419271 (-0.211293) | 0.036149 / 0.043533 (-0.007384) | 0.250674 / 0.255139 (-0.004465) | 0.263232 / 0.283200 (-0.019968) | 0.017919 / 0.141683 (-0.123763) | 1.127794 / 1.452155 (-0.324360) | 1.172071 / 1.492716 (-0.320645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090435 / 0.018006 (0.072429) | 0.300041 / 0.000490 (0.299552) | 0.000217 / 0.000200 (0.000018) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018986 / 0.037411 (-0.018426) | 0.064872 / 0.014526 (0.050346) | 0.074738 / 0.176557 (-0.101818) | 0.121577 / 0.737135 (-0.615558) | 0.076416 / 0.296338 (-0.219923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279471 / 0.215209 (0.064262) | 2.743066 / 2.077655 (0.665411) | 1.429511 / 1.504120 (-0.074609) | 1.315391 / 1.541195 (-0.225804) | 1.371255 / 1.468490 (-0.097235) | 0.570708 / 4.584777 (-4.014069) | 2.373047 / 3.745712 (-1.372666) | 2.813198 / 5.269862 (-2.456663) | 1.768928 / 4.565676 (-2.796749) | 0.066031 / 0.424275 (-0.358244) | 0.005074 / 0.007607 (-0.002533) | 0.333484 / 0.226044 (0.107440) | 3.295002 / 2.268929 (1.026074) | 1.796089 / 55.444624 (-53.648535) | 1.521849 / 6.876477 (-5.354627) | 1.604417 / 2.142072 (-0.537655) | 0.645235 / 4.805227 (-4.159992) | 0.119226 / 6.500664 (-6.381439) | 0.043275 / 0.075469 (-0.032194) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986350 / 1.841788 (-0.855438) | 11.921886 / 8.074308 (3.847578) | 9.878841 / 10.191392 (-0.312551) | 0.141072 / 0.680424 (-0.539352) | 0.014514 / 0.534201 (-0.519687) | 0.304060 / 0.579283 (-0.275223) | 0.267844 / 0.434364 (-0.166520) | 0.324881 / 0.540337 (-0.215457) | 0.421426 / 1.386936 (-0.965510) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005322 / 0.011353 (-0.006030) | 0.003942 / 0.011008 (-0.007066) | 0.050629 / 0.038508 (0.012121) | 0.031176 / 0.023109 (0.008066) | 0.279627 / 0.275898 (0.003729) | 0.302667 / 0.323480 (-0.020813) | 0.004281 / 0.007986 (-0.003705) | 0.002900 / 0.004328 (-0.001428) | 0.048168 / 0.004250 (0.043918) | 0.046094 / 0.037052 (0.009042) | 0.290714 / 0.258489 (0.032224) | 0.321336 / 0.293841 (0.027496) | 0.047934 / 0.128546 (-0.080612) | 0.010773 / 0.075646 (-0.064873) | 0.059439 / 0.419271 (-0.359832) | 0.033644 / 0.043533 (-0.009889) | 0.273710 / 0.255139 (0.018571) | 0.295144 / 0.283200 (0.011944) | 0.018115 / 0.141683 (-0.123568) | 1.150302 / 1.452155 (-0.301853) | 1.197304 / 1.492716 (-0.295412) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090262 / 0.018006 (0.072255) | 0.300727 / 0.000490 (0.300238) | 0.000228 / 0.000200 (0.000028) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022706 / 0.037411 (-0.014706) | 0.077420 / 0.014526 (0.062894) | 0.089119 / 0.176557 (-0.087437) | 0.126760 / 0.737135 (-0.610375) | 0.090702 / 0.296338 (-0.205637) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296558 / 0.215209 (0.081349) | 2.865311 / 2.077655 (0.787656) | 1.587355 / 1.504120 (0.083235) | 1.491660 / 1.541195 (-0.049534) | 1.513604 / 1.468490 (0.045114) | 0.565209 / 4.584777 (-4.019568) | 2.450648 / 3.745712 (-1.295064) | 2.709941 / 5.269862 (-2.559921) | 1.775032 / 4.565676 (-2.790645) | 0.063767 / 0.424275 (-0.360508) | 0.005047 / 0.007607 (-0.002560) | 0.347406 / 0.226044 (0.121361) | 3.416671 / 2.268929 (1.147743) | 1.949653 / 55.444624 (-53.494971) | 1.669885 / 6.876477 (-5.206592) | 1.848125 / 2.142072 (-0.293947) | 0.648179 / 4.805227 (-4.157048) | 0.116374 / 6.500664 (-6.384290) | 0.041816 / 0.075469 (-0.033653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007009 / 1.841788 (-0.834779) | 12.749964 / 8.074308 (4.675656) | 10.765890 / 10.191392 (0.574498) | 0.141743 / 0.680424 (-0.538681) | 0.016077 / 0.534201 (-0.518124) | 0.293275 / 0.579283 (-0.286008) | 0.277064 / 0.434364 (-0.157300) | 0.327039 / 0.540337 (-0.213299) | 0.421784 / 1.386936 (-0.965152) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f807cd4c733a3616011a3f7f53a9fa56f7d5f685 \"CML watermark\")\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/6682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6682/comments | https://api.github.com/repos/huggingface/datasets/issues/6682/events | https://github.com/huggingface/datasets/pull/6682 | 2,142,000,800 | PR_kwDODunzps5nRME6 | 6,682 | Update GitHub Actions to Node 20 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | 1 | "2024-02-19T10:10:50" | "2024-02-19T10:15:06" | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6682.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6682",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6682.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6682"
} | Update GitHub Actions to Node 20.
Fix #6679. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6682/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6682/timeline | null | null | 1 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6682). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/6681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6681/comments | https://api.github.com/repos/huggingface/datasets/issues/6681/events | https://github.com/huggingface/datasets/pull/6681 | 2,141,985,239 | PR_kwDODunzps5nRItQ | 6,681 | Update release instructions | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | open | false | null | [] | null | 1 | "2024-02-19T10:03:08" | "2024-02-19T10:07:19" | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6681.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6681",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6681.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6681"
} | Update release instructions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6681/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6681/timeline | null | null | 2 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6681). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/6680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6680/comments | https://api.github.com/repos/huggingface/datasets/issues/6680/events | https://github.com/huggingface/datasets/pull/6680 | 2,141,979,527 | PR_kwDODunzps5nRHcz | 6,680 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 2 | "2024-02-19T10:00:31" | "2024-02-19T10:06:43" | "2024-02-19T10:00:40" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6680.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6680",
"merged_at": "2024-02-19T10:00:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6680.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6680"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6680/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6680/timeline | null | null | 3 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6680). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004981 / 0.011353 (-0.006372) | 0.003030 / 0.011008 (-0.007978) | 0.059862 / 0.038508 (0.021354) | 0.030595 / 0.023109 (0.007486) | 0.262638 / 0.275898 (-0.013260) | 0.276287 / 0.323480 (-0.047193) | 0.003955 / 0.007986 (-0.004030) | 0.002667 / 0.004328 (-0.001661) | 0.047827 / 0.004250 (0.043576) | 0.041170 / 0.037052 (0.004118) | 0.252494 / 0.258489 (-0.005995) | 0.277493 / 0.293841 (-0.016348) | 0.027269 / 0.128546 (-0.101277) | 0.010380 / 0.075646 (-0.065266) | 0.204404 / 0.419271 (-0.214867) | 0.035251 / 0.043533 (-0.008282) | 0.244368 / 0.255139 (-0.010771) | 0.258003 / 0.283200 (-0.025197) | 0.016751 / 0.141683 (-0.124932) | 1.134108 / 1.452155 (-0.318047) | 1.159969 / 1.492716 (-0.332748) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.087011 / 0.018006 (0.069004) | 0.295577 / 0.000490 (0.295087) | 0.000213 / 0.000200 (0.000013) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017993 / 0.037411 (-0.019419) | 0.061690 / 0.014526 (0.047164) | 0.071791 / 0.176557 (-0.104765) | 0.118282 / 0.737135 (-0.618853) | 0.073453 / 0.296338 (-0.222885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284764 / 0.215209 (0.069555) | 2.771791 / 2.077655 (0.694136) | 1.469614 / 1.504120 (-0.034506) | 1.334096 / 1.541195 (-0.207099) | 1.339995 / 1.468490 (-0.128495) | 0.562740 / 4.584777 (-4.022037) | 2.390219 / 3.745712 (-1.355493) | 2.679776 / 5.269862 (-2.590086) | 1.684397 / 4.565676 (-2.881279) | 0.062137 / 0.424275 (-0.362138) | 0.004934 / 0.007607 (-0.002673) | 0.336257 / 0.226044 (0.110212) | 3.256330 / 2.268929 (0.987401) | 1.801520 / 55.444624 (-53.643105) | 1.520662 / 6.876477 (-5.355815) | 1.537023 / 2.142072 (-0.605049) | 0.644360 / 4.805227 (-4.160867) | 0.115603 / 6.500664 (-6.385061) | 0.040601 / 0.075469 (-0.034868) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982992 / 1.841788 (-0.858796) | 11.002182 / 8.074308 (2.927873) | 9.564671 / 10.191392 (-0.626721) | 0.137682 / 0.680424 (-0.542742) | 0.013936 / 0.534201 (-0.520265) | 0.285898 / 0.579283 (-0.293385) | 0.264426 / 0.434364 (-0.169938) | 0.321615 / 0.540337 (-0.218723) | 0.420216 / 1.386936 (-0.966720) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005239 / 0.011353 (-0.006114) | 0.003165 / 0.011008 (-0.007844) | 0.048176 / 0.038508 (0.009668) | 0.030680 / 0.023109 (0.007571) | 0.258176 / 0.275898 (-0.017722) | 0.282342 / 0.323480 (-0.041138) | 0.004218 / 0.007986 (-0.003767) | 0.002616 / 0.004328 (-0.001713) | 0.047253 / 0.004250 (0.043003) | 0.044178 / 0.037052 (0.007126) | 0.276942 / 0.258489 (0.018453) | 0.312353 / 0.293841 (0.018512) | 0.046714 / 0.128546 (-0.081832) | 0.009892 / 0.075646 (-0.065755) | 0.056123 / 0.419271 (-0.363149) | 0.032691 / 0.043533 (-0.010842) | 0.268781 / 0.255139 (0.013642) | 0.285921 / 0.283200 (0.002722) | 0.016050 / 0.141683 (-0.125633) | 1.138058 / 1.452155 (-0.314096) | 1.193405 / 1.492716 (-0.299311) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089280 / 0.018006 (0.071273) | 0.288425 / 0.000490 (0.287935) | 0.000201 / 0.000200 (0.000001) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021536 / 0.037411 (-0.015875) | 0.075157 / 0.014526 (0.060631) | 0.088943 / 0.176557 (-0.087613) | 0.125191 / 0.737135 (-0.611945) | 0.087991 / 0.296338 (-0.208348) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285103 / 0.215209 (0.069894) | 2.791798 / 2.077655 (0.714144) | 1.518104 / 1.504120 (0.013984) | 1.388690 / 1.541195 (-0.152505) | 1.409896 / 1.468490 (-0.058594) | 0.554077 / 4.584777 (-4.030700) | 2.396994 / 3.745712 (-1.348718) | 2.596801 / 5.269862 (-2.673060) | 1.683761 / 4.565676 (-2.881915) | 0.061209 / 0.424275 (-0.363066) | 0.004735 / 0.007607 (-0.002873) | 0.337566 / 0.226044 (0.111522) | 3.258183 / 2.268929 (0.989254) | 1.886185 / 55.444624 (-53.558439) | 1.599148 / 6.876477 (-5.277329) | 1.726867 / 2.142072 (-0.415206) | 0.642784 / 4.805227 (-4.162444) | 0.114947 / 6.500664 (-6.385717) | 0.040450 / 0.075469 (-0.035019) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001316 / 1.841788 (-0.840472) | 11.695367 / 8.074308 (3.621058) | 9.854870 / 10.191392 (-0.336522) | 0.136462 / 0.680424 (-0.543961) | 0.016708 / 0.534201 (-0.517493) | 0.286421 / 0.579283 (-0.292862) | 0.270773 / 0.434364 (-0.163591) | 0.322947 / 0.540337 (-0.217390) | 0.416772 / 1.386936 (-0.970164) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6ba542847314bd349301937e59c3de04ce13aa5e \"CML watermark\")\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/6679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6679/comments | https://api.github.com/repos/huggingface/datasets/issues/6679/events | https://github.com/huggingface/datasets/issues/6679 | 2,141,953,981 | I_kwDODunzps5_q5-9 | 6,679 | Node.js 16 GitHub Actions are deprecated | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 0 | "2024-02-19T09:47:37" | "2024-02-19T11:34:11" | null | MEMBER | null | null | null | `Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
We should update them to Node 20.
See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678
```
Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: actions/checkout@v3, actions/setup-python@v4. For more information see: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6679/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6679/timeline | null | null | 4 | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6678/comments | https://api.github.com/repos/huggingface/datasets/issues/6678/events | https://github.com/huggingface/datasets/pull/6678 | 2,141,902,154 | PR_kwDODunzps5nQ2ZO | 6,678 | Release: 2.17.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 2 | "2024-02-19T09:24:29" | "2024-02-19T10:03:00" | "2024-02-19T09:56:52" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6678.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6678",
"merged_at": "2024-02-19T09:56:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6678.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6678"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6678/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6678/timeline | null | null | 5 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6678). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005070 / 0.011353 (-0.006283) | 0.003685 / 0.011008 (-0.007323) | 0.063191 / 0.038508 (0.024683) | 0.030506 / 0.023109 (0.007397) | 0.258033 / 0.275898 (-0.017865) | 0.269790 / 0.323480 (-0.053690) | 0.004180 / 0.007986 (-0.003805) | 0.002811 / 0.004328 (-0.001517) | 0.048718 / 0.004250 (0.044467) | 0.043473 / 0.037052 (0.006421) | 0.267306 / 0.258489 (0.008817) | 0.290315 / 0.293841 (-0.003526) | 0.027402 / 0.128546 (-0.101144) | 0.010782 / 0.075646 (-0.064864) | 0.207243 / 0.419271 (-0.212029) | 0.035637 / 0.043533 (-0.007896) | 0.264032 / 0.255139 (0.008893) | 0.270450 / 0.283200 (-0.012749) | 0.017407 / 0.141683 (-0.124276) | 1.107481 / 1.452155 (-0.344674) | 1.163187 / 1.492716 (-0.329529) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095065 / 0.018006 (0.077059) | 0.305169 / 0.000490 (0.304680) | 0.000221 / 0.000200 (0.000021) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017706 / 0.037411 (-0.019706) | 0.061431 / 0.014526 (0.046905) | 0.073541 / 0.176557 (-0.103016) | 0.117326 / 0.737135 (-0.619809) | 0.074368 / 0.296338 (-0.221971) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284533 / 0.215209 (0.069324) | 2.775230 / 2.077655 (0.697575) | 1.455196 / 1.504120 (-0.048924) | 1.357651 / 1.541195 (-0.183544) | 1.337477 / 1.468490 (-0.131013) | 0.567439 / 4.584777 (-4.017338) | 2.380612 / 3.745712 (-1.365100) | 2.792305 / 5.269862 (-2.477556) | 1.726501 / 4.565676 (-2.839176) | 0.061729 / 0.424275 (-0.362546) | 0.004928 / 0.007607 (-0.002679) | 0.331989 / 0.226044 (0.105944) | 3.301704 / 2.268929 (1.032776) | 1.805107 / 55.444624 (-53.639518) | 1.500434 / 6.876477 (-5.376043) | 1.535548 / 2.142072 (-0.606524) | 0.639490 / 4.805227 (-4.165737) | 0.115876 / 6.500664 (-6.384788) | 0.041895 / 0.075469 (-0.033574) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993584 / 1.841788 (-0.848203) | 11.596680 / 8.074308 (3.522371) | 9.631726 / 10.191392 (-0.559666) | 0.141153 / 0.680424 (-0.539271) | 0.014077 / 0.534201 (-0.520124) | 0.288237 / 0.579283 (-0.291046) | 0.261213 / 0.434364 (-0.173151) | 0.323897 / 0.540337 (-0.216441) | 0.420350 / 1.386936 (-0.966586) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005275 / 0.011353 (-0.006078) | 0.003739 / 0.011008 (-0.007269) | 0.049801 / 0.038508 (0.011293) | 0.030544 / 0.023109 (0.007435) | 0.264835 / 0.275898 (-0.011063) | 0.297738 / 0.323480 (-0.025742) | 0.004487 / 0.007986 (-0.003499) | 0.002835 / 0.004328 (-0.001493) | 0.048091 / 0.004250 (0.043841) | 0.044375 / 0.037052 (0.007322) | 0.286538 / 0.258489 (0.028049) | 0.319561 / 0.293841 (0.025720) | 0.047925 / 0.128546 (-0.080621) | 0.010816 / 0.075646 (-0.064831) | 0.057940 / 0.419271 (-0.361331) | 0.033588 / 0.043533 (-0.009945) | 0.270075 / 0.255139 (0.014936) | 0.290441 / 0.283200 (0.007242) | 0.017173 / 0.141683 (-0.124509) | 1.164686 / 1.452155 (-0.287469) | 1.213205 / 1.492716 (-0.279511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093408 / 0.018006 (0.075402) | 0.305525 / 0.000490 (0.305036) | 0.000235 / 0.000200 (0.000035) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021605 / 0.037411 (-0.015806) | 0.075479 / 0.014526 (0.060953) | 0.085990 / 0.176557 (-0.090567) | 0.124783 / 0.737135 (-0.612352) | 0.089108 / 0.296338 (-0.207230) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.306222 / 0.215209 (0.091013) | 2.987282 / 2.077655 (0.909627) | 1.664714 / 1.504120 (0.160594) | 1.523136 / 1.541195 (-0.018059) | 1.534112 / 1.468490 (0.065622) | 0.566347 / 4.584777 (-4.018430) | 2.438641 / 3.745712 (-1.307071) | 2.669048 / 5.269862 (-2.600814) | 1.732935 / 4.565676 (-2.832741) | 0.063460 / 0.424275 (-0.360815) | 0.004973 / 0.007607 (-0.002634) | 0.366233 / 0.226044 (0.140189) | 3.553578 / 2.268929 (1.284649) | 1.984343 / 55.444624 (-53.460281) | 1.711038 / 6.876477 (-5.165439) | 1.857346 / 2.142072 (-0.284726) | 0.651077 / 4.805227 (-4.154150) | 0.118670 / 6.500664 (-6.381994) | 0.041839 / 0.075469 (-0.033631) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008230 / 1.841788 (-0.833558) | 12.047403 / 8.074308 (3.973095) | 10.039053 / 10.191392 (-0.152339) | 0.141640 / 0.680424 (-0.538784) | 0.014758 / 0.534201 (-0.519443) | 0.285016 / 0.579283 (-0.294267) | 0.275461 / 0.434364 (-0.158903) | 0.325535 / 0.540337 (-0.214803) | 0.415871 / 1.386936 (-0.971065) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5d2268261bf0fb3eed8faae6bc1fa20a25b4382c \"CML watermark\")\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/6677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6677/comments | https://api.github.com/repos/huggingface/datasets/issues/6677/events | https://github.com/huggingface/datasets/pull/6677 | 2,141,244,167 | PR_kwDODunzps5nOmo_ | 6,677 | Pass through information about location of cache directory. | {
"avatar_url": "https://avatars.githubusercontent.com/u/94808782?v=4",
"events_url": "https://api.github.com/users/stridge-cruxml/events{/privacy}",
"followers_url": "https://api.github.com/users/stridge-cruxml/followers",
"following_url": "https://api.github.com/users/stridge-cruxml/following{/other_user}",
"gists_url": "https://api.github.com/users/stridge-cruxml/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stridge-cruxml",
"id": 94808782,
"login": "stridge-cruxml",
"node_id": "U_kgDOBaaqzg",
"organizations_url": "https://api.github.com/users/stridge-cruxml/orgs",
"received_events_url": "https://api.github.com/users/stridge-cruxml/received_events",
"repos_url": "https://api.github.com/users/stridge-cruxml/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stridge-cruxml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stridge-cruxml/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stridge-cruxml"
} | [] | open | false | null | [] | null | 0 | "2024-02-18T23:48:57" | "2024-02-18T23:48:57" | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6677.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6677",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6677.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6677"
} | If cache directory is set, information is not passed through.
Pass download config in as an arg too. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6677/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6677/timeline | null | null | 6 | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6676/comments | https://api.github.com/repos/huggingface/datasets/issues/6676/events | https://github.com/huggingface/datasets/issues/6676 | 2,140,648,619 | I_kwDODunzps5_l7Sr | 6,676 | Can't Read List of JSON Files Properly | {
"avatar_url": "https://avatars.githubusercontent.com/u/20232088?v=4",
"events_url": "https://api.github.com/users/lordsoffallen/events{/privacy}",
"followers_url": "https://api.github.com/users/lordsoffallen/followers",
"following_url": "https://api.github.com/users/lordsoffallen/following{/other_user}",
"gists_url": "https://api.github.com/users/lordsoffallen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordsoffallen",
"id": 20232088,
"login": "lordsoffallen",
"node_id": "MDQ6VXNlcjIwMjMyMDg4",
"organizations_url": "https://api.github.com/users/lordsoffallen/orgs",
"received_events_url": "https://api.github.com/users/lordsoffallen/received_events",
"repos_url": "https://api.github.com/users/lordsoffallen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordsoffallen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordsoffallen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordsoffallen"
} | [] | open | false | null | [] | null | 1 | "2024-02-17T22:58:15" | "2024-02-17T23:11:12" | null | NONE | null | null | null | ### Describe the bug
Trying to read a bunch of JSON files into Dataset class but default approach doesn't work. I don't get why it works when I read it one by one but not when I pass as a list :man_shrugging:
The code fails with
```
ArrowInvalid: JSON parse error: Invalid value. in row 0
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
This doesn't work
```
from datasets import Dataset
# dir contains 100 json files.
Dataset.from_json("/PUT SOME PATH HERE/*")
```
This works:
```
from datasets import concatenate_datasets
ls_ds = []
for file in list_of_json_files:
ls_ds.append(Dataset.from_json(file))
ds = concatenate_datasets(ls_ds)
```
### Expected behavior
I expect this to read json files properly as error is not clear
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6676/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6676/timeline | null | null | 7 | false | [
"Found the issue, if there are other files in the directory, it gets caught into this `*` so essentially it should be `*.json`. Could we possibly to check for list of files to make sure the pattern matches json files and raise error if not?"
] |
https://api.github.com/repos/huggingface/datasets/issues/6675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6675/comments | https://api.github.com/repos/huggingface/datasets/issues/6675/events | https://github.com/huggingface/datasets/issues/6675 | 2,139,640,381 | I_kwDODunzps5_iFI9 | 6,675 | Allow image model (color conversion) to be specified as part of datasets Image() decode | {
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"events_url": "https://api.github.com/users/rwightman/events{/privacy}",
"followers_url": "https://api.github.com/users/rwightman/followers",
"following_url": "https://api.github.com/users/rwightman/following{/other_user}",
"gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwightman",
"id": 5702664,
"login": "rwightman",
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"organizations_url": "https://api.github.com/users/rwightman/orgs",
"received_events_url": "https://api.github.com/users/rwightman/received_events",
"repos_url": "https://api.github.com/users/rwightman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwightman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwightman"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | "2024-02-16T23:43:20" | "2024-02-16T23:47:03" | null | NONE | null | null | null | ### Feature request
Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image where convert is usually called in dataset, for native torchvision https://pytorch.org/vision/main/generated/torchvision.io.decode_jpeg.html, and similarly in tensorflow.data pipelines decode_jpeg or https://www.tensorflow.org/api_docs/python/tf/io/decode_and_crop_jpeg have a channels arg that allows controlling the image mode in the decode step.
datasets currently requires this pattern (from [examples](https://huggingface.co/docs/datasets/main/en/image_process)):
```
from torchvision.transforms import Compose, ColorJitter, ToTensor
jitter = Compose(
[
ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.7),
ToTensor(),
]
)
def transforms(examples):
examples["pixel_values"] = [jitter(image.convert("RGB")) for image in examples["image"]]
return examples
```
### Motivation
It would be nice to be able to handle `image.convert("RGB")` (or other modes) in the decode step, before applying torchvision transforms, this would reduce differences in code when handling pipelines that can handle torchvision, webdatset, or hf datasets with fewer code differences and without needing to handle image mode argument passing in two different stages of the pipelines...
### Your contribution
Can do a PR with guidance on how mode should be passed / set on the dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6675/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6675/timeline | null | null | 8 | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6674/comments | https://api.github.com/repos/huggingface/datasets/issues/6674/events | https://github.com/huggingface/datasets/issues/6674 | 2,139,595,576 | I_kwDODunzps5_h6M4 | 6,674 | Depprcated Overview.ipynb Link to new Quickstart Notebook invalid | {
"avatar_url": "https://avatars.githubusercontent.com/u/55932554?v=4",
"events_url": "https://api.github.com/users/Codeblockz/events{/privacy}",
"followers_url": "https://api.github.com/users/Codeblockz/followers",
"following_url": "https://api.github.com/users/Codeblockz/following{/other_user}",
"gists_url": "https://api.github.com/users/Codeblockz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Codeblockz",
"id": 55932554,
"login": "Codeblockz",
"node_id": "MDQ6VXNlcjU1OTMyNTU0",
"organizations_url": "https://api.github.com/users/Codeblockz/orgs",
"received_events_url": "https://api.github.com/users/Codeblockz/received_events",
"repos_url": "https://api.github.com/users/Codeblockz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Codeblockz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Codeblockz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Codeblockz"
} | [] | open | false | null | [] | null | 0 | "2024-02-16T22:51:35" | "2024-02-16T22:51:35" | null | NONE | null | null | null | ### Describe the bug
For the dreprecated notebook found [here](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb). The link to the new notebook is broken.
### Steps to reproduce the bug
Click the [Quickstart notebook](https://github.com/huggingface/notebooks/blob/main/datasets_doc/quickstart.ipynb) link in the notebook.
### Expected behavior
I believe is it suposed to link [here](https://github.com/huggingface/notebooks/blob/main/datasets_doc/en/quickstart.ipynb). That is mentioned in the readme.
### Environment info
Colab | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6674/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6674/timeline | null | null | 9 | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6673/comments | https://api.github.com/repos/huggingface/datasets/issues/6673/events | https://github.com/huggingface/datasets/issues/6673 | 2,139,522,827 | I_kwDODunzps5_hocL | 6,673 | IterableDataset `set_epoch` is ignored when DataLoader `persistent_workers=True` | {
"avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4",
"events_url": "https://api.github.com/users/rwightman/events{/privacy}",
"followers_url": "https://api.github.com/users/rwightman/followers",
"following_url": "https://api.github.com/users/rwightman/following{/other_user}",
"gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwightman",
"id": 5702664,
"login": "rwightman",
"node_id": "MDQ6VXNlcjU3MDI2NjQ=",
"organizations_url": "https://api.github.com/users/rwightman/orgs",
"received_events_url": "https://api.github.com/users/rwightman/received_events",
"repos_url": "https://api.github.com/users/rwightman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwightman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwightman"
} | [] | open | false | null | [] | null | 0 | "2024-02-16T21:38:12" | "2024-02-16T21:39:48" | null | NONE | null | null | null | ### Describe the bug
When persistent workers are enabled, the epoch that's set via the IterableDataset instance held by the training process is ignored by the workers as they are disconnected across processes.
PyTorch samplers for non-iterable datasets have a mechanism to sync this, datasets.IterableDataset does not.
In my own use of IterableDatasets I usually track the epoch count which crosses process boundaries in a multiprocessing.Value
### Steps to reproduce the bug
Use a streaming dataset (Iterable) w/ the recommended pattern below and `persistent_workers=True` in the torch DataLoader.
```
for epoch in range(epochs):
shuffled_dataset.set_epoch(epoch)
for example in shuffled_dataset:
...
```
### Expected behavior
When the canonical bit of code above is used with `num_workers > 0` and `persistent_workers=True`, the epoch set via `set_epoch()` is propagated to the IterableDataset instances in the worker processes
### Environment info
N/A | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6673/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6673/timeline | null | null | 10 | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6672 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6672/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6672/comments | https://api.github.com/repos/huggingface/datasets/issues/6672/events | https://github.com/huggingface/datasets/pull/6672 | 2,138,732,288 | PR_kwDODunzps5nGAlw | 6,672 | Remove deprecated verbose parameter from CSV builder | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | 3 | "2024-02-16T14:26:21" | "2024-02-19T09:26:34" | "2024-02-19T09:20:22" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6672.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6672",
"merged_at": "2024-02-19T09:20:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6672.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6672"
} | Remove deprecated `verbose` parameter from CSV builder.
Note that the `verbose` parameter is deprecated since pandas 2.2.0. See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450
Fix #6671. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6672/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6672/timeline | null | null | 11 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I am merging this PR (so that it is included in the next patch release) to remove the deprecation warning raised by the CSV builder from pandas 2.2.0.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005374 / 0.011353 (-0.005979) | 0.003833 / 0.011008 (-0.007175) | 0.063465 / 0.038508 (0.024957) | 0.029564 / 0.023109 (0.006455) | 0.252759 / 0.275898 (-0.023139) | 0.274726 / 0.323480 (-0.048754) | 0.004014 / 0.007986 (-0.003971) | 0.002754 / 0.004328 (-0.001574) | 0.049351 / 0.004250 (0.045101) | 0.041858 / 0.037052 (0.004806) | 0.269023 / 0.258489 (0.010534) | 0.290670 / 0.293841 (-0.003171) | 0.028435 / 0.128546 (-0.100111) | 0.010988 / 0.075646 (-0.064658) | 0.207447 / 0.419271 (-0.211824) | 0.035945 / 0.043533 (-0.007588) | 0.257336 / 0.255139 (0.002197) | 0.267310 / 0.283200 (-0.015890) | 0.018575 / 0.141683 (-0.123108) | 1.144515 / 1.452155 (-0.307640) | 1.214614 / 1.492716 (-0.278102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103527 / 0.018006 (0.085521) | 0.310607 / 0.000490 (0.310117) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018597 / 0.037411 (-0.018814) | 0.063176 / 0.014526 (0.048650) | 0.073553 / 0.176557 (-0.103003) | 0.120648 / 0.737135 (-0.616487) | 0.075625 / 0.296338 (-0.220713) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289148 / 0.215209 (0.073939) | 2.798351 / 2.077655 (0.720696) | 1.487909 / 1.504120 (-0.016211) | 1.369945 / 1.541195 (-0.171250) | 1.378889 / 1.468490 (-0.089602) | 0.569825 / 4.584777 (-4.014952) | 2.413309 / 3.745712 (-1.332403) | 2.795668 / 5.269862 (-2.474193) | 1.757748 / 4.565676 (-2.807929) | 0.064686 / 0.424275 (-0.359589) | 0.005027 / 0.007607 (-0.002580) | 0.341835 / 0.226044 (0.115791) | 3.349915 / 2.268929 (1.080987) | 1.864253 / 55.444624 (-53.580371) | 1.595788 / 6.876477 (-5.280688) | 1.666127 / 2.142072 (-0.475945) | 0.665239 / 4.805227 (-4.139989) | 0.120563 / 6.500664 (-6.380101) | 0.043649 / 0.075469 (-0.031820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988543 / 1.841788 (-0.853244) | 11.973275 / 8.074308 (3.898967) | 9.685401 / 10.191392 (-0.505991) | 0.141416 / 0.680424 (-0.539008) | 0.014328 / 0.534201 (-0.519873) | 0.287063 / 0.579283 (-0.292220) | 0.266284 / 0.434364 (-0.168080) | 0.324643 / 0.540337 (-0.215694) | 0.423845 / 1.386936 (-0.963091) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005430 / 0.011353 (-0.005923) | 0.003770 / 0.011008 (-0.007239) | 0.050879 / 0.038508 (0.012371) | 0.031929 / 0.023109 (0.008819) | 0.297739 / 0.275898 (0.021841) | 0.319380 / 0.323480 (-0.004100) | 0.004348 / 0.007986 (-0.003637) | 0.002783 / 0.004328 (-0.001545) | 0.050024 / 0.004250 (0.045774) | 0.045209 / 0.037052 (0.008157) | 0.307608 / 0.258489 (0.049119) | 0.338168 / 0.293841 (0.044327) | 0.051712 / 0.128546 (-0.076834) | 0.011092 / 0.075646 (-0.064554) | 0.059830 / 0.419271 (-0.359441) | 0.033894 / 0.043533 (-0.009638) | 0.295278 / 0.255139 (0.040139) | 0.310749 / 0.283200 (0.027550) | 0.018676 / 0.141683 (-0.123007) | 1.201086 / 1.452155 (-0.251069) | 1.258214 / 1.492716 (-0.234502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094079 / 0.018006 (0.076073) | 0.304657 / 0.000490 (0.304168) | 0.000225 / 0.000200 (0.000026) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021969 / 0.037411 (-0.015442) | 0.075749 / 0.014526 (0.061223) | 0.087878 / 0.176557 (-0.088679) | 0.126022 / 0.737135 (-0.611114) | 0.089466 / 0.296338 (-0.206873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286415 / 0.215209 (0.071206) | 2.831867 / 2.077655 (0.754212) | 1.584119 / 1.504120 (0.079999) | 1.468454 / 1.541195 (-0.072740) | 1.495831 / 1.468490 (0.027341) | 0.579569 / 4.584777 (-4.005208) | 2.477248 / 3.745712 (-1.268464) | 2.830536 / 5.269862 (-2.439325) | 1.820188 / 4.565676 (-2.745488) | 0.064408 / 0.424275 (-0.359867) | 0.005156 / 0.007607 (-0.002451) | 0.342391 / 0.226044 (0.116347) | 3.424380 / 2.268929 (1.155452) | 1.993110 / 55.444624 (-53.451514) | 1.702971 / 6.876477 (-5.173506) | 1.844281 / 2.142072 (-0.297792) | 0.668208 / 4.805227 (-4.137020) | 0.120306 / 6.500664 (-6.380358) | 0.042127 / 0.075469 (-0.033342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.019118 / 1.841788 (-0.822670) | 12.418330 / 8.074308 (4.344022) | 10.474226 / 10.191392 (0.282834) | 0.148510 / 0.680424 (-0.531914) | 0.015107 / 0.534201 (-0.519094) | 0.289488 / 0.579283 (-0.289795) | 0.278149 / 0.434364 (-0.156215) | 0.334655 / 0.540337 (-0.205682) | 0.419127 / 1.386936 (-0.967809) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#58733d2824192fc748cc8730cf77c33be5ded2ea \"CML watermark\")\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/6671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6671/comments | https://api.github.com/repos/huggingface/datasets/issues/6671/events | https://github.com/huggingface/datasets/issues/6671 | 2,138,727,870 | I_kwDODunzps5_emW- | 6,671 | CSV builder raises deprecation warning on verbose parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 0 | "2024-02-16T14:23:46" | "2024-02-19T09:20:23" | "2024-02-19T09:20:23" | MEMBER | null | null | null | CSV builder raises a deprecation warning on `verbose` parameter:
```
FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version.
```
See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6671/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6671/timeline | null | completed | 12 | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6670/comments | https://api.github.com/repos/huggingface/datasets/issues/6670/events | https://github.com/huggingface/datasets/issues/6670 | 2,138,372,958 | I_kwDODunzps5_dPte | 6,670 | ValueError | {
"avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4",
"events_url": "https://api.github.com/users/prashanth19bolukonda/events{/privacy}",
"followers_url": "https://api.github.com/users/prashanth19bolukonda/followers",
"following_url": "https://api.github.com/users/prashanth19bolukonda/following{/other_user}",
"gists_url": "https://api.github.com/users/prashanth19bolukonda/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prashanth19bolukonda",
"id": 112316000,
"login": "prashanth19bolukonda",
"node_id": "U_kgDOBrHOYA",
"organizations_url": "https://api.github.com/users/prashanth19bolukonda/orgs",
"received_events_url": "https://api.github.com/users/prashanth19bolukonda/received_events",
"repos_url": "https://api.github.com/users/prashanth19bolukonda/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prashanth19bolukonda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prashanth19bolukonda/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prashanth19bolukonda"
} | [] | closed | false | null | [] | null | 2 | "2024-02-16T11:05:17" | "2024-02-17T04:26:34" | "2024-02-16T14:43:53" | NONE | null | null | null | ### Describe the bug
ValueError Traceback (most recent call last)
[<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>()
9 import numpy as np
10 import matplotlib.pyplot as plt
---> 11 from datasets import DatasetDict, Dataset
12 from transformers import AutoTokenizer, AutoModelForSequenceClassification
13 from transformers import Trainer, TrainingArguments
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
16 __version__ = "2.17.0"
17
---> 18 from .arrow_dataset import Dataset
19 from .arrow_reader import ReadInstruction
20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
65
66 from . import config
---> 67 from .arrow_reader import ArrowReader
68 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
69 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
27
28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
30 from tqdm.contrib.concurrent import thread_map
31
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module>
18 # flake8: noqa
19
---> 20 from .core import *
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module>
34 import pyarrow as pa
35 import pyarrow.lib as lib
---> 36 import pyarrow._parquet as _parquet
37
38 from pyarrow._parquet import (ParquetReader, Statistics, # noqa
/usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet()
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Expected behavior
Resolve the binary incompatibility
### Environment info
Google Colab Note book | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6670/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6670/timeline | null | completed | 13 | false | [
"Hi @prashanth19bolukonda,\r\n\r\nYou have to restart the notebook runtime session after the installation of `datasets`.\r\n\r\nDuplicate of:\r\n- #5923",
"Thank you soo much\r\n\r\nOn Fri, Feb 16, 2024 at 8:14 PM Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> Closed #6670 <https://github.com/huggingface/datasets/issues/6670> as\r\n> completed.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6670#event-11829788289>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A2Y44YDQOBUFUWMR4C5O3QTYT5WDJAVCNFSM6AAAAABDL24S5SVHI2DSMVQWIX3LMV45UABCJFZXG5LFIV3GK3TUJZXXI2LGNFRWC5DJN5XDWMJRHAZDSNZYHAZDQOI>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/6669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6669/comments | https://api.github.com/repos/huggingface/datasets/issues/6669/events | https://github.com/huggingface/datasets/issues/6669 | 2,138,322,662 | I_kwDODunzps5_dDbm | 6,669 | attribute error when writing trainer.train() | {
"avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4",
"events_url": "https://api.github.com/users/prashanth19bolukonda/events{/privacy}",
"followers_url": "https://api.github.com/users/prashanth19bolukonda/followers",
"following_url": "https://api.github.com/users/prashanth19bolukonda/following{/other_user}",
"gists_url": "https://api.github.com/users/prashanth19bolukonda/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prashanth19bolukonda",
"id": 112316000,
"login": "prashanth19bolukonda",
"node_id": "U_kgDOBrHOYA",
"organizations_url": "https://api.github.com/users/prashanth19bolukonda/orgs",
"received_events_url": "https://api.github.com/users/prashanth19bolukonda/received_events",
"repos_url": "https://api.github.com/users/prashanth19bolukonda/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prashanth19bolukonda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prashanth19bolukonda/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prashanth19bolukonda"
} | [] | open | false | null | [] | null | 0 | "2024-02-16T10:40:49" | "2024-02-16T10:40:49" | null | NONE | null | null | null | ### Describe the bug
AttributeError Traceback (most recent call last)
Cell In[39], line 2
1 # Start the training process
----> 2 trainer.train()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1537 hf_hub_utils.enable_progress_bars()
1538 else:
-> 1539 return inner_training_loop(
1540 args=args,
1541 resume_from_checkpoint=resume_from_checkpoint,
1542 trial=trial,
1543 ignore_keys_for_eval=ignore_keys_for_eval,
1544 )
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1836, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1833 rng_to_sync = True
1835 step = -1
-> 1836 for step, inputs in enumerate(epoch_iterator):
1837 total_batched_samples += 1
1839 if self.args.include_num_input_tokens_seen:
File /opt/conda/lib/python3.10/site-packages/accelerate/data_loader.py:451, in DataLoaderShard.__iter__(self)
449 # We iterate one batch ahead to check when we are at the end
450 try:
--> 451 current_batch = next(dataloader_iter)
452 except StopIteration:
453 yield
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self)
627 if self._sampler_iter is None:
628 # TODO([https://github.com/pytorch/pytorch/issues/76750)](https://github.com/pytorch/pytorch/issues/76750)%3C/span%3E)
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
631 self._num_yielded += 1
632 if self._dataset_kind == _DatasetKind.Iterable and \
633 self._IterableDataset_len_called is not None and \
634 self._num_yielded > self._IterableDataset_len_called:
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:674, in _SingleProcessDataLoaderIter._next_data(self)
672 def _next_data(self):
673 index = self._next_index() # may raise StopIteration
--> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
675 if self._pin_memory:
676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
53 data = self.dataset[possibly_batched_index]
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in <listcomp>(.0)
49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
53 data = self.dataset[possibly_batched_index]
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1764, in Dataset.__getitem__(self, key)
1762 def __getitem__(self, key): # noqa: F811
1763 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 1764 return self._getitem(
1765 key,
1766 )
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1749, in Dataset._getitem(self, key, decoded, **kwargs)
1747 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
1748 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 1749 formatted_output = format_table(
1750 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1751 )
1752 return formatted_output
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:540, in format_table(table, key, formatter, format_columns, output_all_columns)
538 else:
539 pa_table_to_format = pa_table.drop(col for col in pa_table.column_names if col not in format_columns)
--> 540 formatted_output = formatter(pa_table_to_format, query_type=query_type)
541 if output_all_columns:
542 if isinstance(formatted_output, MutableMapping):
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:57, in TorchFormatter.format_row(self, pa_table)
56 def format_row(self, pa_table: pa.Table) -> dict:
---> 57 row = self.numpy_arrow_extractor().extract_row(pa_table)
58 return self.recursive_tensorize(row)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:154, in NumpyArrowExtractor.extract_row(self, pa_table)
153 def extract_row(self, pa_table: pa.Table) -> dict:
--> 154 return _unnest(self.extract_batch(pa_table))
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in NumpyArrowExtractor.extract_batch(self, pa_table)
159 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in <dictcomp>(.0)
159 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:196, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
195 if len(array) > 0:
--> 196 if any(
197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
198 or (isinstance(x, float) and np.isnan(x))
199 for x in array
200 ):
201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
202 return np.array(array, copy=False, **self.np_array_kwargs)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:197, in <genexpr>(.0)
194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
195 if len(array) > 0:
196 if any(
--> 197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
198 or (isinstance(x, float) and np.isnan(x))
199 for x in array
200 ):
201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
202 return np.array(array, copy=False, **self.np_array_kwargs)
File /opt/conda/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr)
319 warnings.warn(
320 f"In the future `np.{attr}` will be defined as the "
321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
323 if attr in __former_attrs__:
--> 324 raise AttributeError(__former_attrs__[attr])
326 if attr == 'testing':
327 import numpy.testing as testing
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecationsAttributeError Traceback (most recent call last)
Cell In[39], line 2
1 # Start the training process
----> 2 trainer.train()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1537 hf_hub_utils.enable_progress_bars()
1538 else:
-> 1539 return inner_training_loop(
1540 args=args,
1541 resume_from_checkpoint=resume_from_checkpoint,
1542 trial=trial,
1543 ignore_keys_for_eval=ignore_keys_for_eval,
1544 )
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1836, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
1833 rng_to_sync = True
1835 step = -1
-> 1836 for step, inputs in enumerate(epoch_iterator):
1837 total_batched_samples += 1
1839 if self.args.include_num_input_tokens_seen:
File /opt/conda/lib/python3.10/site-packages/accelerate/data_loader.py:451, in DataLoaderShard.__iter__(self)
449 # We iterate one batch ahead to check when we are at the end
450 try:
--> 451 current_batch = next(dataloader_iter)
452 except StopIteration:
453 yield
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self)
627 if self._sampler_iter is None:
628 # TODO([https://github.com/pytorch/pytorch/issues/76750)](https://github.com/pytorch/pytorch/issues/76750)%3C/span%3E)
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
631 self._num_yielded += 1
632 if self._dataset_kind == _DatasetKind.Iterable and \
633 self._IterableDataset_len_called is not None and \
634 self._num_yielded > self._IterableDataset_len_called:
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:674, in _SingleProcessDataLoaderIter._next_data(self)
672 def _next_data(self):
673 index = self._next_index() # may raise StopIteration
--> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
675 if self._pin_memory:
676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
53 data = self.dataset[possibly_batched_index]
File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in <listcomp>(.0)
49 data = self.dataset.__getitems__(possibly_batched_index)
50 else:
---> 51 data = [self.dataset[idx] for idx in possibly_batched_index]
52 else:
53 data = self.dataset[possibly_batched_index]
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1764, in Dataset.__getitem__(self, key)
1762 def __getitem__(self, key): # noqa: F811
1763 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 1764 return self._getitem(
1765 key,
1766 )
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1749, in Dataset._getitem(self, key, decoded, **kwargs)
1747 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
1748 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 1749 formatted_output = format_table(
1750 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1751 )
1752 return formatted_output
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:540, in format_table(table, key, formatter, format_columns, output_all_columns)
538 else:
539 pa_table_to_format = pa_table.drop(col for col in pa_table.column_names if col not in format_columns)
--> 540 formatted_output = formatter(pa_table_to_format, query_type=query_type)
541 if output_all_columns:
542 if isinstance(formatted_output, MutableMapping):
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:57, in TorchFormatter.format_row(self, pa_table)
56 def format_row(self, pa_table: pa.Table) -> dict:
---> 57 row = self.numpy_arrow_extractor().extract_row(pa_table)
58 return self.recursive_tensorize(row)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:154, in NumpyArrowExtractor.extract_row(self, pa_table)
153 def extract_row(self, pa_table: pa.Table) -> dict:
--> 154 return _unnest(self.extract_batch(pa_table))
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in NumpyArrowExtractor.extract_batch(self, pa_table)
159 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in <dictcomp>(.0)
159 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:196, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
195 if len(array) > 0:
--> 196 if any(
197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
198 or (isinstance(x, float) and np.isnan(x))
199 for x in array
200 ):
201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
202 return np.array(array, copy=False, **self.np_array_kwargs)
File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:197, in <genexpr>(.0)
194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()
195 if len(array) > 0:
196 if any(
--> 197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape))
198 or (isinstance(x, float) and np.isnan(x))
199 for x in array
200 ):
201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object})
202 return np.array(array, copy=False, **self.np_array_kwargs)
File /opt/conda/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr)
319 warnings.warn(
320 f"In the future `np.{attr}` will be defined as the "
321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2)
323 if attr in __former_attrs__:
--> 324 raise AttributeError(__former_attrs__[attr])
326 if attr == 'testing':
327 import numpy.testing as testing
AttributeError: module 'numpy' has no attribute 'object'.
`np.object` was a deprecated alias for the builtin `object`. To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe.
The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at:
https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
Please help me to resolve the above error
### Steps to reproduce the bug
Please resolve the issue of deprecated function np.object to object in the numpy
### Expected behavior
np.object should be written as object only
### Environment info
kaggle notebook | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6669/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6669/timeline | null | null | 14 | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6668/comments | https://api.github.com/repos/huggingface/datasets/issues/6668/events | https://github.com/huggingface/datasets/issues/6668 | 2,137,859,935 | I_kwDODunzps5_bSdf | 6,668 | Chapter 6 - Issue Loading `cnn_dailymail` dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/34660389?v=4",
"events_url": "https://api.github.com/users/hariravichandran/events{/privacy}",
"followers_url": "https://api.github.com/users/hariravichandran/followers",
"following_url": "https://api.github.com/users/hariravichandran/following{/other_user}",
"gists_url": "https://api.github.com/users/hariravichandran/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hariravichandran",
"id": 34660389,
"login": "hariravichandran",
"node_id": "MDQ6VXNlcjM0NjYwMzg5",
"organizations_url": "https://api.github.com/users/hariravichandran/orgs",
"received_events_url": "https://api.github.com/users/hariravichandran/received_events",
"repos_url": "https://api.github.com/users/hariravichandran/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hariravichandran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hariravichandran/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hariravichandran"
} | [] | open | false | null | [] | null | 0 | "2024-02-16T04:40:56" | "2024-02-16T04:40:56" | null | NONE | null | null | null | ### Describe the bug
So I am getting this bug when I try to run cell 4 of the Chapter 6 notebook code:
`dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")`
Error Message:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[4], line 4
1 #hide_output
2 from datasets import load_dataset
----> 4 dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")
7 # dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0", trust_remote_code=True)
8 print(f"Features: {dataset['train'].column_names}")
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2583 # Build dataset for splits
2584 keep_in_memory = (
2585 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2586 )
-> 2587 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
2588 # Rename and cast features to match task schema
2589 if task is not None:
2590 # To avoid issuing the same warning twice
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1244, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1241 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS)
1243 # Create a dataset for each of the given splits
-> 1244 datasets = map_nested(
1245 partial(
1246 self._build_single_dataset,
1247 run_post_process=run_post_process,
1248 verification_mode=verification_mode,
1249 in_memory=in_memory,
1250 ),
1251 split,
1252 map_tuple=True,
1253 disable_tqdm=True,
1254 )
1255 if isinstance(datasets, dict):
1256 datasets = DatasetDict(datasets)
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:477, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)
466 mapped = [
467 map_nested(
468 function=function,
(...)
474 for obj in iterable
475 ]
476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:
--> 477 mapped = [
478 _single_map_nested((function, obj, types, None, True, None))
479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
480 ]
481 else:
482 with warnings.catch_warnings():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:478, in <listcomp>(.0)
466 mapped = [
467 map_nested(
468 function=function,
(...)
474 for obj in iterable
475 ]
476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length:
477 mapped = [
--> 478 _single_map_nested((function, obj, types, None, True, None))
479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
480 ]
481 else:
482 with warnings.catch_warnings():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:370, in _single_map_nested(args)
368 # Singleton first to spare some computation
369 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 370 return function(data_struct)
372 # Reduce logging to keep things readable in multiprocessing with tqdm
373 if rank is not None and logging.get_verbosity() < logging.WARNING:
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1274, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory)
1271 split = Split(split)
1273 # Build base dataset
-> 1274 ds = self._as_dataset(
1275 split=split,
1276 in_memory=in_memory,
1277 )
1278 if run_post_process:
1279 for resource_file_name in self._post_processing_resources(split).values():
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1348, in DatasetBuilder._as_dataset(self, split, in_memory)
1346 if self._check_legacy_cache():
1347 dataset_name = self.name
-> 1348 dataset_kwargs = ArrowReader(cache_dir, self.info).read(
1349 name=dataset_name,
1350 instructions=split,
1351 split_infos=self.info.splits.values(),
1352 in_memory=in_memory,
1353 )
1354 fingerprint = self._get_dataset_fingerprint(split)
1355 return Dataset(fingerprint=fingerprint, **dataset_kwargs)
File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\arrow_reader.py:254, in BaseReader.read(self, name, instructions, split_infos, in_memory)
252 if not files:
253 msg = f'Instruction "{instructions}" corresponds to no data!'
--> 254 raise ValueError(msg)
255 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
**ValueError: Instruction "validation" corresponds to no data!**
````
Looks like the data is not being loaded. Any advice would be appreciated. Thanks!
### Steps to reproduce the bug
Run all cells of Chapter 6 notebook.
### Expected behavior
Data should load correctly without any errors.
### Environment info
- `datasets` version: 2.17.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.18
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6668/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6668/timeline | null | null | 15 | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6667/comments | https://api.github.com/repos/huggingface/datasets/issues/6667/events | https://github.com/huggingface/datasets/issues/6667 | 2,137,769,552 | I_kwDODunzps5_a8ZQ | 6,667 | Default config for squad is incorrect | {
"avatar_url": "https://avatars.githubusercontent.com/u/22651617?v=4",
"events_url": "https://api.github.com/users/kiddyboots216/events{/privacy}",
"followers_url": "https://api.github.com/users/kiddyboots216/followers",
"following_url": "https://api.github.com/users/kiddyboots216/following{/other_user}",
"gists_url": "https://api.github.com/users/kiddyboots216/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kiddyboots216",
"id": 22651617,
"login": "kiddyboots216",
"node_id": "MDQ6VXNlcjIyNjUxNjE3",
"organizations_url": "https://api.github.com/users/kiddyboots216/orgs",
"received_events_url": "https://api.github.com/users/kiddyboots216/received_events",
"repos_url": "https://api.github.com/users/kiddyboots216/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kiddyboots216/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kiddyboots216/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kiddyboots216"
} | [] | open | false | null | [] | null | 0 | "2024-02-16T02:36:55" | "2024-02-16T02:36:55" | null | NONE | null | null | null | ### Describe the bug
If you download Squad, it will download the plain_text version, but the config still specifies "default", so if you set the offline mode the cache will try to look it up according to the config_id which is "default" and this will say;
ValueError: Couldn't find cache for squad for config 'default'
Available configs in the cache: ['plain_text']
### Steps to reproduce the bug
1. export HF_DATASETS_OFFLINE=0
2. load_dataset("squad")
3. export HF_DATASETS_OFFLINE=1
4. load_dataset("squad")
### Expected behavior
We should change the config_name I guess?
### Environment info
linux, latest version of datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6667/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6667/timeline | null | null | 16 | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6665/comments | https://api.github.com/repos/huggingface/datasets/issues/6665/events | https://github.com/huggingface/datasets/pull/6665 | 2,136,136,425 | PR_kwDODunzps5m9JgW | 6,665 | Allow SplitDict setitem to replace existing SplitInfo | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | 1 | "2024-02-15T10:17:08" | "2024-02-15T10:21:26" | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6665.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6665",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6665.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6665"
} | Fix this code provided by @clefourrier
```python
import datasets
import os
token = os.getenv("TOKEN")
results = datasets.load_dataset("gaia-benchmark/results_public", "2023", token=token, download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)
results["test"] = datasets.Dataset.from_list([row for row in results["test"] if row["model"] != "StateFlow"])
results["test"].push_to_hub("gaia-benchmark/results_public", "2023", token=token, split="test")
```
```
ValueError Traceback (most recent call last)
Cell In[43], line 1
----> 1 results["test"].push_to_hub("gaia-benchmark/results_public", "2023", token=token, split="test")
File ~/miniconda3/envs/default310/lib/python3.10/site-packages/datasets/arrow_dataset.py:5498, in Dataset.push_to_hub(self, repo_id, config_name, split, private, token, branch, max_shard_size, num_shards, embed_external_files)
5496 repo_info.dataset_size = (repo_info.dataset_size or 0) + dataset_nbytes
5497 repo_info.size_in_bytes = repo_info.download_size + repo_info.dataset_size
-> 5498 repo_info.splits[split] = SplitInfo(
5499 split, num_bytes=dataset_nbytes, num_examples=len(self), dataset_name=dataset_name
5500 )
5501 info_to_dump = repo_info
5502 # create the metadata configs if it was uploaded with push_to_hub before metadata configs existed
File ~/miniconda3/envs/default310/lib/python3.10/site-packages/datasets/splits.py:541, in SplitDict.__setitem__(self, key, value)
539 raise ValueError(f"Cannot add elem. (key mismatch: '{key}' != '{value.name}')")
540 if key in self:
--> 541 raise ValueError(f"Split {key} already present")
542 super().__setitem__(key, value)
ValueError: Split test already present
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6665/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6665/timeline | null | null | 17 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6665). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/6664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6664/comments | https://api.github.com/repos/huggingface/datasets/issues/6664/events | https://github.com/huggingface/datasets/pull/6664 | 2,135,483,978 | PR_kwDODunzps5m67g0 | 6,664 | Revert the changes in `arrow_writer.py` from #6636 | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410"
} | [] | closed | false | null | [] | null | 5 | "2024-02-15T01:47:33" | "2024-02-16T14:02:39" | "2024-02-16T02:31:11" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6664.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6664",
"merged_at": "2024-02-16T02:31:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6664.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6664"
} | #6636 broke `write_examples_on_file` and `write_batch` from the class `ArrowWriter`. I'm undoing these changes. See #6663.
Note the current implementation doesn't keep the order of the columns and the schema, thus setting a wrong schema for each column. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6664/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6664/timeline | null | null | 18 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6664). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Hi! We can't revert this as the \"reverted\" implementation has quadratic time complexity. Instead, let's fix it:\r\n\r\nI agree, but it's the implementation we have had so far. Why don't we:\r\n1. Release a hotfix ASAP (since would be doing a revert, we know it works as before) so people can continue using this library fine since AFAIU right now mostly writing examples for people is broken.\r\n2. Then, focus on still applying the performance improvement and release again",
"The fix is straightforward, so one patch release (after this PR is merged) is enough.\r\n\r\nBtw, let's also add a test to `tests/test_arrow_writer.py` to avoid this issue in the future.",
"> Btw, let's also add a test to tests/test_arrow_writer.py to avoid this issue in the future.\r\n\r\nWould you mind adding such test, as you're more familiar with the codebase?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005083 / 0.011353 (-0.006270) | 0.003697 / 0.011008 (-0.007311) | 0.063302 / 0.038508 (0.024794) | 0.028866 / 0.023109 (0.005757) | 0.249987 / 0.275898 (-0.025911) | 0.270803 / 0.323480 (-0.052677) | 0.004096 / 0.007986 (-0.003890) | 0.002752 / 0.004328 (-0.001577) | 0.049156 / 0.004250 (0.044906) | 0.042936 / 0.037052 (0.005884) | 0.266907 / 0.258489 (0.008418) | 0.291462 / 0.293841 (-0.002379) | 0.027703 / 0.128546 (-0.100844) | 0.011006 / 0.075646 (-0.064641) | 0.206238 / 0.419271 (-0.213033) | 0.035446 / 0.043533 (-0.008087) | 0.248923 / 0.255139 (-0.006216) | 0.264141 / 0.283200 (-0.019058) | 0.017545 / 0.141683 (-0.124138) | 1.157145 / 1.452155 (-0.295009) | 1.199007 / 1.492716 (-0.293710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092741 / 0.018006 (0.074734) | 0.299057 / 0.000490 (0.298567) | 0.000211 / 0.000200 (0.000011) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017936 / 0.037411 (-0.019475) | 0.061552 / 0.014526 (0.047026) | 0.072938 / 0.176557 (-0.103618) | 0.118192 / 0.737135 (-0.618944) | 0.074589 / 0.296338 (-0.221750) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287186 / 0.215209 (0.071977) | 2.795694 / 2.077655 (0.718039) | 1.474386 / 1.504120 (-0.029734) | 1.359065 / 1.541195 (-0.182130) | 1.375295 / 1.468490 (-0.093196) | 0.569448 / 4.584777 (-4.015329) | 2.374428 / 3.745712 (-1.371284) | 2.770198 / 5.269862 (-2.499663) | 1.716346 / 4.565676 (-2.849330) | 0.063173 / 0.424275 (-0.361102) | 0.005031 / 0.007607 (-0.002576) | 0.333197 / 0.226044 (0.107153) | 3.271739 / 2.268929 (1.002811) | 1.826406 / 55.444624 (-53.618218) | 1.554537 / 6.876477 (-5.321939) | 1.565927 / 2.142072 (-0.576146) | 0.649796 / 4.805227 (-4.155431) | 0.118371 / 6.500664 (-6.382293) | 0.042536 / 0.075469 (-0.032933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969882 / 1.841788 (-0.871906) | 11.638201 / 8.074308 (3.563893) | 9.759370 / 10.191392 (-0.432022) | 0.128069 / 0.680424 (-0.552355) | 0.013493 / 0.534201 (-0.520708) | 0.287324 / 0.579283 (-0.291959) | 0.267542 / 0.434364 (-0.166821) | 0.320072 / 0.540337 (-0.220265) | 0.421132 / 1.386936 (-0.965804) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005679 / 0.011353 (-0.005674) | 0.003746 / 0.011008 (-0.007262) | 0.050149 / 0.038508 (0.011641) | 0.034382 / 0.023109 (0.011273) | 0.289802 / 0.275898 (0.013904) | 0.314993 / 0.323480 (-0.008487) | 0.004488 / 0.007986 (-0.003498) | 0.002786 / 0.004328 (-0.001542) | 0.047987 / 0.004250 (0.043737) | 0.046589 / 0.037052 (0.009537) | 0.301420 / 0.258489 (0.042931) | 0.335384 / 0.293841 (0.041543) | 0.050701 / 0.128546 (-0.077845) | 0.010987 / 0.075646 (-0.064660) | 0.058292 / 0.419271 (-0.360979) | 0.033973 / 0.043533 (-0.009560) | 0.288923 / 0.255139 (0.033784) | 0.306263 / 0.283200 (0.023064) | 0.018856 / 0.141683 (-0.122827) | 1.160721 / 1.452155 (-0.291433) | 1.208151 / 1.492716 (-0.284565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092633 / 0.018006 (0.074626) | 0.300353 / 0.000490 (0.299864) | 0.000219 / 0.000200 (0.000019) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022257 / 0.037411 (-0.015154) | 0.075417 / 0.014526 (0.060892) | 0.087289 / 0.176557 (-0.089268) | 0.125416 / 0.737135 (-0.611720) | 0.088751 / 0.296338 (-0.207588) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286477 / 0.215209 (0.071268) | 2.801931 / 2.077655 (0.724277) | 1.553034 / 1.504120 (0.048914) | 1.426152 / 1.541195 (-0.115043) | 1.443824 / 1.468490 (-0.024666) | 0.563298 / 4.584777 (-4.021479) | 2.428968 / 3.745712 (-1.316744) | 2.685964 / 5.269862 (-2.583897) | 1.752304 / 4.565676 (-2.813372) | 0.064174 / 0.424275 (-0.360101) | 0.005079 / 0.007607 (-0.002528) | 0.344899 / 0.226044 (0.118855) | 3.372528 / 2.268929 (1.103600) | 1.900723 / 55.444624 (-53.543901) | 1.623721 / 6.876477 (-5.252756) | 1.781009 / 2.142072 (-0.361064) | 0.655229 / 4.805227 (-4.149998) | 0.116050 / 6.500664 (-6.384614) | 0.040374 / 0.075469 (-0.035095) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004714 / 1.841788 (-0.837074) | 12.108179 / 8.074308 (4.033871) | 10.233447 / 10.191392 (0.042055) | 0.141438 / 0.680424 (-0.538986) | 0.015387 / 0.534201 (-0.518814) | 0.288068 / 0.579283 (-0.291216) | 0.277025 / 0.434364 (-0.157339) | 0.331714 / 0.540337 (-0.208623) | 0.424209 / 1.386936 (-0.962727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bdebf1922663c30744efb8869c86b28f102b84dd \"CML watermark\")\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/6663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6663/comments | https://api.github.com/repos/huggingface/datasets/issues/6663/events | https://github.com/huggingface/datasets/issues/6663 | 2,135,480,811 | I_kwDODunzps5_SNnr | 6,663 | `write_examples_on_file` and `write_batch` are broken in `ArrowWriter` | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410"
} | [] | closed | false | null | [] | null | 3 | "2024-02-15T01:43:27" | "2024-02-16T09:25:00" | "2024-02-16T09:25:00" | CONTRIBUTOR | null | null | null | ### Describe the bug
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636. The order between the columns and the schema is not preserved anymore. So these functions don't work anymore unless the order happens to align well.
### Steps to reproduce the bug
Try to do `write_batch` with anything that has many columns, and it's likely to break.
### Expected behavior
I expect these functions to work, instead of it trying to cast a column to its incorrect type.
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-5.15.0-1040-aws-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6663/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6663/timeline | null | completed | 19 | false | [
"Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.",
"> Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.\r\n\r\nI feel that'd be good, but it'd be great to release a hotfix ASAP (a revert is a fast thing to do) so people can continue using this library and then focus on still applying the improvement.",
"Fixed by #6664 "
] |
https://api.github.com/repos/huggingface/datasets/issues/6662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6662/comments | https://api.github.com/repos/huggingface/datasets/issues/6662/events | https://github.com/huggingface/datasets/pull/6662 | 2,132,425,812 | PR_kwDODunzps5mwgKP | 6,662 | fix: show correct package name to install biopython | {
"avatar_url": "https://avatars.githubusercontent.com/u/59344?v=4",
"events_url": "https://api.github.com/users/BioGeek/events{/privacy}",
"followers_url": "https://api.github.com/users/BioGeek/followers",
"following_url": "https://api.github.com/users/BioGeek/following{/other_user}",
"gists_url": "https://api.github.com/users/BioGeek/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BioGeek",
"id": 59344,
"login": "BioGeek",
"node_id": "MDQ6VXNlcjU5MzQ0",
"organizations_url": "https://api.github.com/users/BioGeek/orgs",
"received_events_url": "https://api.github.com/users/BioGeek/received_events",
"repos_url": "https://api.github.com/users/BioGeek/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BioGeek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BioGeek/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BioGeek"
} | [] | open | false | null | [] | null | 0 | "2024-02-13T14:15:04" | "2024-02-14T14:32:58" | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6662.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6662",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6662.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6662"
} | When you try to download a dataset that uses [biopython](https://github.com/biopython/biopython), like `load_dataset("InstaDeepAI/multi_species_genomes")`, you get the error:
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("InstaDeepAI/multi_species_genomes")
/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py:1454: FutureWarning: The repository for InstaDeepAI/multi_species_genomes contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/InstaDeepAI/multi_species_genomes
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
warnings.warn(
Downloading builder script: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.51k/7.51k [00:00<00:00, 7.67MB/s]
Downloading readme: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.2k/17.2k [00:00<00:00, 11.0MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 2548, in load_dataset
builder_instance = load_dataset_builder(
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 2220, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise e1 from None
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1844, in dataset_module_factory
).get_module()
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1466, in get_module
local_imports = _download_additional_modules(
File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 346, in _download_additional_modules
raise ImportError(
ImportError: To be able to use InstaDeepAI/multi_species_genomes, you need to install the following dependency: Bio.
Please install it using 'pip install Bio' for instance.
>>>
```
`Bio` comes from the `biopython` package that can be installed with `pip install biopython`, not with `pip install Bio` as suggested.
This PR adds special logic to show the correct package name in the error message of ` _download_additional_modules`, similarly as is done for `sklearn` / `scikit-learn` already.
There are more packages where importable module name differs from the PyPI package name, so this could be made more generic, like:
```
# Mapping of importable module names to their PyPI package names
package_map = {
"sklearn": "scikit-learn",
"Bio": "biopython",
"PIL": "Pillow",
"bs4": "beautifulsoup4"
}
for module_name, pypi_name in package_map.items():
if module_name in needs_to_be_installed.keys():
needs_to_be_installed[module_name] = pypi_name
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6662/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6662/timeline | null | null | 20 | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6661/comments | https://api.github.com/repos/huggingface/datasets/issues/6661/events | https://github.com/huggingface/datasets/issues/6661 | 2,132,296,267 | I_kwDODunzps5_GEJL | 6,661 | Import error on Google Colab | {
"avatar_url": "https://avatars.githubusercontent.com/u/16103566?v=4",
"events_url": "https://api.github.com/users/kithogue/events{/privacy}",
"followers_url": "https://api.github.com/users/kithogue/followers",
"following_url": "https://api.github.com/users/kithogue/following{/other_user}",
"gists_url": "https://api.github.com/users/kithogue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kithogue",
"id": 16103566,
"login": "kithogue",
"node_id": "MDQ6VXNlcjE2MTAzNTY2",
"organizations_url": "https://api.github.com/users/kithogue/orgs",
"received_events_url": "https://api.github.com/users/kithogue/received_events",
"repos_url": "https://api.github.com/users/kithogue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kithogue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kithogue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kithogue"
} | [] | closed | false | null | [] | null | 3 | "2024-02-13T13:12:40" | "2024-02-16T14:43:44" | "2024-02-14T08:04:47" | NONE | null | null | null | ### Describe the bug
Cannot be imported on Google Colab, the import throws the following error:
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
1. `! pip install -U datasets`
2. `import datasets`
### Expected behavior
Should be possible to use the library
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6661/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6661/timeline | null | completed | 21 | false | [
"Hi! This can happen if an incompatible `pyarrow` version (`pyarrow<12.0.0`) has been imported before the `datasets` installation and the Colab session hasn't been restarted afterward. To avoid the error, go to \"Runtime -> Restart session\" after `!pip install -U datasets` and before `import datasets`, or insert the `import os; os.kill(os.getpid(), 9)` cell between `!pip install -U datasets` and `import datasets` to do the same programmatically.",
"One possible cause might be the one pointed out by @mariosasko above, and you get the following warning on Colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\n\r\nOn the other hand, if the old version of `pyarrow` is not previously imported (before the installation of `datasets`), the reported issue here is not reproducible: `datasets` can be installed, imported and used on Colab.",
"Duplicate of:\r\n- #5923"
] |
https://api.github.com/repos/huggingface/datasets/issues/6660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6660/comments | https://api.github.com/repos/huggingface/datasets/issues/6660/events | https://github.com/huggingface/datasets/pull/6660 | 2,131,977,011 | PR_kwDODunzps5mu9wU | 6,660 | Automatic Conversion for uint16/uint32 to Compatible PyTorch Dtypes | {
"avatar_url": "https://avatars.githubusercontent.com/u/23399590?v=4",
"events_url": "https://api.github.com/users/mohalisad/events{/privacy}",
"followers_url": "https://api.github.com/users/mohalisad/followers",
"following_url": "https://api.github.com/users/mohalisad/following{/other_user}",
"gists_url": "https://api.github.com/users/mohalisad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mohalisad",
"id": 23399590,
"login": "mohalisad",
"node_id": "MDQ6VXNlcjIzMzk5NTkw",
"organizations_url": "https://api.github.com/users/mohalisad/orgs",
"received_events_url": "https://api.github.com/users/mohalisad/received_events",
"repos_url": "https://api.github.com/users/mohalisad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mohalisad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mohalisad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mohalisad"
} | [] | open | false | null | [] | null | 0 | "2024-02-13T10:24:33" | "2024-02-13T10:24:33" | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6660.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6660",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6660.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6660"
} | This PR addresses an issue encountered when utilizing uint16 or uint32 datatypes with datasets, followed by attempting to convert these datasets into PyTorch-compatible formats. Currently, doing so results in a TypeError due to incompatible datatype conversion, as illustrated by the following example:
```python
from datasets import Dataset, Sequence, Value, Features
def gen():
for i in range(100):
yield {'seq': list(range(i, i + 20))}
ds = Dataset.from_generator(gen, features=Features({'seq': Sequence(feature=Value(dtype='uint16'), length=-1)}))
ds.set_format('torch')
print(ds[0])
```
This code snippet triggers the following error due to the inability to convert numpy.uint16 arrays to a PyTorch-supported format:
```
TypeError: can't convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.
```
This PR introduces an automatic mechanism to convert np.uint16 and np.uint32 datatypes to np.int64 for seamless compatibility with PyTorch formats, simplifying workflows and improving developer experience by eliminating the need for manual conversion handling. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6660/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6660/timeline | null | null | 22 | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6659/comments | https://api.github.com/repos/huggingface/datasets/issues/6659/events | https://github.com/huggingface/datasets/pull/6659 | 2,129,229,810 | PR_kwDODunzps5mlmmo | 6,659 | Change default compression argument for JsonDatasetWriter | {
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rexhaif",
"id": 5154447,
"login": "Rexhaif",
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rexhaif"
} | [] | open | false | null | [] | null | 1 | "2024-02-11T23:49:07" | "2024-02-13T23:40:06" | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6659.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6659",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6659.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6659"
} | Change default compression type from `None` to "infer", to align with pandas' defaults.
Documentation asks the user to supply `to_json_kwargs` with arguments suitable for pandas' `to_json` method. At the same time, while pandas' by default uses ["infer"](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_json.html) for compression, datasets enforce `None` as default. This, likely, confuses user, as they expect the same behaviour, i.e they expect that if they name their output file as "dataset.jsonl.zst" then the compression would be inferred as "zstd" and file will be compressed before writing.
Moreover, while it is probably outside of the scope of this pull request, `compression` argument needs to be capable of taking `dict` as input (along with `str`), as it does in pandas, in order to allow user to specify compression parameters. Current implementation will likely fail with `NotImplementedError`, as it expects either `None` or `str` specifying compression algo. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6659/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6659/timeline | null | null | 23 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6659). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/6658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6658/comments | https://api.github.com/repos/huggingface/datasets/issues/6658/events | https://github.com/huggingface/datasets/pull/6658 | 2,129,158,371 | PR_kwDODunzps5mlZyb | 6,658 | [Resumable IterableDataset] Add IterableDataset state_dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | open | false | null | [] | null | 1 | "2024-02-11T20:35:52" | "2024-02-12T12:24:32" | null | MEMBER | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6658.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6658",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6658.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6658"
} | A simple implementation of a mechanism to resume an IterableDataset.
This is WIP and untested.
Example:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({"a": range(5)}).to_iterable_dataset(num_shards=3)
ds = concatenate_datasets([ds] * 2)
print(f"{ds.state_dict()=}")
for i, example in enumerate(ds):
print(example)
if i == 6:
state_dict = ds.state_dict()
ds.load_state_dict(state_dict)
print(f"{ds.state_dict()=}")
for example in ds:
print(example)
```
returns
```
ds.state_dict()={'ex_iterable_idx': 0, 'ex_iterables': [{'shard_idx': 0, 'shard_example_idx': 0}, {'shard_idx': 0, 'shard_example_idx': 0}]}
{'a': 0}
{'a': 1}
{'a': 2}
{'a': 3}
{'a': 4}
{'a': 0}
{'a': 1}
{'a': 2}
{'a': 3}
{'a': 4}
ds.state_dict()={'ex_iterable_idx': 1, 'ex_iterables': [{'shard_idx': 3, 'shard_example_idx': 0}, {'shard_idx': 0, 'shard_example_idx': 2}]}
{'a': 2}
{'a': 3}
{'a': 4}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6658/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6658/timeline | null | null | 24 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6658). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
https://api.github.com/repos/huggingface/datasets/issues/6657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6657/comments | https://api.github.com/repos/huggingface/datasets/issues/6657/events | https://github.com/huggingface/datasets/issues/6657 | 2,129,147,085 | I_kwDODunzps5-6DTN | 6,657 | Release not pushed to conda channel | {
"avatar_url": "https://avatars.githubusercontent.com/u/7138162?v=4",
"events_url": "https://api.github.com/users/atulsaurav/events{/privacy}",
"followers_url": "https://api.github.com/users/atulsaurav/followers",
"following_url": "https://api.github.com/users/atulsaurav/following{/other_user}",
"gists_url": "https://api.github.com/users/atulsaurav/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/atulsaurav",
"id": 7138162,
"login": "atulsaurav",
"node_id": "MDQ6VXNlcjcxMzgxNjI=",
"organizations_url": "https://api.github.com/users/atulsaurav/orgs",
"received_events_url": "https://api.github.com/users/atulsaurav/received_events",
"repos_url": "https://api.github.com/users/atulsaurav/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/atulsaurav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atulsaurav/subscriptions",
"type": "User",
"url": "https://api.github.com/users/atulsaurav"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 3 | "2024-02-11T20:05:17" | "2024-02-12T14:29:36" | null | NONE | null | null | null | ### Describe the bug
The github actions step to publish the release 2.17.0 to conda channel has failed due to expired token. Can some one please update the anaconda token rerun the failed action? @albertvillanova ?
![image](https://github.com/huggingface/datasets/assets/7138162/1b56ad3d-7643-4778-9cce-4bf531717700)
### Steps to reproduce the bug
Please see this actions [link](https://github.com/huggingface/datasets/actions/runs/7842473662)
### Expected behavior
The action runs successfully and the latest release is pushed to HuggingFace conda channel
### Environment info
Not applicable. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6657/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6657/timeline | null | null | 25 | false | [
"Thanks for reporting, @atulsaurav.\r\n\r\nWe are investigating the issue. ",
"I can't fix this issue because I do not appear as a team member of the huggingface datasets project: https://anaconda.org/huggingface/datasets\r\n\r\n@lhoestq could you please add `datasets` team members to the corresponding Anaconda project?\r\n\r\nOnce this done, I could recreate and update the Anaconda token, as mentioned above it seems the current one has expired.",
"I think @LysandreJik has access ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/6656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6656/comments | https://api.github.com/repos/huggingface/datasets/issues/6656/events | https://github.com/huggingface/datasets/issues/6656 | 2,127,338,377 | I_kwDODunzps5-zJuJ | 6,656 | Error when loading a big local json file | {
"avatar_url": "https://avatars.githubusercontent.com/u/10062216?v=4",
"events_url": "https://api.github.com/users/Riccorl/events{/privacy}",
"followers_url": "https://api.github.com/users/Riccorl/followers",
"following_url": "https://api.github.com/users/Riccorl/following{/other_user}",
"gists_url": "https://api.github.com/users/Riccorl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Riccorl",
"id": 10062216,
"login": "Riccorl",
"node_id": "MDQ6VXNlcjEwMDYyMjE2",
"organizations_url": "https://api.github.com/users/Riccorl/orgs",
"received_events_url": "https://api.github.com/users/Riccorl/received_events",
"repos_url": "https://api.github.com/users/Riccorl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Riccorl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Riccorl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Riccorl"
} | [] | open | false | null | [] | null | 0 | "2024-02-09T15:14:21" | "2024-02-09T15:14:21" | null | NONE | null | null | null | ### Describe the bug
When trying to load big json files from a local directory, `load_dataset` throws the following error
```
Traceback (most recent call last):
File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single
writer.write_table(table)
File "miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 573, in write_table
pa_table = pa_table.combine_chunks()
File "pyarrow/table.pxi", line 3638, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
```
### Steps to reproduce the bug
1. Download a big file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-train.json.gz`
2. Load it like `data = load_dataset("json", data_files=["nq-train.json"], split="train")`
```python
from datasets import load_dataset
data = load_dataset("json", data_files=["nq-train.json"], split="train")
```
A similarly formatted but smaller file, e.g. e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-dev.json.gz` is loaded without issues
```python
from datasets import load_dataset
data = load_dataset("json", data_files=["nq-dev.json"], split="train")
```
### Expected behavior
It should load normally
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6656/timeline | null | null | 26 | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6655/comments | https://api.github.com/repos/huggingface/datasets/issues/6655/events | https://github.com/huggingface/datasets/issues/6655 | 2,127,020,042 | I_kwDODunzps5-x8AK | 6,655 | Cannot load the dataset go_emotions | {
"avatar_url": "https://avatars.githubusercontent.com/u/688324?v=4",
"events_url": "https://api.github.com/users/arame/events{/privacy}",
"followers_url": "https://api.github.com/users/arame/followers",
"following_url": "https://api.github.com/users/arame/following{/other_user}",
"gists_url": "https://api.github.com/users/arame/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arame",
"id": 688324,
"login": "arame",
"node_id": "MDQ6VXNlcjY4ODMyNA==",
"organizations_url": "https://api.github.com/users/arame/orgs",
"received_events_url": "https://api.github.com/users/arame/received_events",
"repos_url": "https://api.github.com/users/arame/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arame/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arame/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arame"
} | [] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | 4 | "2024-02-09T12:15:39" | "2024-02-12T09:35:55" | null | NONE | null | null | null | ### Describe the bug
When I run the following code I get an exception;
`go_emotions = load_dataset("go_emotions")`
> AttributeError Traceback (most recent call last)
Cell In[6], [line 1](vscode-notebook-cell:?execution_count=6&line=1)
----> [1](vscode-notebook-cell:?execution_count=6&line=1) go_emotions = load_dataset("go_emotions")
[2](vscode-notebook-cell:?execution_count=6&line=2) data = go_emotions.data
File [c:\Users\hijik\anaconda3\Lib\site-packages\datasets\load.py:2523](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2523), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
[2518](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2518) verification_mode = VerificationMode(
[2519](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2519) (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
[2520](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2520) )
[2522](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2522) # Create a dataset builder
-> [2523](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2523) builder_instance = load_dataset_builder(
[2524](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2524) path=path,
[2525](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2525) name=name,
[2526](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2526) data_dir=data_dir,
[2527](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2527) data_files=data_files,
[2528](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2528) cache_dir=cache_dir,
[2529](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2529) features=features,
[2530](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2530) download_config=download_config,
[2531](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2531) download_mode=download_mode,
[2532](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2532) revision=revision,
[2533](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2533) token=token,
[2534](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2534) storage_options=storage_options,
[2535](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2535) trust_remote_code=trust_remote_code,
[2536](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2536) _require_default_config_name=name is None,
...
---> [63](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:63) if issubclass(obj_type, transformers.PreTrainedTokenizerBase):
[64](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:64) pklregister(obj_type)(_save_transformersPreTrainedTokenizerBase)
[66](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:66) # Unwrap `torch.compile`-ed functions
AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'
Output is truncated. View as a [scrollable element](command:cellOutput.enableScrolling?10bc0728-6947-456e-9a3e-f056872b04c6) or open in a [text editor](command:workbench.action.openLargeOutput?10bc0728-6947-456e-9a3e-f056872b04c6). Adjust cell output [settings](command:workbench.action.openSettings?%5B%22%40tag%3AnotebookOutputLayout%22%5D)...
### Steps to reproduce the bug
```
from datasets import load_dataset
go_emotions = load_dataset("go_emotions")
```
### Expected behavior
Should simply load the variable with the data from the file
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.16.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.3
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6655/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6655/timeline | null | null | 27 | false | [
"Thanks for reporting, @arame.\r\n\r\nI guess you have an old version of `transformers` (that submodule is present in `transformers` since version 3.0.1, since nearly 4 years ago). If you update it, the error should disappear:\r\n```shell\r\npip install -U transformers\r\n```\r\n\r\nOn the other hand, I am wondering: does it make sense to use `transformers` in this case, even if we don't need it to load the `go_emotions` dataset (already converted to Parquet files)?\r\n- Maybe @mariosasko can give some insight, as he included these code lines:\r\n - #6454\r\n\r\nhttps://github.com/huggingface/datasets/blob/9751fb14594d354e952f0ebdfaf31cb203b011e7/src/datasets/utils/_dill.py#L60-L63\r\n",
"The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n\r\nHowever, the logic does not account for `transformers<3`, so we should add a version check to fix that.",
"> The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n> \r\n> However, the logic does not account for `transformers<3`, so we should add a version check to fix that.\r\n\r\nThank you for that Mario. Would this fix solve the problem and do you have any idea when it will be done? \r\nI tried the pip install suggested by Albert and it made no difference.",
"I tried running the code today and the problem appears to be fixed."
] |
https://api.github.com/repos/huggingface/datasets/issues/6654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6654/comments | https://api.github.com/repos/huggingface/datasets/issues/6654/events | https://github.com/huggingface/datasets/issues/6654 | 2,126,939,358 | I_kwDODunzps5-xoTe | 6,654 | Batched dataset map throws exception that cannot cast fixed length array to Sequence | {
"avatar_url": "https://avatars.githubusercontent.com/u/1029671?v=4",
"events_url": "https://api.github.com/users/keesjandevries/events{/privacy}",
"followers_url": "https://api.github.com/users/keesjandevries/followers",
"following_url": "https://api.github.com/users/keesjandevries/following{/other_user}",
"gists_url": "https://api.github.com/users/keesjandevries/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/keesjandevries",
"id": 1029671,
"login": "keesjandevries",
"node_id": "MDQ6VXNlcjEwMjk2NzE=",
"organizations_url": "https://api.github.com/users/keesjandevries/orgs",
"received_events_url": "https://api.github.com/users/keesjandevries/received_events",
"repos_url": "https://api.github.com/users/keesjandevries/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/keesjandevries/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keesjandevries/subscriptions",
"type": "User",
"url": "https://api.github.com/users/keesjandevries"
} | [] | closed | false | null | [] | null | 2 | "2024-02-09T11:23:19" | "2024-02-12T08:26:53" | "2024-02-12T08:26:53" | NONE | null | null | null | ### Describe the bug
I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 2093, failing to correctly process sequence lengths.
### Steps to reproduce the bug
Create virtual environment and activate
```
virtualenv venv
source venv/bin/activate
```
Then install the datasets package (I'm using the latest version)
```
pip install datasets==2.16.1
```
Then run
```python
# bug.py
from datasets import Dataset
from datasets.features import Features, Sequence, Value
data = {
"num": [[1, 2], [3, 4]],
}
features = Features({'num': Sequence(feature=Value(dtype='int32'), length=2)})
dataset = Dataset.from_dict(data, features=features)
dataset.map(lambda x: x, batched=True, batch_size=1)
```
### Expected behavior
I get the following stack trace
```
Map: 50%|█████ | 1/2 [00:00<00:00, 423.92 examples/s]
Traceback (most recent call last):
File "/PATH/TO/BUG_PORT/bug.py", line 9, in <module>
dataset.map(lambda x: x, batched=True, batch_size=1)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3093, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
writer.write_batch(batch)
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 551, in write_batch
array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[2]
to
Sequence(feature=Value(dtype='int32', id=None), length=2, id=None)
```
After some debugging, I found that the if-statement that is actually failing is line 2093 in `datasets/table.py`
```python
# datasets/table.py
...
2093 if feature.length * len(array) == len(array_values):
2094 return pa.FixedSizeListArray.from_arrays(_c(array_values, feature.feature), feature.length)
...
```
### Environment info
Platform: MacOS
Datasets version: datasets==2.16.1
Python version: 3.9.6 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6654/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6654/timeline | null | completed | 28 | false | [
"Hi ! This issue has been fixed by https://github.com/huggingface/datasets/pull/6283\r\n\r\nCan you try again with the new release 2.17.0 ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\n",
"Amazing! It's indeed fixed now. Thanks!"
] |
Dataset Card for GitHub Issues
Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (repace the task-category-tag
with an appropriate other:other-task-name
).
task-category-tag
: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a high/low metric name. The (model name or model class) model currently achieves the following score. [IF A LEADERBOARD IS AVAILABLE]: This task has an active leaderboard which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.
Languages
Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...
When relevant, please provide BCP-47 codes, which consist of a primary language subtag, with a script subtag and/or region subtag if available.
Dataset Structure
Data Instances
Provide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
{
'example_field': ...,
...
}
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
example_field
: description ofexample_field
Note that the descriptions can be initialized with the Show Markdown Data Fields output of the tagging app, you will then only need to refine the generated descriptions.
Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If their are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
Tain | Valid | Test | |
---|---|---|---|
Input Sentences | |||
Average Sentence Length |
Dataset Creation
Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their Hugging Face version.
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See Larson 2017 for using identity categories as a variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
Considerations for Using the Data
Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example Dinan et al 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al 2020 for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
Additional Information
Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
Licensing Information
Provide the license and link to the license webpage if available.
Citation Information
Provide the BibTex-formatted reference for the dataset. For example:
@article{article_id,
author = {Author List},
title = {Dataset Paper Title},
journal = {Publication Venue},
year = {2525}
}
If the dataset has a DOI, please provide it here.
@misc{huggingfacecourse,
author = {Hugging Face},
title = {The Hugging Face Course, 2022},
howpublished = "\url{https://huggingface.co/course}",
year = {2022},
note = "[Online; accessed <today>]"
}
Contributions
Thanks to @alex-atelo for adding this dataset.
- Downloads last month
- 38