url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.23B) | node_id (stringlengths 18-32) | number (int64 1-4.31k) | title (stringlengths 1-276) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64 1,587B-1,652B) | updated_at (int64 1,587B-1,652B) | closed_at (int64 1,587B-1,652B ⌀) | author_association (stringclasses 3 values) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3201/comments | https://api.github.com/repos/huggingface/datasets/issues/3201/events | https://github.com/huggingface/datasets/issues/3201 | 1,043,209,142 | I_kwDODunzps4-Lhu2 | 3,201 | Add GSM8K dataset | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,928,604,000 | 1,649,850,972,000 | 1,649,850,971,000 | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** GSM8K (short for Grade School Math 8k)
- **Description:** GSM8K is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems created by human problem writers.
- **Paper:** https://openai.com/blog/grade-school-math/
- **Data:** https://github.com/openai/grade-school-math
- **Motivation:** The dataset is useful to investigate the reasoning abilities of large Transformer models, such as GPT-3.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3201/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3200/comments | https://api.github.com/repos/huggingface/datasets/issues/3200/events | https://github.com/huggingface/datasets/pull/3200 | 1,042,887,291 | PR_kwDODunzps4uAZLu | 3,200 | Catch token invalid error in CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,890,186,000 | 1,635,932,468,000 | 1,635,932,468,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3200",
"html_url": "https://github.com/huggingface/datasets/pull/3200",
"diff_url": "https://github.com/huggingface/datasets/pull/3200.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3200.patch",
"merged_at": 1635932468000
} | The staging back end sometimes returns invalid token errors when trying to delete a repo.
I modified the fixture in the test that uses staging to ignore this error. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3200/timeline | null | true |
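A minimal sketch of the idea described in the PR above, assuming a pytest fixture that deletes a staging repo on teardown; the fixture and argument names are illustrative, not the actual test code.

```python
import pytest
from huggingface_hub import HfApi

# Illustrative fixture: `hf_api` and `staging_repo_id` are assumed to be
# provided by other fixtures; they are not the real names from the suite.
@pytest.fixture
def temporary_repo(hf_api: HfApi, staging_repo_id: str):
    yield staging_repo_id
    try:
        hf_api.delete_repo(staging_repo_id, repo_type="dataset")
    except Exception as err:
        # The staging backend sometimes returns spurious invalid-token
        # errors on deletion; swallow only those and re-raise the rest.
        if "invalid token" not in str(err).lower():
            raise
```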
https://api.github.com/repos/huggingface/datasets/issues/3199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3199/comments | https://api.github.com/repos/huggingface/datasets/issues/3199/events | https://github.com/huggingface/datasets/pull/3199 | 1,042,860,935 | PR_kwDODunzps4uAVzQ | 3,199 | Bump huggingface_hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,888,550,000 | 1,636,854,491,000 | 1,635,889,300,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3199",
"html_url": "https://github.com/huggingface/datasets/pull/3199",
"diff_url": "https://github.com/huggingface/datasets/pull/3199.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3199.patch",
"merged_at": 1635889300000
} | huggingface_hub just released its first minor version, so we need to update the dependency.
It was supposed to be part of 1.15.0, but I'm adding it for 1.15.1. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3199/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3198/comments | https://api.github.com/repos/huggingface/datasets/issues/3198/events | https://github.com/huggingface/datasets/pull/3198 | 1,042,679,548 | PR_kwDODunzps4t_5G8 | 3,198 | Add Multi-Lingual LibriSpeech | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,877,439,000 | 1,636,045,762,000 | 1,636,045,762,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3198",
"html_url": "https://github.com/huggingface/datasets/pull/3198",
"diff_url": "https://github.com/huggingface/datasets/pull/3198.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3198.patch",
"merged_at": 1636045762000
} | Add https://www.openslr.org/94/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3198/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3197/comments | https://api.github.com/repos/huggingface/datasets/issues/3197/events | https://github.com/huggingface/datasets/pull/3197 | 1,042,541,127 | PR_kwDODunzps4t_cry | 3,197 | Fix optimized encoding for arrays | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,868,553,000 | 1,635,880,344,000 | 1,635,880,343,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3197",
"html_url": "https://github.com/huggingface/datasets/pull/3197",
"diff_url": "https://github.com/huggingface/datasets/pull/3197.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3197.patch",
"merged_at": 1635880343000
} | Hi !
#3124 introduced a regression that made the benchmarks CI fail because of a bad array comparison when checking the first encoded element. This PR fixes this by making sure that encoding is applied on all sequence types except lists.
cc @eladsegal fyi (no big deal) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3197/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3197/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3196/comments | https://api.github.com/repos/huggingface/datasets/issues/3196/events | https://github.com/huggingface/datasets/pull/3196 | 1,042,223,913 | PR_kwDODunzps4t-bxy | 3,196 | QOL improvements: auto-flatten_indices and desc in map calls | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,852,530,000 | 1,635,867,669,000 | 1,635,867,668,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3196",
"html_url": "https://github.com/huggingface/datasets/pull/3196",
"diff_url": "https://github.com/huggingface/datasets/pull/3196.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3196.patch",
"merged_at": 1635867668000
} | This PR:
* automatically calls `flatten_indices` where needed: in `unique` and `save_to_disk` to avoid saving the indices file
* adds descriptions to the map calls
Fix #3040 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3196/timeline | null | true |
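For context on the PR above, `desc` is the user-facing counterpart of the descriptions it adds to internal `map` calls; a minimal sketch of how it labels the progress bar follows.

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# `desc` sets the text shown on the tqdm progress bar for this map call
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])}, desc="Counting characters")
```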
https://api.github.com/repos/huggingface/datasets/issues/3195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3195/comments | https://api.github.com/repos/huggingface/datasets/issues/3195/events | https://github.com/huggingface/datasets/pull/3195 | 1,042,204,044 | PR_kwDODunzps4t-ZR0 | 3,195 | More robust `None` handling | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,851,710,000 | 1,639,060,020,000 | 1,639,060,018,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3195",
"html_url": "https://github.com/huggingface/datasets/pull/3195",
"diff_url": "https://github.com/huggingface/datasets/pull/3195.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3195.patch",
"merged_at": 1639060017000
} | PyArrow has explicit support for `null` values, so it makes sense to support Nones on our side as well.
[Colab Notebook with examples](https://colab.research.google.com/drive/1zcK8BnZYnRe3Ao2271u1T19ag9zLEiy3?usp=sharing)
Changes:
* allow None for the feature types with special encoding (`ClassLabel, TranslationVariableLanguages, Value, _ArrayXD`)
* handle None in `class_encode_column` (also there is an option to stringify Nones and treat them as a class)
* support None sorting in `sort` (use pandas for that)
* handle None in align_labels_with_mapping
* support for None in ArrayXD (converts `None` to `np.nan` to align the behavior with PyArrow)
* support for None in the Audio/Image feature
* allow promotion when concatenating tables (`pa.concat_tables(table_list, promote=True)`) and `null` row/~~column~~ broadcasting similar to pandas
Additional notes:
* use `null` instead of `none` for function arguments for consistency with existing `disable_nullable`
* fixes a bug with the `update_metadata_with_features` call in `Dataset.rename_columns`
* had to update some tests, let me know if that's ok
TODO:
- [x] check how the Audio feature behaves with Nones
- [x] Better None handling in `concatenate_datasets`/`add_item`
- [x] Fix formatting with Nones
- [x] Add Colab with examples
- [x] Tests
TODOs for subsequent PRs:
- Mention None handling in the docs
- Add `drop_null`/`fill_null` to `Dataset`/`DatasetDict`
Fix #3181 #3253 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3195/timeline | null | true |
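A minimal sketch (not taken from the PR itself) of the behavior the PR above enables: `None` values are encoded as pyarrow nulls instead of failing during feature encoding.

```python
from datasets import Dataset

# None round-trips as a pyarrow null rather than raising during encoding
ds = Dataset.from_dict({"label": ["pos", None, "neg"]})
print(ds[1])        # {'label': None}
print(ds.features)  # {'label': Value(dtype='string', id=None)}
```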
https://api.github.com/repos/huggingface/datasets/issues/3194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3194/comments | https://api.github.com/repos/huggingface/datasets/issues/3194/events | https://github.com/huggingface/datasets/pull/3194 | 1,041,999,535 | PR_kwDODunzps4t91Eg | 3,194 | Update link to Datasets Tagging app in Spaces | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,840,830,000 | 1,636,367,783,000 | 1,636,367,782,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3194",
"html_url": "https://github.com/huggingface/datasets/pull/3194",
"diff_url": "https://github.com/huggingface/datasets/pull/3194.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3194.patch",
"merged_at": 1636367782000
} | Fix #3193. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3194/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3193/comments | https://api.github.com/repos/huggingface/datasets/issues/3193/events | https://github.com/huggingface/datasets/issues/3193 | 1,041,971,117 | I_kwDODunzps4-Gzet | 3,193 | Update link to datasets-tagging app | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,838,799,000 | 1,636,367,782,000 | 1,636,367,782,000 | MEMBER | null | null | null | Once datasets-tagging has been transferred to Spaces:
- huggingface/datasets-tagging#22
We should update the link in Datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3193/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3192/comments | https://api.github.com/repos/huggingface/datasets/issues/3192/events | https://github.com/huggingface/datasets/issues/3192 | 1,041,308,086 | I_kwDODunzps4-ERm2 | 3,192 | Multiprocessing filter/map (tests) not working on Windows | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,780,968,000 | 1,635,782,223,000 | null | CONTRIBUTOR | null | null | null | While running the tests, I found that the multiprocessing examples fail on Windows, or rather they do not complete: they cause a deadlock. I haven't dug deep into it, but they do not seem to work as-is. I currently have no time to test this in detail, but at least the tests seem not to run correctly (deadlocking).
## Steps to reproduce the bug
```shell
pytest tests/test_arrow_dataset.py -k "test_filter_multiprocessing"
pytest tests/test_arrow_dataset.py -k "test_map_multiprocessing"
```
## Expected results
The functionality to work on all platforms.
## Actual results
Deadlock.
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2, also tested with 3.7.9
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3192/timeline | null | false |
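The report above does not pin down the root cause; one common source of multiprocessing deadlocks on Windows (which spawns workers rather than forking) is calling `map`/`filter` with `num_proc` outside a `__main__` guard. A minimal Windows-safe sketch, offered as an assumption rather than a diagnosis of the test failure:

```python
from datasets import load_dataset

def keep_short(example):
    # a top-level function (rather than a lambda) pickles cleanly under spawn
    return len(example["text"]) < 512

if __name__ == "__main__":  # required on Windows, where workers are spawned
    ds = load_dataset("imdb", split="train")
    ds = ds.filter(keep_short, num_proc=2)
    print(len(ds))
```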
https://api.github.com/repos/huggingface/datasets/issues/3191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3191/comments | https://api.github.com/repos/huggingface/datasets/issues/3191/events | https://github.com/huggingface/datasets/issues/3191 | 1,041,225,111 | I_kwDODunzps4-D9WX | 3,191 | Dataset viewer issue for '*compguesswhat*' | {
"login": "benotti",
"id": 2545336,
"node_id": "MDQ6VXNlcjI1NDUzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2545336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benotti",
"html_url": "https://github.com/benotti",
"followers_url": "https://api.github.com/users/benotti/followers",
"following_url": "https://api.github.com/users/benotti/following{/other_user}",
"gists_url": "https://api.github.com/users/benotti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/benotti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benotti/subscriptions",
"organizations_url": "https://api.github.com/users/benotti/orgs",
"repos_url": "https://api.github.com/users/benotti/repos",
"events_url": "https://api.github.com/users/benotti/events{/privacy}",
"received_events_url": "https://api.github.com/users/benotti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,776,209,000 | 1,649,764,561,000 | null | NONE | null | null | null | ## Dataset viewer issue for '*compguesswhat*'
**Link:** https://huggingface.co/datasets/compguesswhat
File not found
Am I the one who added this dataset? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3191/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3190/comments | https://api.github.com/repos/huggingface/datasets/issues/3190/events | https://github.com/huggingface/datasets/issues/3190 | 1,041,153,631 | I_kwDODunzps4-Dr5f | 3,190 | combination of shuffle and filter results in a bug | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,772,049,000 | 1,635,850,249,000 | 1,635,850,249,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
Hi,
I would like to shuffle a dataset, then filter it based on each existing label. However, the combination of `filter` and `shuffle` seems to result in a bug. In the minimal example below, as you can see in the filtered results, the filtered labels are not unique, meaning `filter` has not worked. Any suggestions for a temporary fix are appreciated @lhoestq.
Thanks.
Best regards
Rabeeh
## Steps to reproduce the bug
```python
import numpy as np
import datasets
dsets = datasets.load_dataset('super_glue', 'rte', script_version="master")
shuffled_data = dsets["train"].shuffle(seed=42)
for label in range(2):
print("label ", label)
data = shuffled_data.filter(lambda example: int(example['label']) == label)
print("length ", len(data), np.unique(data['label']))
```
## Expected results
Filtering per label, should only return the data with that specific label.
## Actual results
As you can see, filtered data per label, has still two labels of [0, 1]
```
label 0
length 1249 [0 1]
label 1
length 1241 [0 1]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: linux
- Python version: 3.7.11
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3190/timeline | null | false |
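A possible temporary workaround for the issue above, assuming the stale indices mapping created by `shuffle` is what `filter` mishandles (consistent with the auto-`flatten_indices` change in PR 3196 earlier in this listing): materialize the shuffled dataset before filtering.

```python
import numpy as np
import datasets

dsets = datasets.load_dataset('super_glue', 'rte', script_version="master")
# flatten_indices() rewrites the shuffled rows into a new arrow table,
# dropping the indices mapping that filter appears to mishandle
shuffled_data = dsets["train"].shuffle(seed=42).flatten_indices()
for label in range(2):
    data = shuffled_data.filter(lambda example: int(example["label"]) == label)
    print("label", label, "length", len(data), np.unique(data["label"]))
```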
https://api.github.com/repos/huggingface/datasets/issues/3189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3189/comments | https://api.github.com/repos/huggingface/datasets/issues/3189/events | https://github.com/huggingface/datasets/issues/3189 | 1,041,044,986 | I_kwDODunzps4-DRX6 | 3,189 | conll2003 incorrect label explanation | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,764,610,000 | 1,636,454,458,000 | 1,636,454,458,000 | CONTRIBUTOR | null | null | null | In the [conll2003](https://huggingface.co/datasets/conll2003#data-fields) README, the labels are described as follows
> - `id`: a `string` feature.
> - `tokens`: a `list` of `string` features.
> - `pos_tags`: a `list` of classification labels, with possible values including `"` (0), `''` (1), `#` (2), `$` (3), `(` (4).
> - `chunk_tags`: a `list` of classification labels, with possible values including `O` (0), `B-ADJP` (1), `I-ADJP` (2), `B-ADVP` (3), `I-ADVP` (4).
> - `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4) `B-LOC` (5), `I-LOC` (6) `B-MISC` (7), `I-MISC` (8).
First of all, it would be great if we could get a list of ALL possible pos_tags.
Second, the chunk tag labels cannot be correct: the description says the values go from 0 to 4, whereas the data shows values of at least 11 to 21, as well as 0.
EDIT: not really a bug, sorry for mistagging. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3189/timeline | null | false |
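The full tag inventories the issue above asks for can be read off the dataset's features; a short sketch:

```python
from datasets import load_dataset

ds = load_dataset("conll2003", split="train")
# each tags column is a Sequence of ClassLabel; .names lists every label
print(ds.features["pos_tags"].feature.names)
print(ds.features["chunk_tags"].feature.names)
print(ds.features["ner_tags"].feature.names)
```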
https://api.github.com/repos/huggingface/datasets/issues/3188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3188/comments | https://api.github.com/repos/huggingface/datasets/issues/3188/events | https://github.com/huggingface/datasets/issues/3188 | 1,040,980,712 | I_kwDODunzps4-DBro | 3,188 | conll2002 issues | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,760,164,000 | 1,636,984,259,000 | 1,636,737,491,000 | CONTRIBUTOR | null | null | null | **Link:** https://huggingface.co/datasets/conll2002
The dataset viewer throws a server error when trying to preview the dataset.
```
Message: Extraction protocol 'train' for file at 'https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/esp.train' is not implemented yet
```
In addition, the "point of contact" has encoding issues and does not work when clicked.
Am I the one who added this dataset? No, @lhoestq did | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3188/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3187/comments | https://api.github.com/repos/huggingface/datasets/issues/3187/events | https://github.com/huggingface/datasets/pull/3187 | 1,040,412,869 | PR_kwDODunzps4t44Ab | 3,187 | Add ChrF(++) (as implemented in sacrebleu) | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,670,438,000 | 1,635,864,650,000 | 1,635,863,486,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3187",
"html_url": "https://github.com/huggingface/datasets/pull/3187",
"diff_url": "https://github.com/huggingface/datasets/pull/3187.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3187.patch",
"merged_at": 1635863486000
} | Similar to my [PR for TER](https://github.com/huggingface/datasets/pull/3153), it feels only right to also include ChrF and friends. These are present in sacrebleu, so implementing them is very similar to implementing TER. I tested the implementation against sacrebleu's tests to verify it. You can try this yourself below:
```python
import datasets
EPSILON = 1e-4
chrf = datasets.load_metric(r"path\to\datasets\metrics\chrf")
test_cases = [
(["abcdefg"], ["hijklmnop"], 0.0),
(["a"], ["b"], 0.0),
([""], ["b"], 0.0),
([""], ["ref"], 0.0),
([""], ["reference"], 0.0),
(["aa"], ["ab"], 8.3333),
(["a", "b"], ["a", "c"], 8.3333),
(["a"], ["a"], 16.6667),
(["a b c"], ["a b c"], 50.0),
(["a b c"], ["abc"], 50.0),
([" risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."],
["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."], 63.361730),
([" Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich. "],
["Das Verhältnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."], 64.1302698),
(["Niemand hat die Absicht, eine Mauer zu errichten"], ["Niemand hat die Absicht, eine Mauer zu errichten"], 100.0),
]
for hyp, ref, score in test_cases:
# Note the reference transformation, which is different from sacrebleu's input format
results = chrf.compute(predictions=hyp, references=[[r] for r in ref],
char_order=6, word_order=0, beta=3, eps_smoothing=True)
if abs(score - results["score"]) > EPSILON:
print(f"expected {score}, got {results['score']} for {hyp} - {ref}")
test_cases_effective_order = [
(["a"], ["a"], 100.0),
([""], ["reference"], 0.0),
(["a b c"], ["a b c"], 100.0),
(["a b c"], ["abc"], 100.0),
([""], ["c"], 0.0),
(["a", "b"], ["a", "c"], 50.0),
(["aa"], ["ab"], 25.0),
]
for hyp, ref, score in test_cases_effective_order:
# Note the reference transformation, which is different from sacrebleu's input format
results = chrf.compute(predictions=hyp, references=[[r] for r in ref],
char_order=6, word_order=0, beta=3, eps_smoothing=False)
if abs(score - results["score"]) > EPSILON:
print(f"expected {score}, got {results['score']} for {hyp} - {ref}")
test_cases_keep_whitespace = [
(
["Die Beziehung zwischen Obama und Netanjahu ist nicht gerade freundlich."],
["Das Verhältnis zwischen Obama und Netanyahu ist nicht gerade freundschaftlich."],
67.3481606,
),
(
["risk assessment must be made of those who are qualified and expertise in the sector - these are the scientists ."],
["risk assessment has to be undertaken by those who are qualified and expert in that area - that is the scientists ."],
65.2414427,
),
]
for hyp, ref, score in test_cases_keep_whitespace:
# Note the reference transformation, which is different from sacrebleu's input format
results = chrf.compute(predictions=hyp, references=[[r] for r in ref],
char_order=6, word_order=0, beta=3,
whitespace=True)
if abs(score - results["score"]) > EPSILON:
print(f"expected {score}, got {results['score']} for {hyp} - {ref}")
predictions = ["The relationship between Obama and Netanyahu is not exactly friendly."]
references = [["The ties between Obama and Netanyahu are not particularly friendly."]]
print(chrf.compute(predictions=predictions, references=references))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3187/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3186/comments | https://api.github.com/repos/huggingface/datasets/issues/3186/events | https://github.com/huggingface/datasets/issues/3186 | 1,040,369,397 | I_kwDODunzps4-Asb1 | 3,186 | Dataset viewer for nli_tr | {
"login": "e-budur",
"id": 2246791,
"node_id": "MDQ6VXNlcjIyNDY3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-budur",
"html_url": "https://github.com/e-budur",
"followers_url": "https://api.github.com/users/e-budur/followers",
"following_url": "https://api.github.com/users/e-budur/following{/other_user}",
"gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-budur/subscriptions",
"organizations_url": "https://api.github.com/users/e-budur/orgs",
"repos_url": "https://api.github.com/users/e-budur/repos",
"events_url": "https://api.github.com/users/e-budur/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-budur/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,652,593,000 | 1,649,764,587,000 | null | CONTRIBUTOR | null | null | null | ## Dataset viewer issue for '*nli_tr*'
**Link:** https://huggingface.co/datasets/nli_tr
Hello,
Thank you for the new dataset preview feature, which helps users view datasets online.
We just noticed that the dataset viewer widget for the `nli_tr` dataset shows the error below. The error must be due to a temporary problem that blocked the viewer's access to the dataset, since the dataset is currently accessible through the link in the error message. May we kindly ask if it would be possible to rerun the job so that the dataset viewer can access the dataset?
Thank you.
Emrah
------------------------------------------
Server Error
Status code: 404
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip://snli_tr_1.0_train.jsonl::https://tabilab.cmpe.boun.edu.tr/datasets/nli_datasets/snli_tr_1.0.zip
------------------------------------------
Am I the one who added this dataset? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3186/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3185/comments | https://api.github.com/repos/huggingface/datasets/issues/3185/events | https://github.com/huggingface/datasets/issues/3185 | 1,040,291,961 | I_kwDODunzps4-AZh5 | 3,185 | 7z dataset preview not implemented? | {
"login": "Kirili4ik",
"id": 30757466,
"node_id": "MDQ6VXNlcjMwNzU3NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/30757466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kirili4ik",
"html_url": "https://github.com/Kirili4ik",
"followers_url": "https://api.github.com/users/Kirili4ik/followers",
"following_url": "https://api.github.com/users/Kirili4ik/following{/other_user}",
"gists_url": "https://api.github.com/users/Kirili4ik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kirili4ik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kirili4ik/subscriptions",
"organizations_url": "https://api.github.com/users/Kirili4ik/orgs",
"repos_url": "https://api.github.com/users/Kirili4ik/repos",
"events_url": "https://api.github.com/users/Kirili4ik/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kirili4ik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,625,107,000 | 1,649,764,096,000 | 1,649,764,087,000 | NONE | null | null | null | ## Dataset viewer issue for dataset 'samsum'
**Link:** https://huggingface.co/datasets/samsum
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not implemented yet
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3185/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3185/timeline | null | false |
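The error above is specific to the viewer's streaming path: 7z archives could not be streamed at the time. Loading the dataset normally should still work, with the caveat (an assumption based on the dataset script) that `py7zr` must be installed for local extraction.

```python
# pip install py7zr  (assumed requirement for extracting the 7z archive locally)
from datasets import load_dataset

ds = load_dataset("samsum")  # downloads and extracts corpus.7z on disk
print(ds["train"][0])
```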
https://api.github.com/repos/huggingface/datasets/issues/3184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3184/comments | https://api.github.com/repos/huggingface/datasets/issues/3184/events | https://github.com/huggingface/datasets/pull/3184 | 1,040,114,102 | PR_kwDODunzps4t4J61 | 3,184 | RONEC v2 | {
"login": "dumitrescustefan",
"id": 22746816,
"node_id": "MDQ6VXNlcjIyNzQ2ODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/22746816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dumitrescustefan",
"html_url": "https://github.com/dumitrescustefan",
"followers_url": "https://api.github.com/users/dumitrescustefan/followers",
"following_url": "https://api.github.com/users/dumitrescustefan/following{/other_user}",
"gists_url": "https://api.github.com/users/dumitrescustefan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dumitrescustefan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dumitrescustefan/subscriptions",
"organizations_url": "https://api.github.com/users/dumitrescustefan/orgs",
"repos_url": "https://api.github.com/users/dumitrescustefan/repos",
"events_url": "https://api.github.com/users/dumitrescustefan/events{/privacy}",
"received_events_url": "https://api.github.com/users/dumitrescustefan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,591,003,000 | 1,635,868,943,000 | 1,635,868,942,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3184",
"html_url": "https://github.com/huggingface/datasets/pull/3184",
"diff_url": "https://github.com/huggingface/datasets/pull/3184.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3184.patch",
"merged_at": 1635868942000
} | Hi, as we've recently finished the new RONEC (Romanian Named Entity Corpus), we'd like to update the dataset here as well. It's actually essential, as the links to v1 are no longer valid.
We'd really like to replace v1 completely, as v2 is a full re-annotation of v1 with additional data (up to 2x the size of v1).
I've run `make style` and all the dummy- and real-data tests, and they passed.
I hope it's okay to merge the new RONEC v2 into datasets.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3184/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3183/comments | https://api.github.com/repos/huggingface/datasets/issues/3183/events | https://github.com/huggingface/datasets/pull/3183 | 1,039,761,120 | PR_kwDODunzps4t3Dag | 3,183 | Add missing docstring to DownloadConfig | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,526,595,000 | 1,635,848,738,000 | 1,635,848,737,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3183",
"html_url": "https://github.com/huggingface/datasets/pull/3183",
"diff_url": "https://github.com/huggingface/datasets/pull/3183.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3183.patch",
"merged_at": 1635848737000
} | Document the `use_etag` and `num_proc` attributes in `DownloadConfig`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3183/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3182/comments | https://api.github.com/repos/huggingface/datasets/issues/3182/events | https://github.com/huggingface/datasets/pull/3182 | 1,039,739,606 | PR_kwDODunzps4t2-9J | 3,182 | Don't memoize strings when hashing since two identical strings may have different python ids | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,524,777,000 | 1,635,845,738,000 | 1,635,845,737,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3182",
"html_url": "https://github.com/huggingface/datasets/pull/3182",
"diff_url": "https://github.com/huggingface/datasets/pull/3182.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3182.patch",
"merged_at": 1635845737000
} | When hashing an object that contains the same string several times, the hashing could return a different hash depending on whether or not the identical strings share the same python `id()`.
Here is example code that shows how the issue can affect caching:
```python
import json
import pyarrow as pa
from datasets.features import Features
from datasets.fingerprint import Hasher
schema = pa.schema([pa.field("some_string", pa.string()), pa.field("another_string", pa.string())])
features_from_schema = Features.from_arrow_schema(schema)
Hasher.hash(features_from_schema) # dffa9dca9a73fd8c
features_dict = json.loads('{"some_string": {"dtype": "string", "id": null, "_type": "Value"}, "another_string": {"dtype": "string", "id": null, "_type": "Value"}}')
features_from_json = Features.from_dict(features_dict)
Hasher.hash(features_from_json) # 3812e76b15e6420e
features_from_schema == features_from_json # True
```
This is because in `features_dict`, some strings like "dtype" are repeated but don't share the same id, contrary to the ones in `features_from_schema`.
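For illustration (this snippet is mine, not from the original report), equal strings can carry different ids, assuming CPython's usual interning behavior:
```python
import json

a = "dtype"                            # compile-time literal, typically interned
b = json.loads('{"k": "dtype"}')["k"]  # built at runtime while parsing
print(a == b)  # True: equal values
print(a is b)  # typically False: two objects with different id()s
# An id()-keyed memo would therefore treat these equal strings as distinct.
```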
I fixed that by disabling memoization for strings.
This could be optimized in the future by implementing a smarter memoization with a special handling for strings. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3182/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3181/comments | https://api.github.com/repos/huggingface/datasets/issues/3181/events | https://github.com/huggingface/datasets/issues/3181 | 1,039,682,097 | I_kwDODunzps49-Eox | 3,181 | `None` converted to `"None"` when loading a dataset | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,521,033,000 | 1,639,185,400,000 | 1,639,060,017,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
When loading a dataset, `None` values of type `NoneType` are converted to `'None'` of type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text"]["section_name"])
```
When installing version 1.14.0, the output is
`[None, 'Introduction', 'Benchmark Datasets', ...]`
When installing from the master branch, the output is
`['None', 'Introduction', 'Benchmark Datasets', ...]`
Notice how the first element was changed from `NoneType` to `str`.
## Expected results
`None` should stay as is.
## Actual results
`None` is converted to a string.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: master
- Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3181/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3180/comments | https://api.github.com/repos/huggingface/datasets/issues/3180/events | https://github.com/huggingface/datasets/pull/3180 | 1,039,641,316 | PR_kwDODunzps4t2qQn | 3,180 | fix label mapping | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,518,544,000 | 1,635,860,467,000 | 1,635,849,432,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3180",
"html_url": "https://github.com/huggingface/datasets/pull/3180",
"diff_url": "https://github.com/huggingface/datasets/pull/3180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3180.patch",
"merged_at": 1635849432000
} | Fixing label mapping for hlgd.
0 corresponds to same event and 1 corresponds to different event
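For illustration, the corrected mapping expressed as a `ClassLabel` (the label names here are assumptions, not taken from the dataset script):
```python
from datasets import ClassLabel

# index 0 -> "same_event", index 1 -> "different_event"
hlgd_labels = ClassLabel(names=["same_event", "different_event"])
print(hlgd_labels.int2str(0))  # "same_event"
```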
<img width="642" alt="Capture d’écran 2021-10-29 à 10 39 58 AM" src="https://user-images.githubusercontent.com/16107619/139454810-1f225e3d-ad48-44a8-b8b1-9205c9533839.png">
<img width="638" alt="Capture d’écran 2021-10-29 à 10 40 09 AM" src="https://user-images.githubusercontent.com/16107619/139454813-93066a3c-7d33-4f56-b133-2f1a7661e438.png">
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3180/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3179/comments | https://api.github.com/repos/huggingface/datasets/issues/3179/events | https://github.com/huggingface/datasets/issues/3179 | 1,039,571,928 | I_kwDODunzps499pvY | 3,179 | Cannot load dataset when the config name is "special" | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,514,247,000 | 1,635,514,521,000 | 1,635,514,521,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
After https://github.com/huggingface/datasets/pull/3159, we can get the config name of "Check/region_1", which is "Check___region_1".
But now we cannot load the dataset (not sure it's related to the above PR though). It's the case for all the similar datasets, listed in https://github.com/huggingface/datasets-preview-backend/issues/78
## Steps to reproduce the bug
```python
>>> from datasets import get_dataset_config_names, load_dataset
>>> get_dataset_config_names("Check/region_1")
['Check___region_1']
>>> load_dataset("Check/region_1")
Using custom data configuration Check___region_1-d2b3bc48f11c9be2
Downloading and preparing dataset json/Check___region_1 to /home/slesage/.cache/huggingface/datasets/json/Check___region_1-d2b3bc48f11c9be2/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4443.12it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1277.19it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1159, in _prepare_split
writer.write_table(table)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 442, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 442, in <listcomp>
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__
File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column
File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index
KeyError: 'Field "builder_name" does not exist in table schema'
```
Loading in streaming mode also returns something strange:
```python
>>> list(load_dataset("Check/region_1", streaming=True, split="train"))
Using custom data configuration Check___region_1-d2b3bc48f11c9be2
[{'builder_name': None, 'citation': '', 'config_name': None, 'dataset_size': None, 'description': '', 'download_checksums': None, 'download_size': None, 'features': {'speech': {'feature': {'dtype': 'float64', 'id': None, '_type': 'Value'}, 'length': -1, 'id': None, '_type': 'Sequence'}, 'sampling_rate': {'dtype': 'int64', 'id': None, '_type': 'Value'}, 'label': {'dtype': 'string', 'id': None, '_type': 'Value'}}, 'homepage': '', 'license': '', 'post_processed': None, 'post_processing_size': None, 'size_in_bytes': None, 'splits': None, 'supervised_keys': None, 'task_templates': None, 'version': None}, {'_data_files': [{'filename': 'dataset.arrow'}], '_fingerprint': 'f1702bb5533c549c', '_format_columns': ['speech', 'sampling_rate', 'label'], '_format_kwargs': {}, '_format_type': None, '_indexes': {}, '_indices_data_files': None, '_output_all_columns': False, '_split': None}]
```
## Expected results
The dataset should be loaded
## Actual results
An error occurs
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: Linux-5.11.0-1020-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3179/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3178/comments | https://api.github.com/repos/huggingface/datasets/issues/3178/events | https://github.com/huggingface/datasets/issues/3178 | 1,039,539,076 | I_kwDODunzps499huE | 3,178 | "Property couldn't be hashed properly" even though fully picklable | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,512,169,000 | 1,648,821,398,000 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
## Steps to reproduce the bug
Here is a [colab](https://colab.research.google.com/drive/1gt75LCBIzsmBMvvipEOvWulvyZseBiA7?usp=sharing) but for some reason I cannot reproduce it there. That may have to do with logging/tqdm on Colab, or with running things in notebooks. I tried the code below on Windows and Ubuntu as a Python script and got the same issue (warning below).
```python
import pickle
from datasets import load_dataset
import spacy
class Processor:
def __init__(self):
self.nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"])
@staticmethod
def collate(batch):
return [d["en"] for d in batch]
def parse(self, batch):
batch = batch["translation"]
return {"translation_tok": [{"en_tok": " ".join([t.text for t in doc])} for doc in self.nlp.pipe(self.collate(batch))]}
def process(self):
ds = load_dataset("wmt16", "de-en", split="train[:10%]")
ds = ds.map(self.parse, batched=True, num_proc=6)
if __name__ == '__main__':
pr = Processor()
# succeeds
with open("temp.pkl", "wb") as f:
pickle.dump(pr, f)
print("Successfully pickled!")
pr.process()
```
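As an aside, a common workaround sketch (my own, not from this report) is to load the pipeline lazily inside a top-level function, so the hasher only ever sees a picklable function plus an importable `spacy` module reference:
```python
import spacy

_NLP = None  # loaded lazily, once per worker process

def parse(batch):
    global _NLP
    if _NLP is None:
        _NLP = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"])
    texts = [d["en"] for d in batch["translation"]]
    return {
        "translation_tok": [
            {"en_tok": " ".join(t.text for t in doc)} for doc in _NLP.pipe(texts)
        ]
    }

# ds = ds.map(parse, batched=True, num_proc=6)
```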
---
Here is a small change, using `Hasher.hash`, that shows the hasher cannot successfully pickle parts from the NLP object.
```python
from datasets.fingerprint import Hasher
import pickle
from datasets import load_dataset
import spacy
class Processor:
def __init__(self):
self.nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"])
@staticmethod
def collate(batch):
return [d["en"] for d in batch]
def parse(self, batch):
batch = batch["translation"]
return {"translation_tok": [{"en_tok": " ".join([t.text for t in doc])} for doc in self.nlp.pipe(self.collate(batch))]}
def process(self):
ds = load_dataset("wmt16", "de-en", split="train[:10]")
return ds.map(self.parse, batched=True)
if __name__ == '__main__':
pr = Processor()
# succeeds
with open("temp.pkl", "wb") as f:
pickle.dump(pr, f)
print("Successfully pickled class instance!")
# succeeds
with open("temp.pkl", "wb") as f:
pickle.dump(pr.nlp, f)
print("Successfully pickled nlp!")
# fails
print(Hasher.hash(pr.nlp))
pr.process()
```
## Expected results
This to be picklable, working (fingerprinted), and no warning.
## Actual results
In the first snippet, I get this warning:
```
Parameter 'function'=<function Processor.parse at 0x7f44982247a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
```
In the second, I get this traceback which directs to the `Hasher.hash` line.
```
Traceback (most recent call last):
File " \Python\Python36\lib\pickle.py", line 918, in save_global
obj2, parent = _getattribute(module, name)
File " \Python\Python36\lib\pickle.py", line 266, in _getattribute
.format(name, obj))
AttributeError: Can't get local attribute 'add_codes.<locals>.ErrorsWithCodes' on <function add_codes at 0x00000296FF606EA0>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File " scratch_4.py", line 40, in <module>
print(Hasher.hash(pr.nlp))
File " \lib\site-packages\datasets\fingerprint.py", line 191, in hash
return cls.hash_default(value)
File " \lib\site-packages\datasets\fingerprint.py", line 184, in hash_default
return cls.hash_bytes(dumps(value))
File " \lib\site-packages\datasets\utils\py_utils.py", line 345, in dumps
dump(obj, file)
File " \lib\site-packages\datasets\utils\py_utils.py", line 320, in dump
Pickler(file, recurse=True).dump(obj)
File " \lib\site-packages\dill\_dill.py", line 498, in dump
StockPickler.dump(self, obj)
File " \Python\Python36\lib\pickle.py", line 409, in dump
self.save(obj)
File " \Python\Python36\lib\pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File " \Python\Python36\lib\pickle.py", line 634, in save_reduce
save(state)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File " \Python\Python36\lib\pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems
save(v)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 781, in save_list
self._batch_appends(obj)
File " \Python\Python36\lib\pickle.py", line 805, in _batch_appends
save(x)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 736, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File " \Python\Python36\lib\pickle.py", line 634, in save_reduce
save(state)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 736, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File " \Python\Python36\lib\pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems
save(v)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 1176, in save_instancemethod0
pickler.save_reduce(MethodType, (obj.__func__, obj.__self__), obj=obj)
File " \Python\Python36\lib\pickle.py", line 610, in save_reduce
save(args)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 736, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\datasets\utils\py_utils.py", line 523, in save_function
obj=obj,
File " \Python\Python36\lib\pickle.py", line 610, in save_reduce
save(args)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 751, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File " \Python\Python36\lib\pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems
save(v)
File " \Python\Python36\lib\pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File " \Python\Python36\lib\pickle.py", line 605, in save_reduce
save(cls)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 1439, in save_type
StockPickler.save_global(pickler, obj, name=name)
File " \Python\Python36\lib\pickle.py", line 922, in save_global
(obj, module_name, name))
_pickle.PicklingError: Can't pickle <class 'spacy.errors.add_codes.<locals>.ErrorsWithCodes'>: it's not found as spacy.errors.add_codes.<locals>.ErrorsWithCodes
```
## Environment info
Tried on both Linux and Windows
- `datasets` version: 1.14.0
- Platform: Windows-10-10.0.19041-SP0 + Python 3.7.9; Linux-5.11.0-38-generic-x86_64-with-Ubuntu-20.04-focal + Python 3.7.12
- PyArrow version: 6.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3178/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3178/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3177/comments | https://api.github.com/repos/huggingface/datasets/issues/3177/events | https://github.com/huggingface/datasets/issues/3177 | 1,039,487,780 | I_kwDODunzps499VMk | 3,177 | More control over TQDM when using map/filter with multiple processes | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,508,576,000 | 1,635,853,130,000 | null | CONTRIBUTOR | null | null | null | It would help with the clutter in my terminal if tqdm is only shown for rank 0 when using `num_proc>1` in the map and filter methods of datasets.
```python
dataset.map(lambda examples: tokenize(examples["text"]), batched=True, num_proc=6)
```
The above snippet leads to a lot of TQDM bars and, depending on your terminal, these will not overwrite but keep pushing each other down.
```
#0: 0%| | 0/13 [00:00<?, ?ba/s]
#1: 0%| | 0/13 [00:00<?, ?ba/s]
#2: 0%| | 0/13 [00:00<?, ?ba/s]
#3: 0%| | 0/13 [00:00<?, ?ba/s]
#4: 0%| | 0/13 [00:00<?, ?ba/s]
#5: 0%| | 0/13 [00:00<?, ?ba/s]
#0: 8%| | 1/13 [00:00<?, ?ba/s]
#1: 8%| | 1/13 [00:00<?, ?ba/s]
...
```
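A workaround sketch in the meantime, assuming the `set_progress_bar_enabled` helper that `datasets.utils` shipped around this version (the name may differ in yours), is to silence the bars on every rank but 0:
```python
import os
from datasets.utils import set_progress_bar_enabled  # assumption: helper present in this version

if int(os.environ.get("LOCAL_RANK", "0")) != 0:
    set_progress_bar_enabled(False)  # hide all datasets tqdm bars on non-zero ranks
```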
Instead, it would be welcome if we had the option to only show the progress of rank 0. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3177/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3176/comments | https://api.github.com/repos/huggingface/datasets/issues/3176/events | https://github.com/huggingface/datasets/pull/3176 | 1,039,068,312 | PR_kwDODunzps4t00xS | 3,176 | OpenSLR dataset: update generate_examples to properly extract data for SLR83 | {
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,469,167,000 | 1,636,042,845,000 | 1,635,501,849,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3176",
"html_url": "https://github.com/huggingface/datasets/pull/3176",
"diff_url": "https://github.com/huggingface/datasets/pull/3176.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3176.patch",
"merged_at": 1635501849000
} | Fixed #3168.
The SLR83 indices are CSV files and there wasn't any code in openslr.py to process these files properly. The end result was an empty table; a sketch of the kind of parsing this requires follows.
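A hypothetical sketch of that index parsing (the column layout below is an assumption for illustration, not the PR's actual code):
```python
import csv

def iter_index_rows(index_path):
    """Yield one dict per row of an SLR83-style CSV index (columns assumed)."""
    with open(index_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            yield {"speaker_id": row[0], "utterance_id": row[1], "transcript": row[2]}
```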
I've added code to properly process these CSV files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3176/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3176/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3175/comments | https://api.github.com/repos/huggingface/datasets/issues/3175/events | https://github.com/huggingface/datasets/pull/3175 | 1,038,945,271 | PR_kwDODunzps4t0bXw | 3,175 | Add docs for `to_tf_dataset` | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,454,522,000 | 1,635,953,976,000 | 1,635,934,043,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3175",
"html_url": "https://github.com/huggingface/datasets/pull/3175",
"diff_url": "https://github.com/huggingface/datasets/pull/3175.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3175.patch",
"merged_at": 1635934043000
} | This PR adds some documentation for new features released in v1.13.0, with the main addition being `to_tf_dataset`:
- Show how to use `to_tf_dataset` in the tutorial, and move `set_format(type='tensorflow'...)` to the Process section (let me know if I'm missing anything @Rocketknight1 😅).
- Add an example for loading dataset from multiple zipped CSV files to the Load section.
- Add an example for removing columns for an `IterableDataset`.
- Add graphic for visualizing streaming. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3175/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3174/comments | https://api.github.com/repos/huggingface/datasets/issues/3174/events | https://github.com/huggingface/datasets/pull/3174 | 1,038,427,245 | PR_kwDODunzps4tyuQ_ | 3,174 | Asserts replaced by exceptions (huggingface#3171) | {
"login": "joseporiolayats",
"id": 5772490,
"node_id": "MDQ6VXNlcjU3NzI0OTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5772490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joseporiolayats",
"html_url": "https://github.com/joseporiolayats",
"followers_url": "https://api.github.com/users/joseporiolayats/followers",
"following_url": "https://api.github.com/users/joseporiolayats/following{/other_user}",
"gists_url": "https://api.github.com/users/joseporiolayats/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joseporiolayats/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joseporiolayats/subscriptions",
"organizations_url": "https://api.github.com/users/joseporiolayats/orgs",
"repos_url": "https://api.github.com/users/joseporiolayats/repos",
"events_url": "https://api.github.com/users/joseporiolayats/events{/privacy}",
"received_events_url": "https://api.github.com/users/joseporiolayats/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,422,145,000 | 1,636,180,532,000 | 1,635,512,923,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3174",
"html_url": "https://github.com/huggingface/datasets/pull/3174",
"diff_url": "https://github.com/huggingface/datasets/pull/3174.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3174.patch",
"merged_at": 1635512923000
} | I've replaced two asserts with proper exceptions, following the guidelines described in issue #3171 and the contributing guidelines.
PS: This is one of my first PRs, hoping I don't break anything! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3174/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3174/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3173/comments | https://api.github.com/repos/huggingface/datasets/issues/3173/events | https://github.com/huggingface/datasets/pull/3173 | 1,038,404,300 | PR_kwDODunzps4typcA | 3,173 | Fix issue with filelock filename being too long on encrypted filesystems | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,420,537,000 | 1,635,500,544,000 | 1,635,500,544,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3173",
"html_url": "https://github.com/huggingface/datasets/pull/3173",
"diff_url": "https://github.com/huggingface/datasets/pull/3173.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3173.patch",
"merged_at": 1635500544000
} | Infer max filename length in filelock on Unix-like systems. Should fix problems on encrypted filesystems such as eCryptfs.
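A minimal sketch of the inference idea (not the merged code; the fallback value is an assumption):
```python
import os

def max_filename_length(directory: str) -> int:
    """Ask the filesystem for its max filename length; fall back to 255."""
    try:
        return os.pathconf(directory, "PC_NAME_MAX")  # Unix-only API
    except (AttributeError, ValueError, OSError):
        return 255  # e.g. Windows, where os.pathconf does not exist
```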
Fix #2924
cc: @lmmx | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3173/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3173/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3172/comments | https://api.github.com/repos/huggingface/datasets/issues/3172/events | https://github.com/huggingface/datasets/issues/3172 | 1,038,351,587 | I_kwDODunzps494_zj | 3,172 | `SystemError 15` thrown in `Dataset.__del__` when using `Dataset.map()` with `num_proc>1` | {
"login": "vlievin",
"id": 9859840,
"node_id": "MDQ6VXNlcjk4NTk4NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9859840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vlievin",
"html_url": "https://github.com/vlievin",
"followers_url": "https://api.github.com/users/vlievin/followers",
"following_url": "https://api.github.com/users/vlievin/following{/other_user}",
"gists_url": "https://api.github.com/users/vlievin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vlievin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vlievin/subscriptions",
"organizations_url": "https://api.github.com/users/vlievin/orgs",
"repos_url": "https://api.github.com/users/vlievin/repos",
"events_url": "https://api.github.com/users/vlievin/events{/privacy}",
"received_events_url": "https://api.github.com/users/vlievin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,416,940,000 | 1,644,543,065,000 | 1,635,938,770,000 | NONE | null | null | null | ## Describe the bug
I use `datasets.map` to preprocess some data in my application. The error `SystemError 15` is thrown at the end of the execution of `Dataset.map()` (only with `num_proc>1`). Traceback included below.
The exception is raised only when the code runs within a specific context. Despite ~10h spent investigating this issue, I have failed to isolate the bug, so let me describe my setup.
In my project, `Dataset` is wrapped into a `LightningDataModule` and the data is preprocessed when calling `LightningDataModule.setup()`. Calling `.setup()` in an isolated script works fine (even when wrapped with `hydra.main()`). However, when calling `.setup()` within the experiment script (which depends on `pytorch_lightning`), the script crashes with `SystemError 15`.
I could avoid throwing this error by modifying `Dataset.__del__()` (see below), but I believe this only moves the problem somewhere else. I am completely stuck with this issue; any hint would be welcome.
```python
class Dataset:
...
def __del__(self):
if hasattr(self, "_data"):
_ = self._data # <- ugly trick that allows avoiding the issue.
del self._data
if hasattr(self, "_indices"):
del self._indices
```
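An alternative guard sketch (my assumption, not something proposed in this report): since the traceback shows ray's SIGTERM handler raising `SystemExit` inside `__del__`, the exit could be swallowed there instead:
```python
class Dataset:
    ...
    def __del__(self):
        try:
            if hasattr(self, "_data"):
                del self._data
            if hasattr(self, "_indices"):
                del self._indices
        except SystemExit:
            pass  # ray's sigterm_handler may fire mid-teardown; don't propagate
```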
## Steps to reproduce the bug
```python
# Unfortunately I couldn't isolate the bug.
```
## Expected results
Calling `Dataset.map()` without throwing an exception. Or at least raising a more detailed exception/traceback.
## Actual results
```
Exception ignored in: <function Dataset.__del__ at 0x7f7cec179160>███████████████████████████████████████████████████| 5/5 [00:05<00:00, 1.17ba/s]
Traceback (most recent call last):
File ".../python3.8/site-packages/datasets/arrow_dataset.py", line 906, in __del__
del self._data
File ".../python3.8/site-packages/ray/worker.py", line 1033, in sigterm_handler
sys.exit(signum)
SystemExit: 15
```
## Environment info
Tested on 2 environments:
**Environment 1.**
- `datasets` version: 1.14.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 6.0.0
**Environment 2.**
- `datasets` version: 1.14.0
- Platform: Linux-4.18.0-305.19.1.el8_4.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.7
- PyArrow version: 6.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3172/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3171/comments | https://api.github.com/repos/huggingface/datasets/issues/3171/events | https://github.com/huggingface/datasets/issues/3171 | 1,037,728,059 | I_kwDODunzps492nk7 | 3,171 | Raise exceptions instead of using assertions for control flow | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,359,212,000 | 1,640,277,637,000 | 1,640,277,637,000 | CONTRIBUTOR | null | null | null | Motivated by https://github.com/huggingface/transformers/issues/12789 in Transformers, one welcome change would be replacing assertions with proper exceptions. The only type of assertions we should keep are those used as sanity checks.
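For concreteness, an illustrative before/after (the function and message are made up, not taken from the codebase):
```python
def check_columns(columns):
    # Before: control flow via assert, silently skipped under `python -O`
    #   assert len(columns) > 0, "columns must be non-empty"
    # After: an explicit exception that always fires
    if len(columns) == 0:
        raise ValueError("columns must be non-empty")
```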
Currently, there are a total of 87 files with `assert` statements (located under `datasets` and `src/datasets`), so when working on this, to keep the PR size manageable, modify at most 4-5 files before submitting a PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3171/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3170/comments | https://api.github.com/repos/huggingface/datasets/issues/3170/events | https://github.com/huggingface/datasets/pull/3170 | 1,037,601,926 | PR_kwDODunzps4twDUo | 3,170 | Preserve ordering in `zip_dict` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,350,850,000 | 1,635,512,977,000 | 1,635,512,977,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3170",
"html_url": "https://github.com/huggingface/datasets/pull/3170",
"diff_url": "https://github.com/huggingface/datasets/pull/3170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3170.patch",
"merged_at": 1635512977000
} | Replace `set` with the `unique_values` generator in `zip_dict`.
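A minimal order-preserving sketch of such a generator (the exact implementation is assumed, not copied from the PR):
```python
def unique_values(values):
    """Yield each value once, keeping first-seen order (unlike set())."""
    seen = set()
    for value in values:
        if value not in seen:
            seen.add(value)
            yield value
```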
This PR fixes the problem with the different ordering of the example keys across different Python sessions caused by the `zip_dict` call in `Features.decode_example`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3170/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3169/comments | https://api.github.com/repos/huggingface/datasets/issues/3169/events | https://github.com/huggingface/datasets/pull/3169 | 1,036,773,357 | PR_kwDODunzps4ttYmZ | 3,169 | Configurable max filename length in file locks | {
"login": "lmmx",
"id": 2979452,
"node_id": "MDQ6VXNlcjI5Nzk0NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2979452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lmmx",
"html_url": "https://github.com/lmmx",
"followers_url": "https://api.github.com/users/lmmx/followers",
"following_url": "https://api.github.com/users/lmmx/following{/other_user}",
"gists_url": "https://api.github.com/users/lmmx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lmmx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmmx/subscriptions",
"organizations_url": "https://api.github.com/users/lmmx/orgs",
"repos_url": "https://api.github.com/users/lmmx/repos",
"events_url": "https://api.github.com/users/lmmx/events{/privacy}",
"received_events_url": "https://api.github.com/users/lmmx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,285,175,000 | 1,635,437,654,000 | 1,635,437,653,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3169",
"html_url": "https://github.com/huggingface/datasets/pull/3169",
"diff_url": "https://github.com/huggingface/datasets/pull/3169.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3169.patch",
"merged_at": null
} | Resolve #2924 (https://github.com/huggingface/datasets/issues/2924#issuecomment-952330956), wherein the assumption that the maximum file lock filename length is 255 raises an OSError on encrypted drives (eCryptfs on Linux uses part of the lower filename, reducing the maximum filename size to 143). Exposing this limit in the config module lets users modify it. This will not affect Windows users, as their class explicitly passes 255 on init.
Reproduced with the following example ([the first few lines of a script from Lightning Flash](https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html), fine-tuning a HF model):
```py
import torch
import flash
from flash.audio import SpeechRecognition, SpeechRecognitionData
from flash.core.data.utils import download_data
# 1. Create the DataModule
download_data("https://pl-flash-data.s3.amazonaws.com/timit_data.zip", "./data")
datamodule = SpeechRecognitionData.from_json(
input_fields="file",
target_fields="text",
train_file="data/timit/train.json",
test_file="data/timit/test.json",
)
```
Which gave this traceback:
```py
Traceback (most recent call last):
File "lf_ft.py", line 10, in <module>
datamodule = SpeechRecognitionData.from_json(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json
return cls.from_data_source(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source
train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets
train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset
data = load_data(data, mock_dataset)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data
dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1599, in load_dataset
builder_instance = load_dataset_builder(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1457, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/builder.py", line 285, in __init__
with FileLock(lock_path):
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock'
```
Note the filename is 145 chars long:
```
>>> len("_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock")
145
```
After installing `datasets` as an editable local package and modifying my script to first include:
```py
import datasets
datasets.config.MAX_DATASET_CONFIG_ID_READABLE_LENGTH = 143
```
The error goes away.
If I instead deliberately set the value one too high, to 144, the OSError returns:
```
Traceback (most recent call last):
File "lf_ft.py", line 14, in <module>
datamodule = SpeechRecognitionData.from_json(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json
return cls.from_data_source(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source
train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets
train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset
data = load_data(data, mock_dataset)
File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data
dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})
File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1605, in load_dataset
builder_instance = load_dataset_builder(
File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1463, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/louis/dev/hf_datasets/src/datasets/builder.py", line 285, in __init__
with FileLock(lock_path):
File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 326, in __enter__
self.acquire()
File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 275, in acquire
self._acquire()
File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 406, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-32c812b5c1272d64_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279...-5794079643713042223.lock'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3169/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3168/comments | https://api.github.com/repos/huggingface/datasets/issues/3168/events | https://github.com/huggingface/datasets/issues/3168 | 1,036,673,263 | I_kwDODunzps49ymDv | 3,168 | OpenSLR/83 is empty | {
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,277,341,000 | 1,635,501,849,000 | 1,635,501,849,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
As the summary says, openslr / SLR83 / train is empty.
The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('openslr', 'SLR83')
```
## Expected results
```
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'sentence'],
num_rows: 17877
})
})
```
## Actual results
```
DatasetDict({
train: Dataset({
features: ['path', 'audio', 'sentence'],
num_rows: 0
})
})
```
## Environment info
- `datasets` version: 1.14.1.dev0 (master HEAD)
- Platform: Ubuntu 20.04
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3168/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3168/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3167/comments | https://api.github.com/repos/huggingface/datasets/issues/3167/events | https://github.com/huggingface/datasets/issues/3167 | 1,036,488,992 | I_kwDODunzps49x5Eg | 3,167 | bookcorpusopen no longer works | {
"login": "lucadiliello",
"id": 23355969,
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucadiliello",
"html_url": "https://github.com/lucadiliello",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions",
"organizations_url": "https://api.github.com/users/lucadiliello/orgs",
"repos_url": "https://api.github.com/users/lucadiliello/repos",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucadiliello/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,264,375,000 | 1,637,164,426,000 | 1,637,164,426,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process always stalls around `9924 examples [00:06, 1439.61 examples/s]` while preparing the dataset. I also noticed that after half an hour the process is automatically killed because of its RAM usage (the machine has 1TB of RAM...).
This did not happen with 1.4.1.
I also tried `rm -rf ~/.cache/huggingface`, but it did not help.
Changing the Python version between 3.7, 3.8 and 3.9 did not help either.
## Steps to reproduce the bug
```python
import datasets
d = datasets.load_dataset('bookcorpusopen')
```
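Since 1.4.1 reportedly still worked, pinning that release is one possible stopgap while the regression is investigated (an assumption on my part, not a confirmed fix):
```
pip install "datasets==1.4.1"
```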
## Environment info
- `datasets` version: 1.14.0
- Platform: Linux-5.4.0-1054-aws-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3167/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3166/comments | https://api.github.com/repos/huggingface/datasets/issues/3166/events | https://github.com/huggingface/datasets/pull/3166 | 1,036,450,283 | PR_kwDODunzps4tsVQJ | 3,166 | Deprecate prepare_module | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,262,104,000 | 1,636,104,457,000 | 1,636,104,456,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3166",
"html_url": "https://github.com/huggingface/datasets/pull/3166",
"diff_url": "https://github.com/huggingface/datasets/pull/3166.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3166.patch",
"merged_at": 1636104456000
} | In version 1.13, `prepare_module` was deprecated.
This PR adds a deprecation warning and removes its usage throughout the library, using `dataset_module_factory` or `metric_module_factory` instead (a migration sketch follows).
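A minimal migration sketch, assuming the returned `DatasetModule` exposes `module_path` and `hash` attributes as in the current codebase (names may differ across versions):
```python
from datasets.load import dataset_module_factory

# Before (now deprecated):
#   module_path, hash = prepare_module("squad")

# After:
dataset_module = dataset_module_factory("squad")
print(dataset_module.module_path, dataset_module.hash)
```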
Fix #3165. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3166/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3165/comments | https://api.github.com/repos/huggingface/datasets/issues/3165/events | https://github.com/huggingface/datasets/issues/3165 | 1,036,448,998 | I_kwDODunzps49xvTm | 3,165 | Deprecate prepare_module | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,262,035,000 | 1,636,104,456,000 | 1,636,104,456,000 | MEMBER | null | null | null | In version 1.13, `prepare_module` was deprecated.
Add a deprecation warning and remove its usage throughout the library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3165/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3165/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3164/comments | https://api.github.com/repos/huggingface/datasets/issues/3164/events | https://github.com/huggingface/datasets/issues/3164 | 1,035,662,830 | I_kwDODunzps49uvXu | 3,164 | Add raw data files to the Hub with GitHub LFS for canonical dataset | {
"login": "zlucia",
"id": 40370937,
"node_id": "MDQ6VXNlcjQwMzcwOTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/40370937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zlucia",
"html_url": "https://github.com/zlucia",
"followers_url": "https://api.github.com/users/zlucia/followers",
"following_url": "https://api.github.com/users/zlucia/following{/other_user}",
"gists_url": "https://api.github.com/users/zlucia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zlucia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zlucia/subscriptions",
"organizations_url": "https://api.github.com/users/zlucia/orgs",
"repos_url": "https://api.github.com/users/zlucia/repos",
"events_url": "https://api.github.com/users/zlucia/events{/privacy}",
"received_events_url": "https://api.github.com/users/zlucia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,204,501,000 | 1,635,623,691,000 | 1,635,623,691,000 | NONE | null | null | null | I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team. From what I can tell, this option is not immediately supported if one follows the sharing steps detailed here: [https://huggingface.co/docs/datasets/share_dataset.html#sharing-a-canonical-dataset](https://huggingface.co/docs/datasets/share_dataset.html#sharing-a-canonical-dataset), since GitHub LFS is not supported for public forks. Is there a way to request this? Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3164/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3163/comments | https://api.github.com/repos/huggingface/datasets/issues/3163/events | https://github.com/huggingface/datasets/pull/3163 | 1,035,475,061 | PR_kwDODunzps4tpI44 | 3,163 | Add Image feature | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,188,868,000 | 1,640,846,241,000 | 1,638,812,942,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3163",
"html_url": "https://github.com/huggingface/datasets/pull/3163",
"diff_url": "https://github.com/huggingface/datasets/pull/3163.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3163.patch",
"merged_at": 1638812942000
} | Adds the Image feature. This feature is heavily inspired by the recently added Audio feature (#2324). Currently, this PR is pretty simple.
Some considerations that need further discussion:
* I've decided to use `Pillow`/`PIL` as the image decoding library. Another candidate I considered is `torchvision`, mostly because of its `accimage` backend, which should be faster for loading `jpeg` images than `Pillow`. However, `torchvision`'s io module only supports png and jpeg images, has `torch` as a hard dependency, and requires magic to work with image bytes (`torch.ByteTensor(torch.ByteStorage.from_buffer(image_bytes))`).
* Currently, I'm converting `PIL`'s `Image` type to `np.ndarray`. The vision models in Transformers such as ViT prefer the raw `Image` type and not the decoded tensors, so there is a small overhead due to [this conversion](https://github.com/huggingface/transformers/blob/3e8761ab8077e3bb243fe2f78b2a682bd2257cf1/src/transformers/image_utils.py#L62-L73). IMO this is justified to keep this part aligned with the Audio feature, which also returns `np.ndarray`. What do you think?
* Still have to work on the channel decoding logic:
* PyTorch prefers the channel-first ordering (C, H, W); TF and Flax the channel-last ordering (H, W, C). One cool feature would be adjusting the channel order based on the selected formatter (`torch`, `tf`, `jax`).
* By default, `Image.open` returns images of shape (H, W, C). However, ViT's feature extractor expects the format (C, H, W) if the image is passed as an array (explained [here](https://huggingface.co/transformers/model_doc/vit.html#transformers.ViTFeatureExtractor.__call__)), so I'm more inclined toward the format (C, H, W). Which one do you prefer, (C, H, W) or (H, W, C)? (A conversion sketch follows this list.)
* Are there any options you'd like to see? (the user could change those via `cast_column`, such as `sampling_rate` in the Audio feature)
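To make the channel-ordering discussion above concrete, here is a minimal NumPy illustration of converting between the two layouts (the shapes are hypothetical examples, not part of this PR):
```python
import numpy as np

img_hwc = np.zeros((224, 224, 3), dtype=np.uint8)  # (H, W, C), as `Image.open` yields
img_chw = img_hwc.transpose(2, 0, 1)               # (C, H, W), as PyTorch/ViT prefers

assert img_chw.shape == (3, 224, 224)
```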
TODOs:
* [x] tests
* in subsequent PRs:
* docs - a section in the docs, which gives some additional info on the Image and Audio feature and compares them to
`ArrayND`
* streaming (waiting for #3129 and #3133 to get merged first)
* update the image tasks and the datasets to use the new feature
* Image/Audio formatting
[Colab Notebook](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c?usp=sharing) where you can play with this feature.
I'm also adding a link to the [Image](https://github.com/tensorflow/datasets/blob/7ac7d506488d46038a5854961d068926b3f93c7f/tensorflow_datasets/core/features/image_feature.py#L155) feature in TFDS because one of our goals is to parse TFDS scripts eventually, so our Image feature has to (at least) support all the formats theirs does.
Feel free to cc anyone who might be interested.
P.S. Please ignore the changes in the `datasets/**/*.py` files 😄. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3163/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/3163/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3162/comments | https://api.github.com/repos/huggingface/datasets/issues/3162/events | https://github.com/huggingface/datasets/issues/3162 | 1,035,462,136 | I_kwDODunzps49t-X4 | 3,162 | `datasets-cli test` should work with datasets without scripts | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,187,950,000 | 1,637,856,269,000 | null | CONTRIBUTOR | null | null | null | It would be really useful to be able to run `datasets-cli test` for datasets that don't have scripts attached to them (whether the datasets are private or not).
I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/tree/main) -- although @lhoestq came to save the day!
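For reference, a typical invocation would look like the sketch below; the repo id is the one mentioned above, and the flags are the ones commonly used for script-based datasets (whether they carry over unchanged to script-less datasets is exactly what this request is about):
```
datasets-cli test huggingface/DataMeasurementsTest --save_infos --all_configs
```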
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3162/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3161/comments | https://api.github.com/repos/huggingface/datasets/issues/3161/events | https://github.com/huggingface/datasets/pull/3161 | 1,035,444,292 | PR_kwDODunzps4tpCsm | 3,161 | Add riddle_sense dataset | {
"login": "ziyiwu9494",
"id": 44691149,
"node_id": "MDQ6VXNlcjQ0NjkxMTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/44691149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziyiwu9494",
"html_url": "https://github.com/ziyiwu9494",
"followers_url": "https://api.github.com/users/ziyiwu9494/followers",
"following_url": "https://api.github.com/users/ziyiwu9494/following{/other_user}",
"gists_url": "https://api.github.com/users/ziyiwu9494/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ziyiwu9494/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ziyiwu9494/subscriptions",
"organizations_url": "https://api.github.com/users/ziyiwu9494/orgs",
"repos_url": "https://api.github.com/users/ziyiwu9494/repos",
"events_url": "https://api.github.com/users/ziyiwu9494/events{/privacy}",
"received_events_url": "https://api.github.com/users/ziyiwu9494/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,186,656,000 | 1,636,034,475,000 | 1,636,034,475,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3161",
"html_url": "https://github.com/huggingface/datasets/pull/3161",
"diff_url": "https://github.com/huggingface/datasets/pull/3161.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3161.patch",
"merged_at": 1636034474000
} | Adding a new dataset for QA with riddles. I'm confused about the tagging process: the Streamlit app seems to load data from the current repo, so should tagging be done after merging, or from my fork? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3161/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3160/comments | https://api.github.com/repos/huggingface/datasets/issues/3160/events | https://github.com/huggingface/datasets/pull/3160 | 1,035,274,640 | PR_kwDODunzps4tofO0 | 3,160 | Better error msg if `len(predictions)` doesn't match `len(references)` in metrics | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,175,505,000 | 1,636,112,699,000 | 1,636,104,662,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3160",
"html_url": "https://github.com/huggingface/datasets/pull/3160",
"diff_url": "https://github.com/huggingface/datasets/pull/3160.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3160.patch",
"merged_at": 1636104662000
} | Improve the error message in `Metric.add_batch` if `len(predictions)` doesn't match `len(references)`.
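As a quick illustration of the failure mode this PR targets (the metric and values are hypothetical examples; the exact error text lives in the diff):
```python
import datasets

metric = datasets.load_metric("accuracy")

# 2 predictions vs. 3 references: with this PR, the raised error should
# spell out both lengths instead of failing with an opaque message.
metric.add_batch(predictions=[0, 1], references=[0, 1, 1])
```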
cc: @BramVanroy (feel free to test this code on your examples and review this PR) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3160/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3159/comments | https://api.github.com/repos/huggingface/datasets/issues/3159/events | https://github.com/huggingface/datasets/pull/3159 | 1,035,174,560 | PR_kwDODunzps4toKD5 | 3,159 | Make inspect.get_dataset_config_names always return a non-empty list | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,170,383,000 | 1,635,513,277,000 | 1,635,399,889,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3159",
"html_url": "https://github.com/huggingface/datasets/pull/3159",
"diff_url": "https://github.com/huggingface/datasets/pull/3159.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3159.patch",
"merged_at": 1635399889000
} | Make all cases named-config cases, so that no special unnamed-config case needs to be handled differently (usage sketch below).
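A minimal sketch of the expected behaviour after this change (the dataset name and returned config name are illustrative assumptions):
```python
from datasets import get_dataset_config_names

# Should now always return a non-empty list, even for datasets
# that define no named configs.
configs = get_dataset_config_names("squad")
print(configs)  # e.g. ["plain_text"]
```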
Fix #3135. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3159/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3159/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3158/comments | https://api.github.com/repos/huggingface/datasets/issues/3158/events | https://github.com/huggingface/datasets/pull/3158 | 1,035,158,070 | PR_kwDODunzps4toGpe | 3,158 | Fix string encoding for Value type | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,169,453,000 | 1,635,171,126,000 | 1,635,171,125,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3158",
"html_url": "https://github.com/huggingface/datasets/pull/3158",
"diff_url": "https://github.com/huggingface/datasets/pull/3158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3158.patch",
"merged_at": 1635171125000
} | Some metrics have `string` features, but currently they fail if users pass integers instead. Indeed, the feature encoding that handles the conversion of the user's objects to the right Python type is missing a case for `string`, while it already works as expected for integers, floats and booleans.
Here is example code that didn't work previously, but works with this fix:
```python
import datasets
# Note that 'id' is an integer while the SQuAD metric uses strings
predictions = [{'prediction_text': '1976', 'id': 5}]
references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': 5}]
squad_metric = datasets.load_metric("squad")
squad_metric.add_batch(predictions=predictions, references=references)
results = squad_metric.compute()
# {'exact_match': 100.0, 'f1': 100.0}
```
cc @sgugger @philschmid | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3158/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3158/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3157/comments | https://api.github.com/repos/huggingface/datasets/issues/3157/events | https://github.com/huggingface/datasets/pull/3157 | 1,034,775,165 | PR_kwDODunzps4tm3_I | 3,157 | Fixed: duplicate parameter and missing parameter in docstring | {
"login": "PanQiWei",
"id": 46810637,
"node_id": "MDQ6VXNlcjQ2ODEwNjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/46810637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PanQiWei",
"html_url": "https://github.com/PanQiWei",
"followers_url": "https://api.github.com/users/PanQiWei/followers",
"following_url": "https://api.github.com/users/PanQiWei/following{/other_user}",
"gists_url": "https://api.github.com/users/PanQiWei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PanQiWei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PanQiWei/subscriptions",
"organizations_url": "https://api.github.com/users/PanQiWei/orgs",
"repos_url": "https://api.github.com/users/PanQiWei/repos",
"events_url": "https://api.github.com/users/PanQiWei/events{/privacy}",
"received_events_url": "https://api.github.com/users/PanQiWei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,146,760,000 | 1,635,170,539,000 | 1,635,170,539,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3157",
"html_url": "https://github.com/huggingface/datasets/pull/3157",
"diff_url": "https://github.com/huggingface/datasets/pull/3157.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3157.patch",
"merged_at": 1635170538000
} | Changes the duplicate `data_files` parameter in the `DatasetBuilder.__init__` docstring to the missing `data_dir` parameter. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3157/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3156/comments | https://api.github.com/repos/huggingface/datasets/issues/3156/events | https://github.com/huggingface/datasets/issues/3156 | 1,034,478,844 | I_kwDODunzps49qOT8 | 3,156 | Rouge and Meteor for multiple references | {
"login": "avinashsai",
"id": 22453634,
"node_id": "MDQ6VXNlcjIyNDUzNjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avinashsai",
"html_url": "https://github.com/avinashsai",
"followers_url": "https://api.github.com/users/avinashsai/followers",
"following_url": "https://api.github.com/users/avinashsai/following{/other_user}",
"gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions",
"organizations_url": "https://api.github.com/users/avinashsai/orgs",
"repos_url": "https://api.github.com/users/avinashsai/repos",
"events_url": "https://api.github.com/users/avinashsai/events{/privacy}",
"received_events_url": "https://api.github.com/users/avinashsai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,098,931,000 | 1,639,289,786,000 | null | CONTRIBUTOR | null | null | null | Hi,
Currently, rouge and meteor support only single references. Can we use these metrics to compute scores for multiple references? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3156/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3155/comments | https://api.github.com/repos/huggingface/datasets/issues/3155/events | https://github.com/huggingface/datasets/issues/3155 | 1,034,468,757 | I_kwDODunzps49qL2V | 3,155 | Illegal instruction (core dumped) at datasets import | {
"login": "hacobe",
"id": 91226467,
"node_id": "MDQ6VXNlcjkxMjI2NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/91226467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hacobe",
"html_url": "https://github.com/hacobe",
"followers_url": "https://api.github.com/users/hacobe/followers",
"following_url": "https://api.github.com/users/hacobe/following{/other_user}",
"gists_url": "https://api.github.com/users/hacobe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hacobe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hacobe/subscriptions",
"organizations_url": "https://api.github.com/users/hacobe/orgs",
"repos_url": "https://api.github.com/users/hacobe/repos",
"events_url": "https://api.github.com/users/hacobe/events{/privacy}",
"received_events_url": "https://api.github.com/users/hacobe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,096,096,000 | 1,637,262,424,000 | 1,637,262,423,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
I installed datasets using conda, and when I import datasets I get: "Illegal instruction (core dumped)"
## Steps to reproduce the bug
```
conda create --prefix path/to/env
conda activate path/to/env
conda install -c huggingface -c conda-forge datasets
# exits with output "Illegal instruction (core dumped)"
python -m datasets
```
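A hedged aside (an assumption, not a confirmed diagnosis): import-time illegal-instruction crashes are often caused by prebuilt binaries (e.g. pyarrow) using CPU instruction-set extensions the host lacks; one way to check which extensions are available:
```
grep -o 'avx2\|sse4_2' /proc/cpuinfo | sort -u
```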
## Environment info
When I run "datasets-cli env", I also get "Illegal instruction (core dumped)"
If I run the following commands:
```
conda create --prefix path/to/another/new/env
conda activate path/to/another/new/env
conda install -c huggingface transformers
transformers-cli env
```
Then I get:
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-67-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
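For what it's worth, one unconfirmed hypothesis on my side: prebuilt wheels of dependencies such as `pyarrow` can be compiled with SIMD instructions (e.g. SSE4.2/AVX2) that older CPUs lack, and importing them then dies with exactly this signal. A quick check of the CPU flags on Linux:
```python
# Linux-only sketch: list the SIMD flags this CPU reports
with open("/proc/cpuinfo") as f:
    flags_line = next((line for line in f if line.startswith("flags")), "")
flags = flags_line.split()
print("sse4_2:", "sse4_2" in flags, "avx2:", "avx2" in flags)
```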
Let me know what additional information you need in order to debug this issue. Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3155/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3154/comments | https://api.github.com/repos/huggingface/datasets/issues/3154/events | https://github.com/huggingface/datasets/issues/3154 | 1,034,361,806 | I_kwDODunzps49pxvO | 3,154 | Sacrebleu unexpected behaviour/requirement for data format | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,635,065,733,000 | 1,635,671,312,000 | 1,635,671,311,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/datasets/pull/3153).
In the below snippet, the original sacrebleu snippet works just fine whereas the datasets implementation throws an error.
## Steps to reproduce the bug
```python
import sacrebleu
import datasets
refs = [
['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],
]
hyps = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.']
expected_bleu = 48.530827
ds_bleu = datasets.load_metric("sacrebleu")
bleu_score_sb = sacrebleu.corpus_bleu(hyps, refs).score
print(bleu_score_sb, expected_bleu)
# works: 48.5308...
bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"]
print(bleu_score_ds, expected_bleu)
# ValueError: Predictions and/or references don't match the expected format.
```
This seems to be related to how datasets forces the features format here:
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L94-L99
and then manipulates the references during the compute stage here
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L119-L122
I do not quite understand why that is required since sacrebleu handles argument parsing quite well [by itself](https://github.com/mjpost/sacrebleu/blob/2787185dd0f8d224c72ee5a831d163c2ac711a47/sacrebleu/metrics/base.py#L229).
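For completeness, the call can be made to work by transposing the references (sacrebleu takes one list per reference set, while the `datasets` metric expects one list of references per prediction) — a sketch reusing the variables from the snippet above:
```python
# transpose [num_ref_sets x num_preds] into [num_preds x num_ref_sets]
refs_per_pred = [list(r) for r in zip(*refs)]
print(ds_bleu.compute(predictions=hyps, references=refs_per_pred)["score"])  # should match the sacrebleu score
```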
## Actual results
Traceback (most recent call last):
File "C:\Users\bramv\AppData\Roaming\JetBrains\PyCharm2020.3\scratches\scratch_23.py", line 23, in <module>
bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"]
File "C:\dev\python\datasets\src\datasets\metric.py", line 392, in compute
self.add_batch(predictions=predictions, references=references)
File "C:\dev\python\datasets\src\datasets\metric.py", line 439, in add_batch
raise ValueError(
ValueError: Predictions and/or references don't match the expected format.
Expected format: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Value(dtype='string', id='sequence'), length=-1, id='references')},
Input predictions: ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.'],
Input references: [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'], ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']]
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3154/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3153/comments | https://api.github.com/repos/huggingface/datasets/issues/3153/events | https://github.com/huggingface/datasets/pull/3153 | 1,034,179,198 | PR_kwDODunzps4tlEVE | 3,153 | Add TER (as implemented in sacrebleu) | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,999,205,000 | 1,635,851,051,000 | 1,635,851,051,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3153",
"html_url": "https://github.com/huggingface/datasets/pull/3153",
"diff_url": "https://github.com/huggingface/datasets/pull/3153.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3153.patch",
"merged_at": 1635851051000
} | Implements TER (Translation Edit Rate) as per its implementation in sacrebleu. Sacrebleu for BLEU scores is already implemented in `datasets` so I thought this would be a nice addition.
I started from the sacrebleu implementation, as the two metrics have a lot in common.
Verified with sacrebleu's [testing suite](https://github.com/mjpost/sacrebleu/blob/078c440168c6adc89ba75fe6d63f0d922d42bcfe/test/test_ter.py) that this indeed works as intended.
```python
import datasets
test_cases = [
(['aaaa bbbb cccc dddd'], ['aaaa bbbb cccc dddd'], 0), # perfect match
(['dddd eeee ffff'], ['aaaa bbbb cccc'], 1), # no overlap
([''], ['a'], 1), # corner case, empty hypothesis
(['d e f g h a b c'], ['a b c d e f g h'], 1 / 8), # a single shift fixes MT
(
[
'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit das Bild ungefähr dieselbe Größe aufweist wie die andere Größe .',
'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "',
'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie Werte für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " ein .',
'Sie können beispielsweise ein Dokument erstellen , das ein Auto über die Bühne enthält .',
'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "',
],
[
'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit die Darstellung des Bildes in einer anderen Größe beibehalten wird .',
'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "',
'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " niedrigere Werte ein .',
'Sie können beispielsweise ein Dokument erstellen , das ein Auto enthalt , das sich über die Bühne bewegt .',
'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "',
],
0.136 # realistic example from WMT dev data (2019)
),
]
ter = datasets.load_metric(r"path\to\datasets\metrics\ter")
predictions = ["hello there general kenobi", "foo bar foobar"]
references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]]
print(ter.compute(predictions=predictions, references=references))
for hyp, ref, score in test_cases:
    # Note the reference transformation which is different from sacrebleu's input format
results = ter.compute(predictions=hyp, references=[[r] for r in ref])
assert 100*score == results["score"], f"expected {100*score}, got {results['score']}"
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3153/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3152/comments | https://api.github.com/repos/huggingface/datasets/issues/3152/events | https://github.com/huggingface/datasets/pull/3152 | 1,034,039,379 | PR_kwDODunzps4tkqi- | 3,152 | Fix some typos in the documentation | {
"login": "h4iku",
"id": 3812788,
"node_id": "MDQ6VXNlcjM4MTI3ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3812788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h4iku",
"html_url": "https://github.com/h4iku",
"followers_url": "https://api.github.com/users/h4iku/followers",
"following_url": "https://api.github.com/users/h4iku/following{/other_user}",
"gists_url": "https://api.github.com/users/h4iku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h4iku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h4iku/subscriptions",
"organizations_url": "https://api.github.com/users/h4iku/orgs",
"repos_url": "https://api.github.com/users/h4iku/repos",
"events_url": "https://api.github.com/users/h4iku/events{/privacy}",
"received_events_url": "https://api.github.com/users/h4iku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,953,115,000 | 1,635,172,056,000 | 1,635,170,628,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3152",
"html_url": "https://github.com/huggingface/datasets/pull/3152",
"diff_url": "https://github.com/huggingface/datasets/pull/3152.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3152.patch",
"merged_at": 1635170628000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3152/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3151/comments | https://api.github.com/repos/huggingface/datasets/issues/3151/events | https://github.com/huggingface/datasets/pull/3151 | 1,033,890,501 | PR_kwDODunzps4tkL7t | 3,151 | Re-add faiss to windows testing suite | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,931,269,000 | 1,635,850,054,000 | 1,635,847,563,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3151",
"html_url": "https://github.com/huggingface/datasets/pull/3151",
"diff_url": "https://github.com/huggingface/datasets/pull/3151.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3151.patch",
"merged_at": 1635847563000
} | In recent versions, `faiss-cpu` seems to be available for Windows as well. See the [PyPI page](https://pypi.org/project/faiss-cpu/#files) to confirm. We can therefore include it for Windows in the setup file.
At first, tests didn't pass due to permission problems caused by `NamedTemporaryFile` on Windows. This built-in library is notoriously bad at playing nice on Windows. The required change isn't pretty, but it works: first set `delete=False` so the file is not automatically deleted on exit; then manually delete the file with `unlink` once the handle is closed. It's weird, I know, but it works.
```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
    ...  # do stuff with the open file
# delete manually once the handle is closed (unlinking an open file fails on Windows)
os.unlink(tmp_file.name)
```
closes #3150 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3151/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3150/comments | https://api.github.com/repos/huggingface/datasets/issues/3150/events | https://github.com/huggingface/datasets/issues/3150 | 1,033,831,530 | I_kwDODunzps49nwRq | 3,150 | Faiss _is_ available on Windows | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,926,036,000 | 1,635,847,563,000 | 1,635,847,563,000 | CONTRIBUTOR | null | null | null | In the setup file, I find the following:
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/setup.py#L171
However, FAISS does install perfectly fine on Windows on my system. You can also confirm this on the [PyPI page](https://pypi.org/project/faiss-cpu/#files), where Windows wheels are available. Maybe this was true for older versions? For current versions, it can be removed, I think.
(This isn't really a bug but didn't know how else to tag.)
If you agree I can do a quick PR and remove that line. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3150/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3149/comments | https://api.github.com/repos/huggingface/datasets/issues/3149/events | https://github.com/huggingface/datasets/pull/3149 | 1,033,747,625 | PR_kwDODunzps4tjuUt | 3,149 | Add CMU Hinglish DoG Dataset for MT | {
"login": "Ishan-Kumar2",
"id": 46553104,
"node_id": "MDQ6VXNlcjQ2NTUzMTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ishan-Kumar2",
"html_url": "https://github.com/Ishan-Kumar2",
"followers_url": "https://api.github.com/users/Ishan-Kumar2/followers",
"following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}",
"gists_url": "https://api.github.com/users/Ishan-Kumar2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ishan-Kumar2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ishan-Kumar2/subscriptions",
"organizations_url": "https://api.github.com/users/Ishan-Kumar2/orgs",
"repos_url": "https://api.github.com/users/Ishan-Kumar2/repos",
"events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ishan-Kumar2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,919,445,000 | 1,636,976,202,000 | 1,636,972,065,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3149",
"html_url": "https://github.com/huggingface/datasets/pull/3149",
"diff_url": "https://github.com/huggingface/datasets/pull/3149.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3149.patch",
"merged_at": 1636972065000
} | Address part of #2841
Added the CMU Hinglish DoG Dataset as in GLUECoS. Added it as a separate dataset because, unlike the other GLUECoS tasks, it can't be evaluated with a BERT-like model.
It consists of a parallel corpus between Hinglish (Hindi-English) and English, and can be used for machine translation between the two.
The data processing part is inspired by the GLUECoS repo [here](https://github.com/microsoft/GLUECoS/blob/7fdc51653e37a32aee17505c47b7d1da364fa77e/Data/Preprocess_Scripts/preprocess_mt_en_hi.py)
The dummy data part is not working properly, it shows
``` UnboundLocalError: local variable 'generator_splits' referenced before assignment ```
when I run without `--auto_generate`.
Please let me know how I can fix that.
Thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3149/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3148/comments | https://api.github.com/repos/huggingface/datasets/issues/3148/events | https://github.com/huggingface/datasets/issues/3148 | 1,033,685,208 | I_kwDODunzps49nMjY | 3,148 | Streaming with num_workers != 0 | {
"login": "justheuristic",
"id": 3491902,
"node_id": "MDQ6VXNlcjM0OTE5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3491902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justheuristic",
"html_url": "https://github.com/justheuristic",
"followers_url": "https://api.github.com/users/justheuristic/followers",
"following_url": "https://api.github.com/users/justheuristic/following{/other_user}",
"gists_url": "https://api.github.com/users/justheuristic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/justheuristic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justheuristic/subscriptions",
"organizations_url": "https://api.github.com/users/justheuristic/orgs",
"repos_url": "https://api.github.com/users/justheuristic/repos",
"events_url": "https://api.github.com/users/justheuristic/events{/privacy}",
"received_events_url": "https://api.github.com/users/justheuristic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,915,237,000 | 1,641,393,049,000 | null | NONE | null | null | null | ## Describe the bug
When using dataset streaming with a PyTorch DataLoader, setting `num_workers` to anything other than 0 causes the code to freeze forever before yielding the first batch.
The code owner is likely @lhoestq
## Steps to reproduce the bug
For your convenience, we've prepped a colab notebook that reproduces the bug
https://colab.research.google.com/drive/1Mgl0oTZSNIE3UeGl_oX9wPCOIxRg19h1?usp=sharing
```python
!pip install datasets==1.14.0
should_freeze_forever = True
# ^-- set this to True in order to freeze forever, set to False in order to work normally
import torch
from datasets import load_dataset
data = load_dataset("oscar", "unshuffled_deduplicated_bn", split="train", streaming=True)
data = data.map(lambda x: {"text": x["text"], "orig": f"oscar[{x['id']}]"}, batched=True)
data = data.shuffle(100, seed=1337)
data = data.with_format("torch")
loader = torch.utils.data.DataLoader(data, batch_size=2, num_workers=2 if should_freeze_forever else 0)
# v-- the code should freeze forever at this line
for i, row in enumerate(loader):
print(row)
if i > 10: break
print("DONE!")
```
## Expected results
The code should not freeze forever with num_workers=2
## Actual results
The code freezes forever with num_workers=2
## Environment info
- `datasets` version: 1.14.0 (also found in previous versions)
- Platform: google colab (also locally)
- Python version: 3.7, (also 3.8)
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3148/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3147/comments | https://api.github.com/repos/huggingface/datasets/issues/3147/events | https://github.com/huggingface/datasets/pull/3147 | 1,033,607,659 | PR_kwDODunzps4tjRHG | 3,147 | Fix CLI test to ignore verifications when saving infos | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,910,766,000 | 1,635,321,710,000 | 1,635,321,709,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3147",
"html_url": "https://github.com/huggingface/datasets/pull/3147",
"diff_url": "https://github.com/huggingface/datasets/pull/3147.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3147.patch",
"merged_at": 1635321709000
} | Fix #3146. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3147/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3146/comments | https://api.github.com/repos/huggingface/datasets/issues/3146/events | https://github.com/huggingface/datasets/issues/3146 | 1,033,605,947 | I_kwDODunzps49m5M7 | 3,146 | CLI test command throws NonMatchingSplitsSizesError when saving infos | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,910,653,000 | 1,635,321,709,000 | 1,635,321,709,000 | MEMBER | null | null | null | When trying to generate a dataset's JSON metadata, a `NonMatchingSplitsSizesError` is thrown:
```
$ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs
Testing builder 'Alittihad' (1/10)
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: Unknown size, post-processed: Unknown size, total: 332.13 MiB) to .cache\arabic_billion_words\Alittihad\1.1.0\8175ff1c9714c6d5d15b1141b6042e5edf048276bb81a9c14e35e149a7a62ae4...
Traceback (most recent call last):
File "path\huggingface\datasets\.venv\Scripts\datasets-cli-script.py", line 33, in <module>
sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())
File "path\huggingface\datasets\src\datasets\commands\datasets_cli.py", line 33, in main
service.run()
File "path\huggingface\datasets\src\datasets\commands\test.py", line 144, in run
builder.download_and_prepare(
File "path\huggingface\datasets\src\datasets\builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "path\huggingface\datasets\src\datasets\builder.py", line 709, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "path\huggingface\datasets\src\datasets\utils\info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words')}]
```
This happens because a previous run generated a wrong `dataset_info.json`.
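For reference, the check can currently be skipped explicitly with the existing `--ignore_verifications` flag:
```
$ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs --ignore_verifications
```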
This avoids the error, but I think ignoring verifications should be assumed when passing `--save_infos`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3146/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3145/comments | https://api.github.com/repos/huggingface/datasets/issues/3145/events | https://github.com/huggingface/datasets/issues/3145 | 1,033,580,009 | I_kwDODunzps49my3p | 3,145 | [when Image type will exist] provide a way to get the data as binary + filename | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,909,029,000 | 1,640,171,137,000 | 1,640,171,136,000 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to the disk, with the correct filename, and optionally to know its mimetype, in order to serve it on the web.
Note: this issue applies in exactly the same way to the `Audio` type.
**Describe the solution you'd like**
If a "cell" has the type `Image`, provide a way to get the binary content of the file, and the filename, eg as:
```python
filename: str
data: bytes
```
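A purely hypothetical usage sketch (the `filename`/`data` attribute names below are illustrative, not an existing API):
```python
cell = dataset[0]["image"]            # hypothetical cell of type Image
with open(cell.filename, "wb") as f:  # hypothetical attributes on the cell
    f.write(cell.data)
```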
**Describe alternatives you've considered**
A way to write the cell to the disk (passing a local directory), and then return the pathname, filename, and mimetype.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3145/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3144/comments | https://api.github.com/repos/huggingface/datasets/issues/3144/events | https://github.com/huggingface/datasets/issues/3144 | 1,033,573,760 | I_kwDODunzps49mxWA | 3,144 | Infer the features if missing | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,908,653,000 | 1,634,908,653,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Some datasets, in particular community datasets, have no info file, thus no features.
**Describe the solution you'd like**
If a dataset has no features, the first loaded rows (e.g. 5-10) could be used to infer the types.
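As an illustration (not a proposed implementation), the building blocks already exist in `pyarrow` and `datasets`:
```python
import pyarrow as pa
from datasets import Features

def infer_features(rows):
    # infer Features from a small list of row dicts (e.g. the first 5-10 rows)
    table = pa.Table.from_pydict({key: [row[key] for row in rows] for key in rows[0]})
    return Features.from_arrow_schema(table.schema)

print(infer_features([{"text": "hello", "label": 0}, {"text": "world", "label": 1}]))
```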
Relatedly, `datasets` could provide a way to load the data and return both the rows AND the features.
**Describe alternatives you've considered**
The HF Hub could also provide some UI to help dataset maintainers make the types of their rows explicit, or automatically infer them as an initial proposal. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3144/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3143/comments | https://api.github.com/repos/huggingface/datasets/issues/3143/events | https://github.com/huggingface/datasets/issues/3143 | 1,033,569,655 | I_kwDODunzps49mwV3 | 3,143 | Provide a way to check if the features (in info) match with the data of a split | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,908,416,000 | 1,634,908,676,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
I understand that currently the loaded data does not always have the types described in the info features.
**Describe the solution you'd like**
Provide a way to check whether the rows have the types described by the info features.
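One possible building block, sketched below: `Features.encode_example` already raises on some type mismatches, though it is not a full validator:
```python
def check_rows_match_features(dset, num_rows=10):
    # re-encode the first rows with the declared features; a mismatch raises an error
    for i in range(min(num_rows, len(dset))):
        dset.features.encode_example(dset[i])
```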
**Describe alternatives you've considered**
Always check it, and raise an error when loading the data if the types don't match the features.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3143/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3142/comments | https://api.github.com/repos/huggingface/datasets/issues/3142/events | https://github.com/huggingface/datasets/issues/3142 | 1,033,566,034 | I_kwDODunzps49mvdS | 3,142 | Provide a way to write a streamed dataset to the disk | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,908,193,000 | 1,635,506,079,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
The streaming mode makes it possible to get the first 100 rows of a dataset very quickly. But it does not cache the answer, so a subsequent call to get the same 100 rows will send a request to the server again and again.
**Describe the solution you'd like**
Provide a way to write the streamed rows of a dataset to disk, and to load them from there later.
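In the meantime, a crude workaround sketch that materializes the streamed rows with the existing API (so they are only fetched once):
```python
import itertools
from datasets import Dataset, load_dataset

stream = load_dataset("oscar", "unshuffled_deduplicated_bn", split="train", streaming=True)
head = list(itertools.islice(stream, 100))  # fetch the first 100 rows once
local = Dataset.from_dict({key: [row[key] for row in head] for key in head[0]})
local.save_to_disk("oscar_head")  # reload later with datasets.load_from_disk("oscar_head")
```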
**Describe alternatives you've considered**
Provide a third mode: `lazy`, which would use the local cache for the data that have already been fetched previously, and use streaming to get the rest of the requested data.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3142/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3142/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3141/comments | https://api.github.com/repos/huggingface/datasets/issues/3141/events | https://github.com/huggingface/datasets/pull/3141 | 1,033,555,910 | PR_kwDODunzps4tjGYz | 3,141 | Fix caching bugs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,907,565,000 | 1,634,935,928,000 | 1,634,910,425,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3141",
"html_url": "https://github.com/huggingface/datasets/pull/3141",
"diff_url": "https://github.com/huggingface/datasets/pull/3141.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3141.patch",
"merged_at": 1634910424000
} | This PR fixes some caching bugs (most likely introduced in the latest refactor):
* remove ")" added by accident in the dataset dir name
* correctly pass the namespace kwargs in `CachedDatasetModuleFactory`
* improve the warning message if `HF_DATASETS_OFFLINE` is `True` (see the example below)
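For the last point, offline mode is triggered via the environment variable, e.g.:
```
$ HF_DATASETS_OFFLINE=1 python -c "from datasets import load_dataset; load_dataset('squad')"
```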
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3141/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3141/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3140/comments | https://api.github.com/repos/huggingface/datasets/issues/3140/events | https://github.com/huggingface/datasets/issues/3140 | 1,033,524,132 | I_kwDODunzps49mlOk | 3,140 | Add DER metric | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,905,331,000 | 1,634,905,348,000 | null | MEMBER | null | null | null | Add DER metric for speaker diarization task.
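For reference, `pyannote.metrics` provides a reference implementation that the new metric could wrap — a sketch (assuming that dependency):
```python
from pyannote.core import Annotation, Segment
from pyannote.metrics.diarization import DiarizationErrorRate

reference, hypothesis = Annotation(), Annotation()
reference[Segment(0, 10)] = "speaker_A"
reference[Segment(10, 20)] = "speaker_B"
hypothesis[Segment(0, 12)] = "speaker_1"
hypothesis[Segment(12, 20)] = "speaker_2"

metric = DiarizationErrorRate()
print(metric(reference, hypothesis))  # DER = (false alarm + missed detection + confusion) / total speech
```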
DER is used by the SUPERB benchmark, for example. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3140/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3139/comments | https://api.github.com/repos/huggingface/datasets/issues/3139/events | https://github.com/huggingface/datasets/issues/3139 | 1,033,524,079 | I_kwDODunzps49mlNv | 3,139 | Fix file/directory deletion on Windows | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,905,328,000 | 1,634,905,328,000 | null | CONTRIBUTOR | null | null | null | Currently, on Windows, some attempts to delete a dataset file/directory will fail with a `PermissionError`.
Examples:
- download a dataset, then force redownload it in the same session while keeping a reference to the downloaded dataset
```python
from datasets import load_dataset

dset = load_dataset("sst", split="train")  # keeps the Arrow file memory-mapped
# the forced redownload tries to delete that still-open file
dset = load_dataset("sst", split="train", download_mode="force_redownload")
```
- try to clean up the cache files while keeping a reference to those files (via the mapped dataset):
```python
from datasets import load_dataset

dset = load_dataset("sst", split="train")
dset_mapped = dset.map(lambda _: {"dummy_col": 1})  # holds the cache file open
dset.cleanup_cache_files()  # tries to delete it while it is still referenced
```
We should fix those.
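For reference, a minimal sketch of one possible mitigation: retrying the deletion after forcing a garbage-collection pass so that stale memory-mapped handles are released (the helper name and retry policy are assumptions, not part of `datasets`):
```python
import gc
import os
import time

def try_delete(path, retries=3, wait=0.5):
    # On Windows, removing a file that is still memory-mapped raises
    # PermissionError; collecting garbage first can release stale handles.
    for _ in range(retries):
        try:
            os.remove(path)
            return
        except PermissionError:
            gc.collect()
            time.sleep(wait)
    os.remove(path)  # final attempt, let any error propagate
```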
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3139/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3138/comments | https://api.github.com/repos/huggingface/datasets/issues/3138/events | https://github.com/huggingface/datasets/issues/3138 | 1,033,379,997 | I_kwDODunzps49mCCd | 3,138 | More fine-grained taxonomy of error types | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,895,329,000 | 1,634,895,335,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Exceptions like `FileNotFoundError` can be raised by different parts of the code, and it's hard to tell which part raised a given error.
**Describe the solution you'd like**
Give a specific exception type for every group of similar errors, as sketched below.
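A minimal sketch of what such a taxonomy could look like (every class name below is illustrative, not an existing `datasets` exception):
```python
class DatasetsError(Exception):
    """Base class for all errors raised by the library."""

class DatasetNotFoundError(DatasetsError, FileNotFoundError):
    """The requested dataset script or repository does not exist."""

class DataFilesNotFoundError(DatasetsError, FileNotFoundError):
    """No data files matched the requested patterns."""

class SplitNotFoundError(DatasetsError, ValueError):
    """The requested split is not defined for this dataset."""
```
Callers could then catch the precise failure instead of pattern-matching on error messages.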
**Describe alternatives you've considered**
Rely on the error message, using regex.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3138/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3137/comments | https://api.github.com/repos/huggingface/datasets/issues/3137/events | https://github.com/huggingface/datasets/pull/3137 | 1,033,363,652 | PR_kwDODunzps4tievk | 3,137 | Fix numpy deprecation warning for ragged tensors | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,894,266,000 | 1,634,918,655,000 | 1,634,918,654,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3137",
"html_url": "https://github.com/huggingface/datasets/pull/3137",
"diff_url": "https://github.com/huggingface/datasets/pull/3137.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3137.patch",
"merged_at": 1634918654000
} | Numpy shows a deprecation warning when we call `np.array` on a list of ragged tensors without specifying the `dtype`. If their shapes match, the tensors can be collated together; otherwise, the resulting array should have `dtype=np.object`.
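For illustration, a minimal sketch of the behavior in question: on NumPy >= 1.20, calling `np.array` on ragged input without a dtype emits a `VisibleDeprecationWarning`, while passing an explicit object dtype does not (the arrays are toy values):
```python
import numpy as np

ragged = [np.array([1, 2, 3]), np.array([4, 5])]  # shapes differ
arr = np.array(ragged, dtype=object)              # explicit dtype: no warning

regular = [np.array([1, 2]), np.array([3, 4])]    # shapes match
stacked = np.array(regular)                       # collated into shape (2, 2)
print(arr.dtype, stacked.shape)                   # object (2, 2)
```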
Fix #3084
cc @Rocketknight1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3137/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3136/comments | https://api.github.com/repos/huggingface/datasets/issues/3136/events | https://github.com/huggingface/datasets/pull/3136 | 1,033,360,396 | PR_kwDODunzps4tieFi | 3,136 | Fix script of Arabic Billion Words dataset to return all data | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,894,064,000 | 1,634,909,321,000 | 1,634,909,320,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3136",
"html_url": "https://github.com/huggingface/datasets/pull/3136",
"diff_url": "https://github.com/huggingface/datasets/pull/3136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3136.patch",
"merged_at": 1634909319000
} | The script has a bug and only parses and generates a portion of the entire dataset.
This PR fixes the loading script so that it properly parses the entire dataset.
The current implementation generates the same number of examples as reported in the [original paper](https://arxiv.org/abs/1611.04033) for all configurations except one:
- For "Youm7" we generate more examples (1172136) than the ones reported by the paper (1025027)
| Configuration  | Number of examples | Number of examples according to the source |
|:---------------|-------------------:|--------------------------------------------:|
| Alittihad      | 349342  | 349342  |
| Almasryalyoum  | 291723  | 291723  |
| Almustaqbal    | 446873  | 446873  |
| Alqabas        | 817274  | 817274  |
| Echoroukonline | 139732  | 139732  |
| Ryiadh         | 858188  | 858188  |
| Sabanews       | 92149   | 92149   |
| SaudiYoum      | 888068  | 888068  |
| Techreen       | 314597  | 314597  |
| Youm7          | 1172136 | 1025027 |
Fix #3126. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3136/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3135/comments | https://api.github.com/repos/huggingface/datasets/issues/3135/events | https://github.com/huggingface/datasets/issues/3135 | 1,033,294,299 | I_kwDODunzps49ltHb | 3,135 | Make inspect.get_dataset_config_names always return a non-empty list of configs | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,889,770,000 | 1,635,399,889,000 | 1,635,399,889,000 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to.
**Describe the solution you'd like**
In that sense, `inspect.get_dataset_config_names` should always return at least one configuration name, be it `default` or `Check___region_1` (for community datasets like `Check/region_1`).
https://github.com/huggingface/datasets/blob/c5747a5e1dde2670b7f2ca6e79e2ffd99dff85af/src/datasets/inspect.py#L161
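A sketch of the desired behavior (the dataset names are examples, and the commented return values describe the intent rather than guaranteed current output):
```python
from datasets.inspect import get_dataset_config_names

get_dataset_config_names("glue")                 # ['cola', 'sst2', ...]
get_dataset_config_names("a_no_config_dataset")  # today may be []; desired: ['default']
get_dataset_config_names("Check/region_1")       # desired: ['Check___region_1']
```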
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3135/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3134/comments | https://api.github.com/repos/huggingface/datasets/issues/3134/events | https://github.com/huggingface/datasets/issues/3134 | 1,033,251,755 | I_kwDODunzps49liur | 3,134 | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,886,472,000 | 1,642,600,952,000 | 1,642,600,951,000 | NONE | null | null | null | datasets version: 1.12.1
`metric = datasets.load_metric('rouge')`
The error:
> ConnectionError Traceback (most recent call last)
> <ipython-input-3-dd10a0c5212f> in <module>
> ----> 1 metric = datasets.load_metric('rouge')
>
> /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
> 613 download_config=download_config,
> 614 download_mode=download_mode,
> --> 615 dataset=False,
> 616 )
> 617 metric_cls = import_main_class(module_path, dataset=False)
>
> /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs)
> 328 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)
> 329 try:
> --> 330 local_path = cached_path(file_path, download_config=download_config)
> 331 except FileNotFoundError:
> 332 if script_version is not None:
>
> /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
> 296 use_etag=download_config.use_etag,
> 297 max_retries=download_config.max_retries,
> --> 298 use_auth_token=download_config.use_auth_token,
> 299 )
> 300 elif os.path.exists(url_or_filename):
>
> /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
> 603 raise FileNotFoundError("Couldn't find file at {}".format(url))
> 604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
> --> 605 raise ConnectionError("Couldn't reach {}".format(url))
> 606
> 607 # Try a second time
>
> ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
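A workaround that may help: point `load_metric` at a local copy of the metric script instead of the GitHub URL (the local path below is an assumption):
```python
import datasets

# Assumes rouge.py has been fetched manually (e.g. from the datasets repo)
# and saved locally; load_metric also accepts a path to a local script.
metric = datasets.load_metric("./metrics/rouge/rouge.py")
```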
Is there any remedy to solve the connection issue? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3134/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3133/comments | https://api.github.com/repos/huggingface/datasets/issues/3133/events | https://github.com/huggingface/datasets/pull/3133 | 1,032,511,710 | PR_kwDODunzps4tftyZ | 3,133 | Support Audio feature in streaming mode | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,823,477,000 | 1,636,726,385,000 | 1,636,726,384,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3133",
"html_url": "https://github.com/huggingface/datasets/pull/3133",
"diff_url": "https://github.com/huggingface/datasets/pull/3133.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3133.patch",
"merged_at": 1636726384000
} | Fix #3132. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3133/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3132/comments | https://api.github.com/repos/huggingface/datasets/issues/3132/events | https://github.com/huggingface/datasets/issues/3132 | 1,032,505,430 | I_kwDODunzps49ishW | 3,132 | Support Audio feature in streaming mode | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,823,138,000 | 1,636,726,384,000 | 1,636,726,384,000 | MEMBER | null | null | null | Currently, Audio feature is only supported for non-streaming datasets.
Due to the large size of many speech datasets, we should also support Audio feature in streaming mode.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3132/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3131/comments | https://api.github.com/repos/huggingface/datasets/issues/3131/events | https://github.com/huggingface/datasets/issues/3131 | 1,032,309,865 | I_kwDODunzps49h8xp | 3,131 | Add ADE20k | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,811,189,000 | 1,647,925,399,000 | null | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** ADE20k (officially called the MIT Scene Parsing Benchmark; it is a subset of ADE20k, but many authors still call it ADE20k)
- **Description:** A semantic segmentation dataset, consisting of 150 classes.
- **Paper:** http://people.csail.mit.edu/bzhou/publication/scene-parse-camera-ready.pdf
- **Data:** http://sceneparsing.csail.mit.edu/
- **Motivation:** I am currently adding Transformer-based semantic segmentation models that achieve SOTA on this dataset. It would be great to directly access this dataset using HuggingFace Datasets, in order to make example scripts in HuggingFace Transformers.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3131/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3130/comments | https://api.github.com/repos/huggingface/datasets/issues/3130/events | https://github.com/huggingface/datasets/pull/3130 | 1,032,299,417 | PR_kwDODunzps4tfBJU | 3,130 | Create SECURITY.md | {
"login": "zidingz",
"id": 28839565,
"node_id": "MDQ6VXNlcjI4ODM5NTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/28839565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zidingz",
"html_url": "https://github.com/zidingz",
"followers_url": "https://api.github.com/users/zidingz/followers",
"following_url": "https://api.github.com/users/zidingz/following{/other_user}",
"gists_url": "https://api.github.com/users/zidingz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zidingz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zidingz/subscriptions",
"organizations_url": "https://api.github.com/users/zidingz/orgs",
"repos_url": "https://api.github.com/users/zidingz/repos",
"events_url": "https://api.github.com/users/zidingz/events{/privacy}",
"received_events_url": "https://api.github.com/users/zidingz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,810,583,000 | 1,634,826,808,000 | 1,634,826,710,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3130",
"html_url": "https://github.com/huggingface/datasets/pull/3130",
"diff_url": "https://github.com/huggingface/datasets/pull/3130.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3130.patch",
"merged_at": null
} | To let the repository confirm feedback@huggingface.co as its security contact. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3130/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3129/comments | https://api.github.com/repos/huggingface/datasets/issues/3129/events | https://github.com/huggingface/datasets/pull/3129 | 1,032,234,167 | PR_kwDODunzps4tezlA | 3,129 | Support Audio feature for TAR archives in sequential access | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,806,611,000 | 1,637,170,928,000 | 1,637,170,927,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3129",
"html_url": "https://github.com/huggingface/datasets/pull/3129",
"diff_url": "https://github.com/huggingface/datasets/pull/3129.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3129.patch",
"merged_at": 1637170927000
} | Add Audio feature support for TAR archived files in sequential access.
Fix #3128. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3129/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3128/comments | https://api.github.com/repos/huggingface/datasets/issues/3128/events | https://github.com/huggingface/datasets/issues/3128 | 1,032,201,870 | I_kwDODunzps49hiaO | 3,128 | Support Audio feature for TAR archives in sequential access | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,804,581,000 | 1,637,170,927,000 | 1,637,170,927,000 | MEMBER | null | null | null | Currently, the Audio feature accesses each audio file by its file path.
However, streamed TAR archive files do not allow random access to their archived files.
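For illustration, a minimal standard-library sketch of that constraint: with a non-seekable stream, archive members can only be visited in order (the archive name is an assumption):
```python
import tarfile

# mode "r|" opens the archive as a forward-only stream (no random access)
with tarfile.open("audio_files.tar", mode="r|") as archive:
    for member in archive:
        f = archive.extractfile(member)
        if f is None:  # e.g. directory entries
            continue
        audio_bytes = f.read()  # consume now; seeking back is impossible
```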
Therefore, we should enhance the Audio feature to support TAR archived files in sequential access. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3128/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3127/comments | https://api.github.com/repos/huggingface/datasets/issues/3127/events | https://github.com/huggingface/datasets/issues/3127 | 1,032,100,613 | I_kwDODunzps49hJsF | 3,127 | datasets-cli: conversion of a tfds dataset to a huggingface one. | {
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,796,867,000 | 1,635,334,565,000 | null | NONE | null | null | null | ### Discussed in https://github.com/huggingface/datasets/discussions/3079
Originally posted by **vitalyshalumov**, October 14, 2021:
I'm trying to convert a tfds dataset to a huggingface one.
I've tried:
1. `datasets-cli convert --tfds_path ~/tensorflow_datasets/mnist/3.0.1/ --datasets_directory ~/.cache/huggingface/datasets/mnist/3.0.1/`
2. `datasets-cli convert --tfds_path ~/tensorflow_datasets/mnist/3.0.1/ --datasets_directory ~/.cache/huggingface/datasets/`
and other permutations.
The script appears to run and finish without an error, but nothing is created in the huggingface/datasets/ folder.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3127/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3126/comments | https://api.github.com/repos/huggingface/datasets/issues/3126/events | https://github.com/huggingface/datasets/issues/3126 | 1,032,093,055 | I_kwDODunzps49hH1_ | 3,126 | "arabic_billion_words" dataset does not create the full dataset | {
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,796,158,000 | 1,634,909,320,000 | 1,634,909,320,000 | NONE | null | null | null | ## Describe the bug
When running:
raw_dataset = load_dataset('arabic_billion_words','Alittihad')
the correct dataset file is pulled from the url.
But, the generated dataset includes just a small portion of the data included in the file.
This is true for all other portions of the "arabic_billion_words" dataset ('Almasryalyoum',.....)
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset

raw_dataset = load_dataset('arabic_billion_words', 'Alittihad')
# The screen message:
# Downloading and preparing dataset arabic_billion_words/Alittihad
# (download: 332.13 MiB, generated: 20.62 MiB, post-processed: Unknown size, total: 352.74 MiB)
```
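A quick way to check how many examples were actually generated (sketch, continuing from the snippet above):
```python
print(raw_dataset["train"].num_rows)  # ~11K here, while >100K are expected
```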
## Expected results
over 100K sentences
## Actual results
only 11K sentences
## Environment info
- `datasets` version: 1.14.0
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3126/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3125/comments | https://api.github.com/repos/huggingface/datasets/issues/3125/events | https://github.com/huggingface/datasets/pull/3125 | 1,032,046,666 | PR_kwDODunzps4teNPC | 3,125 | Add SLR83 to OpenSLR | {
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,790,360,000 | 1,634,933,405,000 | 1,634,891,422,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3125",
"html_url": "https://github.com/huggingface/datasets/pull/3125",
"diff_url": "https://github.com/huggingface/datasets/pull/3125.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3125.patch",
"merged_at": 1634891422000
} | The PR resolves #3119, adding SLR83 (UK and Ireland dialects) to the previously created OpenSLR dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3125/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3124/comments | https://api.github.com/repos/huggingface/datasets/issues/3124/events | https://github.com/huggingface/datasets/pull/3124 | 1,031,976,286 | PR_kwDODunzps4td-5w | 3,124 | More efficient nested features encoding | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,781,331,000 | 1,635,865,633,000 | 1,635,851,044,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3124",
"html_url": "https://github.com/huggingface/datasets/pull/3124",
"diff_url": "https://github.com/huggingface/datasets/pull/3124.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3124.patch",
"merged_at": 1635851044000
} | Nested encoding of features wastes a lot of time on operations that are effectively no-ops when lists are used.
For example, if the input contains a list of integers, `encoded_nested_example` will iterate over it and apply `encoded_nested_example` to every element, even though each call just returns the int as is.
A similar issue is handled at an earlier stage when casting pytorch/tensorflow/pandas objects to python lists/numpy arrays:
https://github.com/huggingface/datasets/blob/c98c23c4260edadab00f997d1a5d66b7f2e93ce9/src/datasets/features/features.py#L149-L156
https://github.com/huggingface/datasets/blob/c98c23c4260edadab00f997d1a5d66b7f2e93ce9/src/datasets/features/features.py#L212-L228
In this pull request I suggest using the same approach in `encoded_nested_example`.
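A minimal sketch of the kind of short-circuit this describes (heavily simplified, not the actual `datasets` implementation):
```python
def encode_nested_example(schema, obj):
    if isinstance(schema, list):
        # Short-circuit: a list of plain values needs no per-element recursion.
        if obj and not isinstance(obj[0], (dict, list, tuple)):
            return list(obj)
        return [encode_nested_example(schema[0], o) for o in obj]
    return obj  # leaf values are returned as-is
```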
In my setup there was a major speedup with this change: loading the data was at least 4x faster. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3124/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3123/comments | https://api.github.com/repos/huggingface/datasets/issues/3123/events | https://github.com/huggingface/datasets/issues/3123 | 1,031,793,207 | I_kwDODunzps49f-o3 | 3,123 | Segmentation fault when loading datasets from file | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,760,971,000 | 1,635,865,027,000 | 1,635,865,027,000 | MEMBER | null | null | null | ## Describe the bug
Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features.
## Steps to reproduce the bug
Download an example file:
```
wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e5051507651ad/tiny_kelm.jsonl
```
Then in Python:
```
import datasets
tiny_kelm = datasets.load_dataset("json", data_files="tiny_kelm.jsonl", chunksize=100000)
```
## Expected results
a functional `tiny_kelm` dataset
## Actual results
☠️ `Segmentation fault (core dumped)` ☠️
## Environment info
- `datasets` version: 1.14.0
- Platform: Linux-5.11.0-38-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3123/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3122/comments | https://api.github.com/repos/huggingface/datasets/issues/3122/events | https://github.com/huggingface/datasets/issues/3122 | 1,031,787,509 | I_kwDODunzps49f9P1 | 3,122 | OSError with a custom dataset loading script | {
"login": "suzanab",
"id": 38602977,
"node_id": "MDQ6VXNlcjM4NjAyOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/38602977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suzanab",
"html_url": "https://github.com/suzanab",
"followers_url": "https://api.github.com/users/suzanab/followers",
"following_url": "https://api.github.com/users/suzanab/following{/other_user}",
"gists_url": "https://api.github.com/users/suzanab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suzanab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suzanab/subscriptions",
"organizations_url": "https://api.github.com/users/suzanab/orgs",
"repos_url": "https://api.github.com/users/suzanab/repos",
"events_url": "https://api.github.com/users/suzanab/events{/privacy}",
"received_events_url": "https://api.github.com/users/suzanab/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,760,519,000 | 1,637,661,338,000 | 1,637,661,338,000 | NONE | null | null | null | ## Describe the bug
I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory structure, yet I am only getting an error with janes_tag.
## Steps to reproduce the bug
```python
dataset = datasets.load_dataset('classla/janes_tag', split='validation')
```
## Expected results
Dataset correctly loaded.
## Actual results
Traceback (most recent call last):
File "C:/mypath/test.py", line 91, in <module>
load_and_print('janes_tag')
File "C:/mypath/test.py", line 32, in load_and_print
dataset = datasets.load_dataset('classla/{}'.format(ds_name), split='validation')
File "C:\mypath\venv\lib\site-packages\datasets\load.py", line 1632, in load_dataset
use_auth_token=use_auth_token,
File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 608, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 704, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: 'C:\\mypath\\.cache\\huggingface\\datasets\\downloads\\2c9996e44bdc5af9c89bffb9e6d7a3e42fdb2f56bacab45de13b20f3032ea7ca\\data\\train_all.conllup'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.5
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3122/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3121/comments | https://api.github.com/repos/huggingface/datasets/issues/3121/events | https://github.com/huggingface/datasets/pull/3121 | 1,031,673,115 | PR_kwDODunzps4tc_6q | 3,121 | Use huggingface_hub.HfApi to list datasets/metrics | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,752,109,000 | 1,636,112,708,000 | 1,636,105,716,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3121",
"html_url": "https://github.com/huggingface/datasets/pull/3121",
"diff_url": "https://github.com/huggingface/datasets/pull/3121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3121.patch",
"merged_at": 1636105715000
} | Delete `datasets.inspect.HfApi` and use `huggingface_hub.HfApi` instead.
WIP until https://github.com/huggingface/huggingface_hub/pull/429 is merged; then wait for the new release of `huggingface_hub`, update the `huggingface_hub` version in `setup.py`, and merge this PR. (A usage sketch of the new client follows this record.)
cc: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3121/timeline | null | true |
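A minimal usage sketch of the `huggingface_hub` client this PR switches to, assuming a `huggingface_hub` release that exposes `HfApi.list_datasets`:

```python
# Sketch, assuming a huggingface_hub version with HfApi.list_datasets.
from huggingface_hub import HfApi

api = HfApi()
hub_datasets = list(api.list_datasets())  # metadata objects for Hub datasets
print(len(hub_datasets))
```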
https://api.github.com/repos/huggingface/datasets/issues/3120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3120/comments | https://api.github.com/repos/huggingface/datasets/issues/3120/events | https://github.com/huggingface/datasets/pull/3120 | 1,031,574,511 | PR_kwDODunzps4tcril | 3,120 | Correctly update metadata to preserve features when concatenating datasets with axis=1 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,745,298,000 | 1,634,891,331,000 | 1,634,827,821,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3120",
"html_url": "https://github.com/huggingface/datasets/pull/3120",
"diff_url": "https://github.com/huggingface/datasets/pull/3120.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3120.patch",
"merged_at": 1634827821000
} | This PR correctly updates metadata to preserve higher-level feature types (e.g. `ClassLabel`) in `datasets.concatenate_datasets` when `axis=1`. Previously, we would delete the feature metadata in `datasets.concatenate_datasets` if `axis=1` and restore the feature types from the arrow table schema in `Dataset.__init__`. However, this approach only works for simple feature types (e.g. `Value`).
Fixes #3111. (A short sketch of the preserved behavior follows this record.) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3120/timeline | null | true |
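A short sketch, adapted from the linked issue, of the behavior this PR preserves; after the fix, the `ClassLabel` feature should survive an `axis=1` concatenation:

```python
# Sketch of the expected post-fix behavior for axis=1 concatenation.
from datasets import ClassLabel, Dataset, Features, Value, concatenate_datasets

left = Dataset.from_dict({"label": [0, 1]}).cast(
    Features({"label": ClassLabel(names=["POS", "NEG"])})
)
right = Dataset.from_dict({"pred": [1, 0]})
combined = concatenate_datasets([left, right], axis=1)
print(combined.features)  # "label" should still be a ClassLabel
```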
https://api.github.com/repos/huggingface/datasets/issues/3119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3119/comments | https://api.github.com/repos/huggingface/datasets/issues/3119/events | https://github.com/huggingface/datasets/issues/3119 | 1,031,328,044 | I_kwDODunzps49eNEs | 3,119 | Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech | {
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,731,507,000 | 1,634,929,252,000 | 1,634,891,422,000 | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** *openslr*
- **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.*
- **Paper:** *https://www.openslr.org/resources/83/about.html*
- **Data:** *Eleven separate data files can be found via https://www.openslr.org/resources/83/*
- **Motivation:** *Increase English ASR data with UK and Irish dialects*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
The *openslr* dataset already exists; this will add an additional subset, *SLR83*. (A hypothetical usage sketch follows this record.) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3119/timeline | null | false |
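A hypothetical usage sketch for when the subset lands; the config name `SLR83` and the `sentence` field below are assumptions based on the naming of the existing `openslr` subsets:

```python
# Hypothetical usage sketch: config and field names are assumptions.
from datasets import load_dataset

slr83 = load_dataset("openslr", "SLR83", split="train")
print(slr83[0]["sentence"])
```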
https://api.github.com/repos/huggingface/datasets/issues/3118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3118/comments | https://api.github.com/repos/huggingface/datasets/issues/3118/events | https://github.com/huggingface/datasets/pull/3118 | 1,031,309,549 | PR_kwDODunzps4tb0LY | 3,118 | Fix CI error at each release commit | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,730,278,000 | 1,634,734,956,000 | 1,634,734,956,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3118",
"html_url": "https://github.com/huggingface/datasets/pull/3118",
"diff_url": "https://github.com/huggingface/datasets/pull/3118.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3118.patch",
"merged_at": 1634734955000
} | Fix `test_load_dataset_canonical` at the release commit.
Fix #3117. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3118/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3117/comments | https://api.github.com/repos/huggingface/datasets/issues/3117/events | https://github.com/huggingface/datasets/issues/3117 | 1,031,308,083 | I_kwDODunzps49eIMz | 3,117 | CI error at each release commit | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,730,173,000 | 1,634,734,955,000 | 1,634,734,955,000 | MEMBER | null | null | null | After 1.12.0, there is a recurrent CI error at each release commit: https://app.circleci.com/pipelines/github/huggingface/datasets/8289/workflows/665d954d-e409-4602-8202-e678594d2946/jobs/51110
```
____________________ LoadTest.test_load_dataset_canonical _____________________
[gw0] win32 -- Python 3.6.8 C:\tools\miniconda3\python.exe
self = <tests.test_load.LoadTest testMethod=test_load_dataset_canonical>
def test_load_dataset_canonical(self):
scripts_version = os.getenv("HF_SCRIPTS_VERSION", SCRIPTS_VERSION)
with self.assertRaises(FileNotFoundError) as context:
datasets.load_dataset("_dummy")
self.assertIn(
f"https://raw.githubusercontent.com/huggingface/datasets/{scripts_version}/datasets/_dummy/_dummy.py",
> str(context.exception),
)
E AssertionError: 'https://raw.githubusercontent.com/huggingface/datasets/1.14.0/datasets/_dummy/_dummy.py' not found in "Couldn't find a dataset script at C:\\Users\\circleci\\datasets\\_dummy\\_dummy.py or any data file in the same directory. Couldn't find '_dummy' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/_dummy/_dummy.py"
tests\test_load.py:358: AssertionError
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3117/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3116/comments | https://api.github.com/repos/huggingface/datasets/issues/3116/events | https://github.com/huggingface/datasets/pull/3116 | 1,031,270,611 | PR_kwDODunzps4tbr6g | 3,116 | Update doc links to point to new docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,727,647,000 | 1,634,891,368,000 | 1,634,891,205,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3116",
"html_url": "https://github.com/huggingface/datasets/pull/3116",
"diff_url": "https://github.com/huggingface/datasets/pull/3116.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3116.patch",
"merged_at": 1634891205000
} | This PR:
* updates the README links and the ADD_NEW_DATASET template to point to the new docs (the new docs don't have a section with the list of all the possible features, so I added that info to the `Features` docstring, which is then referenced in the ADD_NEW_DATASET template)
* fixes some broken links in the `.rst` files (fixed with the `make linkcheck` tool) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3116/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3115/comments | https://api.github.com/repos/huggingface/datasets/issues/3115/events | https://github.com/huggingface/datasets/pull/3115 | 1,030,737,524 | PR_kwDODunzps4tZ-Vr | 3,115 | Fill in dataset card for NCBI disease dataset | {
"login": "edugp",
"id": 17855740,
"node_id": "MDQ6VXNlcjE3ODU1NzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/17855740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edugp",
"html_url": "https://github.com/edugp",
"followers_url": "https://api.github.com/users/edugp/followers",
"following_url": "https://api.github.com/users/edugp/following{/other_user}",
"gists_url": "https://api.github.com/users/edugp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edugp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edugp/subscriptions",
"organizations_url": "https://api.github.com/users/edugp/orgs",
"repos_url": "https://api.github.com/users/edugp/repos",
"events_url": "https://api.github.com/users/edugp/events{/privacy}",
"received_events_url": "https://api.github.com/users/edugp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,677,025,000 | 1,634,891,107,000 | 1,634,891,107,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3115",
"html_url": "https://github.com/huggingface/datasets/pull/3115",
"diff_url": "https://github.com/huggingface/datasets/pull/3115.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3115.patch",
"merged_at": 1634891107000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3115/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3114/comments | https://api.github.com/repos/huggingface/datasets/issues/3114/events | https://github.com/huggingface/datasets/issues/3114 | 1,030,693,130 | I_kwDODunzps49byEK | 3,114 | load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"node_id": "MDQ6VXNlcjkxODAwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francisco-perez-sorrosal",
"html_url": "https://github.com/francisco-perez-sorrosal",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,673,705,000 | 1,644,847,228,000 | 1,644,847,228,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
Passing a `PyArrowHDFS` implementation of `fsspec.spec.AbstractFileSystem` (as the `fs` param required by the `load_from_disk` methods in `DatasetDict` (in datasets_dict.py) and `Dataset` (in arrow_dataset.py)) results in an error when the `download` method is called on the `fs` object.
## Steps to reproduce the bug
The documentation for the `fs` parameter states:
```
fs (:class:`~filesystems.S3FileSystem` or ``fsspec.spec.AbstractFileSystem``, optional, default ``None``):
Instance of the remote filesystem used to download the files from.
```
`PyArrowHDFS` from [fsspec](https://filesystem-spec.readthedocs.io/en/latest/_modules/fsspec/implementations/hdfs.html) implements `fsspec.spec.AbstractFileSystem`. However, when using it as shown below, I get an error.
```python
from fsspec.implementations.hdfs import PyArrowHDFS
...
transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket)
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True)
```
## Expected results
Prior to loading from disk, I had managed to successfully store the data and meta-information of a `DatasetDict` in HDFS by doing:
```python
transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket)
my_datasets.save_to_disk(transformed_corpus_path, fs=fs)
```
As I have 3 datasets in the DatasetDict named `my_datasets`, the previous Python code creates the following contents in HDFS:
```sh
$ hadoop fs -ls "/user/my_user/clickbait/transformed_ds/"
Found 4 items
-rw------- 3 my_user users 43 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/dataset_dict.json
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/test
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/train
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/validation
```
I would expect `dss` to contain the Arrow-backed datasets I previously saved to HDFS via the `save_to_disk` method on the `DatasetDict` object, when invoking `DatasetDict.load_from_disk(...)` as described above.
## Actual results
However, when trying to recover the saved datasets, I get this error:
```
...
File "/home/fperez/dev/neuromancer/neuromancer/corpus.py", line 186, in load_transformed_corpus_from_disk
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True)
File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/dataset_dict.py", line 748, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1048, in load_from_disk
fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
File "pyarrow/_hdfsio.pyx", line 438, in pyarrow._hdfsio.HadoopFileSystem.download
TypeError: download() got an unexpected keyword argument 'recursive'
```
Examining the [signature of the download method in pyarrow 5.0.0](https://github.com/apache/arrow/blob/54d2bd89c99df72fa091b025452f85dd5d88e3cf/python/pyarrow/_hdfsio.pyx#L438), we can see that there is no `recursive` parameter (a sketch of a possible workaround follows this record):
```python
def download(self, path, stream, buffer_size=None):
with self.open(path, 'rb') as f:
f.download(stream, buffer_size=buffer_size)
```
## Environment info
- `datasets` version: 1.13.3
- Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3114/timeline | null | false |
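As a hedged illustration of a possible workaround for the record above, a thin wrapper could route recursive copies through fsspec's generic `get` (which does accept `recursive=`) instead of the legacy pyarrow `download`; this is a sketch, not a fix in either library:

```python
# Sketch of a workaround: delegate recursive downloads to fsspec's `get`,
# which supports recursive=True, instead of pyarrow's `download`.
from fsspec.implementations.hdfs import PyArrowHDFS


class PatchedPyArrowHDFS(PyArrowHDFS):
    def download(self, rpath, lpath, recursive=False, **kwargs):
        # AbstractFileSystem.get handles recursive remote-to-local copies.
        return self.get(rpath, lpath, recursive=recursive, **kwargs)
```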
https://api.github.com/repos/huggingface/datasets/issues/3113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3113/comments | https://api.github.com/repos/huggingface/datasets/issues/3113/events | https://github.com/huggingface/datasets/issues/3113 | 1,030,667,547 | I_kwDODunzps49br0b | 3,113 | Loading Data from HDF files | {
"login": "FeryET",
"id": 30388648,
"node_id": "MDQ6VXNlcjMwMzg4NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/30388648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FeryET",
"html_url": "https://github.com/FeryET",
"followers_url": "https://api.github.com/users/FeryET/followers",
"following_url": "https://api.github.com/users/FeryET/following{/other_user}",
"gists_url": "https://api.github.com/users/FeryET/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FeryET/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FeryET/subscriptions",
"organizations_url": "https://api.github.com/users/FeryET/orgs",
"repos_url": "https://api.github.com/users/FeryET/repos",
"events_url": "https://api.github.com/users/FeryET/events{/privacy}",
"received_events_url": "https://api.github.com/users/FeryET/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,671,606,000 | 1,634,672,568,000 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
More often than not I come along big HDF datasets, and currently there is no straight forward way to feed them to a dataset.
**Describe the solution you'd like**
I would love to see a `from_h5` method that gets an interface implemented by the user on how items are extracted from dataset (in case of multiple datasets containing elements like arrays and metadata and etc).
**Describe alternatives you've considered**
Currently I manually load hdf files using `h5py` and implement PyTorch dataset interface. For small h5 files I load them into a pandas dataframe and use `from_pandas` function in the `datasets` package to load them, but for big datasets this is not feasible.
**Additional context**
HDF files are widespread throughout different domains and are one of the go to's for many researchers/scientists/engineers who work with numerical data. Given `datasets`' usecases have outgrown NLP use cases, it will make a lot of sense focusing on things like supporting HDF files.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3113/timeline | null | false |
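A minimal sketch of the manual route described in the record above, suitable for small files only; the file name, keys, and column layout are assumptions:

```python
# Sketch of the manual h5py + from_dict route (small files only);
# "data.h5", "features" and "labels" are assumed names.
import h5py
from datasets import Dataset

with h5py.File("data.h5", "r") as f:
    features = f["features"][:]  # loads the whole array into memory
    labels = f["labels"][:]

ds = Dataset.from_dict(
    {"features": features.tolist(), "labels": labels.tolist()}
)
print(ds)
```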
https://api.github.com/repos/huggingface/datasets/issues/3112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3112/comments | https://api.github.com/repos/huggingface/datasets/issues/3112/events | https://github.com/huggingface/datasets/issues/3112 | 1,030,613,083 | I_kwDODunzps49behb | 3,112 | OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB | {
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,667,701,000 | 1,634,669,549,000 | null | NONE | null | null | null | ## Describe the bug
Despite batches being well under 2GB when running `datasets.map()`, after correctly processing the data of the first batch without fuss, and irrespective of `writer_batch_size` (say 2, 4, 8, 16, 32, 64 and 128 in my case), it returns the following error:
> OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
Note that I always run with `batch_size=writer_batch_size`:
## Steps to reproduce the bug
```python
datasets.map(lambda example : {"column_name" : function(arguments)}, batched=False, remove_columns = datasets.column_names, batch_size=batch_size, writer_batch_size=batch_size, disable_nullable=True, num_proc=None, desc="blablabla")
```
## Introspecting CUDA memory during bug
I placed the following statement within `function(arguments)` to introspect memory usage; it shows merely a little over a quarter of 2GB in use:
`print(torch.cuda.memory_summary(device=device, abbreviated=False))`
```
|===========================================================================|
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 0 | cudaMalloc retries: 0 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 541418 KB | 545725 KB | 555695 KB | 14276 KB |
| from large pool | 540672 KB | 544431 KB | 544431 KB | 3759 KB |
| from small pool | 746 KB | 1714 KB | 11264 KB | 10517 KB |
|---------------------------------------------------------------------------|
| Active memory | 541418 KB | 545725 KB | 555695 KB | 14276 KB |
| from large pool | 540672 KB | 544431 KB | 544431 KB | 3759 KB |
| from small pool | 746 KB | 1714 KB | 11264 KB | 10517 KB |
|---------------------------------------------------------------------------|
| GPU reserved memory | 598016 KB | 598016 KB | 598016 KB | 0 B |
| from large pool | 595968 KB | 595968 KB | 595968 KB | 0 B |
| from small pool | 2048 KB | 2048 KB | 2048 KB | 0 B |
|---------------------------------------------------------------------------|
| Non-releasable memory | 36117 KB | 52292 KB | 274275 KB | 238158 KB |
| from large pool | 34816 KB | 51537 KB | 261713 KB | 226897 KB |
| from small pool | 1301 KB | 2045 KB | 12562 KB | 11261 KB |
|---------------------------------------------------------------------------|
| Allocations | 198 | 224 | 478 | 280 |
| from large pool | 74 | 75 | 75 | 1 |
| from small pool | 124 | 150 | 403 | 279 |
|---------------------------------------------------------------------------|
| Active allocs | 198 | 224 | 478 | 280 |
| from large pool | 74 | 75 | 75 | 1 |
| from small pool | 124 | 150 | 403 | 279 |
|---------------------------------------------------------------------------|
| GPU reserved segments | 21 | 21 | 21 | 0 |
| from large pool | 20 | 20 | 20 | 0 |
| from small pool | 1 | 1 | 1 | 0 |
|---------------------------------------------------------------------------|
| Non-releasable allocs | 18 | 23 | 166 | 148 |
| from large pool | 17 | 18 | 19 | 2 |
| from small pool | 1 | 6 | 147 | 146 |
|===========================================================================|
```
## Expected results
Efficiently process the datasets and write them to disk.
## Actual results
--------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2390 else:
-> 2391 writer.write(example)
2392 else:
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write(self, example, key, writer_batch_size)
367
--> 368 self.write_examples_on_file()
369
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self)
316 if not isinstance(pa_array[0], pa.lib.FloatScalar):
--> 317 raise OverflowError(
318 "There was an overflow in the {}. Try to reduce writer_batch_size to have batches smaller than 2GB".format(
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
During handling of the above exception, another exception occurred:
OverflowError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_16268/2456940807.py in <module>
3 #tracker = OfflineEmissionsTracker(country_iso_code="FRA", project_name='xxx'+time_stamp,output_dir='./codecarbon')
4 #tracker.start()
----> 5 process_datasets(source_datasets_paths, dataset_dir, LM_tokenizer, LMhead_model, datasets_selection=['wikipedia'], from_scratch=True,
6 clean_sentences=False, negative_sampling=False, translate=False, tokenize=False, generate_embeddings=True, concatenate_embeddings=False,
7 max_sample=10000, padding='do_not_pad', truncation=True, cpu_batch_size=1000, gpu_batch_size=2, cpu_writer_batch_size=1000, gpu_writer_batch_size=2, disable_nullable=True, num_proc=None) #
~\xxx\xxx.py in process_datasets(source_datasets_paths, dataset_dir, LM_tokenizer, LMhead_model, datasets_selection, from_scratch, clean_sentences, translate, negative_sampling, tokenize, generate_embeddings, concatenate_embeddings, max_sample, padding, truncation, cpu_batch_size, gpu_batch_size, cpu_writer_batch_size, gpu_writer_batch_size, disable_nullable, num_proc)
481 for column in tqdm(dataset.column_names, desc=f'Processing column', leave=False):
482 if "xxx_" in column:
--> 483 dataset = dataset.map(lambda example :
484 {"embeddings_"+str(column).replace("translated_",""):function(input_ids=example[column],
485 token_type_ids=example[column.replace("input_ids","token_type_ids")],
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2034
2035 if num_proc is None or num_proc == 1:
-> 2036 return self._map_single(
2037 function=function,
2038 with_indices=with_indices,
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in wrapper(*args, **kwargs)
501 self: "Dataset" = kwargs.pop("self")
502 # apply actual function
--> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
505 for dataset in datasets:
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in wrapper(*args, **kwargs)
468 }
469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
472 # re-apply format to the output
~\anaconda3\envs\xxx\lib\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2425 if update_data:
2426 if writer is not None:
-> 2427 writer.finalize()
2428 if tmp_file is not None:
2429 tmp_file.close()
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in finalize(self, close_stream)
440 # Re-intializing to empty list for next batch
441 self.hkey_record = []
--> 442 self.write_examples_on_file()
443 if self.pa_writer is None:
444 if self._schema is not None:
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self)
315 # This check fails with FloatArrays with nans, which is not what we want, so account for that:
316 if not isinstance(pa_array[0], pa.lib.FloatScalar):
--> 317 raise OverflowError(
318 "There was an overflow in the {}. Try to reduce writer_batch_size to have batches smaller than 2GB".format(
319 type(pa_array)
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.13.3
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
##Next steps
Testing on Linux.
@albertvillanova
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3112/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3111/comments | https://api.github.com/repos/huggingface/datasets/issues/3111/events | https://github.com/huggingface/datasets/issues/3111 | 1,030,598,983 | I_kwDODunzps49bbFH | 3,111 | concatenate_datasets removes ClassLabel typing. | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,666,731,000 | 1,634,827,821,000 | 1,634,827,821,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
When concatenating two datasets along the column axis, the ClassLabel typing of the columns is lost.
I can work on this if it is confirmed to be a legitimate bug.
## Steps to reproduce the bug
```python
import datasets
from datasets import Dataset, ClassLabel, Value, concatenate_datasets
DS_LEN = 100
my_dataset = Dataset.from_dict(
    {
        "sentence": [str(i % 10) for i in range(DS_LEN)],  # "0".."9"; chr(i % 10) produced unprintable control characters
        "label": [i % 2 for i in range(DS_LEN)],
    }
)
my_predictions = Dataset.from_dict(
    {
        "pred": [(i + 1) % 2 for i in range(DS_LEN)],
    }
)
# Cast "label" to a ClassLabel feature so the column carries its class names.
my_dataset = my_dataset.cast(datasets.Features({"sentence": Value("string"), "label": ClassLabel(2, names=["POS", "NEG"])}))
print("Original")
print(my_dataset)
print(my_dataset.features)
concat_ds = concatenate_datasets([my_dataset, my_predictions], axis=1)
print("Concatenated")
print(concat_ds)
print(concat_ds.features)
```
## Expected results
The features of `concat_ds` should retain the ClassLabel typing of the `label` column.
## Actual results
On master, I get:
```
{'sentence': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None), 'pred': Value(dtype='int64', id=None)}
```
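A possible workaround (a hedged sketch, not part of the original report): since `Features` behaves like a dict, the lost typing can be restored by re-casting the concatenated dataset with the union of the source features:
```python
# Hypothetical fix-up, assuming the reproduction code above has already run:
# merge the original feature dicts and cast the concatenated dataset back.
restored = datasets.Features({**my_dataset.features, **my_predictions.features})
concat_ds = concat_ds.cast(restored)
print(concat_ds.features)  # "label" is a ClassLabel again
```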
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3111/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3110/comments | https://api.github.com/repos/huggingface/datasets/issues/3110/events | https://github.com/huggingface/datasets/pull/3110 | 1,030,558,484 | PR_kwDODunzps4tZakS | 3,110 | Stream TAR-based dataset using iter_archive | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,663,784,000 | 1,636,134,529,000 | 1,636,134,528,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3110",
"html_url": "https://github.com/huggingface/datasets/pull/3110",
"diff_url": "https://github.com/huggingface/datasets/pull/3110.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3110.patch",
"merged_at": 1636134528000
} | I converted all the datasets based on TAR archives to use iter_archive instead, so that they can be streamed.
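For illustration, a hedged sketch of the pattern (the URL, feature names, and file filter below are placeholders, not taken from any specific dataset in this diff):
```python
import datasets


class TarBasedDataset(datasets.GeneratorBasedBuilder):
    """Illustrative builder showing the iter_archive pattern."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # The archive is downloaded but never extracted on disk; in streaming
        # mode it is read on the fly.
        archive = dl_manager.download("https://example.com/data.tar.gz")  # placeholder URL
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive lazily yields (path_inside_archive, file_object) pairs.
        for key, (path, f) in enumerate(files):
            if path.endswith(".txt"):
                yield key, {"text": f.read().decode("utf-8")}
```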
This means that around 80 datasets become streamable :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3110/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3109/comments | https://api.github.com/repos/huggingface/datasets/issues/3109/events | https://github.com/huggingface/datasets/pull/3109 | 1,030,543,284 | PR_kwDODunzps4tZXmC | 3,109 | Update BibTeX entry | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,662,771,000 | 1,634,663,608,000 | 1,634,663,607,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3109",
"html_url": "https://github.com/huggingface/datasets/pull/3109",
"diff_url": "https://github.com/huggingface/datasets/pull/3109.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3109.patch",
"merged_at": 1634663607000
} | Update BibTeX entry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3109/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3108/comments | https://api.github.com/repos/huggingface/datasets/issues/3108/events | https://github.com/huggingface/datasets/pull/3108 | 1,030,405,618 | PR_kwDODunzps4tY8ID | 3,108 | Add Google BLEU (aka GLEU) metric | {
"login": "slowwavesleep",
"id": 44175589,
"node_id": "MDQ6VXNlcjQ0MTc1NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/44175589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slowwavesleep",
"html_url": "https://github.com/slowwavesleep",
"followers_url": "https://api.github.com/users/slowwavesleep/followers",
"following_url": "https://api.github.com/users/slowwavesleep/following{/other_user}",
"gists_url": "https://api.github.com/users/slowwavesleep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slowwavesleep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slowwavesleep/subscriptions",
"organizations_url": "https://api.github.com/users/slowwavesleep/orgs",
"repos_url": "https://api.github.com/users/slowwavesleep/repos",
"events_url": "https://api.github.com/users/slowwavesleep/events{/privacy}",
"received_events_url": "https://api.github.com/users/slowwavesleep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,654,918,000 | 1,635,170,824,000 | 1,635,170,824,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3108",
"html_url": "https://github.com/huggingface/datasets/pull/3108",
"diff_url": "https://github.com/huggingface/datasets/pull/3108.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3108.patch",
"merged_at": 1635170824000
} | This PR adds the NLTK implementation of the Google BLEU metric. It is also part of an effort to resolve an unfortunate naming collision between GLEU for machine translation and GLEU for grammatical error correction.
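For context, a hedged usage sketch of the metric once merged (the registered name `google_bleu` and the input format — tokenized predictions plus a list of tokenized references per prediction — are assumed from NLTK's `corpus_gleu` convention):
```python
from datasets import load_metric

# Assumes the metric is registered under the name "google_bleu".
google_bleu = load_metric("google_bleu")
predictions = [["the", "cat", "sat", "on", "the", "mat"]]
references = [[["the", "cat", "is", "on", "the", "mat"]]]
print(google_bleu.compute(predictions=predictions, references=references))
# -> {"google_bleu": <score between 0 and 1>}
```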
I used [this page](https://huggingface.co/docs/datasets/add_metric.html) for reference. Please point me in the right direction if I missed anything. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3108/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3107/comments | https://api.github.com/repos/huggingface/datasets/issues/3107/events | https://github.com/huggingface/datasets/pull/3107 | 1,030,357,527 | PR_kwDODunzps4tYyhF | 3,107 | Add paper BibTeX citation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,652,491,000 | 1,634,653,582,000 | 1,634,653,581,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3107",
"html_url": "https://github.com/huggingface/datasets/pull/3107",
"diff_url": "https://github.com/huggingface/datasets/pull/3107.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3107.patch",
"merged_at": 1634653581000
} | Add paper BibTeX citation to README file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3107/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3106/comments | https://api.github.com/repos/huggingface/datasets/issues/3106/events | https://github.com/huggingface/datasets/pull/3106 | 1,030,112,473 | PR_kwDODunzps4tYA6i | 3,106 | Fix URLs in blog_authorship_corpus dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,637,965,000 | 1,634,647,840,000 | 1,634,647,839,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3106",
"html_url": "https://github.com/huggingface/datasets/pull/3106",
"diff_url": "https://github.com/huggingface/datasets/pull/3106.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3106.patch",
"merged_at": 1634647839000
} | After we contacted the authors of the paper "Effects of Age and Gender on Blogging", they confirmed:
- the old URLs are no longer valid
- there are alternative host URLs
Fix #3091. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3106/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3105/comments | https://api.github.com/repos/huggingface/datasets/issues/3105/events | https://github.com/huggingface/datasets/issues/3105 | 1,029,098,843 | I_kwDODunzps49Vs1b | 3,105 | download_mode=`force_redownload` does not work on removed datasets | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,562,758,000 | 1,634,895,370,000 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
If a cached dataset is removed from the library, I don't see how to invalidate it programmatically. I thought that `download_mode='force_redownload'` would try to refresh the cache and then raise an exception, but it reuses the cache instead.
## Steps to reproduce the bug
_requires `wit` to already be in the cache_: see https://github.com/huggingface/datasets/pull/2981
```python
import datasets as ds
dataset = ds.load_dataset("wit", split="train", download_mode='force_redownload')
```
## Expected results
It should raise an exception, since the dataset does not exist anymore.
## Actual results
It uses the cached result instead:
```
Using the latest cached version of the module from /home/slesage/.cache/huggingface/modules/datasets_modules/datasets/wit/107afbffd48e058b19101bddc47fbee25fa68eb6d50a733e262875f1285a5171 (last modified on Wed Sep 29 08:21:10 2021) since it couldn't be found locally at wit, or remotely on the Hugging Face Hub.
```
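A hypothetical manual workaround (not an official `datasets` API): delete the cached module directory that the log above points to, so that there is nothing left to fall back on:
```python
import shutil
from pathlib import Path

# Hypothetical cleanup, not an official API: remove the cached loading
# script so load_dataset cannot silently reuse it.
cached_module = Path.home() / ".cache" / "huggingface" / "modules" / "datasets_modules" / "datasets" / "wit"
shutil.rmtree(cached_module, ignore_errors=True)
```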
## Environment info
- `datasets` version: 1.13.4.dev0
- Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3105/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3104/comments | https://api.github.com/repos/huggingface/datasets/issues/3104/events | https://github.com/huggingface/datasets/issues/3104 | 1,029,080,412 | I_kwDODunzps49VoVc | 3,104 | Missing Zenodo 1.13.3 release | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,561,838,000 | 1,634,908,945,000 | 1,634,908,944,000 | MEMBER | null | null | null | After the `datasets` 1.13.3 release, it does not appear in the Zenodo releases: https://zenodo.org/record/5570305
TODO:
- [x] Contact Zenodo support
- [x] Check it is fixed | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3104/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3103/comments | https://api.github.com/repos/huggingface/datasets/issues/3103/events | https://github.com/huggingface/datasets/pull/3103 | 1,029,069,310 | PR_kwDODunzps4tUzJQ | 3,103 | Fix project description in PyPI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,561,249,000 | 1,634,561,997,000 | 1,634,561,996,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3103",
"html_url": "https://github.com/huggingface/datasets/pull/3103",
"diff_url": "https://github.com/huggingface/datasets/pull/3103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3103.patch",
"merged_at": 1634561996000
} | Fix the project description appearing on PyPI so that it contains the content of the README.md file (as transformers does).
Currently, the `datasets` project description on PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/
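The usual fix (a hedged sketch mirroring the `transformers` approach; the actual diff in this PR may differ) is to read the README into `long_description` in `setup.py`:
```python
# Sketch of the standard setuptools pattern, not necessarily the exact diff.
from setuptools import setup

with open("README.md", encoding="utf-8") as f:
    long_description = f.read()

setup(
    name="datasets",
    long_description=long_description,
    long_description_content_type="text/markdown",
    # ... remaining arguments unchanged ...
)
```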
Fix #3102. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3103/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3102/comments | https://api.github.com/repos/huggingface/datasets/issues/3102/events | https://github.com/huggingface/datasets/issues/3102 | 1,029,067,062 | I_kwDODunzps49VlE2 | 3,102 | Unsuitable project description in PyPI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] | 1,634,561,100,000 | 1,634,561,996,000 | 1,634,561,996,000 | MEMBER | null | null | null | Currently, the `datasets` project description on PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3102/timeline | null | false |