url (string, len 58-61) | repository_url (string, 1 class) | labels_url (string, len 72-75) | comments_url (string, len 67-70) | events_url (string, len 65-68) | html_url (string, len 46-51) | id (int64, 599M-1.2B) | node_id (string, len 18-32) | number (int64, 1-4.12k) | title (string, len 1-276) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64, 1,587B-1,649B) | updated_at (int64, 1,587B-1,649B) | closed_at (int64, 1,587B-1,649B, nullable ⌀) | author_association (string, 3 classes) | active_lock_reason (null) | body (string, len 0-228k, nullable ⌀) | reactions (dict) | timeline_url (string, len 67-70) | performed_via_github_app (null) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4122/comments | https://api.github.com/repos/huggingface/datasets/issues/4122/events | https://github.com/huggingface/datasets/issues/4122 | 1,196,095,072 | I_kwDODunzps5HSvZg | 4,122 | medical_dialog zh has very slow _generate_examples | {
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,649,340,051,000 | 1,649,340,051,000 | null | NONE | null | ## Describe the bug
After downloading the files from Google Drive, `load_dataset("medical_dialog", "zh", data_dir="./")` takes an unreasonable amount of time. Generating the train/test split for 33% of the dataset takes over 4.5 hours.
## Steps to reproduce the bug
The easiest way I've found to download the files from Google Drive is to use `gdown` from Google Colab: since both are in Google Cloud, download speeds are very high.
```python
file_ids = [
"1AnKxGEuzjeQsDHHqL3NqI_aplq2hVL_E",
"1tt7weAT1SZknzRFyLXOT2fizceUUVRXX",
"1A64VBbsQ_z8wZ2LDox586JIyyO6mIwWc",
"1AKntx-ECnrxjB07B6BlVZcFRS4YPTB-J",
"1xUk8AAua_x27bHUr-vNoAuhEAjTxOvsu",
"1ezKTfe7BgqVN5o-8Vdtr9iAF0IueCSjP",
"1tA7bSOxR1RRNqZst8cShzhuNHnayUf7c",
"1pA3bCFA5nZDhsQutqsJcH3d712giFb0S",
"1pTLFMdN1A3ro-KYghk4w4sMz6aGaMOdU",
"1dUSnG0nUPq9TEQyHd6ZWvaxO0OpxVjXD",
"1UfCH05nuWiIPbDZxQzHHGAHyMh8dmPQH",
]
for i in file_ids:
url = f"https://drive.google.com/uc?id={i}"
!gdown $url
from datasets import load_dataset
ds = load_dataset("medical_dialog", "zh", data_dir="./")
```
## Expected results
Faster load time
## Actual results
`Generating train split: 33%: 625519/1921127 [4:31:03<31:39:20, 11.37 examples/s]`
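(Editor's note, an illustrative sketch rather than part of the original report: before rewriting `_generate_examples`, profiling the split generation can confirm where the time actually goes.)
```python
import cProfile
import pstats

from datasets import load_dataset

# Sketch: profile split generation, ideally on a truncated copy of the
# data files, and sort by cumulative time to find the script's hot spots.
with cProfile.Profile() as profiler:
    load_dataset("medical_dialog", "zh", data_dir="./")
pstats.Stats(profiler).sort_stats("cumtime").print_stats(30)
```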
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
@vrindaprabhu, could you take a look at this since you implemented it? I think the `_generate_examples` function might need to be rewritten. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4122/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4121/comments | https://api.github.com/repos/huggingface/datasets/issues/4121/events | https://github.com/huggingface/datasets/issues/4121 | 1,196,000,018 | I_kwDODunzps5HSYMS | 4,121 | datasets.load_metric cannot load a local metric | {
"login": "Gare-Ng",
"id": 51749469,
"node_id": "MDQ6VXNlcjUxNzQ5NDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/51749469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gare-Ng",
"html_url": "https://github.com/Gare-Ng",
"followers_url": "https://api.github.com/users/Gare-Ng/followers",
"following_url": "https://api.github.com/users/Gare-Ng/following{/other_user}",
"gists_url": "https://api.github.com/users/Gare-Ng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gare-Ng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gare-Ng/subscriptions",
"organizations_url": "https://api.github.com/users/Gare-Ng/orgs",
"repos_url": "https://api.github.com/users/Gare-Ng/repos",
"events_url": "https://api.github.com/users/Gare-Ng/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gare-Ng/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,649,335,736,000 | 1,649,339,607,000 | 1,649,339,607,000 | NONE | null | ## Describe the bug
No matter how hard I try to tell `load_metric` that I want to load a local metric file, it keeps fetching things from the Internet and fails with `ConnectionError: Couldn't reach`. I can download the referenced file manually without any connection error and point `load_metric` at its local directory, but then it is back where it began...
## Steps to reproduce the bug
```python
metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
metric = load_metric(path='bleu')
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.12.1/metrics/bleu/bleu.py
metric = load_metric(path='./blue/bleu.py')
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
```
## Expected results
I did read the docs [here](https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.load_metric). There is no parameter other than `path` to help the function distinguish between local and online files, so with the code above it should load from the local path.
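(Editor's note, an unverified sketch: `load_metric` also accepts a `download_config` argument, visible in the traceback below, and even a local script can trigger downloads if it declares external imports, as the BLEU script does with the tensorflow/nmt helper. Forcing cache-only resolution makes that distinction explicit; the import location of `DownloadConfig` may differ across versions.)
```python
from datasets import load_metric
from datasets.utils.file_utils import DownloadConfig  # location is version-dependent

# Sketch: resolve everything from the local cache only. Any remaining
# ConnectionError then points at an external import declared inside the
# metric script, not at the script path itself.
metric = load_metric(
    r"C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py",
    download_config=DownloadConfig(local_files_only=True),
)
```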
## Actual results
> metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
> ~\AppData\Local\Temp\ipykernel_19636\1855752034.py in <module>
----> 1 metric = load_metric(path=r'C:\Users\Gare\PycharmProjects\Gare\blue\bleu.py')
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
817 if data_files is None and data_dir is not None:
818 data_files = os.path.join(data_dir, "**")
--> 819
820 self.name = name
821 self.revision = revision
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
639 self,
640 path: str,
--> 641 download_config: Optional[DownloadConfig] = None,
642 download_mode: Optional[DownloadMode] = None,
643 dynamic_modules_path: Optional[str] = None,
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
297 token = hf_api.HfFolder.get_token()
298 if token:
--> 299 headers["authorization"] = f"Bearer {token}"
300 return headers
301
D:\Program Files\Anaconda\envs\Gare\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
604 def _resumable_file_manager():
605 with open(incomplete_path, "a+b") as f:
--> 606 yield f
607
608 temp_file_manager = _resumable_file_manager
ConnectionError: Couldn't reach https://github.com/tensorflow/nmt/raw/master/nmt/scripts/bleu.py
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.7.13
- PyArrow version: 7.0.0
- Pandas version: 1.3.4
Any advice would be appreciated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4121/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4120/comments | https://api.github.com/repos/huggingface/datasets/issues/4120/events | https://github.com/huggingface/datasets/issues/4120 | 1,195,887,430 | I_kwDODunzps5HR8tG | 4,120 | Representing dictionaries (json) objects as features | {
"login": "yanaiela",
"id": 8031035,
"node_id": "MDQ6VXNlcjgwMzEwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8031035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanaiela",
"html_url": "https://github.com/yanaiela",
"followers_url": "https://api.github.com/users/yanaiela/followers",
"following_url": "https://api.github.com/users/yanaiela/following{/other_user}",
"gists_url": "https://api.github.com/users/yanaiela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yanaiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanaiela/subscriptions",
"organizations_url": "https://api.github.com/users/yanaiela/orgs",
"repos_url": "https://api.github.com/users/yanaiela/repos",
"events_url": "https://api.github.com/users/yanaiela/events{/privacy}",
"received_events_url": "https://api.github.com/users/yanaiela/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,649,329,661,000 | 1,649,329,661,000 | null | NONE | null | In the process of adding a new dataset to the hub, I stumbled upon the inability to represent dictionaries that contain different key names, unknown in advance (and may differ between samples), original asked in the [forum](https://discuss.huggingface.co/t/representing-nested-dictionary-with-different-keys/16442).
For instance:
```
sample1 = {"nps": {
"a": {"id": 0, "text": "text1"},
"b": {"id": 1, "text": "text2"},
}}
sample2 = {"nps": {
"a": {"id": 0, "text": "text1"},
"b": {"id": 1, "text": "text2"},
"c": {"id": 2, "text": "text3"},
}}
sample3 = {"nps": {
"a": {"id": 0, "text": "text1"},
"b": {"id": 1, "text": "text2"},
"c": {"id": 2, "text": "text3"},
"d": {"id": 3, "text": "text4"},
}}
```
the `nps` field cannot be represented as a Feature while maintaining its original structure.
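(Editor's sketch, not part of the original report: the usual restructuring folds the keys into the values, giving a fixed schema that a `Sequence` feature can declare.)
```python
from datasets import Features, Sequence, Value

# Sketch: flatten the variable-key mapping into a list of records with an
# explicit "key" field.
features = Features({
    "nps": Sequence({
        "key": Value("string"),
        "id": Value("int32"),
        "text": Value("string"),
    })
})

sample2 = {"nps": [
    {"key": "a", "id": 0, "text": "text1"},
    {"key": "b", "id": 1, "text": "text2"},
    {"key": "c", "id": 2, "text": "text3"},
]}
```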
@lhoestq suggested adding JSON as a new feature type, which would solve this problem.
The alternative, as in the sketch above, would be to change the original data format, which isn't optimal in my case. Moreover, JSON is a common structure that is likely to be useful in future datasets as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4120/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4119/comments | https://api.github.com/repos/huggingface/datasets/issues/4119/events | https://github.com/huggingface/datasets/pull/4119 | 1,195,641,298 | PR_kwDODunzps41yXHF | 4,119 | Hotfix failing CI tests on Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,317,126,000 | 1,649,324,844,000 | 1,649,318,233,000 | MEMBER | null | This PR applies a hotfix to our CI Windows tests: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
Fix #4118
I guess this issue is related to this PR:
- huggingface/huggingface_hub#815 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4119/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4119",
"html_url": "https://github.com/huggingface/datasets/pull/4119",
"diff_url": "https://github.com/huggingface/datasets/pull/4119.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4119.patch",
"merged_at": 1649318233000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4118/comments | https://api.github.com/repos/huggingface/datasets/issues/4118/events | https://github.com/huggingface/datasets/issues/4118 | 1,195,638,944 | I_kwDODunzps5HRACg | 4,118 | Failing CI tests on Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,649,316,985,000 | 1,649,318,233,000 | 1,649,318,233,000 | MEMBER | null | ## Describe the bug
Our CI Windows tests have been failing since yesterday: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4118/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4117/comments | https://api.github.com/repos/huggingface/datasets/issues/4117/events | https://github.com/huggingface/datasets/issues/4117 | 1,195,552,406 | I_kwDODunzps5HQq6W | 4,117 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api' | {
"login": "arymbe",
"id": 4567991,
"node_id": "MDQ6VXNlcjQ1Njc5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4567991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arymbe",
"html_url": "https://github.com/arymbe",
"followers_url": "https://api.github.com/users/arymbe/followers",
"following_url": "https://api.github.com/users/arymbe/following{/other_user}",
"gists_url": "https://api.github.com/users/arymbe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arymbe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arymbe/subscriptions",
"organizations_url": "https://api.github.com/users/arymbe/orgs",
"repos_url": "https://api.github.com/users/arymbe/repos",
"events_url": "https://api.github.com/users/arymbe/events{/privacy}",
"received_events_url": "https://api.github.com/users/arymbe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @arymbe, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your problem.\r\n\r\nCould you please write the complete stack trace? That way we will be able to see which package originates the exception.",
"Hello, thank you for your fast replied. this is the complete error that I got\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\nInput In [27], in <module>\r\n----> 1 from datasets import load_dataset\r\n\r\nvenv/lib/python3.8/site-packages/datasets/__init__.py:39, in <module>\r\n 37 from .arrow_dataset import Dataset, concatenate_datasets\r\n 38 from .arrow_reader import ReadInstruction\r\n---> 39 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n 40 from .combine import interleave_datasets\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n\r\nvenv/lib/python3.8/site-packages/datasets/builder.py:40, in <module>\r\n 32 from .arrow_reader import (\r\n 33 HF_GCP_BASE_URL,\r\n 34 ArrowReader,\r\n (...)\r\n 37 ReadInstruction,\r\n 38 )\r\n 39 from .arrow_writer import ArrowWriter, BeamWriter\r\n---> 40 from .data_files import DataFilesDict, sanitize_patterns\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n 42 from .features import Features\r\n\r\nvenv/lib/python3.8/site-packages/datasets/data_files.py:297, in <module>\r\n 292 except FileNotFoundError:\r\n 293 raise FileNotFoundError(f\"The directory at {base_path} doesn't contain any data file\") from None\r\n 296 def _resolve_single_pattern_in_dataset_repository(\r\n--> 297 dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n 298 pattern: str,\r\n 299 allowed_extensions: Optional[list] = None,\r\n 300 ) -> List[PurePath]:\r\n 301 data_files_ignore = FILES_TO_IGNORE\r\n 302 fs = HfFileSystem(repo_info=dataset_info)\r\n\r\nAttributeError: module 'huggingface_hub' has no attribute 'hf_api'",
"This is weird... It is long ago that the package `huggingface_hub` has a submodule called `hf_api`.\r\n\r\nMaybe you have a problem with your installed `huggingface_hub`...\r\n\r\nCould you please try to update it?\r\n```shell\r\npip install -U huggingface_hub\r\n```",
"Yap, I've updated several times. Then, I've tried numeral combination of datasets and huggingface_hub versions. However, I think your point is right that there is a problem with my huggingface_hub installation. I'll try another way to find the solution. I'll update it later when I get the solution. Thank you :)",
"I'm sorry I can't reproduce your problem.\r\n\r\nMaybe you could try to create a new Python virtual environment and install all dependencies there from scratch. You can use either:\r\n- Python venv: https://docs.python.org/3/library/venv.html\r\n- or conda venv (if you are using conda): https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html"
] | 1,649,310,756,000 | 1,649,324,263,000 | null | NONE | null | ## Describe the bug
Could you help me, please? I got the following error:
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
The error occurs when I import `datasets`:
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metric
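(Editor's note, a minimal diagnostic sketch: the maintainer comments above point at a broken `huggingface_hub` install, and printing the installed version and file location narrows that down.)
```python
import huggingface_hub

# Sketch: datasets 2.0.0 references huggingface_hub.hf_api at import time,
# so a stale or shadowed hub install surfaces exactly this AttributeError.
print(huggingface_hub.__version__)
print(huggingface_hub.__file__)  # make sure a local file isn't shadowing the package
```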
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.3-x86_64-i386-64bit
- Python version: 3.8.9
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Huggingface-hub: 0.5.0
- Transformers: 4.18.0
Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4117/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4116/comments | https://api.github.com/repos/huggingface/datasets/issues/4116/events | https://github.com/huggingface/datasets/pull/4116 | 1,194,926,459 | PR_kwDODunzps41wCEO | 4,116 | Pretty print dataset info files | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"maybe just do it from now on no? (i.e. not for existing `dataset_infos.json` files)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4116). All of your documentation changes will be reflected on that endpoint.",
"> maybe just do it from now on no? (i.e. not for existing dataset_infos.json files)\r\n\r\nYes, or do this only for datasets created with `push_to_hub` to (always) keep the GH datasets small? \r\n",
"yep sounds good too on my side! ",
"I reverted the change to avoid the size increase and added the `pretty_print` flag, which pretty-prints the JSON, and that flag is only True for datasets created with `push_to_hub`. "
] | 1,649,266,848,000 | 1,649,341,071,000 | null | CONTRIBUTOR | null | Adds indentation to the `dataset_infos.json` file when saving for nicer diffs.
(suggested by @julien-c)
This PR also updates the info files of the GH datasets. Note that this change adds more than **10 MB** to the repo size (the total file size before the change: 29.672298 MB, after: 41.666475 MB), so I'm not sure this change is a good idea.
`src/datasets/info.py` is the only relevant file for reviewers.
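(Editor's illustration with hypothetical values: the change boils down to serializing with an indent, trading file size for line-based, diff-friendly output.)
```python
import json

# Sketch: indentation puts each field on its own line, which is what makes
# diffs readable; the compact form crams everything onto a single line.
info = {"splits": {"train": {"num_examples": 100, "num_bytes": 2048}}}
with open("dataset_infos.json", "w", encoding="utf-8") as f:
    json.dump(info, f, indent=4)
```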
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4116/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4116",
"html_url": "https://github.com/huggingface/datasets/pull/4116",
"diff_url": "https://github.com/huggingface/datasets/pull/4116.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4116.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4115/comments | https://api.github.com/repos/huggingface/datasets/issues/4115/events | https://github.com/huggingface/datasets/issues/4115 | 1,194,907,555 | I_kwDODunzps5HONej | 4,115 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints' | {
"login": "cceyda",
"id": 15624271,
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cceyda",
"html_url": "https://github.com/cceyda",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"repos_url": "https://api.github.com/users/cceyda/repos",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Maybe it would be nice to ignore private dirs like this one (ones starting with `.`) by default. \r\n\r\nCC @mariosasko "
] | 1,649,266,183,000 | 1,649,266,300,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from JupyterLab, so an '.ipynb_checkpoints' folder appears where my dataset lives and (I just realized) leads to accidental duplicate image additions. I think this is an easy thing to miss, especially if the dataset is very large.
**Describe the solution you'd like**
Maybe have an `ignore` option, or something .gitignore-style:
`dataset = load_dataset("imagefolder", data_dir="./data/original", ignore="regex?")`
**Describe alternatives you've considered**
The files could be filtered out manually, as in the sketch below.
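(Editor's sketch with hypothetical paths: collect the files yourself, skip anything under a hidden directory, and pass the explicit list to the loader.)
```python
from pathlib import Path

from datasets import load_dataset

# Sketch: gather image files while skipping hidden directories such as
# .ipynb_checkpoints, then load from the explicit file list.
files = [
    str(p)
    for p in Path("./data/original").rglob("*")
    if p.is_file() and not any(part.startswith(".") for part in p.parts)
]
dataset = load_dataset("imagefolder", data_files=files)
```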
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4115/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4114/comments | https://api.github.com/repos/huggingface/datasets/issues/4114/events | https://github.com/huggingface/datasets/issues/4114 | 1,194,855,345 | I_kwDODunzps5HOAux | 4,114 | Allow downloading just some columns of a dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"In the general case you can’t always reduce the quantity of data to download, since you can’t parse CSV or JSON data without downloading the whole files right ? ^^ However we could explore this case-by-case I guess",
"Actually for csv pandas has `usecols` which allows loading a subset of columns in a more efficient way afaik, but yes, you're right this might be more complex than I thought."
] | 1,649,263,126,000 | 1,649,318,186,000 | null | MEMBER | null | **Is your feature request related to a problem? Please describe.**
Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always make sense for this kind of use case.
**Describe the solution you'd like**
Being able to download just some columns of a dataset, for example:
```python
load_dataset("huggan/wikiart",columns=["artist", "genre"])
```
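(Editor's sketch of a present-day workaround for CSV-backed data, with a hypothetical file name: as one of the comments above notes, pandas can restrict which columns it parses via `usecols`.)
```python
import pandas as pd

from datasets import Dataset

# Sketch: read only the columns of interest from a CSV dump and wrap the
# frame as a Dataset; the image files themselves are never downloaded.
df = pd.read_csv("wikiart_metadata.csv", usecols=["artist", "genre"])
ds = Dataset.from_pandas(df)
```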
Such an option might, however, make local caching of datasets a bit more complicated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4114/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4113/comments | https://api.github.com/repos/huggingface/datasets/issues/4113/events | https://github.com/huggingface/datasets/issues/4113 | 1,194,843,532 | I_kwDODunzps5HN92M | 4,113 | Multiprocessing with FileLock fails in python 3.9 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,649,262,429,000 | 1,649,262,429,000 | null | MEMBER | null | On Python 3.9, this code hangs:
```python
from multiprocessing import Pool
from filelock import FileLock

def run(i):
    # On Python 3.9 this print never runs: the workers hang (see below).
    print(f"got the lock in multi process [{i}]")

with FileLock("tmp.lock"):  # the parent holds the lock while the pool runs
    with Pool(2) as pool:
        pool.map(run, range(2))
```
This happens because the subprocesses try, for some reason, to acquire the lock held by the main process; older versions of Python don't behave this way.
This can cause many issues on Python 3.9. In particular, we use multiprocessing to fetch data files when you load a dataset (as long as there are >16 data files). Therefore `imagefolder` hangs, and I expect any dataset that needs to download >16 files to hang as well.
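(Editor's sketch of a possible mitigation, unverified: since the subprocesses block on the lock held by the parent, doing the locked work first and releasing before the pool is created sidesteps the hang.)
```python
from multiprocessing import Pool
from filelock import FileLock

def run(i):
    print(f"got the lock in multi process [{i}]")

# Hold the lock only for the critical section, then release it before
# spawning workers, so the children never see a held lock.
with FileLock("tmp.lock"):
    pass  # critical section goes here

with Pool(2) as pool:
    pool.map(run, range(2))
```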
Let's see if we can fix this and have a CI that runs on 3.9.
cc @mariosasko @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4113/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4112/comments | https://api.github.com/repos/huggingface/datasets/issues/4112/events | https://github.com/huggingface/datasets/issues/4112 | 1,194,752,765 | I_kwDODunzps5HNnr9 | 4,112 | ImageFolder with Grayscale images dataset | {
"login": "ChainYo",
"id": 50595514,
"node_id": "MDQ6VXNlcjUwNTk1NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/50595514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChainYo",
"html_url": "https://github.com/ChainYo",
"followers_url": "https://api.github.com/users/ChainYo/followers",
"following_url": "https://api.github.com/users/ChainYo/following{/other_user}",
"gists_url": "https://api.github.com/users/ChainYo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChainYo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChainYo/subscriptions",
"organizations_url": "https://api.github.com/users/ChainYo/orgs",
"repos_url": "https://api.github.com/users/ChainYo/repos",
"events_url": "https://api.github.com/users/ChainYo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChainYo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! Replacing:\r\n```python\r\ntransformed_dataset = dataset.with_transform(transforms)\r\ntransformed_dataset.set_format(type=\"torch\", device=\"cuda\")\r\n```\r\n\r\nwith:\r\n```python\r\ndef transform_func(examples):\r\n examples[\"image\"] = [transforms(img).to(\"cuda\") for img in examples[\"image\"]]\r\n return examples\r\n\r\ntransformed_dataset = dataset.with_transform(transform_func)\r\n```\r\nshould fix the issue. `datasets` doesn't support chaining of transforms (you can think of `set_format`/`with_format` as a predefined transform func for `set_transform`/`with_transforms`), so the last transform (in your case, `set_format`) takes precedence over the previous ones (in your case `with_format`). And the PyTorch formatter is not supported by the Image feature, hence the error (adding support for that is on our short-term roadmap).",
"Ok thanks a lot for the code snippet!\r\n\r\nI love the way `datasets` is easy to use but it made it really long to pre-process all the images (400.000 in my case) before training anything. `ImageFolder` from pytorch is faster in my case but force to have the images on local.\r\n\r\nI don't know how to speed up the process without switching to `ImageFolder` :smile: ",
"You can pass `ignore_verifications=True` in `load_dataset` to skip checksum verification, which takes a lot of time if the number of files is large. We will consider making this the default behavior."
] | 1,649,257,800,000 | 1,649,332,246,000 | null | NONE | null | Hi, I'm facing a problem with a grayscale image dataset I have uploaded [here](https://huggingface.co/datasets/ChainYo/rvl-cdip) (RVL-CDIP).
I get an error when I try to use the images to train a model with a PyTorch DataLoader. Here is the full traceback:
```bash
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1765, in __getitem__
return self._getitem(
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1750, in _getitem
formatted_output = format_table(
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 532, in format_table
return formatter(pa_table, query_type=query_type)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 281, in __call__
return self.format_row(pa_table)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 58, in format_row
return self.recursive_tensorize(row)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 54, in recursive_tensorize
return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 314, in map_nested
mapped = [
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 315, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in _single_map_nested
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 267, in <dictcomp>
return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 251, in _single_map_nested
return function(data_struct)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 51, in _recursive_tensorize
return self._tensorize(data_struct)
File "/home/chainyo/miniconda3/envs/gan-bird/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py", line 38, in _tensorize
if np.issubdtype(value.dtype, np.integer):
AttributeError: 'bytes' object has no attribute 'dtype'
```
I don't really understand why the image is still a bytes object even though I applied transformations to it. Here is the code I used to upload the dataset (which worked well):
```python
train_dataset = load_dataset("imagefolder", data_dir="data/train")
train_dataset = train_dataset["train"]
test_dataset = load_dataset("imagefolder", data_dir="data/test")
test_dataset = test_dataset["train"]
val_dataset = load_dataset("imagefolder", data_dir="data/val")
val_dataset = val_dataset["train"]
dataset = DatasetDict({
"train": train_dataset,
"val": val_dataset,
"test": test_dataset
})
dataset.push_to_hub("ChainYo/rvl-cdip")
```
Now here is the code I am using to get the dataset and prepare it for training:
```python
img_size = 512
batch_size = 128
normalize = [(0.5), (0.5)]
data_dir = "ChainYo/rvl-cdip"
dataset = load_dataset(data_dir, split="train")
# Renamed from `transforms` so the torchvision module isn't shadowed
transform = transforms.Compose([
    transforms.Resize(img_size),
    transforms.CenterCrop(img_size),
    transforms.ToTensor(),
    transforms.Normalize(*normalize)
])
transformed_dataset = dataset.with_transform(transform)
transformed_dataset.set_format(type="torch", device="cuda")
train_dataloader = torch.utils.data.DataLoader(
transformed_dataset, batch_size=batch_size, shuffle=True, num_workers=4, pin_memory=True
)
```
But this gives me the error above, and I don't understand why.
Do I need to map something over the dataset? Something like this:
```python
labels = dataset.features["label"].names
num_labels = dataset.features["label"].num_classes
def preprocess_data(examples):
images = [ex.convert("RGB") for ex in examples["image"]]
labels = [ex for ex in examples["label"]]
return {"images": images, "labels": labels}
features = Features({
"images": Image(decode=True, id=None),
"labels": ClassLabel(num_classes=num_labels, names=labels)
})
decoded_dataset = dataset.map(preprocess_data, remove_columns=dataset.column_names, features=features, batched=True, batch_size=100)
```
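(Editor's note: the maintainer reply quoted in the comments field above resolves the error by dropping the `set_format` call and applying the torchvision pipeline inside a batch-level transform; a sketch using the renamed `transform` from the code above:)
```python
def transform_func(examples):
    # `datasets` hands this function a batch of decoded PIL images, so no
    # separate set_format call is needed afterwards.
    examples["image"] = [transform(img).to("cuda") for img in examples["image"]]
    return examples

transformed_dataset = dataset.with_transform(transform_func)
```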
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4112/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4111/comments | https://api.github.com/repos/huggingface/datasets/issues/4111/events | https://github.com/huggingface/datasets/pull/4111 | 1,194,660,699 | PR_kwDODunzps41vJCt | 4,111 | Update security policy | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,253,591,000 | 1,649,324,790,000 | 1,649,324,427,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4111/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4111",
"html_url": "https://github.com/huggingface/datasets/pull/4111",
"diff_url": "https://github.com/huggingface/datasets/pull/4111.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4111.patch",
"merged_at": 1649324427000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4110/comments | https://api.github.com/repos/huggingface/datasets/issues/4110/events | https://github.com/huggingface/datasets/pull/4110 | 1,194,581,375 | PR_kwDODunzps41u4Je | 4,110 | Matthews Correlation Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4110). All of your documentation changes will be reflected on that endpoint."
] | 1,649,249,975,000 | 1,649,276,083,000 | null | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4110/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4110",
"html_url": "https://github.com/huggingface/datasets/pull/4110",
"diff_url": "https://github.com/huggingface/datasets/pull/4110.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4110.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4109/comments | https://api.github.com/repos/huggingface/datasets/issues/4109/events | https://github.com/huggingface/datasets/pull/4109 | 1,194,579,257 | PR_kwDODunzps41u3sm | 4,109 | Add Spearmanr Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4109). All of your documentation changes will be reflected on that endpoint."
] | 1,649,249,873,000 | 1,649,250,517,000 | null | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4109/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4109",
"html_url": "https://github.com/huggingface/datasets/pull/4109",
"diff_url": "https://github.com/huggingface/datasets/pull/4109.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4109.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4108/comments | https://api.github.com/repos/huggingface/datasets/issues/4108/events | https://github.com/huggingface/datasets/pull/4108 | 1,194,578,584 | PR_kwDODunzps41u3j2 | 4,108 | Perplexity Speedup | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"WRT the high values, can you add some unit tests with some [string, model] pairs and their resulting perplexity code, and @TristanThrush can run the same pairs through his version of the code?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4108). All of your documentation changes will be reflected on that endpoint.",
"I thought that the perplexity metric should output the average perplexity value of all the strings that it gets as input (not a perplexity value per string, as the new version does).\r\n@lhoestq , @TristanThrush thoughts?",
"> I thought that the perplexity metric should output the average perplexity value of all the strings that it gets as input (not a perplexity value per string, as the new version does). @lhoestq , @TristanThrush thoughts?\r\n\r\nI support this change from Emi. If we have a perplexity function that loads GPT2 and then returns an average over all of the strings, then it is impossible to get multiple perplexities of a batch of strings efficiently. If we have this new perplexity function that is built for batching, then it is possible to get a batch of perplexities efficiently and you can still compute the average efficiently afterwards.",
"Thanks a lot for working on this @emibaylor @TristanThrush :)\r\n\r\nFor consistency with the other metrics, I think it's nice if we return the mean perplexity. Though I agree that having the separate perplexities per sample can also be useful. What do you think about returning both ?\r\n```python\r\nreturn {\"perplexities\": ppls, \"mean_perplexity\": np.mean(ppls)}\r\n```\r\nwe're also doing this for the COMET metric."
] | 1,649,249,841,000 | 1,649,343,709,000 | null | CONTRIBUTOR | null | This PR makes necessary changes to perplexity such that:
- it runs much faster (via batching; see the sketch at the end of this description)
- it throws an error when the input is empty, or when the input is a single word without a <BOS> token
- it adds the option to prepend a <BOS> token
Issues:
- The values returned are extremely high, and I'm worried they aren't correct. Even if they are correct, they are sometimes returned as `inf`, which is not very useful (see [comment below](https://github.com/huggingface/datasets/pull/4108#discussion_r843931094) for some of the output values).
- If the values are not correct, can you help me find the error?
- If the values are correct, it might be worth it to measure something like perplexity per word, which would allow us to get actual values for the larger perplexities, instead of just `inf`
Future:
- `stride` is not currently implemented here. I have some thoughts on how to make it happen with batching, but I think it would be better to get another set of eyes to look at any possible errors causing such large values now rather than later.
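
For reference, per-string perplexity with batching can be sketched roughly like this. It is illustrative only, not this PR's exact code: the GPT-2 checkpoint, the padding strategy, and the batch size are assumptions.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def batched_perplexities(texts, batch_size=8):
    ppls = []
    for i in range(0, len(texts), batch_size):
        enc = tokenizer(texts[i : i + batch_size], return_tensors="pt", padding=True)
        input_ids, mask = enc["input_ids"], enc["attention_mask"]
        with torch.no_grad():
            logits = model(input_ids, attention_mask=mask).logits
        # Each position predicts the next token: drop the last logit, shift the labels.
        loss = torch.nn.functional.cross_entropy(
            logits[:, :-1, :].transpose(1, 2), input_ids[:, 1:], reduction="none"
        )
        shift_mask = mask[:, 1:]
        # Mean negative log-likelihood over real (non-padding) tokens, then exponentiate.
        nll = (loss * shift_mask).sum(dim=1) / shift_mask.sum(dim=1)
        ppls += torch.exp(nll).tolist()
    return ppls
```
 | {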
"url": "https://api.github.com/repos/huggingface/datasets/issues/4108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4108/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4108",
"html_url": "https://github.com/huggingface/datasets/pull/4108",
"diff_url": "https://github.com/huggingface/datasets/pull/4108.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4108.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4107/comments | https://api.github.com/repos/huggingface/datasets/issues/4107/events | https://github.com/huggingface/datasets/issues/4107 | 1,194,484,885 | I_kwDODunzps5HMmSV | 4,107 | Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows | {
"login": "Pavithree",
"id": 23344465,
"node_id": "MDQ6VXNlcjIzMzQ0NDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pavithree",
"html_url": "https://github.com/Pavithree",
"followers_url": "https://api.github.com/users/Pavithree/followers",
"following_url": "https://api.github.com/users/Pavithree/following{/other_user}",
"gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions",
"organizations_url": "https://api.github.com/users/Pavithree/orgs",
"repos_url": "https://api.github.com/users/Pavithree/repos",
"events_url": "https://api.github.com/users/Pavithree/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pavithree/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting. I'm looking at it",
" It's not related to the dataset viewer in itself. I can replicate the error with:\r\n\r\n```\r\n>>> import datasets as ds\r\n>>> d = ds.load_dataset('Pavithree/explainLikeImFive')\r\nUsing custom data configuration Pavithree--explainLikeImFive-b68b6d8112cd8a51\r\nDownloading and preparing dataset json/Pavithree--explainLikeImFive to /home/slesage/.cache/huggingface/datasets/json/Pavithree--explainLikeImFive-b68b6d8112cd8a51/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b...\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 305M/305M [00:03<00:00, 98.6MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.9M/17.9M [00:00<00:00, 75.7MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.9M/11.9M [00:00<00:00, 70.6MB/s]\r\nDownloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.92s/it]\r\nExtracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1948.42it/s]\r\nFailed to read file '/home/slesage/.cache/huggingface/datasets/downloads/5fee9c8819754df277aee6f252e4db6897d785231c21938407b8862ca871d246' with error <class 'pyarrow.lib.ArrowInvalid'>: Exceeded maximum rows\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 144, in _generate_tables\r\n dataset = json.load(f)\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 293, in load\r\n return loads(fp.read(),\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 357, in loads\r\n return _default_decoder.decode(s)\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/decoder.py\", line 340, in decode\r\n raise JSONDecodeError(\"Extra data\", s, end)\r\njson.decoder.JSONDecodeError: Extra data: line 1 column 916 (char 915)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1691, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1151, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File 
\"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/tqdm/std.py\", line 1168, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 146, in _generate_tables\r\n raise e\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 122, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```\r\n\r\ncc @lhoestq @albertvillanova @mariosasko ",
"It seems that train.json is not a valid JSON Lines file: it has several JSON objects in the first line (the 915th character in the first line starts a new object, and there's no \"\\n\")\r\n\r\nYou need to have one JSON object per line",
"I'm closing this issue.\r\n\r\n@Pavithree, please, feel free to re-open it if fixing the JSON file does not solve it."
] | 1,649,245,035,000 | 1,649,255,995,000 | 1,649,255,995,000 | NONE | null | ## Dataset viewer issue - ArrowInvalid: Exceeded maximum rows
**Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive*
*This is a subset of the original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples that belong to one particular subreddit thread. However, the dataset preview for the train split returns the error below:
Status code: 400
Exception: ArrowInvalid
Message: Exceeded maximum rows
When I try to load the same dataset, it returns the same ArrowInvalid: Exceeded maximum rows error.*
Am I the one who added this dataset? Yes
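
As the comments above point out, the underlying train.json must be valid JSON Lines, i.e. exactly one JSON object per line. A minimal sketch of the expected layout (field names are illustrative, not the dataset's actual schema):

```json
{"question": "ELI5: Why is the sky blue?", "answer": "..."}
{"question": "ELI5: How do magnets work?", "answer": "..."}
```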
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4107/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4106/comments | https://api.github.com/repos/huggingface/datasets/issues/4106/events | https://github.com/huggingface/datasets/pull/4106 | 1,194,393,892 | PR_kwDODunzps41uPpa | 4,106 | Support huggingface_hub 0.5 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Looks like GH actions is not able to resolve `huggingface_hub` 0.5.0, I'm investivating",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4106). All of your documentation changes will be reflected on that endpoint.",
"I'm glad to see changes in `huggingface_hub` are simplifying code here.",
"seems to supersede #4102, feel free to close mine :)",
"maybe just cherry-pick the docstring fix",
"I think I've found the issue:\r\n- https://github.com/huggingface/huggingface_hub/pull/790",
"Good catch, `huggingface_hub` doesn't support python 3.6 anymore indeed, therefore we should keep support for 0.4.0. I'm reverting the requirement version bump for now.\r\n\r\nWe can update the requirement once we drop support for python 3.6 in `datasets`",
"@lhoestq, I've opened this PR on `huggingface_hub`: \r\n- https://github.com/huggingface/huggingface_hub/pull/823\r\n\r\nIs there any strong reason why `huggingface_hub` no longer supports Python 3.6? ",
"I think `datasets` can drop support for 3.6 soon. But for now maybe let's keep support for 0.4.0, python 3.6 users are not affected by https://github.com/huggingface/datasets/issues/4105 anyway.\r\n\r\n`huggingface_hub` doesn't not have to support 3.6 again just for the CI IMO",
"@lhoestq I commented on the PR, that IMO it is not a good practice to drop support for Python 3.6 without a previous deprecation cycle.",
"Re-added support for older versions. I ended up checking `huggingface_hub` version to use the old, deprecated API for <0.5.0",
"I find it good practice to have all dependency version related code in a single file so that when you decide to remove support for an old version of a dependency it's easy to find and remove them, hence suggesting `utils/_fixes.py` in https://github.com/huggingface/datasets/issues/4105#issuecomment-1090041204",
"good idea, thanks !",
"I used your suggestion @adrinjalali , I just replace the try/except with a check on the version of `huggingface_hub`"
] | 1,649,240,125,000 | 1,649,265,261,000 | null | MEMBER | null | Following https://github.com/huggingface/datasets/issues/4105
`huggingface_hub` deprecated some parameters in `HfApi` in 0.5. This PR updates all the calls to `HfApi` to remove all the deprecations, <s>and I set the `huggingface_hub` requirement to `>=0.5.0`</s>
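
Concretely, the deprecated `name`/`organization` pair is replaced by a single `repo_id`. A sketch of the updated call style (the repo id is a placeholder; the exact diff may differ):

```python
from huggingface_hub import HfApi

api = HfApi()
# A single repo_id replaces the deprecated name/organization pair.
api.create_repo(
    repo_id="my-org/my-dataset",  # placeholder id
    repo_type="dataset",
    private=False,
)
```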
cc @adrinjalali @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4106/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4106",
"html_url": "https://github.com/huggingface/datasets/pull/4106",
"diff_url": "https://github.com/huggingface/datasets/pull/4106.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4106.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4105/comments | https://api.github.com/repos/huggingface/datasets/issues/4105/events | https://github.com/huggingface/datasets/issues/4105 | 1,194,297,119 | I_kwDODunzps5HL4cf | 4,105 | push to hub fails with huggingface-hub 0.5.0 | {
"login": "frascuchon",
"id": 2518789,
"node_id": "MDQ6VXNlcjI1MTg3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frascuchon",
"html_url": "https://github.com/frascuchon",
"followers_url": "https://api.github.com/users/frascuchon/followers",
"following_url": "https://api.github.com/users/frascuchon/following{/other_user}",
"gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions",
"organizations_url": "https://api.github.com/users/frascuchon/orgs",
"repos_url": "https://api.github.com/users/frascuchon/repos",
"events_url": "https://api.github.com/users/frascuchon/events{/privacy}",
"received_events_url": "https://api.github.com/users/frascuchon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Indeed there was a breaking change in `huggingface_hub` 0.5.0 in `HfApi.create_repo`, which is called here in `datasets` by passing the org name in both the `repo_id` and the `organization` arguments:\r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nI think we should fix that in `huggingface_hub`, will keep you posted. In the meantime please use `huggingface_hub` 0.4.0",
"I'll be sending a fix for this later today on the `huggingface_hub` side.\r\n\r\nThe error would be converted to a `FutureWarning` if `datasets` uses kwargs instead of positional, for example here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nto be:\r\n\r\n``` python\r\n api.create_repo(\r\n name=dataset_name,\r\n token=token,\r\n repo_type=\"dataset\",\r\n organization=organization,\r\n private=private,\r\n )\r\n```\r\n\r\nBut `name` and `organization` are deprecated in `huggingface_hub=0.5`, and people should pass `repo_id='org/name` instead. Note that `repo_id` was introduced in 0.5 and if `datasets` wants to support older `huggingface_hub` versions (which I encourage it to do), there needs to be a helper function to do that. It can be something like:\r\n\r\n\r\n```python\r\ndef create_repo(\r\n client,\r\n name: str,\r\n token: Optional[str] = None,\r\n organization: Optional[str] = None,\r\n private: Optional[bool] = None,\r\n repo_type: Optional[str] = None,\r\n exist_ok: Optional[bool] = False,\r\n space_sdk: Optional[str] = None,\r\n) -> str:\r\n try:\r\n return client.create_repo(\r\n repo_id=f\"{organization}/{name}\",\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n except TypeError:\r\n return client.create_repo(\r\n name=name,\r\n organization=organization,\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n```\r\n\r\nin a `utils/_fixes.py` kinda file and and be used internally.\r\n\r\nI'll be sending a patch to `huggingface_hub` to convert the error reported in this issue to a `FutureWarning`.",
"PR with the hotfix on the `huggingface_hub` side: https://github.com/huggingface/huggingface_hub/pull/822",
"We can definitely change `push_to_hub` to use `repo_id` in `datasets` and require `huggingface_hub>=0.5.0`.\r\n\r\nLet me open a PR :)",
"`huggingface_hub` 0.5.1 just got released with a fix, feel free to update `huggingface_hub` ;)"
] | 1,649,235,597,000 | 1,649,247,704,000 | null | NONE | null | ## Describe the bug
`ds.push_to_hub` fails when updating a dataset whose repository id has the form "org_id/repo_id".
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your-token>")
```
## Expected results
The dataset is successfully uploaded
## Actual results
A validation error is raised:
```bash
if repo_id and (name or organization):
> raise ValueError(
"Only pass `repo_id` and leave deprecated `name` and "
"`organization` to be None."
E ValueError: Only pass `repo_id` and leave deprecated `name` and `organization` to be None.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.1
- `huggingface-hub`: 0.5
- Platform: macOS
- Python version: 3.8.12
- PyArrow version: 6.0.0
cc @adrinjalali
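
A workaround, per the comments above: pin `huggingface_hub` to the last release without the breaking change (or upgrade to a patched release such as 0.5.1 once available):

```bash
pip install "huggingface_hub==0.4.0"
```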
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4105/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4104/comments | https://api.github.com/repos/huggingface/datasets/issues/4104/events | https://github.com/huggingface/datasets/issues/4104 | 1,194,072,966 | I_kwDODunzps5HLBuG | 4,104 | Add time series data - stock market | {
"login": "INF800",
"id": 45640029,
"node_id": "MDQ6VXNlcjQ1NjQwMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/INF800",
"html_url": "https://github.com/INF800",
"followers_url": "https://api.github.com/users/INF800/followers",
"following_url": "https://api.github.com/users/INF800/following{/other_user}",
"gists_url": "https://api.github.com/users/INF800/gists{/gist_id}",
"starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/INF800/subscriptions",
"organizations_url": "https://api.github.com/users/INF800/orgs",
"repos_url": "https://api.github.com/users/INF800/repos",
"events_url": "https://api.github.com/users/INF800/events{/privacy}",
"received_events_url": "https://api.github.com/users/INF800/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Can I use instructions present in below link for time series dataset as well? \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md ",
"cc'ing @kashif and @NielsRogge for visibility!",
"@INF800 happy to add this dataset! I will try to set a PR by the end of the day... if you can kindly point me to the dataset? Also, note we have a bunch of time series datasets checked in e.g. `electricity_load_diagrams` or `monash_tsf`, and ideally this dataset could also be in a similar format. ",
"Thankyou. This is how raw data looks like before cleaning for an individual stocks:\r\n\r\n1. https://github.com/INF800/marktech/tree/raw-data/f/data/raw\r\n2. https://github.com/INF800/marktech/tree/raw-data/t/data/raw\r\n3. https://github.com/INF800/marktech/tree/raw-data/rdfn/data/raw\r\n4. https://github.com/INF800/marktech/tree/raw-data/irbt/data/raw\r\n5. https://github.com/INF800/marktech/tree/raw-data/hll/data/raw\r\n6. https://github.com/INF800/marktech/tree/raw-data/infy/data/raw\r\n7. https://github.com/INF800/marktech/tree/raw-data/reli/data/raw\r\n8. https://github.com/INF800/marktech/tree/raw-data/hdbk/data/raw\r\n\r\n> Scraping is automated using GitHub Actions. So, everyday we will see a new file added in the above links.\r\n\r\nI can rewrite the cleaning scripts to make sure it fits HF dataset standards. (P.S I am very much new to HF dataset)\r\n\r\nThe data set above can be converted into univariate regression / multivariate regression / sequence to sequence generation dataset etc. So, do we have some kind of transformation modules that will read the dataset as some type of dataset (`GenericTimeData`) and convert it to other possible dataset relating to a specific ML task. **By having this kind of transformation module, I only have to add data once** and use transformation module whenever necessary\r\n\r\nAdditionally, having some kind of versioning for the dataset will be really helpful because it will keep on updating - especially time series datasets ",
"thanks @INF800 I'll have a look. I believe it should be possible to incorporate this into the time-series format."
] | 1,649,224,018,000 | 1,649,235,564,000 | null | NONE | null | ## Adding a Time Series Dataset
- **Name:** 2-minute ticker data for the stock market
- **Description:** Data for 8 stocks (4 NSE and 4 NASDAQ) collected for one month after the start of the Ukraine-Russia war, along with technical indicators (additional features) as shown in the image below
- **Data:** Collected by myself from investing.com
- **Motivation:** Test the applicability of transformer-based models on stock market / time series problems
![image](https://user-images.githubusercontent.com/45640029/161904077-52fe97cb-3720-4e3f-98ee-7f6720a056e2.png) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4104/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4104/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4103/comments | https://api.github.com/repos/huggingface/datasets/issues/4103/events | https://github.com/huggingface/datasets/pull/4103 | 1,193,987,104 | PR_kwDODunzps41s3T4 | 4,103 | Add the `GSM8K` dataset | {
"login": "jon-tow",
"id": 41410219,
"node_id": "MDQ6VXNlcjQxNDEwMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jon-tow",
"html_url": "https://github.com/jon-tow",
"followers_url": "https://api.github.com/users/jon-tow/followers",
"following_url": "https://api.github.com/users/jon-tow/following{/other_user}",
"gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions",
"organizations_url": "https://api.github.com/users/jon-tow/orgs",
"repos_url": "https://api.github.com/users/jon-tow/repos",
"events_url": "https://api.github.com/users/jon-tow/events{/privacy}",
"received_events_url": "https://api.github.com/users/jon-tow/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,649,218,072,000 | 1,649,272,062,000 | null | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4103/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4103",
"html_url": "https://github.com/huggingface/datasets/pull/4103",
"diff_url": "https://github.com/huggingface/datasets/pull/4103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4103.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4102/comments | https://api.github.com/repos/huggingface/datasets/issues/4102/events | https://github.com/huggingface/datasets/pull/4102 | 1,193,616,722 | PR_kwDODunzps41roGx | 4,102 | [hub] Fix `api.create_repo` call? | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4102). All of your documentation changes will be reflected on that endpoint."
] | 1,649,186,512,000 | 1,649,247,134,000 | null | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4102/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4102",
"html_url": "https://github.com/huggingface/datasets/pull/4102",
"diff_url": "https://github.com/huggingface/datasets/pull/4102.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4102.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4101/comments | https://api.github.com/repos/huggingface/datasets/issues/4101/events | https://github.com/huggingface/datasets/issues/4101 | 1,193,399,204 | I_kwDODunzps5HIdOk | 4,101 | How can I download only the train and test split for full numbers using load_dataset()? | {
"login": "Nakkhatra",
"id": 64383902,
"node_id": "MDQ6VXNlcjY0MzgzOTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/64383902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nakkhatra",
"html_url": "https://github.com/Nakkhatra",
"followers_url": "https://api.github.com/users/Nakkhatra/followers",
"following_url": "https://api.github.com/users/Nakkhatra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nakkhatra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nakkhatra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nakkhatra/subscriptions",
"organizations_url": "https://api.github.com/users/Nakkhatra/orgs",
"repos_url": "https://api.github.com/users/Nakkhatra/repos",
"events_url": "https://api.github.com/users/Nakkhatra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nakkhatra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Can you please specify the full name of the dataset? IIRC `full_numbers` is one of the configs of the `svhn` dataset, and its generation is slow due to data being stored in binary Matlab files. Even if you specify a specific split, `datasets` downloads all of them, but we plan to fix that soon and only download the requested split.\r\n\r\nIf you are in a hurry, download the `svhn` script [here](`https://huggingface.co/datasets/svhn/blob/main/svhn.py`), remove [this code](https://huggingface.co/datasets/svhn/blob/main/svhn.py#L155-L162), and run:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path/to/your/local/script.py\", \"full_numbers\")\r\n```\r\n\r\nAnd to make loading easier in Colab, you can create a dataset repo on the Hub and upload the script there. Or push the script to Google Drive and mount the drive in Colab."
] | 1,649,174,415,000 | 1,649,250,541,000 | null | NONE | null | How can I download only the train and test split for full numbers using load_dataset()?
I do not need the extra split and it will take 40 mins just to download in Colab. I have very short time in hand. Please help. | {
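
A split-restricted load looks like the sketch below, though, per the comment above, `datasets` currently still downloads and prepares every split of `svhn`'s `full_numbers` config even when only some splits are requested:

```python
from datasets import load_dataset

# Requests only train/test; generation of the extra split is not yet skipped.
train_ds, test_ds = load_dataset("svhn", "full_numbers", split=["train", "test"])
```
 | {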
"url": "https://api.github.com/repos/huggingface/datasets/issues/4101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4101/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4100/comments | https://api.github.com/repos/huggingface/datasets/issues/4100/events | https://github.com/huggingface/datasets/pull/4100 | 1,193,393,959 | PR_kwDODunzps41q4ce | 4,100 | Improve RedCaps dataset card | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4100). All of your documentation changes will be reflected on that endpoint."
] | 1,649,174,234,000 | 1,649,257,225,000 | null | CONTRIBUTOR | null | This PR modifies the RedCaps card to:
* fix the formatting of the Point of Contact fields on the Hub
* speed up the image fetching logic (aligning it with the [img2dataset](https://github.com/rom1504/img2dataset) tool) and make it more robust (return None if **any** exception is thrown; a sketch follows)
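
A rough sketch of fetching in that spirit; the helper name, user-agent, and timeout are assumptions, not the card's exact code:

```python
import io
import urllib.request

from PIL import Image

def fetch_image(url, timeout=2.0):
    try:
        req = urllib.request.Request(url, headers={"user-agent": "datasets"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return Image.open(io.BytesIO(resp.read()))
    except Exception:
        # Robustness requirement from above: *any* failure maps to None.
        return None
```
 | {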
"url": "https://api.github.com/repos/huggingface/datasets/issues/4100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4100/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4100",
"html_url": "https://github.com/huggingface/datasets/pull/4100",
"diff_url": "https://github.com/huggingface/datasets/pull/4100.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4100.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4099/comments | https://api.github.com/repos/huggingface/datasets/issues/4099/events | https://github.com/huggingface/datasets/issues/4099 | 1,193,253,768 | I_kwDODunzps5HH5uI | 4,099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | {
"login": "andreybond",
"id": 20210017,
"node_id": "MDQ6VXNlcjIwMjEwMDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/20210017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andreybond",
"html_url": "https://github.com/andreybond",
"followers_url": "https://api.github.com/users/andreybond/followers",
"following_url": "https://api.github.com/users/andreybond/following{/other_user}",
"gists_url": "https://api.github.com/users/andreybond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andreybond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreybond/subscriptions",
"organizations_url": "https://api.github.com/users/andreybond/orgs",
"repos_url": "https://api.github.com/users/andreybond/repos",
"events_url": "https://api.github.com/users/andreybond/events{/privacy}",
"received_events_url": "https://api.github.com/users/andreybond/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @andreybond, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to able to reproduce your issue:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ...: datasets = load_dataset(\"nielsr/XFUN\", \"xfun.ja\")\r\n\r\nIn [5]: datasets\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 194\r\n })\r\n validation: Dataset({\r\n features: ['id', 'input_ids', 'bbox', 'labels', 'image', 'entities', 'relations'],\r\n num_rows: 71\r\n })\r\n})\r\n```\r\n\r\nThe only reason I can imagine this issue may arise is if your default encoding is not \"UTF-8\" (and it is ASCII instead). This is usually the case on Windows machines; but you say your environment is a Linux machine. Maybe you change your machine default encoding?\r\n\r\nCould you please check this?\r\n```python\r\nIn [6]: import sys\r\n\r\nIn [7]: sys.getdefaultencoding()\r\nOut[7]: 'utf-8'\r\n```",
"I opened a PR in the original dataset loading script:\r\n- microsoft/unilm#677\r\n\r\nand fixed the corresponding dataset script on the Hub:\r\n- https://huggingface.co/datasets/nielsr/XFUN/commit/73ba5e026621e05fb756ae0f267eb49971f70ebd",
"import sys\r\nsys.getdefaultencoding()\r\n\r\nreturned: 'utf-8'\r\n\r\n---------------------\r\n\r\nI've just cloned master branch - your fix works! Thank you!"
] | 1,649,169,758,000 | 1,649,227,064,000 | 1,649,226,954,000 | NONE | null | ## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected results
The dataset should be downloaded without exceptions.
## Actual results
Stack trace (from the second execution):
Downloading and preparing dataset xfun/xfun.ja to /root/.cache/huggingface/datasets/nielsr___xfun/xfun.ja/0.0.0/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477...
Downloading data files: 100%
2/2 [00:00<00:00, 88.48it/s]
Extracting data files: 100%
2/2 [00:00<00:00, 79.60it/s]
UnicodeDecodeErrorTraceback (most recent call last)
<ipython-input-31-79c26bd1109c> in <module>
1 from datasets import load_dataset
2
----> 3 datasets = load_dataset("nielsr/XFUN", "xfun.ja")
/usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
604 )
605
--> 606 # By default, return all splits
607 if split is None:
608 split = {s: s for s in self.info.splits}
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
692 Args:
693 split: `datasets.Split` which subset of the data to read.
--> 694
695 Returns:
696 `Dataset`
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
/usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self)
252 if not self.disable:
253 self.display(check_delay=False)
--> 254
255 def __iter__(self):
256 try:
/usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self)
1183 for obj in iterable:
1184 yield obj
-> 1185 return
1186
1187 mininterval = self.mininterval
~/.cache/huggingface/modules/datasets_modules/datasets/nielsr--XFUN/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477/XFUN.py in _generate_examples(self, filepaths)
140 logger.info("Generating examples from = %s", filepath)
141 with open(filepath[0], "r") as f:
--> 142 data = json.load(f)
143
144 for doc in data["documents"]:
/usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
294
295 """
--> 296 return loads(fp.read(),
297 cls=cls, object_hook=object_hook,
298 parse_float=parse_float, parse_int=parse_int,
/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0 (but reproduced with many previous versions)
- Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ; Base docker image is : huggingface/transformers-pytorch-cpu
- Python version: 3.6.9
- PyArrow version: 6.0.1
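
The fix referenced in the comments above (applied in the linked commit) presumably amounts to opening the JSON files with an explicit encoding rather than the platform default, mirroring the `open` call visible in the traceback. A minimal sketch:

```python
import json

def load_utf8_json(path):
    # An explicit encoding avoids UnicodeDecodeError when the locale default is ASCII.
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```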
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4099/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4098/comments | https://api.github.com/repos/huggingface/datasets/issues/4098/events | https://github.com/huggingface/datasets/pull/4098 | 1,193,245,522 | PR_kwDODunzps41qXjo | 4,098 | Proposing WikiSplit metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"A quick Github tip ;) To avoid running N times the CI, you can push all the changes at once: go to Files Changed tab, and on each suggestion there's a \"add to commit batch\" and then you can do one commit for all the suggestions you want to approve ;)"
] | 1,649,169,394,000 | 1,649,173,717,000 | 1,649,173,348,000 | CONTRIBUTOR | null | Pinging @lhoestq to ensure that my distinction between the dataset and the metric is clear :sweat_smile: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4098/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4098/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4098",
"html_url": "https://github.com/huggingface/datasets/pull/4098",
"diff_url": "https://github.com/huggingface/datasets/pull/4098.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4098.patch",
"merged_at": 1649173348000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4097/comments | https://api.github.com/repos/huggingface/datasets/issues/4097/events | https://github.com/huggingface/datasets/pull/4097 | 1,193,205,751 | PR_kwDODunzps41qPEu | 4,097 | Updating FrugalScore metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,167,764,000 | 1,649,171,255,000 | 1,649,170,906,000 | CONTRIBUTOR | null | removing duplicate paragraph | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4097/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4097",
"html_url": "https://github.com/huggingface/datasets/pull/4097",
"diff_url": "https://github.com/huggingface/datasets/pull/4097.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4097.patch",
"merged_at": 1649170906000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4096/comments | https://api.github.com/repos/huggingface/datasets/issues/4096/events | https://github.com/huggingface/datasets/issues/4096 | 1,193,165,229 | I_kwDODunzps5HHkGt | 4,096 | Add support for streaming Zarr stores for hosted datasets | {
"login": "jacobbieker",
"id": 7170359,
"node_id": "MDQ6VXNlcjcxNzAzNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacobbieker",
"html_url": "https://github.com/jacobbieker",
"followers_url": "https://api.github.com/users/jacobbieker/followers",
"following_url": "https://api.github.com/users/jacobbieker/following{/other_user}",
"gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions",
"organizations_url": "https://api.github.com/users/jacobbieker/orgs",
"repos_url": "https://api.github.com/users/jacobbieker/repos",
"events_url": "https://api.github.com/users/jacobbieker/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacobbieker/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @jacobbieker, thanks for your request and study of possible alternatives.\r\n\r\nWe are very interested in finding a way to make `datasets` useful to you.\r\n\r\nLooking at the Zarr docs, I saw that among its storage alternatives, there is the ZIP file format: https://zarr.readthedocs.io/en/stable/api/storage.html#zarr.storage.ZipStore\r\n\r\nThis might be convenient for many reasons:\r\n- On the one hand, we avoid the Git issue with huge number of small files: chunks files are compressed into a single ZIP file\r\n- On the other hand, the ZIP file format is specially suited for streaming data because it allows random access to its component files (i.e. it supports random access to its chunks)\r\n\r\nAnyway, I think that a Python loading script will be necessary: you need to implement additional logic to select certain chunks (based on date or other criteria).\r\n\r\nPlease, let me know if this makes sense to you."
] | 1,649,165,912,000 | 1,649,258,854,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can achieve good compression. Unfortunately, HF datasets doesn't support streaming data in Zarr format as far as I can tell. Zarr stores are designed to be easily streamed from cloud storage, especially with xarray and fsspec. Since geospatial data tends to be very large (on the order of terabytes or tens of terabytes for a single dataset), it can be difficult for users to store a dataset locally. Just adding Zarr stores with HF git doesn't work well (see https://github.com/huggingface/datasets/issues/3823), as Zarr splits the data into lots of small chunks for fast loading, and that doesn't play well with git. I've somewhat gotten around that issue by tarring each Zarr store and uploading it as a single file, which seems to be working (see https://huggingface.co/datasets/openclimatefix/gfs-reforecast for example data files, although the script isn't written yet). This does mean that streaming doesn't quite work, though. On the other hand, in https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv we stream a Zarr store from a public GCP bucket quite easily.
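For illustration, lazy streaming of a Zarr store over fsspec with xarray looks roughly like this (the bucket path and coordinate name are placeholders, not the actual store mentioned above; assumes gcsfs is installed):

```python
import xarray as xr

# Opening is lazy; only the chunks touched by the selection below are fetched.
ds = xr.open_zarr("gs://some-public-bucket/dataset.zarr", consolidated=True)
subset = ds.sel(time=slice("2020-01-01", "2020-01-02"))
```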
**Describe the solution you'd like**
A way to upload Zarr stores for hosted datasets so that we can stream them with xarray and fsspec.
**Describe alternatives you've considered**
Tarring each Zarr store individually and just extracting it in the dataset script -> downside: this is a lot of data that probably doesn't fit locally for many potential users.
Pre-preparing examples in a format like Parquet -> would use a lot more storage and offer a lot less flexibility; in eumetsat_uk_hrv, we use the one Zarr store for multiple different configurations.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4096/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4096/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4095 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4095/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4095/comments | https://api.github.com/repos/huggingface/datasets/issues/4095/events | https://github.com/huggingface/datasets/pull/4095 | 1,192,573,353 | PR_kwDODunzps41oIFI | 4,095 | fix typo in rename_column error message | {
"login": "hunterlang",
"id": 680821,
"node_id": "MDQ6VXNlcjY4MDgyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/680821?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hunterlang",
"html_url": "https://github.com/hunterlang",
"followers_url": "https://api.github.com/users/hunterlang/followers",
"following_url": "https://api.github.com/users/hunterlang/following{/other_user}",
"gists_url": "https://api.github.com/users/hunterlang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hunterlang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hunterlang/subscriptions",
"organizations_url": "https://api.github.com/users/hunterlang/orgs",
"repos_url": "https://api.github.com/users/hunterlang/repos",
"events_url": "https://api.github.com/users/hunterlang/events{/privacy}",
"received_events_url": "https://api.github.com/users/hunterlang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4095). All of your documentation changes will be reflected on that endpoint."
] | 1,649,130,956,000 | 1,649,148,886,000 | 1,649,148,353,000 | CONTRIBUTOR | null | I feel bad submitting such a tiny change as a PR but it confused me today 😄 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4095/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4095",
"html_url": "https://github.com/huggingface/datasets/pull/4095",
"diff_url": "https://github.com/huggingface/datasets/pull/4095.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4095.patch",
"merged_at": 1649148353000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4094/comments | https://api.github.com/repos/huggingface/datasets/issues/4094/events | https://github.com/huggingface/datasets/issues/4094 | 1,192,534,414 | I_kwDODunzps5HFKGO | 4,094 | Helo Mayfrends | {
"login": "Budigming",
"id": 102933353,
"node_id": "U_kgDOBiKjaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102933353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Budigming",
"html_url": "https://github.com/Budigming",
"followers_url": "https://api.github.com/users/Budigming/followers",
"following_url": "https://api.github.com/users/Budigming/following{/other_user}",
"gists_url": "https://api.github.com/users/Budigming/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Budigming/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Budigming/subscriptions",
"organizations_url": "https://api.github.com/users/Budigming/orgs",
"repos_url": "https://api.github.com/users/Budigming/repos",
"events_url": "https://api.github.com/users/Budigming/events{/privacy}",
"received_events_url": "https://api.github.com/users/Budigming/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,649,126,577,000 | 1,649,143,002,000 | 1,649,143,002,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4094/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4094/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4093/comments | https://api.github.com/repos/huggingface/datasets/issues/4093/events | https://github.com/huggingface/datasets/issues/4093 | 1,192,523,161 | I_kwDODunzps5HFHWZ | 4,093 | elena-soare/crawled-ecommerce: missing dataset | {
"login": "seevaratnam",
"id": 17519354,
"node_id": "MDQ6VXNlcjE3NTE5MzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/17519354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seevaratnam",
"html_url": "https://github.com/seevaratnam",
"followers_url": "https://api.github.com/users/seevaratnam/followers",
"following_url": "https://api.github.com/users/seevaratnam/following{/other_user}",
"gists_url": "https://api.github.com/users/seevaratnam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seevaratnam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seevaratnam/subscriptions",
"organizations_url": "https://api.github.com/users/seevaratnam/orgs",
"repos_url": "https://api.github.com/users/seevaratnam/repos",
"events_url": "https://api.github.com/users/seevaratnam/events{/privacy}",
"received_events_url": "https://api.github.com/users/seevaratnam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It's a bug! Thanks for reporting, I'm looking at it.",
"By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does not support such big dataset rows for the moment.\r\nAnyway, we're working to give a hint about this in the dataset viewer."
] | 1,649,125,519,000 | 1,649,149,814,000 | null | NONE | null | elena-soare/crawled-ecommerce
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4093/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4092/comments | https://api.github.com/repos/huggingface/datasets/issues/4092/events | https://github.com/huggingface/datasets/pull/4092 | 1,192,499,903 | PR_kwDODunzps41n40R | 4,092 | Fix dataset `amazon_us_reviews` metadata - 4/4/2022 | {
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4092). All of your documentation changes will be reflected on that endpoint."
] | 1,649,122,785,000 | 1,649,149,026,000 | null | NONE | null | Fixes #4048 by running `datasets-cli test` to reprocess data and regenerate metadata. Additionally I've updated the README to include up-to-date counts for the subsets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4092/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4092/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4092",
"html_url": "https://github.com/huggingface/datasets/pull/4092",
"diff_url": "https://github.com/huggingface/datasets/pull/4092.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4092.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4091/comments | https://api.github.com/repos/huggingface/datasets/issues/4091/events | https://github.com/huggingface/datasets/issues/4091 | 1,192,023,855 | I_kwDODunzps5HDNcv | 4,091 | Build a Dataset One Example at a Time Without Loading All Data Into Memory | {
"login": "aravind-tonita",
"id": 99340348,
"node_id": "U_kgDOBevQPA",
"avatar_url": "https://avatars.githubusercontent.com/u/99340348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aravind-tonita",
"html_url": "https://github.com/aravind-tonita",
"followers_url": "https://api.github.com/users/aravind-tonita/followers",
"following_url": "https://api.github.com/users/aravind-tonita/following{/other_user}",
"gists_url": "https://api.github.com/users/aravind-tonita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aravind-tonita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aravind-tonita/subscriptions",
"organizations_url": "https://api.github.com/users/aravind-tonita/orgs",
"repos_url": "https://api.github.com/users/aravind-tonita/repos",
"events_url": "https://api.github.com/users/aravind-tonita/events{/privacy}",
"received_events_url": "https://api.github.com/users/aravind-tonita/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Yes, the problem with `add_item` is that it keeps examples in memory, so you are left with these options:\r\n* writing a dataset loading script in which you iterate over `custom_example_dict_streamer` and yield the examples (in `_generate examples`)\r\n* storing the data in a JSON/CSV/Parquet/TXT file and using `Dataset.from_{format}`\r\n* using `add_item` + `save_to_disk` on smaller chunks: \r\n ```python\r\n from datasets import Dataset, concatenate_datasets\r\n MAX_SAMPLES_IN_MEMORY = 1000\r\n samples_in_dset = 0\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n path_to_save_dir = \"path/to/save/dir\"\r\n num_chunks = 0\r\n for example_dict in custom_example_dict_streamer(\"/path/to/raw/data\"):\r\n dset = dset.add_item(example_dict)\r\n samples_in_dset += 1\r\n if samples_in_dset == MAX_SAMPLES_IN_MEMORY:\r\n samples_in_dset = 0\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks =+ 1\r\n dset = Dataset.from_dict({\"col1\": [], \"col2\": []}) # empty dataset\r\n if samples_in_dset > 0:\r\n dset.save_to_disk(f\"{path_to_save_dir}{num_chunks}\")\r\n num_chunks =+ 1\r\n loaded_dsets = [] # memory-mapped\r\n for chunk_num in range(num_chunks):\r\n dset = Dataset.load_from_disk(f\"{path_to_save_dir}{chunk_num}\") \r\n loaded_dsets.append(dset)\r\n final_dset = concatenate_datasets(dset)\r\n ```\r\n If you still have issues with this approach, you can try to delete unused datasets with `gc.collect()` to free some memory. ",
"This is really elegant, thank you @mariosasko! I will try this."
] | 1,649,089,164,000 | 1,649,186,552,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I later want to load the saved `Dataset` and use it like any other HuggingFace dataset, get splits, wrap it in a PyTorch `DataLoader`, etc. **Crucially, I do not ever want to materialize all the data in memory while building the dataset.**
**Describe the solution you'd like**
I would like to be able to do something like the following. Notice how each example is read and then immediately added to the dataset. We do not store all the data in memory when constructing the `Dataset`. If it helps, I will know the schema of my dataset before hand.
```
# Initialize an empty Dataset, possibly from a known schema.
dataset = Dataset()
# Read in examples one by one using a custom data streamer.
for example_dict in custom_example_dict_streamer("/path/to/raw/data"):
    # Add this example to the dataset but do not store it in memory.
dataset.add_item(example_dict)
# Save the final dataset to disk as an Arrow-backed dataset.
dataset.save_to_disk("/path/to/dataset")
...
# I'd like to be able to later `load_from_disk` and use the loaded Dataset
# just like any other memory-mapped pyarrow-backed HuggingFace dataset...
loaded_dataset = Dataset.load_from_disk("/path/to/dataset")
loaded_dataset.set_format(type="torch", columns=["foo", "bar", "baz"])
dataloader = torch.utils.data.DataLoader(loaded_dataset, batch_size=16)
...
```
**Describe alternatives you've considered**
I initially tried to read all the data into memory, construct a Pandas DataFrame and then call `Dataset.from_pandas`. This would not work as it requires storing all the data in memory. It seems that there is an `add_item` method already -- I tried to implement something like the desired API written above, but I've not been able to initialize an empty `Dataset` (this seems to require several layers of constructing `datasets.table.Table` which requires constructing a `pyarrow.lib.Table`, etc). I also considered writing my data to multiple sharded CSV files or JSON files and then using `from_csv` or `from_json`. I'd prefer not to do this because (1) I'd prefer to avoid the intermediate step of creating these temp CSV/JSON files and (2) I'm not sure if `from_csv` and `from_json` use memory-mapping.
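For what it's worth, here is a minimal sketch of the sharded-file route (the file name is illustrative, and `custom_example_dict_streamer` is the custom reader described above). Loading converts the JSON Lines file to an Arrow cache on disk and memory-maps it, so the examples are not all held in memory afterwards:
```python
import json
from datasets import load_dataset

# Stream examples to disk one at a time; memory stays flat while writing.
with open("examples.jsonl", "w") as f:
    for example_dict in custom_example_dict_streamer("/path/to/raw/data"):
        f.write(json.dumps(example_dict) + "\n")

# Converted to Arrow on disk and memory-mapped when loaded.
dataset = load_dataset("json", data_files="examples.jsonl", split="train")
```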
Do you have any suggestions on how I'd be able to achieve this use case? Does something already exist to support this? Thank you very much in advance! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4091/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4090/comments | https://api.github.com/repos/huggingface/datasets/issues/4090/events | https://github.com/huggingface/datasets/pull/4090 | 1,191,956,734 | PR_kwDODunzps41mEs5 | 4,090 | Avoid writing empty license files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,085,817,000 | 1,649,335,605,000 | 1,649,335,243,000 | MEMBER | null | This PR avoids the creation of empty `LICENSE` files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4090/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4090",
"html_url": "https://github.com/huggingface/datasets/pull/4090",
"diff_url": "https://github.com/huggingface/datasets/pull/4090.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4090.patch",
"merged_at": 1649335243000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4089/comments | https://api.github.com/repos/huggingface/datasets/issues/4089/events | https://github.com/huggingface/datasets/pull/4089 | 1,191,915,196 | PR_kwDODunzps41l7yd | 4,089 | Create metric card for Frugal Score | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,084,029,000 | 1,649,168,086,000 | 1,649,167,610,000 | CONTRIBUTOR | null | Proposing metric card for Frugal Score.
@albertvillanova or @lhoestq -- there are certain aspects that I'm not 100% sure on (such as how exactly the distillation between BertScore and FrugalScore is done) -- so if you find that something isn't clear, please let me know! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4089/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4089",
"html_url": "https://github.com/huggingface/datasets/pull/4089",
"diff_url": "https://github.com/huggingface/datasets/pull/4089.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4089.patch",
"merged_at": 1649167610000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4088/comments | https://api.github.com/repos/huggingface/datasets/issues/4088/events | https://github.com/huggingface/datasets/pull/4088 | 1,191,901,172 | PR_kwDODunzps41l4yE | 4,088 | Remove unused legacy Beam utils | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,083,431,000 | 1,649,172,207,000 | 1,649,171,861,000 | MEMBER | null | This PR removes the unused legacy custom `WriteToParquet`, which became unnecessary once official Apache Beam included the patch in version 2.22.0:
- Patch PR: https://github.com/apache/beam/pull/11699
- Issue: https://issues.apache.org/jira/browse/BEAM-10022
In relation with:
- #204 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4088/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4088/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4088",
"html_url": "https://github.com/huggingface/datasets/pull/4088",
"diff_url": "https://github.com/huggingface/datasets/pull/4088.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4088.patch",
"merged_at": 1649171861000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4087/comments | https://api.github.com/repos/huggingface/datasets/issues/4087/events | https://github.com/huggingface/datasets/pull/4087 | 1,191,819,805 | PR_kwDODunzps41lnfO | 4,087 | Fix BeamWriter output Parquet file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,649,080,010,000 | 1,649,170,840,000 | 1,649,170,488,000 | MEMBER | null | Until now, the `BeamWriter` saved a Parquet file with a simplified schema, where each field value was serialized to JSON. That resulted in Parquet files larger than Arrow files.
This PR:
- writes Parquet file preserving original schema and without serialization, thus avoiding serialization overhead and resulting in a smaller output file size.
- fixes `parquet_to_arrow` function | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4087/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4087/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4087",
"html_url": "https://github.com/huggingface/datasets/pull/4087",
"diff_url": "https://github.com/huggingface/datasets/pull/4087.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4087.patch",
"merged_at": 1649170488000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4086/comments | https://api.github.com/repos/huggingface/datasets/issues/4086/events | https://github.com/huggingface/datasets/issues/4086 | 1,191,373,374 | I_kwDODunzps5HAuo- | 4,086 | Dataset viewer issue for McGill-NLP/feedbackQA | {
"login": "cslizc",
"id": 54827718,
"node_id": "MDQ6VXNlcjU0ODI3NzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/54827718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cslizc",
"html_url": "https://github.com/cslizc",
"followers_url": "https://api.github.com/users/cslizc/followers",
"following_url": "https://api.github.com/users/cslizc/following{/other_user}",
"gists_url": "https://api.github.com/users/cslizc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cslizc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cslizc/subscriptions",
"organizations_url": "https://api.github.com/users/cslizc/orgs",
"repos_url": "https://api.github.com/users/cslizc/repos",
"events_url": "https://api.github.com/users/cslizc/events{/privacy}",
"received_events_url": "https://api.github.com/users/cslizc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @cslizc, thanks for reporting.\r\n\r\nI have just forced the refresh of the corresponding cache and the preview is working now.",
"thank you so much"
] | 1,649,057,240,000 | 1,649,111,393,000 | 1,649,059,305,000 | NONE | null | ## Dataset viewer issue for '*McGill-NLP/feedbackQA*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/McGill-NLP/feedbackQA)*
The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message:
```
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4086/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4085/comments | https://api.github.com/repos/huggingface/datasets/issues/4085/events | https://github.com/huggingface/datasets/issues/4085 | 1,190,621,345 | I_kwDODunzps5G93Ch | 4,085 | datasets.set_progress_bar_enabled(False) not working in datasets v2 | {
"login": "virilo",
"id": 3381112,
"node_id": "MDQ6VXNlcjMzODExMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3381112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/virilo",
"html_url": "https://github.com/virilo",
"followers_url": "https://api.github.com/users/virilo/followers",
"following_url": "https://api.github.com/users/virilo/following{/other_user}",
"gists_url": "https://api.github.com/users/virilo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/virilo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/virilo/subscriptions",
"organizations_url": "https://api.github.com/users/virilo/orgs",
"repos_url": "https://api.github.com/users/virilo/repos",
"events_url": "https://api.github.com/users/virilo/events{/privacy}",
"received_events_url": "https://api.github.com/users/virilo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Now, I can't find any reference to set_progress_bar_enabled in the code.\r\n\r\nI think it have been deleted",
"Hi @virilo,\r\n\r\nPlease note that since `datasets` version 2.0.0, we have aligned with `transformers` the management of the progress bar (among other things):\r\n- #3897\r\n\r\nNow, you should update your code to use `datasets.logging.disable_progress_bar`.\r\n\r\nYou have more info in our docs: [Logging methods](https://huggingface.co/docs/datasets/package_reference/logging_methods)"
] | 1,648,903,210,000 | 1,649,056,499,000 | 1,649,054,674,000 | NONE | null | ## Describe the bug
datasets.set_progress_bar_enabled(False) not working in datasets v2
## Steps to reproduce the bug
```python
datasets.set_progress_bar_enabled(False)
```
## Expected results
datasets not using any progress bar
## Actual results
AttributeError: module 'datasets' has no attribute 'set_progress_bar_enabled'
## Environment info
datasets version 2
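For reference, a minimal sketch of the replacement API pointed out in the maintainer comment above (the import location is assumed from the `datasets` 2.0 sources):
```python
from datasets.utils.logging import disable_progress_bar

disable_progress_bar()  # replaces datasets.set_progress_bar_enabled(False)
```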
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4085/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4084/comments | https://api.github.com/repos/huggingface/datasets/issues/4084/events | https://github.com/huggingface/datasets/issues/4084 | 1,190,060,415 | I_kwDODunzps5G7uF_ | 4,084 | Errors in `Train with Datasets` Tensorflow code section on Huggingface.co | {
"login": "blackhat-coder",
"id": 57095771,
"node_id": "MDQ6VXNlcjU3MDk1Nzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/57095771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blackhat-coder",
"html_url": "https://github.com/blackhat-coder",
"followers_url": "https://api.github.com/users/blackhat-coder/followers",
"following_url": "https://api.github.com/users/blackhat-coder/following{/other_user}",
"gists_url": "https://api.github.com/users/blackhat-coder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blackhat-coder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blackhat-coder/subscriptions",
"organizations_url": "https://api.github.com/users/blackhat-coder/orgs",
"repos_url": "https://api.github.com/users/blackhat-coder/repos",
"events_url": "https://api.github.com/users/blackhat-coder/events{/privacy}",
"received_events_url": "https://api.github.com/users/blackhat-coder/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @blackhat-coder, thanks for reporting.\r\n\r\nPlease note that the `transformers` library updated their data collators API last year (version 4.10.0):\r\n- huggingface/transformers#13105\r\n\r\nnow requiring to pass `return_tensors` argument at Data Collator instantiation.\r\n\r\nAnd therefore, we also updated in the `datasets` library documentation all the examples using `transformers` data collators.\r\n\r\nIf you would like to follow our examples, please update your installed `transformers` version:\r\n```\r\npip install -U transformers\r\n```"
] | 1,648,832,567,000 | 1,649,057,077,000 | 1,649,056,891,000 | NONE | null | ## Describe the bug
Hi
### Error 1
Running the Tensorflow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a TypeError: __init__() got an unexpected keyword argument 'return_tensors'
### Error 2
`DataCollatorWithPadding` isn't imported
## Steps to reproduce the bug
```python
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding  # DataCollatorWithPadding was missing (Error 2)
dataset = load_dataset('glue', 'mrpc', split='train')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
train_dataset = dataset["train"].to_tf_dataset(
columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
```
This is the same code on Huggingface.co
## Actual results
TypeError: __init__() got an unexpected keyword argument 'return_tensors'
## Environment info
- `datasets` version: 2.0.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.7
- PyArrow version: 6.0.0
- Pandas version: 1.4.1
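Following the maintainer comment above, `return_tensors` was added to the data collators in `transformers` 4.10.0, so a sketch of the fix (with `tokenizer` as defined in the snippet above) is:
```python
# pip install -U "transformers>=4.10.0"
from transformers import DataCollatorWithPadding

# On an up-to-date transformers, this no longer raises a TypeError.
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
```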
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4084/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4083/comments | https://api.github.com/repos/huggingface/datasets/issues/4083/events | https://github.com/huggingface/datasets/pull/4083 | 1,190,025,878 | PR_kwDODunzps41gEbu | 4,083 | Add SacreBLEU Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4083). All of your documentation changes will be reflected on that endpoint."
] | 1,648,830,296,000 | 1,649,067,227,000 | null | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4083/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4083",
"html_url": "https://github.com/huggingface/datasets/pull/4083",
"diff_url": "https://github.com/huggingface/datasets/pull/4083.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4083.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4082/comments | https://api.github.com/repos/huggingface/datasets/issues/4082/events | https://github.com/huggingface/datasets/pull/4082 | 1,189,965,845 | PR_kwDODunzps41f3fb | 4,082 | Add chrF(++) Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4082). All of your documentation changes will be reflected on that endpoint."
] | 1,648,827,132,000 | 1,648,831,894,000 | null | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4082/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4082",
"html_url": "https://github.com/huggingface/datasets/pull/4082",
"diff_url": "https://github.com/huggingface/datasets/pull/4082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4082.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4081/comments | https://api.github.com/repos/huggingface/datasets/issues/4081/events | https://github.com/huggingface/datasets/pull/4081 | 1,189,916,472 | PR_kwDODunzps41fsxW | 4,081 | Close parquet writer properly in `push_to_hub` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,825,130,000 | 1,648,830,094,000 | 1,648,829,779,000 | MEMBER | null | We don’t call writer.close(), which causes https://github.com/huggingface/datasets/issues/4077. It can happen that we upload the file before the writer is garbage collected and writes the footer.
I fixed this by explicitly closing the parquet writer.
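As a minimal pyarrow sketch of why the explicit close matters (file name illustrative): the Parquet footer is only written when the writer is closed, so a file uploaded before that point is truncated and unreadable.
```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"ids": [1, 2, 3]})
writer = pq.ParquetWriter("shard.parquet", table.schema)
writer.write_table(table)
writer.close()  # writes the Parquet footer; skipping this leaves a broken file
```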
Close https://github.com/huggingface/datasets/issues/4077. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4081/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4081",
"html_url": "https://github.com/huggingface/datasets/pull/4081",
"diff_url": "https://github.com/huggingface/datasets/pull/4081.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4081.patch",
"merged_at": 1648829779000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4080/comments | https://api.github.com/repos/huggingface/datasets/issues/4080/events | https://github.com/huggingface/datasets/issues/4080 | 1,189,667,296 | I_kwDODunzps5G6OHg | 4,080 | NonMatchingChecksumError for downloading conll2012_ontonotesv5 dataset | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @richarddwang,\r\n\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n- forcing the data files to be redownloaded\r\n```python\r\nds = load_dataset('conll2012_ontonotesv5', 'english_v4', split=\"test\", download_mode=\"force_redownload\")\r\n```\r\n\r\nFeel free to re-open this issue if the problem persists. \r\n\r\nDuplicate of:\r\n- #4031"
] | 1,648,812,868,000 | 1,648,821,550,000 | 1,648,821,550,000 | CONTRIBUTOR | null | ## Steps to reproduce the bug
```python
datasets.load_dataset("conll2012_ontonotesv5", "english_v12")
```
## Actual results
```
Downloading builder script: 32.2kB [00:00, 9.72MB/s]
Downloading metadata: 20.0kB [00:00, 10.4MB/s]
Downloading and preparing dataset conll2012_ontonotesv5/english_v12 (download: 174.83 MiB, generated: 204.29 MiB, post-processed: Unknown size, total: 379.12 MiB) to ...
Traceback (most recent call last):
File "/home/yisiang/lgtn/conll2012/run.py", line 86, in <module>
train()
File "/home/yisiang/lgtn/conll2012/run.py", line 65, in train
trainer.fit(model, datamodule=dm)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
self._call_and_handle_interrupt(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_inte
rrupt
return trainer_fn(*args, **kwargs)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1131, in _run
self._data_connector.prepare_data()
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 154, in pre
pare_data
self.trainer.datamodule.prepare_data()
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn
fn(*args, **kwargs)
File "/home/yisiang/lgtn/_abstract_task/data.py", line 43, in prepare_data
raw_dsets = datasets.load_dataset(**load_dataset_kwargs)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/load.py", line 1687, in load_dataset
builder_instance.download_and_prepare(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 605, in download_and_prepare
self._download_and_prepare(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 1104, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/builder.py", line 676, in _download_and_prepare
verify_checksums(
File "/home/yisiang/miniconda3/envs/ai/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4080/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4080/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4079/comments | https://api.github.com/repos/huggingface/datasets/issues/4079/events | https://github.com/huggingface/datasets/pull/4079 | 1,189,521,576 | PR_kwDODunzps41eYRC | 4,079 | Increase max retries for GitHub datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,805,643,000 | 1,648,827,160,000 | 1,648,826,831,000 | MEMBER | null | As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub datasets, as previously done for GitHub metrics:
- #4063
Note that this is a temporary solution, while we decide when and how to load GitHub datasets from the Hub:
- #4059
Fix #2048
Related to:
- #4051
- #3210
- #2787
- #2075
- #2036
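For readers outside the codebase: the change amounts to raising the retry budget on the HTTP calls that fetch dataset scripts from GitHub. A minimal sketch of that retry pattern, with illustrative names and backoff policy rather than the actual `datasets` internals:
```python
import time

import requests

def get_with_retries(url: str, max_retries: int = 5, backoff: float = 1.0) -> requests.Response:
    """Fetch `url`, retrying on transient errors with linear backoff."""
    for attempt in range(1, max_retries + 1):
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            return response
        except requests.RequestException:
            if attempt == max_retries:
                raise
            time.sleep(backoff * attempt)
```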
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4079/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4079/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4079",
"html_url": "https://github.com/huggingface/datasets/pull/4079",
"diff_url": "https://github.com/huggingface/datasets/pull/4079.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4079.patch",
"merged_at": 1648826830000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4078/comments | https://api.github.com/repos/huggingface/datasets/issues/4078/events | https://github.com/huggingface/datasets/pull/4078 | 1,189,513,572 | PR_kwDODunzps41eWnl | 4,078 | Fix GithubMetricModuleFactory instantiation with None download_config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,805,218,000 | 1,648,824,291,000 | 1,648,823,967,000 | MEMBER | null | Recent PR:
- #4063
introduced a potential bug when `GithubMetricModuleFactory` is instantiated with a None `download_config`.
This PR adds instantiation tests and fixes that potential issue.
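A minimal sketch of the defensive pattern such a fix typically relies on; the class body below is illustrative, not the actual `datasets` source:
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DownloadConfig:  # stand-in for datasets' own config object
    max_retries: int = 1

class GithubMetricModuleFactory:
    def __init__(self, name: str, download_config: Optional[DownloadConfig] = None):
        self.name = name
        # Fall back to a default config so later attribute access
        # (e.g. download_config.max_retries) never dereferences None.
        self.download_config = download_config or DownloadConfig()
```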
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4078/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4078/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4078",
"html_url": "https://github.com/huggingface/datasets/pull/4078",
"diff_url": "https://github.com/huggingface/datasets/pull/4078.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4078.patch",
"merged_at": 1648823967000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4077/comments | https://api.github.com/repos/huggingface/datasets/issues/4077/events | https://github.com/huggingface/datasets/issues/4077 | 1,189,467,585 | I_kwDODunzps5G5dXB | 4,077 | ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,648,802,953,000 | 1,648,829,779,000 | 1,648,829,779,000 | CONTRIBUTOR | null | ## Describe the bug
When uploading a relatively large image dataset (> 1 GB), reloading doesn't work for me, even though pushing to the Hub went just fine.
Basically, I do:
```python
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_files="path_to_my_files")
dataset.push_to_hub("dataset_name") # works fine, no errors
reloaded_dataset = load_dataset("dataset_name")
```
and it returns:
```
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
I created a Colab notebook to reproduce my error: https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing
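While debugging, one hedged first step is to rule out a stale or truncated file in the local cache; `download_mode="force_redownload"` is an existing `load_dataset` option, and `"dataset_name"` below is the placeholder from the snippet above:
```python
from datasets import load_dataset

# Bypass the cache entirely; a partially written cached file can surface
# as "Parquet magic bytes not found in footer".
reloaded_dataset = load_dataset("dataset_name", download_mode="force_redownload")
```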
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4077/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4077/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4076/comments | https://api.github.com/repos/huggingface/datasets/issues/4076/events | https://github.com/huggingface/datasets/pull/4076 | 1,188,478,867 | PR_kwDODunzps41a1n2 | 4,076 | Add ROUGE Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4076). All of your documentation changes will be reflected on that endpoint."
] | 1,648,751,674,000 | 1,648,824,172,000 | null | CONTRIBUTOR | null | Add ROUGE metric card.
I've left the 'Values from popular papers' section empty for the time being because I don't know the summarization literature very well and am therefore not sure which paper(s) to pull from (note that the original ROUGE paper does not seem to present specific values, just correlations with human judgements). Any suggestions on which paper(s) to pull from would be helpful! :)
"url": "https://api.github.com/repos/huggingface/datasets/issues/4076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4076/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4076",
"html_url": "https://github.com/huggingface/datasets/pull/4076",
"diff_url": "https://github.com/huggingface/datasets/pull/4076.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4076.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4075/comments | https://api.github.com/repos/huggingface/datasets/issues/4075/events | https://github.com/huggingface/datasets/issues/4075 | 1,188,462,162 | I_kwDODunzps5G1n5S | 4,075 | Add CCAgT dataset | {
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | {
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "johnnv1",
"id": 20444345,
"node_id": "MDQ6VXNlcjIwNDQ0MzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnnv1",
"html_url": "https://github.com/johnnv1",
"followers_url": "https://api.github.com/users/johnnv1/followers",
"following_url": "https://api.github.com/users/johnnv1/following{/other_user}",
"gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions",
"organizations_url": "https://api.github.com/users/johnnv1/orgs",
"repos_url": "https://api.github.com/users/johnnv1/repos",
"events_url": "https://api.github.com/users/johnnv1/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnnv1/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Awesome ! Let us know if you have questions or if we can help ;) I'm assigning you\r\n\r\nPS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable."
] | 1,648,750,828,000 | 1,649,070,205,000 | null | NONE | null | ## Adding a Dataset
- **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique
- **Description:** The dataset contains 2540 images (1600x1200, where each pixel is 0.111μm×0.111μm) from three different slides, with at least one nucleus per image. These images come from fields of a sample cervical slide stained with silver, a method known as Argyrophilic Nucleolar Organizer Regions (AgNOR).
- **Paper:** https://doi.org/10.1109/cbms49503.2020.00110
- **Data:** https://arquivos.ufsc.br/d/373be2177a33426a9e6c/ or https://drive.google.com/drive/u/4/folders/1TBpYCv6S1ydASLauSzcsvO7Wc5O-WUw0
- **Motivation:** This is a unique dataset (because of the stain), for a major health problem, cervical cancer, with real data.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Hi, this is a public version of the dataset that I have been working on; we will soon release another version of it. But until this new version goes out, I thought I would add this dataset here, if it makes sense for the repository. You can assign the task to me if possible | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4075/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4074/comments | https://api.github.com/repos/huggingface/datasets/issues/4074/events | https://github.com/huggingface/datasets/issues/4074 | 1,188,449,142 | I_kwDODunzps5G1kt2 | 4,074 | Error in google/xtreme_s dataset card | {
"login": "wranai",
"id": 1048544,
"node_id": "MDQ6VXNlcjEwNDg1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1048544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wranai",
"html_url": "https://github.com/wranai",
"followers_url": "https://api.github.com/users/wranai/followers",
"following_url": "https://api.github.com/users/wranai/following{/other_user}",
"gists_url": "https://api.github.com/users/wranai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wranai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wranai/subscriptions",
"organizations_url": "https://api.github.com/users/wranai/orgs",
"repos_url": "https://api.github.com/users/wranai/repos",
"events_url": "https://api.github.com/users/wranai/events{/privacy}",
"received_events_url": "https://api.github.com/users/wranai/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi @wranai, thanks for reporting.\r\n\r\nPlease note that the information about language families and groups is taken form the original paper: [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752).\r\n\r\nIf that information is wrong, feel free to contact the paper's authors to suggest that correction.\r\n\r\nJust note that Hungarian language (contrary to their geographically surrounding neighbor languages) belongs to the Uralic (languages) family, together with (among others) Finnish, Estonian, some other languages in northern regions of Scandinavia..."
] | 1,648,750,065,000 | 1,648,800,776,000 | 1,648,800,776,000 | NONE | null | **Link:** https://huggingface.co/datasets/google/xtreme_s
Not a big deal, but Hungarian is considered an Eastern European language, together with Serbian, Slovak, and Slovenian (all correctly categorized; Slovenia is mostly to the west of Hungary, by the way).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4074/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4073/comments | https://api.github.com/repos/huggingface/datasets/issues/4073/events | https://github.com/huggingface/datasets/pull/4073 | 1,188,364,711 | PR_kwDODunzps41adPA | 4,073 | Create a metric card for Competition MATH | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,745,339,000 | 1,648,839,759,000 | 1,648,839,433,000 | CONTRIBUTOR | null | Proposing metric card for Competition MATH | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4073/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4073",
"html_url": "https://github.com/huggingface/datasets/pull/4073",
"diff_url": "https://github.com/huggingface/datasets/pull/4073.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4073.patch",
"merged_at": 1648839432000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4072/comments | https://api.github.com/repos/huggingface/datasets/issues/4072/events | https://github.com/huggingface/datasets/pull/4072 | 1,188,266,410 | PR_kwDODunzps41aIUG | 4,072 | Add installation instructions to image_process doc | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,740,577,000 | 1,648,746,346,000 | 1,648,746,019,000 | CONTRIBUTOR | null | This PR adds the installation instructions for the Image feature to the image process doc. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4072/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4072",
"html_url": "https://github.com/huggingface/datasets/pull/4072",
"diff_url": "https://github.com/huggingface/datasets/pull/4072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4072.patch",
"merged_at": 1648746019000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4071/comments | https://api.github.com/repos/huggingface/datasets/issues/4071/events | https://github.com/huggingface/datasets/issues/4071 | 1,187,587,683 | I_kwDODunzps5GySZj | 4,071 | Loading issue for xuyeliu/notebookCDG dataset | {
"login": "Jun-jie-Huang",
"id": 46160972,
"node_id": "MDQ6VXNlcjQ2MTYwOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/46160972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jun-jie-Huang",
"html_url": "https://github.com/Jun-jie-Huang",
"followers_url": "https://api.github.com/users/Jun-jie-Huang/followers",
"following_url": "https://api.github.com/users/Jun-jie-Huang/following{/other_user}",
"gists_url": "https://api.github.com/users/Jun-jie-Huang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jun-jie-Huang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jun-jie-Huang/subscriptions",
"organizations_url": "https://api.github.com/users/Jun-jie-Huang/orgs",
"repos_url": "https://api.github.com/users/Jun-jie-Huang/repos",
"events_url": "https://api.github.com/users/Jun-jie-Huang/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jun-jie-Huang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi @Jun-jie-Huang,\r\n\r\nAs the error message says, \".pkl\" data files are not supported.\r\n\r\nIf you would like to share your dataset on the Hub, you would need:\r\n- either to create a Python loading script, that loads the data in any format\r\n- or to transform your data files to one of the supported formats (listed in the error message above: CSV, JSON, Parquet, TXT,...)\r\n\r\nYou can find the details in our docs: \r\n- How to share a dataset: https://huggingface.co/docs/datasets/share\r\n- How to create a dataset loading script: https://huggingface.co/docs/datasets/dataset_script\r\n\r\nFeel free to re-open this issue and ping us if you need further assistance."
] | 1,648,708,589,000 | 1,648,714,621,000 | 1,648,714,576,000 | NONE | null | ## Dataset viewer issue for '*xuyeliu/notebookCDG*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)*
*Couldn't load xuyeliu/notebookCDG with the provided script:*
```
from datasets import load_dataset
dataset = load_dataset("xuyeliu/notebookCDG/dataset_notebook.pkl")
```
I get an error message as follows:
FileNotFoundError: Couldn't find a dataset script at /home/code_documentation/code/xuyeliu/notebookCDG/notebookCDG.py or any data file in the same directory. Couldn't find 'xuyeliu/notebookCDG' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**train*'] in dataset repository xuyeliu/notebookCDG with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
Am I the one who added this dataset? No
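Following the suggestion in the comment above to convert the data to a supported format, a minimal sketch, assuming the pickle holds a pandas-compatible table (the file name is taken from the snippet above):
```python
import pandas as pd
from datasets import load_dataset

# Load the unsupported pickle locally, re-save it as JSON Lines,
# and load that file with the generic "json" builder.
df = pd.read_pickle("dataset_notebook.pkl")
df.to_json("dataset_notebook.jsonl", orient="records", lines=True)
dataset = load_dataset("json", data_files="dataset_notebook.jsonl")
```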
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4071/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4070/comments | https://api.github.com/repos/huggingface/datasets/issues/4070/events | https://github.com/huggingface/datasets/pull/4070 | 1,186,810,205 | PR_kwDODunzps41VMYq | 4,070 | Create metric card for seqeval | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,663,681,000 | 1,648,839,778,000 | 1,648,839,445,000 | CONTRIBUTOR | null | Proposing metric card for seqeval. Not sure which values to report for Popular papers though. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4070/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4070",
"html_url": "https://github.com/huggingface/datasets/pull/4070",
"diff_url": "https://github.com/huggingface/datasets/pull/4070.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4070.patch",
"merged_at": 1648839445000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4069/comments | https://api.github.com/repos/huggingface/datasets/issues/4069/events | https://github.com/huggingface/datasets/pull/4069 | 1,186,790,578 | PR_kwDODunzps41VIMJ | 4,069 | Add support for metadata files to `imagefolder` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4069). All of your documentation changes will be reflected on that endpoint.",
"Love it !\r\n\r\n+1 to using JSON Lines rather than CSV. I've also seen image datasets for which JSON Lines was used.\r\n\r\nA `file_name` column sounds good as well, and it means we could reuse the same name for audio. And ok to check the metadata file by default :)\r\n\r\nYou suggested to name the file infos.json - since we already have a datasets_infos.json file, maybe it would be nice to have a name for the metadata/annotations that doesn't contain \"info\" ? (e.g. metadata.json, annotations.json, labels.json)",
"@lhoestq I've addressed your comments and my TODOs. Additionally, I've updated `encode_nested_example`/`decode_nested_example` to support null values in place of a dictionary (if it's not top-level) since JSON Lines also supports this. "
] | 1,648,662,471,000 | 1,649,157,894,000 | null | CONTRIBUTOR | null | This PR adds support for metadata files to `imagefolder` to add an ability to specify image fields other than `image` and `label`, which are inferred from the directory structure in the loaded dataset.
To be parsed as an image metadata file, a file should be named `"info.csv"` and should have the following structure:
```
image_id,some_col1_name,some_col2_name
rel/path/to/image1.jpg,image1_col1_value,image1_col2_value
rel/path/to/image2.jpg,image2_col1_value,image2_col2_value
...
```
This is how the resolution works:
```
- path/to/imagefolder/directory
- info.csv
- 10.jpg # referenced as 10.jpg in "info.csv"
- Cat
- 0.jpg # referenced as Cat/0.jpg in "info.csv"
- 1.jpg # referenced as Cat/1.jpg in "info.csv"
- Dog
- 0.jpg # referenced as Dog/0.jpg in "info.csv"
- 1.jpg # referenced as Dog/1.jpg in "info.csv"
```
Open questions:
1. IMO it makes more sense to store image metadata as JSON Lines than CSV. CSV is sufficient for textual metadata but not the best for representing bounding boxes, for instance (see the JSON Lines sketch after this list). Also, JSON Lines is more strict, which is good in this case (CSV supports various delimiters, the header line is optional, etc., so it's easier to enforce rules on JSON Lines than on CSV)
2. A better name for the `image_id` column, which contains image identifiers? Maybe `image_file` or `image_filename`?
3. WDYT about making `with_metadata=True` the default behavior if the loaded repo/directory contains an `info.csv` file?
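To make open question 1 concrete, here is a small sketch writing the metadata from the CSV example above as JSON Lines; the `bbox` field is purely hypothetical, added to show the kind of nested value CSV represents poorly:
```python
import json

# Rows mirror the info.csv example; "bbox" is a made-up nested field.
rows = [
    {"image_id": "Cat/0.jpg", "some_col1_name": "image1_col1_value", "bbox": [10, 20, 100, 120]},
    {"image_id": "Dog/0.jpg", "some_col1_name": "image2_col1_value", "bbox": [15, 25, 90, 110]},
]
with open("info.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```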
An example repository: https://huggingface.co/datasets/mariosasko/PetImages. Can be loaded by installing `datasets` from the PR branch and running `load_dataset("mariosasko/PetImages", with_metadata=True)`.
cc: @abhishekkrthakur (this PR should address https://huggingface.slack.com/archives/C02JB9L6JKF/p1645450017434029?thread_ts=1645157416.389499&cid=C02JB9L6JKF)
TODOs:
- [x] Test
- [x] Metadata file nesting
```
- path/to/imagefolder/directory
- info.csv
- 10.jpg
- Cat
- info.csv # should have higher precedence in this directory than the top-level info.csv, but we choose the first "eligible" metadata file currently
- 0.jpg
- 1.jpg
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4069/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4069",
"html_url": "https://github.com/huggingface/datasets/pull/4069",
"diff_url": "https://github.com/huggingface/datasets/pull/4069.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4069.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4068/comments | https://api.github.com/repos/huggingface/datasets/issues/4068/events | https://github.com/huggingface/datasets/pull/4068 | 1,186,765,422 | PR_kwDODunzps41VC0I | 4,068 | Improve out of bounds error message | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,660,930,000 | 1,648,715,948,000 | 1,648,715,637,000 | MEMBER | null | In 1.18.4 with https://github.com/huggingface/datasets/pull/3719 we introduced an error message for users using `select` with out of bounds indices. The message ended up being confusing for some users because it mentioned negative indices, which is not the main use case.
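For concreteness, a minimal reproduction of the out-of-bounds access this message covers (the dataset contents are arbitrary, and I'm assuming `select` raises an `IndexError` here, mirroring plain Python lists):
```python
from datasets import Dataset

ds = Dataset.from_dict({"col": [1, 2, 3]})
# Index 3 is out of range for a 3-row dataset, just like [1, 2, 3][3].
ds.select([3])  # expected to raise an IndexError with the new message
```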
I replaced it with a message that is very similar to the one you get when you try to access a list with an out-of-range index. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4068/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4068",
"html_url": "https://github.com/huggingface/datasets/pull/4068",
"diff_url": "https://github.com/huggingface/datasets/pull/4068.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4068.patch",
"merged_at": 1648715636000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4067/comments | https://api.github.com/repos/huggingface/datasets/issues/4067/events | https://github.com/huggingface/datasets/pull/4067 | 1,186,731,905 | PR_kwDODunzps41U7qc | 4,067 | Update datasets task tags to align tags with models | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4067). All of your documentation changes will be reflected on that endpoint."
] | 1,648,658,972,000 | 1,648,715,716,000 | null | MEMBER | null | **Requires https://github.com/huggingface/datasets/pull/4066 to be merged first**
Following https://github.com/huggingface/datasets/pull/4066 we need to update many dataset tags to use the new ones. This PR takes care of this and is quite big - feel free to review only certain tags if you don't want to spend too much time on it.
Note that the CI will never be green for this PR, because many dataset cards have missing tags or sections, and fixing them is out of scope of this PR (the CI on master will be green anyway) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4067/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4067",
"html_url": "https://github.com/huggingface/datasets/pull/4067",
"diff_url": "https://github.com/huggingface/datasets/pull/4067.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4067.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4066/comments | https://api.github.com/repos/huggingface/datasets/issues/4066/events | https://github.com/huggingface/datasets/pull/4066 | 1,186,728,104 | PR_kwDODunzps41U63x | 4,066 | Tasks alignment with models | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4066). All of your documentation changes will be reflected on that endpoint.",
"Yay! This is exciting! Note that we would probably be able to generate this JSON directly from `huggingface/hub-docs`' `Types.ts` file (cc @osanseviero)",
"The following issue should make this much easier :smile: https://github.com/huggingface/hub-docs/issues/83",
"So far I think I've addressed all the comments that I got on slack, but feel free to do a review @osanseviero and let me know if it sounds good to you",
"It just occurred to me that we should probably restart the `datasets-tagging` space once this is merged to update all the task categories there: https://huggingface.co/spaces/huggingface/datasets-tagging",
"Yes, let me update it now",
"Updated: https://huggingface.co/spaces/huggingface/datasets-tagging"
] | 1,648,658,756,000 | 1,649,172,718,000 | null | MEMBER | null | I updated our `tasks.json` file with the new task taxonomy that is aligned with models.
The rule that defines a task is the following:
**Two tasks are different if and only if the steps of their pipelines** are different, i.e. if they can’t reasonably be implemented using the same coherent code (level of granularity/complexity of the code to be defined - ideally I’d like to say “HF user’s level”) - this is the same definition in `transformers`
I will update the tags of all the datasets in this repository [in another PR](https://github.com/huggingface/datasets/pull/4067) for readability.
Main changes:
- conditional-text-generation is split between summarization, translation, text-generation and text2text-generation
- speech-processing is split into automatic-speech-recognition, audio-classification, etc.
- structure-prediction is renamed token-classification
- abstractive-qa now belongs to text2text-generation
Here is just a simplified YAML dump of `tasks.json`:
```yaml
audio-classification:
- keyword-spotting
- speaker-identification
- speaker-intent-classification
- emotion-recognition
- speaker-language-identification
audio-to-audio: []
automatic-speech-recognition: []
conversational:
- dialogue-generation
feature-extraction: []
fill-mask:
- slot-filling
- masked-language-modeling
image-classification:
- multi-label-image-classification
- multi-class-image-classification
image-segmentation:
- instance-segmentation
- semantic-segmentation
- panoptic-segmentation
image-to-text:
- image-captioning
multiple-choice:
- multiple-choice-qa
- multiple-choice-coreference-resolution
object-detection:
- face-detection
- vehicle-detection
question-answering:
- extractive-qa
- open-domain-qa
- closed-domain-qa
sentence-similarity: []
tabular-classification: []
tabular-to-text:
- rdf-to-text
summarization:
- news-articles-summarization
- news-articles-headline-generation
table-to-text: []
table-question-answering: []
text-classification:
- acceptability-classification
- entity-linking-classification
- fact-checking
- intent-classification
- multi-class-classification
- multi-label-classification
- natural-language-inference
- semantic-similarity-classification
- sentiment-classification
- topic-classification
- semantic-similarity-scoring
- sentiment-scoring
- sentiment-analysis
- hate-speech-detection
- text-scoring
text-generation:
- dialogue-modeling
- language-modeling
text-retrieval:
- document-retrieval
- utterance-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
text-to-image: []
text-to-tabular:
- relation-extraction
- semantic-role-labeling
text-to-speech: []
text2text-generation:
- text-simplification
- explanation-generation
- abstractive-qa
- open-domain-abstractive-qa
- closed-domain-qa
- open-book-qa
- closed-book-qa
time-series-forecasting:
- univariate-time-series-forecasting
- multivariate-time-series-forecasting
token-classification:
- named-entity-recognition
- part-of-speech-tagging
- parsing
- lemmatization
- word-sense-disambiguation
- coreference-resolution
translation: []
visual-question-answering: []
voice-activity-detection: []
zero-shot-classification: []
zero-shot-image-classification: []
reinforcement-learning: []
other: []
```
Feel free to comment and give suggestions, especially if you think we can also align this list with other projects
cc @julien-c @osanseviero @severo @lewtun @yjernite @albertvillanova @mariosasko @polinaeterna | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4066/reactions",
"total_count": 6,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4066/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4066",
"html_url": "https://github.com/huggingface/datasets/pull/4066",
"diff_url": "https://github.com/huggingface/datasets/pull/4066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4066.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4065/comments | https://api.github.com/repos/huggingface/datasets/issues/4065/events | https://github.com/huggingface/datasets/pull/4065 | 1,186,722,478 | PR_kwDODunzps41U5rq | 4,065 | Create metric card for METEOR | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,658,430,000 | 1,648,746,730,000 | 1,648,746,470,000 | CONTRIBUTOR | null | Proposing a metric card for METEOR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4065/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4065",
"html_url": "https://github.com/huggingface/datasets/pull/4065",
"diff_url": "https://github.com/huggingface/datasets/pull/4065.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4065.patch",
"merged_at": 1648746470000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4064/comments | https://api.github.com/repos/huggingface/datasets/issues/4064/events | https://github.com/huggingface/datasets/pull/4064 | 1,186,650,321 | PR_kwDODunzps41UqXS | 4,064 | Contributing MedMCQA dataset | {
"login": "monk1337",
"id": 17107749,
"node_id": "MDQ6VXNlcjE3MTA3NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/monk1337",
"html_url": "https://github.com/monk1337",
"followers_url": "https://api.github.com/users/monk1337/followers",
"following_url": "https://api.github.com/users/monk1337/following{/other_user}",
"gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}",
"starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/monk1337/subscriptions",
"organizations_url": "https://api.github.com/users/monk1337/orgs",
"repos_url": "https://api.github.com/users/monk1337/repos",
"events_url": "https://api.github.com/users/monk1337/events{/privacy}",
"received_events_url": "https://api.github.com/users/monk1337/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@lhoestq Could you please take a look?\r\nThank you!!"
] | 1,648,654,967,000 | 1,649,346,387,000 | null | NONE | null | Adding MedMCQA dataset ( https://paperswithcode.com/dataset/medmcqa )
**Name**: MedMCQA
**Description**: MedMCQA is a large-scale, Multiple-Choice Question Answering (MCQA) dataset designed to address real-world medical entrance exam questions.
MedMCQA has more than 194k high-quality AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects, collected with an average token length of 12.77 and high topical diversity.
The dataset contains questions about the following topics: Anesthesia, Anatomy, Biochemistry, Dental, ENT, Forensic Medicine (FM), Obstetrics and Gynecology (O&G), Medicine, Microbiology, Ophthalmology, Orthopedics, Pathology, Pediatrics, Pharmacology, Physiology,
Psychiatry, Radiology, Skin, Preventive & Social Medicine (PSM), and Surgery
**Code**: https://github.com/medmcqa/medmcqa
All files are in place:
**a dataset script**: medmcqa.py
**a dataset card with tags and information**: README.md
**a metadata file**: dataset_infos.json
**a dummy-data file**: please help to generate this file; I was facing a
`raise JSONDecodeError("Extra data", s, end)` error while creating it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4064/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4064",
"html_url": "https://github.com/huggingface/datasets/pull/4064",
"diff_url": "https://github.com/huggingface/datasets/pull/4064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4064.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4063/comments | https://api.github.com/repos/huggingface/datasets/issues/4063/events | https://github.com/huggingface/datasets/pull/4063 | 1,186,611,368 | PR_kwDODunzps41UiDm | 4,063 | Increase max retries for GitHub metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,653,168,000 | 1,648,737,772,000 | 1,648,737,467,000 | MEMBER | null | As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub metrics.
Related to:
- #3134
Also related to:
- #4059 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4063/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4063",
"html_url": "https://github.com/huggingface/datasets/pull/4063",
"diff_url": "https://github.com/huggingface/datasets/pull/4063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4063.patch",
"merged_at": 1648737467000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4062/comments | https://api.github.com/repos/huggingface/datasets/issues/4062/events | https://github.com/huggingface/datasets/issues/4062 | 1,186,330,732 | I_kwDODunzps5Gtfhs | 4,062 | Loading mozilla-foundation/common_voice_7_0 dataset failed | {
"login": "aapot",
"id": 19529125,
"node_id": "MDQ6VXNlcjE5NTI5MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aapot",
"html_url": "https://github.com/aapot",
"followers_url": "https://api.github.com/users/aapot/followers",
"following_url": "https://api.github.com/users/aapot/following{/other_user}",
"gists_url": "https://api.github.com/users/aapot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aapot/subscriptions",
"organizations_url": "https://api.github.com/users/aapot/orgs",
"repos_url": "https://api.github.com/users/aapot/repos",
"events_url": "https://api.github.com/users/aapot/events{/privacy}",
"received_events_url": "https://api.github.com/users/aapot/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @aapot, thanks for reporting.\r\n\r\nWe are investigating the cause of this issue. We will keep you informed. ",
"When making HTTP request from code line:\r\n```\r\nresponse = requests.get(f\"{_API_URL}/bucket/dataset/{path}/{use_cdn}\", timeout=10.0).json()\r\n```\r\nit cannot be decoded to JSON because it raises a 404 Not Found error.\r\n\r\nThe request is fixed if removing the `/{use_cdn}` from the URL.\r\n\r\nMaybe there was a change in the Common Voice API?\r\n\r\nCC: @anton-l @patrickvonplaten @polinaeterna ",
"We have contacted by email the data owners of the Common Voice dataset.",
"Hotfix: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/commit/17b237961e4f7f84a2a0aea645abe5428a9d568e",
"I have also made the hotfix for all the rest of Common Voice script versions: 8.0, 6.1, 6.0,..., 1.0"
] | 1,648,640,381,000 | 1,648,727,571,000 | 1,648,714,684,000 | NONE | null | ## Describe the bug
I wanted to load the `mozilla-foundation/common_voice_7_0` dataset with the `fi` language and `test` split from datasets on a Colab/Kaggle notebook, but I am getting the error `JSONDecodeError: [Errno Expecting value] Not Found: 0` while loading it. The bug seems to affect other languages and splits as well, not just `fi` and the `test` split.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token="YOUR TOKEN")
```
## Expected results
load the `mozilla-foundation/common_voice_7_0` dataset successfully
## Actual results
```
JSONDecodeError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs)
909 try:
--> 910 return complexjson.loads(self.text, **kwargs)
911 except JSONDecodeError as e:
/opt/conda/lib/python3.7/site-packages/simplejson/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, use_decimal, **kw)
524 and not use_decimal and not kw):
--> 525 return _default_decoder.decode(s)
526 if cls is None:
/opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in decode(self, s, _w, _PY3)
369 s = str(s, self.encoding)
--> 370 obj, end = self.raw_decode(s)
371 end = _w(s, end).end()
/opt/conda/lib/python3.7/site-packages/simplejson/decoder.py in raw_decode(self, s, idx, _w, _PY3)
399 idx += 3
--> 400 return self.scan_once(s, idx=_w(s, idx).end())
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
JSONDecodeError Traceback (most recent call last)
/tmp/ipykernel_358/370980805.py in <module>
1 # load Common Voice 7.0 dataset from Huggingface with Finnish "test" split
----> 2 test_dataset = load_dataset("mozilla-foundation/common_voice_7_0", "fi", split="test", use_auth_token=True)
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1690 ignore_verifications=ignore_verifications,
1691 try_from_hf_gcs=try_from_hf_gcs,
-> 1692 use_auth_token=use_auth_token,
1693 )
1694
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
604 if not downloaded_from_gcs:
605 self._download_and_prepare(
--> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
607 )
608 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1102
1103 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1105
1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
670 split_dict = SplitDict(dataset_name=self.name)
671 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 672 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
673
674 # Checksums verification
~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _split_generators(self, dl_manager)
151
152 self._log_download(self.config.name, bundle_version, hf_auth_token)
--> 153 archive = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template))
154
155 if self.config.version < datasets.Version("5.0.0"):
~/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_7_0/fe20cac47c166e25b1f096ab661832e3da7cf298ed4a91dcaa1343ad972d175b/common_voice_7_0.py in _get_bundle_url(self, locale, url_template)
130 path = urllib.parse.quote(path.encode("utf-8"), safe="~()*!.'")
131 use_cdn = self.config.size_bytes < 20 * 1024 * 1024 * 1024
--> 132 response = requests.get(f"{_API_URL}/bucket/dataset/{path}/{use_cdn}", timeout=10.0).json()
133 return response["url"]
134
/opt/conda/lib/python3.7/site-packages/requests/models.py in json(self, **kwargs)
915 raise RequestsJSONDecodeError(e.message)
916 else:
--> 917 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
918
919 @property
JSONDecodeError: [Errno Expecting value] Not Found: 0
```
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.10.90+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 5.0.0
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4062/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4061/comments | https://api.github.com/repos/huggingface/datasets/issues/4061/events | https://github.com/huggingface/datasets/issues/4061 | 1,186,317,071 | I_kwDODunzps5GtcMP | 4,061 | Loading cnn_dailymail dataset failed | {
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Arij-Aladel, thanks for reporting.\r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it in our 2.0.0 release. See:\r\n- #3787 \r\n\r\nPlease, update your `datasets` version:\r\n```\r\npip install -U datasets\r\n```\r\nand retry loading the dataset by forcing its redownload:\r\n```python\r\ndataset = load_dataset(\"cnn_dailymail\", \"3.0.0\", download_mode=\"force_redownload\")\r\n```"
] | 1,648,639,742,000 | 1,648,647,374,000 | 1,648,647,374,000 | NONE | null | ## Describe the bug
I wanted to load the cnn_dailymail dataset from Hugging Face datasets on JupyterLab, but I am getting a `NotADirectoryError: [Errno 20] Not a directory` error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
## Expected results
load the `cnn_dailymail` dataset successfully
## Actual results
failed to load with the following error:
> NotADirectoryError: [Errno 20] Not a directory
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Ubuntu-20.04
- Python version: 3.9.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4061/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4060/comments | https://api.github.com/repos/huggingface/datasets/issues/4060/events | https://github.com/huggingface/datasets/pull/4060 | 1,186,281,033 | PR_kwDODunzps41Tbmg | 4,060 | Deprecate canonical Multilingual Librispeech | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Yes, as discussed in #4006 we should update facebook/multilingual_librispeech indeed before we do a release. @anton-l could you help taking care of updating facebook/multilingual_librispeech ? We need to update the task template\r\n```python\r\ntask_templates=[AutomaticSpeechRecognition(audio_column=\"audio\", transcription_column=\"text\")],\r\n```\r\nand write that `datasets>=2.1` is necessary to load it in the dataset card.\r\n\r\nOnce the change is done we can merge this PR and do the release I think",
"@polinaeterna @lhoestq \r\nUpdated the script and the dataset card: https://huggingface.co/datasets/facebook/multilingual_librispeech ",
"@anton-l @lhoestq now previewer doesn't work for this datasets as it cannot recognize new `audio_column` argument:\r\n![image](https://user-images.githubusercontent.com/16348744/161233533-3170760b-5141-4525-9592-6675669c223a.png)\r\n\r\nI'm not an expert in previewer things, where should I look into the corresponding code?",
"Yes, there are several datasets with the same error, eg https://github.com/huggingface/datasets-preview-backend/issues/188. I'm not sure what I should do to fix this? Upgrade datasets to master?\r\n",
"@anton-l ended up removing the task template in facebook/multilingual_librispeech to make it work for the current version of `datasets` and fix the viewer :) thanks !",
"@lhoestq can we merge now? ^^"
] | 1,648,637,816,000 | 1,648,817,645,000 | 1,648,817,331,000 | CONTRIBUTOR | null | Deprecate canonical Multilingual Librispeech in favor of [the community one](https://huggingface.co/datasets/facebook/multilingual_librispeech) which supports streaming.
However, there is a problem regarding the new ASR template schema: since it has changed, I guess all community datasets that use this template no longer work with the new version of the library, including MLS. Should we somehow notify users about that, or is it possible to change this line ourselves? For MLS specifically, I cannot change the code directly as I'm not a member of the Facebook org.
Hm, and the code should be changed after the release, no? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4060/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4060",
"html_url": "https://github.com/huggingface/datasets/pull/4060",
"diff_url": "https://github.com/huggingface/datasets/pull/4060.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4060.patch",
"merged_at": 1648817331000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4059/comments | https://api.github.com/repos/huggingface/datasets/issues/4059/events | https://github.com/huggingface/datasets/pull/4059 | 1,186,149,949 | PR_kwDODunzps41TC-o | 4,059 | Load GitHub datasets from Hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4059). All of your documentation changes will be reflected on that endpoint.",
"Currently the github datasets versioning is synced with the `datasets` lib versioning: when you load a github dataset using `datasets==x.y.z`, then the version of the dataset will be the one at the git tag `x.y.z`. This is for reproducibility reasons.\r\n\r\nWe could stop having this behavior and always use the latest version of the dataset, but when we do a breaking change it will break github datasets for previous versions of the library. It could be nice to think about tools that will allow backward compatibility if we ever need to to a breaking change in some datasets. Maybe a way to specify which revision of the dataset to use based on the `datasets` major version.\r\n\r\nIf we keep this behavior, then maybe add a note in setup.py to push to PyPI only after the `Update Hub repositories` CI job is done. It can take a few minutes to add the version tag to all the dataset repositories on the Hub. If we push to PyPI before the tags are pushed, then some users might get some 404 if at the same time they installed `datasets` and run `load_dataset`.",
"@lhoestq I was going to increase the `max_retries` as done for metrics:\r\n- #4063 \r\n\r\nBut then I realized that loading from the Hub would work as well. That is why I opened this PR.\r\n\r\nDefinitely, we should decide which behavior we want:\r\n- We have been working in the direction of eliminating the distinctions between canonical/community datasets\r\n- If we continue to go in that direction, then passing (or not passing) `revision` should have the same behavior for canonical/community\r\n- If we want to continue to tight the library version with the canonical datasets version, that is definitely a difference between canonical and community datasets\r\n\r\nNot sure what could be better in the long term...",
"> We could stop having this behavior and always use the latest version of the dataset, but when we do a breaking change it will break github datasets for previous versions of the library. \r\n\r\nNot sure of understanding this. Previous versions of the `datasets` library will continue to download GitHub datasets from GitHub, syncing library/dataset versions... Where is the problem?",
"Yes you're right, previous versions of `datasets` will still continue to download from github, but not future versions.\r\nIf we release `datasets` 2.1 by removing this behavior and if one day we release `datasets` 3.0 with a breaking change in the dataset scripts, then all version >=2.1 will break.",
"Ideally we should drop the differences between github datasets and community datasets, and maybe provide a way to fallback on an older version of a dataset repository if the user's `datasets` version is too old and incompatible with it."
] | 1,648,632,116,000 | 1,648,738,089,000 | null | MEMBER | null | We have recurrently had connection errors when requesting GitHub because sometimes the site is not available.
This PR requests the Hub instead, once all GitHub datasets are mirrored on the Hub (see the version-pinning sketch after the issue list below).
Fix #2048
Related to:
- #4051
- #3210
- #2787
- #2075
- #2036
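For context, a minimal sketch of the version pinning discussed in the comments above; the `revision` argument of `load_dataset` is real, but the tag value shown is just an example:
```python
from datasets import load_dataset

# Pin the dataset script to the revision tagged "2.0.0" instead of
# whatever the latest revision on the Hub (or GitHub) happens to be.
ds = load_dataset("glue", "sst2", revision="2.0.0")
```
Whether such per-release tags keep being pushed, and whether older library versions can fall back to them, is exactly the open question raised in the comments. | {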
"url": "https://api.github.com/repos/huggingface/datasets/issues/4059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4059/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4059",
"html_url": "https://github.com/huggingface/datasets/pull/4059",
"diff_url": "https://github.com/huggingface/datasets/pull/4059.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4059.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4058/comments | https://api.github.com/repos/huggingface/datasets/issues/4058/events | https://github.com/huggingface/datasets/pull/4058 | 1,185,611,600 | PR_kwDODunzps41RPhl | 4,058 | Updated annotations for nli_tr dataset | {
"login": "e-budur",
"id": 2246791,
"node_id": "MDQ6VXNlcjIyNDY3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2246791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-budur",
"html_url": "https://github.com/e-budur",
"followers_url": "https://api.github.com/users/e-budur/followers",
"following_url": "https://api.github.com/users/e-budur/following{/other_user}",
"gists_url": "https://api.github.com/users/e-budur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-budur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-budur/subscriptions",
"organizations_url": "https://api.github.com/users/e-budur/orgs",
"repos_url": "https://api.github.com/users/e-budur/repos",
"events_url": "https://api.github.com/users/e-budur/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-budur/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4058). All of your documentation changes will be reflected on that endpoint."
] | 1,648,597,619,000 | 1,648,598,158,000 | null | CONTRIBUTOR | null | This PR adds annotation tags for the `nli_tr` dataset so that it becomes searchable via the relevant query parameters.
The annotations in this PR are based on the existing annotations of `snli` and `multi_nli` datasets as `nli_tr` is a machine-generated extension of those datasets.
This PR is intended only for updating the annotation labels but a followup PR will focus on updating the missing sections in the `README.md` as well.
Thanks for taking the time to review it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4058/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4058",
"html_url": "https://github.com/huggingface/datasets/pull/4058",
"diff_url": "https://github.com/huggingface/datasets/pull/4058.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4058.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4057/comments | https://api.github.com/repos/huggingface/datasets/issues/4057/events | https://github.com/huggingface/datasets/issues/4057 | 1,185,442,001 | I_kwDODunzps5GqGjR | 4,057 | `load_dataset` consumes too much memory | {
"login": "JFCeron",
"id": 50839826,
"node_id": "MDQ6VXNlcjUwODM5ODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/50839826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JFCeron",
"html_url": "https://github.com/JFCeron",
"followers_url": "https://api.github.com/users/JFCeron/followers",
"following_url": "https://api.github.com/users/JFCeron/following{/other_user}",
"gists_url": "https://api.github.com/users/JFCeron/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JFCeron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JFCeron/subscriptions",
"organizations_url": "https://api.github.com/users/JFCeron/orgs",
"repos_url": "https://api.github.com/users/JFCeron/repos",
"events_url": "https://api.github.com/users/JFCeron/events{/privacy}",
"received_events_url": "https://api.github.com/users/JFCeron/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Could it be because you need to free the memory used by `tarfile` by emptying the tar `members` by any chance ?\r\n```python\r\n yield key, {\"audio\": {\"path\": audio_name, \"bytes\": audio_file_obj.read()}}\r\n audio_tarfile.members = [] # free memory\r\n key += 1\r\n```\r\n\r\nand then you can set `DEFAULT_WRITER_BATCH_SIZE` to whatever value makes more sense for your dataset.\r\n\r\nLet me know if the issue persists (which could happen, given that you managed to run your generator without RAM issues and using os.walk didn't solve the issue)",
"Thanks for your reply! Tried it but the issue persists. "
] | 1,648,589,935,000 | 1,648,653,645,000 | null | NONE | null |
## Description
`load_dataset` consumes more and more memory until it's killed, even though it's built around a generator. I'm adding a loading script for a new dataset, made up of ~15s audio clips coming from a tar file. I tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741, but the problem persists.
## Steps to reproduce the bug
Here's my implementation of `_generate_examples`:
```python
class MyDatasetBuilder(datasets.GeneratorBasedBuilder):
DEFAULT_WRITER_BATCH_SIZE = 1
...
def _split_generators(self, dl_manager):
archive_path = dl_manager.download(_DL_URLS[self.config.name])
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"audio_tarfile_path": archive_path["audio_tarfile"]
},
),
]
def _generate_examples(self, audio_tarfile_path):
key = 0
with tarfile.open(audio_tarfile_path, mode="r|") as audio_tarfile:
for audio_tarinfo in audio_tarfile:
audio_name = audio_tarinfo.name
audio_file_obj = audio_tarfile.extractfile(audio_tarinfo)
yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}}
key += 1
```
I then try to load via `ds = load_dataset('./datasets/my_new_dataset', writer_batch_size=1)`, and memory usage grows until all 8GB of my machine are taken and the process is killed (`Killed`). I also tried an untarred version of this using `os.walk`, but the same happened.
I created a script to confirm that one can safely go through such a generator, which runs just fine with memory <500MB at all times.
```python
import tarfile
def generate_examples():
audio_tarfile = tarfile.open("audios.tar", mode="r|")
key = 0
for audio_tarinfo in audio_tarfile:
audio_name = audio_tarinfo.name
audio_file_obj = audio_tarfile.extractfile(audio_tarinfo)
yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}}
key += 1
if __name__ == "__main__":
examples = generate_examples()
for example in examples:
pass
```
## Expected results
Memory consumption should be similar to the non-huggingface script.
## Actual results
Process is killed after consuming too much memory.
## Environment info
- `datasets` version: 2.0.1.dev0
- Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- PyArrow version: 6.0.1
- Pandas version: 1.3.5 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4057/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4057/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4056/comments | https://api.github.com/repos/huggingface/datasets/issues/4056/events | https://github.com/huggingface/datasets/issues/4056 | 1,185,155,775 | I_kwDODunzps5GpAq_ | 4,056 | Unexpected behavior of _TempDirWithCustomCleanup | {
"login": "JonasGeiping",
"id": 22680696,
"node_id": "MDQ6VXNlcjIyNjgwNjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/22680696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JonasGeiping",
"html_url": "https://github.com/JonasGeiping",
"followers_url": "https://api.github.com/users/JonasGeiping/followers",
"following_url": "https://api.github.com/users/JonasGeiping/following{/other_user}",
"gists_url": "https://api.github.com/users/JonasGeiping/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JonasGeiping/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonasGeiping/subscriptions",
"organizations_url": "https://api.github.com/users/JonasGeiping/orgs",
"repos_url": "https://api.github.com/users/JonasGeiping/repos",
"events_url": "https://api.github.com/users/JonasGeiping/events{/privacy}",
"received_events_url": "https://api.github.com/users/JonasGeiping/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Would setting TMPDIR at the beginning of your python script/session work ? I mean, even before importing transformers, datasets, etc. and using them ? I think this would be the most robust solution given any library that uses `tempfile`. I don't think we aim to support environment variables to be changed at run time",
"Hi, yeah setting the environment variable before the imports / as environment variable outside is another way to fix this. I am just arguing that `datasets` already uses its own global variable to track temporary files: `_TEMP_DIR_FOR_TEMP_CACHE_FILES`, and the creation of this global variable should respect TMPDIR instead of relying on tempfile to do so."
] | 1,648,573,102,000 | 1,648,652,884,000 | null | NONE | null | ## Describe the bug
This is not 100% a bug in `datasets`, but behavior that surprised me, and I think this could be made more robust on the `datasets` side.
When using `datasets.disable_caching()`, cache files are written to a temporary directory. This directory should be based on the environment variable TMPDIR. I want to set TMPDIR at runtime using `os.environ["TMPDIR"] = something`, but depending on other imported modules this can fail to take effect.
## Steps to reproduce the bug
`_TempDirWithCustomCleanup` relies on `tempfile` to generate a path to a temporary directory. However, `tempfile` generates the path only once. This can be a problem when trying to set TMPDIR at runtime if other code imports `tempfile` first and does something unexpected.
For example (after too much trial and error) I found out that a different part of the code base I work with defines a class `PatchedDataCollatorForLanguageModeling(transformers.DataCollatorForLanguageModeling)` based on a `transformers` class. This import is enough to trigger `tempfile` to generate a temporary path, leading to the wrong path being cached in `tempfile.tempdir`.
## Suggestion:
I could also file this as a bug with `transformers`, but I think fixing this on the `datasets` side would be much more robust:
Datasets could recompute the temporary path once (technically possible via `tempfile._get_default_tempdir` or by resetting
the global variable `tempfile.tempdir` to None) before setting its own global `_TEMP_DIR_FOR_TEMP_CACHE_FILES`.
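A minimal sketch of what this could look like inside `datasets` (illustrative only; `_get_fresh_temp_dir` is a hypothetical helper name, not existing library code):
```python
import tempfile

def _get_fresh_temp_dir() -> str:
    # tempfile.gettempdir() memoizes its result in tempfile.tempdir on first
    # use; resetting it forces TMPDIR to be re-evaluated, so runtime changes
    # made via os.environ["TMPDIR"] actually take effect.
    tempfile.tempdir = None
    return tempfile.mkdtemp(prefix="hf_datasets_")
```
The global `_TEMP_DIR_FOR_TEMP_CACHE_FILES` would then be built from this freshly computed path instead of whatever `tempfile` cached at first use.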
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4056/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4055/comments | https://api.github.com/repos/huggingface/datasets/issues/4055/events | https://github.com/huggingface/datasets/pull/4055 | 1,184,976,292 | PR_kwDODunzps41PGF1 | 4,055 | [DO NOT MERGE] Test doc-builder | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Docs built successfully, so closing this."
] | 1,648,564,742,000 | 1,648,643,474,000 | 1,648,643,152,000 | MEMBER | null | This is a test PR to ensure the changes in https://github.com/huggingface/doc-builder/pull/164 don't break anything in `datasets` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4055/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4055",
"html_url": "https://github.com/huggingface/datasets/pull/4055",
"diff_url": "https://github.com/huggingface/datasets/pull/4055.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4055.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4054/comments | https://api.github.com/repos/huggingface/datasets/issues/4054/events | https://github.com/huggingface/datasets/pull/4054 | 1,184,575,368 | PR_kwDODunzps41Nwjz | 4,054 | Support float data types in pearsonr/spearmanr metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,546,150,000 | 1,648,562,879,000 | 1,648,562,540,000 | MEMBER | null | Fix #4053. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4054/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4054",
"html_url": "https://github.com/huggingface/datasets/pull/4054",
"diff_url": "https://github.com/huggingface/datasets/pull/4054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4054.patch",
"merged_at": 1648562540000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4053/comments | https://api.github.com/repos/huggingface/datasets/issues/4053/events | https://github.com/huggingface/datasets/issues/4053 | 1,184,500,378 | I_kwDODunzps5Gmgqa | 4,053 | Modify datatype from `int32` to `float` for pearsonr, spearmanr. | {
"login": "Woodywarhol9",
"id": 86637320,
"node_id": "MDQ6VXNlcjg2NjM3MzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/86637320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Woodywarhol9",
"html_url": "https://github.com/Woodywarhol9",
"followers_url": "https://api.github.com/users/Woodywarhol9/followers",
"following_url": "https://api.github.com/users/Woodywarhol9/following{/other_user}",
"gists_url": "https://api.github.com/users/Woodywarhol9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Woodywarhol9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Woodywarhol9/subscriptions",
"organizations_url": "https://api.github.com/users/Woodywarhol9/orgs",
"repos_url": "https://api.github.com/users/Woodywarhol9/repos",
"events_url": "https://api.github.com/users/Woodywarhol9/events{/privacy}",
"received_events_url": "https://api.github.com/users/Woodywarhol9/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@Woodywarhol9 good catch, thanks for reporting.\r\n\r\nWe are fixing this."
] | 1,648,542,461,000 | 1,648,562,540,000 | 1,648,562,540,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
- Currently, [Pearsonr](https://github.com/huggingface/datasets/blob/master/metrics/pearsonr/pearsonr.py) and [Spearmanr](https://github.com/huggingface/datasets/blob/master/metrics/spearmanr/spearmanr.py) both take input data as `int32`.
**Describe the solution you'd like**
- Considering that these metrics are widely used for the STS task (where labels are of `float` data type),
it would be better to change the data type from `int32` to `float` to obtain exact similarity values.
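A minimal sketch of the requested change, assuming the fix only needs the metric's declared feature types switched (illustrative, not an actual patch):
```python
import datasets

# In the metric's _info(): declare float inputs instead of int32 so that
# fractional STS labels are not truncated before the correlation is computed.
features = datasets.Features(
    {
        "predictions": datasets.Value("float"),  # was datasets.Value("int32")
        "references": datasets.Value("float"),   # was datasets.Value("int32")
    }
)
```
With `float` features, a label like `3.8` is passed through unchanged instead of being cast down to `3`. | {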
"url": "https://api.github.com/repos/huggingface/datasets/issues/4053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4053/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4052/comments | https://api.github.com/repos/huggingface/datasets/issues/4052/events | https://github.com/huggingface/datasets/issues/4052 | 1,184,447,977 | I_kwDODunzps5GmT3p | 4,052 | metric = metric_cls( TypeError: 'NoneType' object is not callable | {
"login": "klyuhang9",
"id": 39409233,
"node_id": "MDQ6VXNlcjM5NDA5MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klyuhang9",
"html_url": "https://github.com/klyuhang9",
"followers_url": "https://api.github.com/users/klyuhang9/followers",
"following_url": "https://api.github.com/users/klyuhang9/following{/other_user}",
"gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions",
"organizations_url": "https://api.github.com/users/klyuhang9/orgs",
"repos_url": "https://api.github.com/users/klyuhang9/repos",
"events_url": "https://api.github.com/users/klyuhang9/events{/privacy}",
"received_events_url": "https://api.github.com/users/klyuhang9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [2]: metric = load_metric('glue', 'rte')\r\nDownloading builder script: 5.76kB [00:00, 2.40MB/s]\r\n```\r\n\r\nCould you please, retry to load the metric? Sometimes there are temporary connectivity issues.\r\n\r\nFeel free to re-open this issue of the problem persists."
] | 1,648,539,788,000 | 1,648,562,761,000 | 1,648,562,761,000 | NONE | null | Hi, friend. I've run into a problem.
When I run the code:
`metric = load_metric('glue', 'rte')`
The following error is raised:
`metric = metric_cls(
TypeError: 'NoneType' object is not callable`
I don't know why. Thanks for your help!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4052/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4051/comments | https://api.github.com/repos/huggingface/datasets/issues/4051/events | https://github.com/huggingface/datasets/issues/4051 | 1,184,400,179 | I_kwDODunzps5GmIMz | 4,051 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py | {
"login": "klyuhang9",
"id": 39409233,
"node_id": "MDQ6VXNlcjM5NDA5MjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/39409233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klyuhang9",
"html_url": "https://github.com/klyuhang9",
"followers_url": "https://api.github.com/users/klyuhang9/followers",
"following_url": "https://api.github.com/users/klyuhang9/following{/other_user}",
"gists_url": "https://api.github.com/users/klyuhang9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klyuhang9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klyuhang9/subscriptions",
"organizations_url": "https://api.github.com/users/klyuhang9/orgs",
"repos_url": "https://api.github.com/users/klyuhang9/repos",
"events_url": "https://api.github.com/users/klyuhang9/events{/privacy}",
"received_events_url": "https://api.github.com/users/klyuhang9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [4]: ds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\nDownloading builder script: 28.8kB [00:00, 9.15MB/s] \r\nDownloading metadata: 28.7kB [00:00, 10.7MB/s] \r\nDownloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.78 MiB, post-processed: Unknown size, total: 11.88 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.44M/7.44M [00:01<00:00, 4.12MB/s]\r\nDataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. \r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1047.96it/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nPlease, note that sometimes GitHub has some temporary connectivity issues. Feel free to retry and re-open this issue if the problem persists."
] | 1,648,537,231,000 | 1,648,542,565,000 | 1,648,542,565,000 | NONE | null | Hi, I've run into a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
The following error is raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
I don't know why; the URL opens fine when I view it in Google Chrome.
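For readers hitting the same transient GitHub outage, a simple retry wrapper can tide things over (a sketch; the retry count and backoff are arbitrary choices, not part of this report):

```python
import time

from datasets import load_dataset

# Hypothetical retry helper: `datasets` raises the builtin ConnectionError
# when it cannot reach the script URL, so retry a few times with a pause.
def load_with_retries(*args, num_retries=3, wait_s=5, **kwargs):
    for attempt in range(num_retries):
        try:
            return load_dataset(*args, **kwargs)
        except ConnectionError:
            if attempt == num_retries - 1:
                raise
            time.sleep(wait_s)

dataset = load_with_retries('glue', 'sst2')
```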
Thanks for your help! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4051/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4050/comments | https://api.github.com/repos/huggingface/datasets/issues/4050/events | https://github.com/huggingface/datasets/pull/4050 | 1,184,346,501 | PR_kwDODunzps41NAMF | 4,050 | Add RVL-CDIP dataset | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4050). All of your documentation changes will be reflected on that endpoint.",
"Thanks a lot for inputs. I'll use the URL suggested and check.\r\n\r\n> we need to implement the streamable (can't use os.path.join) and the non-streamable versions of _generate_examples.\r\n\r\nSure. I will check the reference and try this out, will get back to you if I face any issues.\r\n\r\n> The labels-only data file URL doesn't work for me, so feel free to ask the authors whether they are OK with us hosting the file on the Hub/S3 (to speed up the streamable version)\r\n\r\nJust checked. The author (Adam Harley) has responded positively and allowed us to host the file. Do I share the file with you for hosting it on Hub/S3 ?",
"> Just checked. The author (Adam Harley) has responded positively and allowed us to host the file. Do I share the file with you for hosting it on Hub/S3 ?\r\n\r\nYes, feel free to e-mail me the file. Then I'll create a repo under my namespace and push the file there. We run a GH action on a GH dataset after merging to create its repo on the Hub, so after this PR is merged, I'll push the file to the \"official\" namespace and update the download link.",
"> You can use this URL to avoid manual download: https://drive.google.com/uc?export=download&id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc\r\n\r\nFor some reason, the direct download doesn't seem to work for me even with this URL. \r\n```\r\nDownloading and preparing dataset rvl_cdip/default to ~/.cache/huggingface/datasets/rvl_cdip/default/1.0.0/ea152149e06310d60a9ef3c3020199dd4780bb952a773ba5aac6b57d59f12628...\r\nDownloading data files: 100%|█████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6307.22it/s]\r\n{'rvl-cdip': '~/.cache/huggingface/datasets/downloads/07ef956a33750078d570d76fefe9fed49f7dc32ecf6e872d690de11e66bbe869'}\r\n```\r\nAnd this directory does not exist. Am I doing something wrong ?\r\nTo verify, I tried using [gdown](https://github.com/wkentaro/gdown) for the above URL, we get the following : \r\n```\r\nAccess denied with the following error:\r\n\r\n Cannot retrieve the public link of the file. You may need to change\r\n the permission to 'Anyone with the link', or have had many accesses. \r\n\r\nYou may still be able to access the file from the browser:\r\n```\r\n----\r\n\r\n> Yes, feel free to e-mail me the file. Then I'll create a repo under my namespace and push the file there. We run a GH action on a GH dataset after merging to create its repo on the Hub, so after this PR is merged, I'll push the file to the \"official\" namespace and update the download link.\r\n\r\nGot it. I've sent you an email with the file. Thank you.",
"Actually this URL works for direct download :\r\n`https://drive.google.com/uc?export=download&confirm=pbef&id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc`\r\nRef : https://github.com/wkentaro/gdown/issues/146#issuecomment-1042382215\r\n\r\nI'm working on the streamable versions of _generate_examples as well, will update you regarding this.",
"Google Drive is a tricky host, and it's easy to exceed daily download quota limits, so if we are allowed to host the `rvl-cdip.tar.gz` file, I can push it to the Hub.",
"Just checked, the authors have agreed. He mentioned that he had complaints about the GDrive link.\r\nYou can push it to the Hub and share the link. :)",
"I have added :\r\n- streaming support for rvl-cdip.tar.gz file. [ Need to test this ]\r\n\r\nIs it possible for you to upload the train.txt, test.txt, val.txt files separately to the Hub instead of labels_only.tar.gz file.\r\nCurrently during the tests in stream mode, we get : \r\n`NotImplementedError: Extraction protocol for TAR archives like 'https://huggingface.co/datasets/mariosasko/rvl_cdip/resolve/main/labels_only.tar.gz' is not implemented in streaming mode. Please use dl_manager.iter_archive instead.`\r\nIf the label files are present as .txt files then we can directly use dl_manager.download.\r\n\r\n\r\n"
] | 1,648,533,602,000 | 1,649,256,044,000 | null | CONTRIBUTOR | null | Resolves #2762
Dataset Request: Add RVL-CDIP dataset [#2762](https://github.com/huggingface/datasets/issues/2762)
This PR adds the RVL-CDIP dataset.
The dataset is distributed via a Google Drive link and wasn't downloading automatically, so I have provided manual_download_instructions.
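For context, loading a manual-download dataset typically looks like this once the archive has been fetched by hand (a sketch; the local path is a placeholder):

```python
from datasets import load_dataset

# After manually downloading the RVL-CDIP archive per the instructions,
# point data_dir at the folder containing the downloaded file(s).
ds = load_dataset("rvl_cdip", data_dir="/path/to/manual/downloads")
```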
- I have added the dummy_data.zip as well.
Could I get input on how to run the real-data and dummy-data tests for datasets that require a manual download?
Inputs and suggestions for improvement are welcome. Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4050/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4050",
"html_url": "https://github.com/huggingface/datasets/pull/4050",
"diff_url": "https://github.com/huggingface/datasets/pull/4050.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4050.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4049/comments | https://api.github.com/repos/huggingface/datasets/issues/4049/events | https://github.com/huggingface/datasets/pull/4049 | 1,183,832,893 | PR_kwDODunzps41LSjv | 4,049 | Create metric card for the Code Eval metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"if possible, give relevant names to your Pull requests @sashavor (make it easier to scan the repo activity) Thanks!",
"updating them now! thanks for the feedback @julien-c "
] | 1,648,492,463,000 | 1,648,561,092,000 | 1,648,560,770,000 | CONTRIBUTOR | null | Creating initial Code Eval metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4049/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4049",
"html_url": "https://github.com/huggingface/datasets/pull/4049",
"diff_url": "https://github.com/huggingface/datasets/pull/4049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4049.patch",
"merged_at": 1648560770000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4048/comments | https://api.github.com/repos/huggingface/datasets/issues/4048/events | https://github.com/huggingface/datasets/issues/4048 | 1,183,804,576 | I_kwDODunzps5Gj2yg | 4,048 | Split size error on `amazon_us_reviews` / `PC_v1_00` dataset | {
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | open | false | {
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "trentonstrong",
"id": 191985,
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trentonstrong",
"html_url": "https://github.com/trentonstrong",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Follow-up: I have confirmed there are no duplicate lines via `sort amazon_reviews_us_PC_v1_00.tsv | uniq -cd` after extracting the raw file.",
"Hi @trentonstrong, thanks for reporting!\r\n\r\nI confirm that loading this dataset configuration throws a `NonMatchingSplitsSizesError`:\r\n```\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=350242049, num_examples=785730, dataset_name='amazon_us_reviews'), 'recorded': SplitInfo(name='train', num_bytes=3982712078, num_examples=6908554, dataset_name='amazon_us_reviews')}]\r\n```\r\n\r\nAlso thank you for your offer to fix this. You can find information about how to update the metadata JSON file here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata\r\n```shell\r\ndatasets-cli test datasets/amazon_us_reviews --save_infos --all_configs\r\n```\r\nPlease, feel free to open a PR with this fix. And do not hesitate to ping me if you need any help.",
"No sweat. Will get it patched up ASAP."
] | 1,648,491,124,000 | 1,649,057,249,000 | null | NONE | null | ## Describe the bug
When downloading this subset as of 3-28-2022, you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly 6M rows, while the split metadata expects fewer than 1M.
Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz` and extracted them. A line count via `wc -l` confirms the ~6m number that we see and the data looks valid at a glance (I did not check for duplicate rows). My guess is this file has either been updated in place or there is a bug in the dataset metadata.
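For anyone who wants to re-check the count without shell tools, here is a rough Python equivalent (assumes the `.tsv.gz` has already been downloaded locally):

```python
import gzip

# Count data rows in the raw TSV, subtracting the header line.
with gzip.open("amazon_reviews_us_PC_v1_00.tsv.gz", "rt") as f:
    n_rows = sum(1 for _ in f) - 1

print(n_rows)  # ~6.9M, far above the ~786k rows the split metadata records
```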
Happy to submit a PR and fix this up if it turns out to be a metadata issue, but I wanted to get some other :eyes: on it first.
## Steps to reproduce the bug
```python
from datasets import load_dataset

load_dataset('amazon_us_reviews', 'PC_v1_00')
```
## Expected results
Dataset is downloaded and extracted successfully.
## Actual results
A split size exception (`NonMatchingSplitsSizesError`) is thrown.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4048/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4047/comments | https://api.github.com/repos/huggingface/datasets/issues/4047/events | https://github.com/huggingface/datasets/issues/4047 | 1,183,789,237 | I_kwDODunzps5GjzC1 | 4,047 | Dataset.unique(column: str) -> ArrowNotImplementedError | {
"login": "orkenstein",
"id": 1461936,
"node_id": "MDQ6VXNlcjE0NjE5MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1461936?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orkenstein",
"html_url": "https://github.com/orkenstein",
"followers_url": "https://api.github.com/users/orkenstein/followers",
"following_url": "https://api.github.com/users/orkenstein/following{/other_user}",
"gists_url": "https://api.github.com/users/orkenstein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orkenstein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orkenstein/subscriptions",
"organizations_url": "https://api.github.com/users/orkenstein/orgs",
"repos_url": "https://api.github.com/users/orkenstein/repos",
"events_url": "https://api.github.com/users/orkenstein/events{/privacy}",
"received_events_url": "https://api.github.com/users/orkenstein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @orkenstein, thanks for reporting.\r\n\r\nPlease note that for this case, our `datasets` library uses under the hood the Apache Arrow `unique` function: https://arrow.apache.org/docs/python/generated/pyarrow.compute.unique.html#pyarrow.compute.unique\r\n\r\nAnd currently the Apache Arrow `unique` function is only implemented for these input types (see info in their [docs](https://arrow.apache.org/docs/cpp/compute.html#array-wise-vector-functions)): Boolean, Null, Numeric, Temporal, Binary- and String-like.\r\n\r\nHowever, the data types of the `wikiann` dataset are all `list<item: string>` (see its [dataset card](https://huggingface.co/datasets/wikiann#data-fields)), and thus, not yet supported by the Apache Arrow `unique` function.",
"As a workaround solution you can use pandas:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('wikiann', 'en', split='train')\r\ndf = dataset.to_pandas()\r\nunique_df = df[~df.tokens.apply(tuple).duplicated()] # from https://stackoverflow.com/a/46958336/17517845\r\n```\r\n\r\nNote that pandas loads the dataset in memory (this one is small so it's fine).",
"@lhoestq thank you! I will fall back to this method for now"
] | 1,648,490,372,000 | 1,648,837,497,000 | 1,648,837,497,000 | NONE | null | ## Describe the bug
I'm trying to use the `unique()` function, but it fails.
## Steps to reproduce the bug
1. Get dataset
2. Call `unique`
3. Error
## Sample code to reproduce the bug
```python
!pip show datasets
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
dataset['train'].column_names
dataset['train'].unique(dataset['train'].column_names[0])
```
## Expected results
It would be nice to actually see the unique items.
## Actual results
Error:
```python
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
<ipython-input-10-5e0de07ed42c> in <module>()
6
7 dataset['train'].column_names
----> 8 dataset['train'].unique(dataset['train'].column_names[0])
5 frames
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Function unique has no kernel matching input types (array[list<item: string>])
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Google Colab
- Python version: 3.7.13
- PyArrow version: 6.0.1
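As a stop-gap until Arrow implements `unique` for list types, the pandas workaround from the comments works (note that it loads the whole split into memory):

```python
from datasets import load_dataset

dataset = load_dataset('wikiann', 'en', split='train')
df = dataset.to_pandas()
# Lists are unhashable, so cast each token list to a tuple before de-duplicating.
unique_df = df[~df.tokens.apply(tuple).duplicated()]
```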
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4047/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4046/comments | https://api.github.com/repos/huggingface/datasets/issues/4046/events | https://github.com/huggingface/datasets/pull/4046 | 1,183,723,360 | PR_kwDODunzps41K6_H | 4,046 | Create metric card for XNLI | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,486,678,000 | 1,648,560,779,000 | 1,648,560,450,000 | CONTRIBUTOR | null | Proposing a metric card for XNLI | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4046/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4046",
"html_url": "https://github.com/huggingface/datasets/pull/4046",
"diff_url": "https://github.com/huggingface/datasets/pull/4046.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4046.patch",
"merged_at": 1648560450000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4045/comments | https://api.github.com/repos/huggingface/datasets/issues/4045/events | https://github.com/huggingface/datasets/pull/4045 | 1,183,661,091 | PR_kwDODunzps41KtfV | 4,045 | Fix CLI dummy data generation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,483,755,000 | 1,648,739,052,000 | 1,648,738,746,000 | MEMBER | null | PR:
- #3868 broke the CLI dummy data generation.
Fix #4044. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4045/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4045",
"html_url": "https://github.com/huggingface/datasets/pull/4045",
"diff_url": "https://github.com/huggingface/datasets/pull/4045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4045.patch",
"merged_at": 1648738746000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4044/comments | https://api.github.com/repos/huggingface/datasets/issues/4044/events | https://github.com/huggingface/datasets/issues/4044 | 1,183,658,942 | I_kwDODunzps5GjTO- | 4,044 | CLI dummy data generation is broken | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,648,483,657,000 | 1,648,738,746,000 | 1,648,738,746,000 | MEMBER | null | ## Describe the bug
We get a TypeError when running CLI dummy data generation:
```shell
datasets-cli dummy_data datasets/<your-dataset-folder> --auto_generate
```
gives:
```
File ".../huggingface/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data
dataset_builder._prepare_split(split_generator)
TypeError: _prepare_split() missing 1 required positional argument: 'check_duplicate_keys'
```
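For reference, the fix is presumably to forward the new argument at the CLI call site; a minimal sketch (whether `False` is the right value for dummy data generation is an assumption):

```python
# Hypothetical patch inside dummy_data.py's _autogenerate_dummy_data:
dataset_builder._prepare_split(split_generator, check_duplicate_keys=False)
```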
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4044/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4043/comments | https://api.github.com/repos/huggingface/datasets/issues/4043/events | https://github.com/huggingface/datasets/pull/4043 | 1,183,624,475 | PR_kwDODunzps41Kl0b | 4,043 | Create metric card for CUAD | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,481,938,000 | 1,648,567,256,000 | 1,648,566,919,000 | CONTRIBUTOR | null | Proposing a CUAD metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4043/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4043",
"html_url": "https://github.com/huggingface/datasets/pull/4043",
"diff_url": "https://github.com/huggingface/datasets/pull/4043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4043.patch",
"merged_at": 1648566919000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4042/comments | https://api.github.com/repos/huggingface/datasets/issues/4042/events | https://github.com/huggingface/datasets/issues/4042 | 1,183,606,855 | I_kwDODunzps5GjGhH | 4,042 | Standardize metric ranges | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
},
{
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,648,481,085,000 | 1,648,481,086,000 | null | CONTRIBUTOR | null | Several common metrics, like `exact_match` and `f1`, sometimes range from 0-1, and sometimes from 0-100.
As discussed with @lhoestq, we think it makes more sense to report them from 0-1, which would entail changing the code of metrics such as [CUAD](https://github.com/huggingface/datasets/blob/master/metrics/cuad/cuad.py).
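Concretely, the change per metric is just a rescaling of the returned value; an illustrative sketch (the numbers are made up, this is not CUAD's actual code):

```python
# Illustrative only: how a 0-100 score maps onto the 0-1 convention.
matches = [1, 0, 1, 1]  # hypothetical per-example exact-match indicators

score_0_100 = 100.0 * sum(matches) / len(matches)  # current convention: 75.0
score_0_1 = sum(matches) / len(matches)            # proposed convention: 0.75
```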
@emibaylor and I will add other metrics here that we see reporting scores from 0-100 instead of 0-1. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4042/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4041/comments | https://api.github.com/repos/huggingface/datasets/issues/4041/events | https://github.com/huggingface/datasets/issues/4041 | 1,183,599,461 | I_kwDODunzps5GjEtl | 4,041 | Add support for IIIF in datasets | {
"login": "davanstrien",
"id": 8995957,
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davanstrien",
"html_url": "https://github.com/davanstrien",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Thanks for the detailed analysis of adding IIIF support. I like the idea of \"using IIIF through datasets scripts\" due to its ease of use. Another approach that I like is yielding image ids and using the `piffle` library (which offers a bit more flexibility) + `map` to download + cache images. We can handle bad URLs in `map` by returning `None`. Plus, we can add a `Dataset Preprocessing` section with the code that explains this approach to the card of such datasets. WDYT?\r\n\r\n> currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models.\r\n\r\nThis is why (currently) adding a new feature type would be overkill, IMO.\r\n"
] | 1,648,480,765,000 | 1,649,182,853,000 | null | CONTRIBUTOR | null | This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred.
## What is [IIIF](https://iiif.io/)?
IIIF (International Image Interoperability Framework)
> is a set of open standards for delivering high-quality, attributed digital objects online at scale. It’s also an international community developing and implementing the IIIF APIs. IIIF is backed by a consortium of leading cultural institutions.
The tl;dr is that IIIF provides various specifications for implementing useful functionality for:
- Institutions to make images available for various use cases
- Users to have a consistent way of interacting with/requesting these images
- Developers to have a common standard for building tools for working with IIIF images that will work across all institutions implementing a particular IIIF standard (for example, an image viewer built for the BNF can also work for the Library of Congress if they both use IIIF).
Some institutions with various levels of IIIF support include: The British Library, Internet Archive, Library of Congress, Wikidata. There are also many smaller institutions that have IIIF support. An incomplete list can be found here: https://iiif.io/guides/finding_resources/
## IIIF APIs
IIIF consists of a number of APIs which could be integrated with datasets. I think the most obvious candidate for inclusion would be the [Image API](https://iiif.io/api/image/3.0/).
### IIIF Image API
The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL:
```{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}```
A concrete example of this:
```https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg```
As you can see, the scheme offers a number of options that can be specified in the URL, for example the size. Requesting the example URL returns:
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg)
We can request a 250 by 250 version by changing the size segment from `full` to `250,250`, i.e. switching the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg`
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg)
We can also request the image with a max width and height of 250 whilst maintaining the aspect ratio, using `!w,h`, i.e. changing the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg`
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg)
A full overview of the options for size can be found here: https://iiif.io/api/image/3.0/#42-size
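To make the template concrete, here is a small Python sketch of building such URLs (the helper and its defaults are illustrative only; it folds the optional `{/prefix}` into the server argument):

```python
# Sketch of the IIIF Image API URL template:
# {scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
def iiif_image_url(server, identifier, region="full", size="full",
                   rotation=0, quality="default", fmt="jpg", scheme="https"):
    return f"{scheme}://{server}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

url = iiif_image_url(
    "stacks.stanford.edu/image/iiif",
    "hg676jb4964%2F0380_796-44",
    size="!250,250",  # max 250x250, preserving aspect ratio
)
# -> https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg
```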
## Why would/could this be useful for datasets?
There are a few reasons why support for the IIIF Image API could be useful. Broadly, having more control over how an image is returned from a server is useful for many ML workflows:
- images can be requested at the right size, which prevents having to download/stream large images when the actual desired size is much smaller
- a sub-region of an image can be selected: this could be useful, for example, when you already have a bounding box for a subset of an image and then want to use that subset for another task. For example, https://github.com/Living-with-machines/nnanno uses IIIF to request the parts of a newspaper image that have been detected as 'photograph', 'illustration' etc. for downstream use.
- options for quality, rotation, and format can all be encoded in the URL request.
These options become particularly useful when pre-training models on large image datasets, where the cost of downloading 1600-pixel-wide images when you actually want 240-pixel-wide ones adds up.
## What could this look like in datasets?
I think there are various ways in which support for IIIF could potentially be included in `datasets`. These suggestions aren't fully fleshed out but hopefully give a sense of possible approaches that fit existing `datasets` conventions.
### Use through datasets scripts
Loading images via URL is already supported. There are a few possible 'extras' that could be included when using IIIF. One option is to leverage the IIIF protocol in datasets scripts, i.e. the dataset script can expose the IIIF options directly:
```python
ds = load_dataset("iiif_dataset", image_size="250,250", fmt="jpg")
```
This is already possible. The approach to parsing the IIIF URLs would be left to the person creating the dataset script.
### Support through dataset scripts (with some datasets support)
This is similar to the above, but `datasets` would offer some way of saying that this is an IIIF URL and would then expose the options associated with IIIF images automatically, i.e. if you did something like:
```python
features = {"label": ClassLabel(names=['dog','cat']),
"url": datasets.IIIFURL()}
```
inside your loading script, the `size`, `fmt`, etc. options would automatically be exposed when loading the dataset.
### Other possible integrations
Some other possible ways (in pseudocode) that a user could interact with IIIF URLs:
The ability to cast to an `IIIFImage` feature type:
```python
ds.cast_column('url', IIIFImage, download=False)
```
The ability to specify some options associated with IIIF URLs:
```python
ds = ds.set_iiif_options(column='url', size="250,250")
```
I think all of these would rely on having an `IIIFImage` feature type - this would be a little bit of a Frankenstein between a `string` and `datasets.Image`. I think most of the actual image behaviour would be exactly the same as `datasets.Image`; the difference would be that the underlying URL could be modified in various ways.
## prerequisite requirements
There are a few pre-requisites that I can anticipate. This doesn't cover a full implementation of IIIF support which would have different requirements depending on the approach taken to implementing IIIF. Some of these features would be useful independently of adding IIIF support:
### Support for handling failed images loaded via a URL (or a specific IIIFImage feature)
Working with images via web requests will inevitably produce the odd failed request. If these images are then requested and fail to return, it would be useful to have `None` returned instead of an error. For example, when using `push_to_hub`, `datasets` will try to include the image but currently fails with bad URLs.
```python
from datasets import Dataset
import datasets
urls = ['https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg']*3
urls.append("badurl.com/image.jpg")
data = {"url":urls}
ds = Dataset.from_dict(data)
ds = ds.cast_column('url', datasets.Image())
ds[3]['url']
```
returns a `FileNotFoundError`. For streaming large datasets of images using their URLs, it could be useful to have `None` returned instead. This has implications for the actual training loop, i.e. you now need to somehow skip those examples; because of this, it might not be desirable to support this.
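As a sketch of what tolerant, user-side loading could look like today (the helper is hypothetical, not an existing `datasets` API):

```python
from io import BytesIO

import requests
from PIL import Image

# Hypothetical tolerant loader: return None instead of raising on a bad URL,
# so a map() over URLs can skip failed downloads downstream.
def try_load_image(example, timeout=10):
    try:
        resp = requests.get(example["url"], timeout=timeout)
        resp.raise_for_status()
        example["image"] = Image.open(BytesIO(resp.content))
    except Exception:
        example["image"] = None
    return example

# ds = ds.map(try_load_image)  # rows with bad URLs end up with image=None
```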
### Caching support
Since IIIF requests images via a URL, it would be great to have a way of not requesting the images multiple times. This is tracked in https://github.com/huggingface/datasets/issues/3142, and I think this would also be very desirable to have here, particularly as one of the primary use cases of IIIF may be to do unsupervised pre-training on large datasets of IIIF URLs.
### Support for Parsing IIIF URLs
This gets closer to the actual implementation. Here the requirement would be some way for `datasets` to parse a URL that the users specify is an IIIF URL. An example of a Python library that does this: https://github.com/Princeton-CDH/piffle. I also have a rough version that uses `dataclasses` which I can share.
## Why it might not be worthwhile/suitable for datasets
There are some reasons that this might not be worth implementing:
- currently, IIIF is mainly used by cultural heritage organizations (museums, archives, etc.). The adoption of IIIF in this sector has been growing, but it's possible that adoption won't extend to other industries which may also be a source of image data for training ML models.
- It may end up being better to leave this to the user. It would, for example, be possible for someone to write map functions to change an IIIF URL to the correct size etc. Adding direct support for IIIF in datasets may not be worth the trouble.
- The choice of image-scaling approach can impact the downstream model's performance, see: https://twitter.com/wightmanr/status/1479528581466243073?s=20. Since different IIIF image servers may implement different approaches to resizing images, this could have a downstream impact on model performance. I think this is something that could be flagged to the end-user in the documentation. This probably also falls into the general "gotchas" that aren't the `datasets` library's role to protect users from.
Some of the requirements outlined above would be useful for images anyway. These could be implemented prior to a final decision about whether IIIF support could/should be added to datasets.
## Suggested next steps:
I realise this is a long and slightly open-ended issue. I am happy to clarify/answer questions on IIIF and possible integrations. If the prerequisite requirements seem worth exploring/are better explored in their own issues let me know and I can open new issues for those.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4041/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4041/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4040/comments | https://api.github.com/repos/huggingface/datasets/issues/4040/events | https://github.com/huggingface/datasets/issues/4040 | 1,183,556,085 | I_kwDODunzps5Gi6H1 | 4,040 | Calling existing metrics from other metrics | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"That's definitely the way to go to avoid implementation bugs and making sure we can fix issues globally when we detect them in a metric. Thanks for reporting!",
"CC @emibaylor "
] | 1,648,478,892,000 | 1,648,579,118,000 | null | CONTRIBUTOR | null | There are several cases of metrics calling other metrics, e.g. [Wiki Split](https://huggingface.co/metrics/wiki_split) which calls [BLEU](https://huggingface.co/metrics/bleu) and [SARI](https://huggingface.co/metrics/sari). These are all currently re-implemented each time (often with external code).
A potentially more efficient and centralized way of doing things would be to have a single implementation and call that implementation from the composing metrics.
E.g., @lhoestq's proposal:
```python
def _compute(...):
bleu = load_metric("bleu", cache_dir=self.cache_dir, seed=self.seed)
output = bleu._compute(...)
```
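For concreteness, a slightly fuller, hedged sketch of the same composition pattern (the `cache_dir`/`seed` arguments in the proposal above are not part of any existing signature, so they are omitted here; `load_metric` and the BLEU input format follow the current library):
```python
from datasets import load_metric

# Hypothetical composite metric: delegate to the canonical BLEU implementation
# instead of re-implementing it (a sketch only; the reorg API may look different).
def _compute(predictions, references):
    bleu = load_metric("bleu")
    # The BLEU metric expects tokenized inputs: token lists for predictions,
    # and lists of reference token lists for references.
    bleu_results = bleu.compute(
        predictions=[pred.split() for pred in predictions],
        references=[[ref.split()] for ref in references],
    )
    return {"bleu": bleu_results["bleu"]}
```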
Something to keep in mind for the big metric reorg! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4040/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4040/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4039/comments | https://api.github.com/repos/huggingface/datasets/issues/4039/events | https://github.com/huggingface/datasets/pull/4039 | 1,183,468,927 | PR_kwDODunzps41KFIf | 4,039 | Support streaming xcopa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,475,155,000 | 1,648,484,808,000 | 1,648,484,506,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4039/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4039",
"html_url": "https://github.com/huggingface/datasets/pull/4039",
"diff_url": "https://github.com/huggingface/datasets/pull/4039.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4039.patch",
"merged_at": 1648484506000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4038/comments | https://api.github.com/repos/huggingface/datasets/issues/4038/events | https://github.com/huggingface/datasets/pull/4038 | 1,183,189,827 | PR_kwDODunzps41JKUG | 4,038 | [DO NOT MERGE] Test doc-builder with skipped installation feature | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Fix in https://github.com/huggingface/doc-builder/pull/162 works as expected (docs build), closing this"
] | 1,648,461,511,000 | 1,648,470,845,000 | 1,648,470,549,000 | MEMBER | null | This PR is just for testing that we can build PR docs with changes made on the [`skip-install-for-real`](https://github.com/huggingface/doc-builder/tree/skip-install-for-real) branch of `doc-builder`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4038/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4038",
"html_url": "https://github.com/huggingface/datasets/pull/4038",
"diff_url": "https://github.com/huggingface/datasets/pull/4038.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4038.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4037/comments | https://api.github.com/repos/huggingface/datasets/issues/4037/events | https://github.com/huggingface/datasets/issues/4037 | 1,183,144,486 | I_kwDODunzps5GhVom | 4,037 | Error while building documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"After some investigation, maybe the bug is in `doc-builder`.\r\n\r\nI've opened an issue there:\r\n- huggingface/doc-builder#160",
"Fixed by @lewtun (thank you):\r\n- huggingface/doc-builder@31fe6c8bc7225810e281c2f6c6cd32f38828c504"
] | 1,648,459,364,000 | 1,648,461,712,000 | 1,648,461,648,000 | MEMBER | null | ## Describe the bug
Documentation building is failing:
- https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true
```
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct.
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4037/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4036/comments | https://api.github.com/repos/huggingface/datasets/issues/4036/events | https://github.com/huggingface/datasets/pull/4036 | 1,183,126,893 | PR_kwDODunzps41I854 | 4,036 | Fix building of documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Superseded by huggingface/doc-builder@31fe6c8bc7225810e281c2f6c6cd32f38828c504"
] | 1,648,458,552,000 | 1,648,466,311,000 | 1,648,466,002,000 | MEMBER | null | Documentation building is failing:
- https://github.com/huggingface/datasets/runs/5716300989?check_suite_focus=true
```
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Unable to find datasets.filesystems.S3FileSystem in datasets. Make sure the path to that object is correct.
```
Fix #4037. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4036/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4036",
"html_url": "https://github.com/huggingface/datasets/pull/4036",
"diff_url": "https://github.com/huggingface/datasets/pull/4036.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4036.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4035/comments | https://api.github.com/repos/huggingface/datasets/issues/4035/events | https://github.com/huggingface/datasets/pull/4035 | 1,183,067,456 | PR_kwDODunzps41Iwb2 | 4,035 | Add zero_division argument to precision and recall metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,648,455,554,000 | 1,648,461,187,000 | 1,648,461,186,000 | MEMBER | null | Fix #4025. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4035/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4035",
"html_url": "https://github.com/huggingface/datasets/pull/4035",
"diff_url": "https://github.com/huggingface/datasets/pull/4035.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4035.patch",
"merged_at": 1648461186000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4034/comments | https://api.github.com/repos/huggingface/datasets/issues/4034/events | https://github.com/huggingface/datasets/pull/4034 | 1,183,033,285 | PR_kwDODunzps41IpN1 | 4,034 | Fix null checksum in xcopa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,648,453,694,000 | 1,648,454,774,000 | 1,648,454,774,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4034/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4034",
"html_url": "https://github.com/huggingface/datasets/pull/4034",
"diff_url": "https://github.com/huggingface/datasets/pull/4034.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4034.patch",
"merged_at": 1648454774000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4033/comments | https://api.github.com/repos/huggingface/datasets/issues/4033/events | https://github.com/huggingface/datasets/pull/4033 | 1,182,984,445 | PR_kwDODunzps41Ie6w | 4,033 | Fix checksum error in cats_vs_dogs dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,450,885,000 | 1,648,453,779,000 | 1,648,453,464,000 | MEMBER | null | A recent PR updated the metadata JSON file of the cats_vs_dogs dataset:
- #3878
However, that new JSON file contains a `None` checksum.
This PR fixes it.
Fix #4032. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4033/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4033",
"html_url": "https://github.com/huggingface/datasets/pull/4033",
"diff_url": "https://github.com/huggingface/datasets/pull/4033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4033.patch",
"merged_at": 1648453464000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4032/comments | https://api.github.com/repos/huggingface/datasets/issues/4032/events | https://github.com/huggingface/datasets/issues/4032 | 1,182,595,697 | I_kwDODunzps5GfPpx | 4,032 | can't download cats_vs_dogs dataset | {
"login": "RRaphaell",
"id": 74569835,
"node_id": "MDQ6VXNlcjc0NTY5ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/74569835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RRaphaell",
"html_url": "https://github.com/RRaphaell",
"followers_url": "https://api.github.com/users/RRaphaell/followers",
"following_url": "https://api.github.com/users/RRaphaell/following{/other_user}",
"gists_url": "https://api.github.com/users/RRaphaell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RRaphaell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RRaphaell/subscriptions",
"organizations_url": "https://api.github.com/users/RRaphaell/orgs",
"repos_url": "https://api.github.com/users/RRaphaell/repos",
"events_url": "https://api.github.com/users/RRaphaell/events{/privacy}",
"received_events_url": "https://api.github.com/users/RRaphaell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thnaks for reporting @RRaphaell.\r\n\r\nWe are fixing it. "
] | 1,648,400,739,000 | 1,648,453,464,000 | 1,648,453,464,000 | NONE | null | ## Describe the bug
Can't download the cats_vs_dogs dataset; error: Checksums didn't match for dataset source files.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cats_vs_dogs")
```
## Expected results
The dataset should load successfully.
## Actual results
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip']
## Environment info
Fresh Google Colab notebook.
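A possible workaround once the metadata fix (#4033) is in place; this mirrors the redownload guidance given for similar checksum issues in this repo, so treat it as a sketch:
```python
from datasets import load_dataset

# Refresh the cached (bad-checksum) download after updating `datasets`:
dataset = load_dataset("cats_vs_dogs", download_mode="force_redownload")
```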
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4032/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4031/comments | https://api.github.com/repos/huggingface/datasets/issues/4031/events | https://github.com/huggingface/datasets/issues/4031 | 1,182,415,124 | I_kwDODunzps5GejkU | 4,031 | Cannot load the dataset conll2012_ontonotesv5 | {
"login": "cathyxl",
"id": 8326473,
"node_id": "MDQ6VXNlcjgzMjY0NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8326473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cathyxl",
"html_url": "https://github.com/cathyxl",
"followers_url": "https://api.github.com/users/cathyxl/followers",
"following_url": "https://api.github.com/users/cathyxl/following{/other_user}",
"gists_url": "https://api.github.com/users/cathyxl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cathyxl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cathyxl/subscriptions",
"organizations_url": "https://api.github.com/users/cathyxl/orgs",
"repos_url": "https://api.github.com/users/cathyxl/repos",
"events_url": "https://api.github.com/users/cathyxl/events{/privacy}",
"received_events_url": "https://api.github.com/users/cathyxl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @cathyxl, thanks for reporting.\r\n\r\nIndeed, we have recently updated the loading script of that dataset (and fixed that bug as well):\r\n- #4002\r\n\r\nThat fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by:\r\n- installing `datasets` from our GitHub repo:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n- forcing the data files to be redownloaded\r\n```python\r\nds = load_dataset('conll2012_ontonotesv5', 'english_v4', split=\"test\", download_mode=\"force_redownload\")\r\n```\r\n\r\nFeel free to re-open this issue if the problem persists."
] | 1,648,366,703,000 | 1,648,450,711,000 | 1,648,449,078,000 | NONE | null | ## Describe the bug
Cannot load the dataset conll2012_ontonotesv5
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
dataset = load_dataset('conll2012_ontonotesv5', 'english_v4', split="test")
print(dataset)
```
## Expected results
The dataset should be downloaded successfully
## Actual results
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/zmycy7t9h9-1.zip']
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4031/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4030/comments | https://api.github.com/repos/huggingface/datasets/issues/4030/events | https://github.com/huggingface/datasets/pull/4030 | 1,182,157,056 | PR_kwDODunzps41FxjE | 4,030 | Use a constant for the articles regex in SQuAD v2 | {
"login": "bryant1410",
"id": 3905501,
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bryant1410",
"html_url": "https://github.com/bryant1410",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4030). All of your documentation changes will be reflected on that endpoint."
] | 1,648,335,990,000 | 1,648,336,711,000 | null | CONTRIBUTOR | null | The main reason for doing this is to be able to change the articles list if using another language, for example. It's not the most elegant solution but at least it makes the metric more extensible with no drawbacks.
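For concreteness, a hedged sketch of the constant and of the optional-regex generalization raised below (the English pattern matches the official SQuAD v2 evaluation script; any other-language pattern is purely illustrative):
```python
import re

# English default, the same pattern the official SQuAD v2 evaluation script uses.
ARTICLES_REGEX = re.compile(r"\b(a|an|the)\b", re.UNICODE)

def remove_articles(text: str, articles_regex: re.Pattern = ARTICLES_REGEX) -> str:
    # For another language, pass a different pattern, e.g. (illustrative only)
    # re.compile(r"\b(el|la|los|las|un|una)\b") for Spanish.
    return re.sub(articles_regex, " ", text)

# Leftover whitespace is collapsed by the eval script's later white-space fix.
print(remove_articles("the quick brown fox jumps over a lazy dog"))
```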
BTW, what would be the best way to make this more generic (e.g., for SQuAD in other languages)? Maybe accept a regex as an optional parameter, with the current value as the default? Similarly for SQuAD v1 (couldn't the two scripts re-use code?). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4030/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4030",
"html_url": "https://github.com/huggingface/datasets/pull/4030",
"diff_url": "https://github.com/huggingface/datasets/pull/4030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4030.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4029/comments | https://api.github.com/repos/huggingface/datasets/issues/4029/events | https://github.com/huggingface/datasets/issues/4029 | 1,181,057,011 | I_kwDODunzps5GZX_z | 4,029 | Add FAISS .range_search() method for retrieving all texts from dataset above similarity threshold | {
"login": "MoritzLaurer",
"id": 41862082,
"node_id": "MDQ6VXNlcjQxODYyMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoritzLaurer",
"html_url": "https://github.com/MoritzLaurer",
"followers_url": "https://api.github.com/users/MoritzLaurer/followers",
"following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions",
"organizations_url": "https://api.github.com/users/MoritzLaurer/orgs",
"repos_url": "https://api.github.com/users/MoritzLaurer/repos",
"events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoritzLaurer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! You can access the faiss index with\r\n```python\r\nfaiss_index = my_dataset.get_index(\"my_index_name\").faiss_index\r\n```\r\nand then do whatever you want with it, e.g. query it using range_search:\r\n```python\r\nthreshold = 0.95\r\nlimits, distances, indices = faiss_index.range_search(x=xq, thresh=threshold)\r\n\r\ntexts = dataset[indices]\r\n```",
"wow, that's great, thank you for the explanation. (if that's not already in the documentation, could be worth adding it)\r\n\r\nwhich type of faiss index is Datasets using? I looked into faiss recently and I understand that there are several different types of indexes and the choice is important, e.g. regarding which distance metric you use (euclidian vs. cosine/dot product), the size of my dataset etc. can I chose the type of index somehow as well?",
"`Dataset.add_faiss_index` has a `string_factory` parameter, used to set the type of index (see the faiss documentation about [index factory](https://github.com/facebookresearch/faiss/wiki/The-index-factory)). Alternatively, you can pass an index you've defined yourself using faiss with the `custom_index` parameter of `Dataset.add_faiss_index` \r\n\r\nHere is the full documentation of `Dataset.add_faiss_index`: https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.Dataset.add_faiss_index",
"great thanks, I will try it out"
] | 1,648,229,493,000 | 1,648,583,848,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I would like to retrieve all texts from a dataset that are semantically similar to a specific input text (query), above a certain (cosine) similarity threshold. My dataset is very large (Wikipedia), so I need to use Datasets and FAISS for this. I would also like to be able to repeat many different queries on the dataset quickly.
**Describe the solution you'd like**
Dataset objects currently have the `.get_nearest_examples()` method for text retrieval via FAISS, but this only allows retrieving a fixed number K of texts instead of everything above a specified similarity threshold.
It would be great if HF Datasets also supported the FAISS method `.range_search()` for retrieving all texts above a certain similarity threshold.
See details here: https://github.com/facebookresearch/faiss/issues/1273
**Describe alternatives you've considered**
I've considered using native FAISS, but doing this via HF Datasets would be better. My assumption is that Dataset features like dataset streaming make it easier to work with large datasets.
**Additional context**
The concrete use case is: I have a large dataset (Wikipedia) and I would like to retrieve all paragraphs that are similar to a query. I will use sentence-transformers for encoding the texts.
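For reference, a minimal hedged sketch of this use case, following the maintainers' replies in the comments above (the model name, index type, subset size, and threshold are all assumptions; an inner-product index over L2-normalized embeddings stands in for cosine similarity):
```python
import faiss
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder
ds = load_dataset("wikipedia", "20220301.simple", split="train[:1000]")
ds = ds.map(
    lambda batch: {"emb": model.encode(batch["text"], normalize_embeddings=True)},
    batched=True,
)
ds.add_faiss_index(column="emb", string_factory="Flat", metric_type=faiss.METRIC_INNER_PRODUCT)

query = model.encode(["19th century Russian literature"], normalize_embeddings=True)
index = ds.get_index("emb").faiss_index
limits, scores, ids = index.range_search(x=query.astype("float32"), thresh=0.95)
# All hits above the threshold for the single query:
texts = ds[ids[limits[0]:limits[1]].tolist()]["text"]
```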
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4029/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4028/comments | https://api.github.com/repos/huggingface/datasets/issues/4028/events | https://github.com/huggingface/datasets/pull/4028 | 1,181,022,675 | PR_kwDODunzps41B429 | 4,028 | Fix docs on audio feature installation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,227,311,000 | 1,648,743,647,000 | 1,648,743,320,000 | MEMBER | null | This PR:
- Removes the explicit installation of `librosa` (it is installed automatically with `pip install datasets[audio]`)
- Adds a warning for Linux users to manually install the non-Python package `libsndfile`
- Explains that the installation of `torchaudio` is only necessary to support loading audio datasets containing MP3 audio files
Related to #4000. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4028/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4028",
"html_url": "https://github.com/huggingface/datasets/pull/4028",
"diff_url": "https://github.com/huggingface/datasets/pull/4028.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4028.patch",
"merged_at": 1648743320000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4027/comments | https://api.github.com/repos/huggingface/datasets/issues/4027/events | https://github.com/huggingface/datasets/issues/4027 | 1,180,991,344 | I_kwDODunzps5GZH9w | 4,027 | ElasticSearch Indexing example: TypeError: __init__() missing 1 required positional argument: 'scheme' | {
"login": "MoritzLaurer",
"id": 41862082,
"node_id": "MDQ6VXNlcjQxODYyMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoritzLaurer",
"html_url": "https://github.com/MoritzLaurer",
"followers_url": "https://api.github.com/users/MoritzLaurer/followers",
"following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions",
"organizations_url": "https://api.github.com/users/MoritzLaurer/orgs",
"repos_url": "https://api.github.com/users/MoritzLaurer/repos",
"events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoritzLaurer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, @MoritzLaurer, thanks for reporting.\r\n\r\nNormally this is due to a mismatch between the versions of your Elasticsearch client and server:\r\n- your ES client is passing only keyword arguments to your ES server\r\n- whereas your ES server expects a positional argument called 'scheme'\r\n\r\nIn order to fix this, you should align the major versions of both Elasticsearch client and server.\r\n\r\nYou can have more info:\r\n- on this other issue page: https://github.com/huggingface/datasets/issues/3956#issuecomment-1072115173\r\n- Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility\r\n\r\nFeel free to re-open this issue if the problem persists.\r\n\r\nDuplicate of:\r\n- #3956",
"1. Check elasticsearch version\r\n```\r\nimport elasticsearch\r\nprint(elasticsearch.__version__)\r\n```\r\nEx: 7.9.1\r\n2. Uninstall current elasticsearch package\r\n`pip uninstall elasticsearch`\r\n3. Install elasticsearch 7.9.1 package\r\n`pip install elasticsearch==7.9.1`"
] | 1,648,225,348,000 | 1,649,327,392,000 | 1,648,454,336,000 | NONE | null | ## Describe the bug
I am following the Elasticsearch example in the documentation step by step (on Google Colab): https://huggingface.co/docs/datasets/faiss_es#elasticsearch
```python
from datasets import load_dataset
squad = load_dataset('crime_and_punish', split='train[:1000]')
```
When I run the line:
`squad.add_elasticsearch_index("context", host="localhost", port="9200")`
I get the error:
`TypeError: __init__() missing 1 required positional argument: 'scheme'`
## Expected results
No error message
## Actual results
```
TypeError Traceback (most recent call last)
[<ipython-input-23-9205593edef3>](https://localhost:8080/#) in <module>()
1 import elasticsearch
----> 2 squad.add_elasticsearch_index("text", host="localhost", port="9200")
6 frames
[/usr/local/lib/python3.7/dist-packages/elasticsearch/_sync/client/utils.py](https://localhost:8080/#) in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
- `datasets` version: 2.2.0
- Platform: Linux, Google Colab
- Python version: Google Colab (probably 3.7)
- PyArrow version: ?
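For reference, a minimal hedged check for the client/server mismatch suggested in the comments (the 7.x pin is an assumption; match whatever major version your server actually runs):
```python
import elasticsearch

# An 8.x client talking to a 7.x server raises exactly this NodeConfig 'scheme' error.
print(elasticsearch.__version__)
# If the major versions differ, align the client, e.g.:
#   pip uninstall elasticsearch && pip install "elasticsearch>=7,<8"
```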
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4027/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4026/comments | https://api.github.com/repos/huggingface/datasets/issues/4026/events | https://github.com/huggingface/datasets/pull/4026 | 1,180,968,774 | PR_kwDODunzps41Btcm | 4,026 | Support streaming xtreme dataset for bucc18 config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,648,224,040,000 | 1,648,225,610,000 | 1,648,225,312,000 | MEMBER | null | Support streaming xtreme dataset for bucc18 config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4026/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4026",
"html_url": "https://github.com/huggingface/datasets/pull/4026",
"diff_url": "https://github.com/huggingface/datasets/pull/4026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4026.patch",
"merged_at": 1648225312000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4025/comments | https://api.github.com/repos/huggingface/datasets/issues/4025/events | https://github.com/huggingface/datasets/issues/4025 | 1,180,963,105 | I_kwDODunzps5GZBEh | 4,025 | Missing argument in precision/recall | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for the suggestion, @Dref360.\r\n\r\nWe are adding that argument. "
] | 1,648,223,752,000 | 1,648,461,186,000 | 1,648,461,186,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
[`sklearn.metrics.precision_score`](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html) accepts a `zero_division` argument, but it is not available in the [precision metric](https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py#L117).
The same issue is present for recall.
**Describe the solution you'd like**
Support for `**kwargs`, or adding a new `zero_division` argument.
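For illustration, a sketch of the usage this would enable (the argument was subsequently added in #4035, but treat the exact signature as an assumption):
```python
from datasets import load_metric

precision = load_metric("precision")
# `zero_division` would be forwarded to sklearn.metrics.precision_score;
# 0 silences the undefined-metric warning when no positive predictions exist.
results = precision.compute(
    predictions=[0, 1, 1],
    references=[0, 0, 1],
    zero_division=0,
)
print(results)  # {'precision': 0.5}
```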
**Describe alternatives you've considered**
I could filter the warnings myself, but that is not ideal.
**Additional context**
I can make the requested changes if this is approved. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4025/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4024/comments | https://api.github.com/repos/huggingface/datasets/issues/4024/events | https://github.com/huggingface/datasets/pull/4024 | 1,180,951,817 | PR_kwDODunzps41Bp3V | 4,024 | Doc: image_process small tip | {
"login": "FrancescoSaverioZuppichini",
"id": 15908060,
"node_id": "MDQ6VXNlcjE1OTA4MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancescoSaverioZuppichini",
"html_url": "https://github.com/FrancescoSaverioZuppichini",
"followers_url": "https://api.github.com/users/FrancescoSaverioZuppichini/followers",
"following_url": "https://api.github.com/users/FrancescoSaverioZuppichini/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancescoSaverioZuppichini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrancescoSaverioZuppichini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancescoSaverioZuppichini/subscriptions",
"organizations_url": "https://api.github.com/users/FrancescoSaverioZuppichini/orgs",
"repos_url": "https://api.github.com/users/FrancescoSaverioZuppichini/repos",
"events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This tip is unnecessary, i.e., Pillow will already be installed since the `Image` feature requires it for encoding and decoding. Thanks anyway.\r\n\r\ncc @stevhliu I've noticed we are missing the installation section in the doc (`pip install datasets[vision]`). I can add it myself."
] | 1,648,223,072,000 | 1,648,740,935,000 | 1,648,740,620,000 | NONE | null | I've added a small tip in the `image_process` doc | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4024/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4024",
"html_url": "https://github.com/huggingface/datasets/pull/4024",
"diff_url": "https://github.com/huggingface/datasets/pull/4024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4024.patch",
"merged_at": null
} | true |
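A short sketch (not from the original thread) of why the tip was judged unnecessary: decoding an `Image` column yields a PIL image, so Pillow must already be installed for the feature to work at all. The in-memory image below is purely illustrative.

```python
import PIL.Image
from datasets import Dataset, Features, Image

# Tiny in-memory image so the example is self-contained.
img = PIL.Image.new("RGB", (4, 4))

ds = Dataset.from_dict({"image": [img]}, features=Features({"image": Image()}))

# Accessing the column decodes back to a PIL image; this is the step
# that requires Pillow, hence no extra installation tip is needed.
print(type(ds[0]["image"]))  # a PIL.Image.Image subclass
```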
https://api.github.com/repos/huggingface/datasets/issues/4023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4023/comments | https://api.github.com/repos/huggingface/datasets/issues/4023/events | https://github.com/huggingface/datasets/pull/4023 | 1,180,840,399 | PR_kwDODunzps41BSZT | 4,023 | Replace yahoo_answers_topics data url | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because of issues in the dataset cards that are unrelated to this PR - merging"
] | 1,648,217,337,000 | 1,648,462,376,000 | 1,648,462,072,000 | MEMBER | null | I replaced the Google Drive URL of the dataset with the FastAI one, since we've had some issues with Google Drive. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4023/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4023",
"html_url": "https://github.com/huggingface/datasets/pull/4023",
"diff_url": "https://github.com/huggingface/datasets/pull/4023.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4023.patch",
"merged_at": 1648462072000
} | true |