| Column | Type | Stats |
|---|---|---|
| url | stringlengths | 58 – 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72 – 75 |
| comments_url | stringlengths | 67 – 70 |
| events_url | stringlengths | 65 – 68 |
| html_url | stringlengths | 46 – 51 |
| id | int64 | 599M – 1.11B |
| node_id | stringlengths | 18 – 32 |
| number | int64 | 1 – 3.59k |
| title | stringlengths | 1 – 276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,587B – 1,642B |
| updated_at | int64 | 1,587B – 1,642B |
| closed_at | null / int64 | 1,587B – 1,642B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0 – 228k |
| reactions | dict | |
| timeline_url | stringlengths | 67 – 70 |
| performed_via_github_app | null | |
| draft | null | 2 classes |
| pull_request | null | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/3585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3585/comments
https://api.github.com/repos/huggingface/datasets/issues/3585/events
https://github.com/huggingface/datasets/issues/3585
1,105,821,470
I_kwDODunzps5B6X8e
3,585
Datasets streaming + map doesn't work for `Audio`
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "This seems related to https://github.com/huggingface/datasets/issues/3505." ]
1,642,424,142,000
1,642,424,757,000
null
MEMBER
null
## Describe the bug

When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error because the key `array` does not exist anymore.

## Steps to reproduce the bug

```python
from datasets import load_dataset

ds = load_dataset("common_voice", "en", streaming=True, split="train")

def map_fn(batch):
    print("audio keys", batch["audio"].keys())
    batch["audio"] = batch["audio"]["array"][:100]
    return batch

ds = ds.map(map_fn)
sample = next(iter(ds))
```

I think the audio is somehow decoded before `.map(...)` is actually called.

## Expected results

IMO, the above code snippet should work.

## Actual results

```bash
audio keys dict_keys(['path', 'bytes'])
Traceback (most recent call last):
  File "./run_audio.py", line 15, in <module>
    sample = next(iter(ds))
  File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 341, in __iter__
    for key, example in self._iter():
  File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 338, in _iter
    yield from ex_iterable
  File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 192, in __iter__
    yield key, self.function(example)
  File "./run_audio.py", line 9, in map_fn
    batch["input"] = batch["audio"]["array"][:100]
KeyError: 'array'
```

## Environment info

- `datasets` version: 1.17.1.dev0
- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
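A minimal workaround sketch, assuming the streamed sample really only exposes `path`/`bytes` as the traceback shows: decode the raw bytes inside the map function instead of relying on the `array` key. The use of `soundfile` here is an assumption; datasets backed by mp3 files (such as Common Voice) may need a different decoder.

```python
import io

import soundfile as sf  # assumption: the audio format is one libsndfile can decode

def map_fn(batch):
    audio = batch["audio"]
    if "array" in audio:
        # Already decoded by the Audio feature (non-streaming case).
        array = audio["array"]
    else:
        # In streaming mode the sample currently contains only {"path", "bytes"},
        # so decode the raw bytes manually.
        array, _sampling_rate = sf.read(io.BytesIO(audio["bytes"]))
    batch["audio"] = array[:100]
    return batch
```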
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3585/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3584/comments
https://api.github.com/repos/huggingface/datasets/issues/3584/events
https://github.com/huggingface/datasets/issues/3584
1,105,231,768
I_kwDODunzps5B4H-Y
3,584
https://huggingface.co/datasets/huggingface/transformers-metadata
{ "login": "ecankirkic", "id": 37082592, "node_id": "MDQ6VXNlcjM3MDgyNTky", "avatar_url": "https://avatars.githubusercontent.com/u/37082592?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ecankirkic", "html_url": "https://github.com/ecankirkic", "followers_url": "https://api.github.com/users/ecankirkic/followers", "following_url": "https://api.github.com/users/ecankirkic/following{/other_user}", "gists_url": "https://api.github.com/users/ecankirkic/gists{/gist_id}", "starred_url": "https://api.github.com/users/ecankirkic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ecankirkic/subscriptions", "organizations_url": "https://api.github.com/users/ecankirkic/orgs", "repos_url": "https://api.github.com/users/ecankirkic/repos", "events_url": "https://api.github.com/users/ecankirkic/events{/privacy}", "received_events_url": "https://api.github.com/users/ecankirkic/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
[]
1,642,378,694,000
1,642,411,314,000
null
NONE
null
## Dataset viewer issue for '*name of the dataset*'

**Link:** *link to the dataset viewer page*

*short description of the issue*

Am I the one who added this dataset? Yes-No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3584/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3583/comments
https://api.github.com/repos/huggingface/datasets/issues/3583/events
https://github.com/huggingface/datasets/issues/3583
1,105,195,144
I_kwDODunzps5B3_CI
3,583
Add The Medical Segmentation Decathlon Dataset
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
1,642,369,345,000
1,642,369,345,000
null
NONE
null
## Adding a Dataset

- **Name:** *The Medical Segmentation Decathlon Dataset*
- **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects.
- **Paper:** [link to the dataset paper if available](https://arxiv.org/abs/2106.05735)
- **Data:** http://medicaldecathlon.com/
- **Motivation:** Hugging Face seeks to democratize ML for society. One of the growing niches within ML is the ML + Medicine community. Key data sets will help increase the supply of HF resources for starting an initial community.

(cc @osanseviero @abidlabs)

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3583/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3582
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3582/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3582/comments
https://api.github.com/repos/huggingface/datasets/issues/3582/events
https://github.com/huggingface/datasets/issues/3582
1,104,877,303
I_kwDODunzps5B2xb3
3,582
conll 2003 dataset source url is no longer valid
{ "login": "rcanand", "id": 303900, "node_id": "MDQ6VXNlcjMwMzkwMA==", "avatar_url": "https://avatars.githubusercontent.com/u/303900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcanand", "html_url": "https://github.com/rcanand", "followers_url": "https://api.github.com/users/rcanand/followers", "following_url": "https://api.github.com/users/rcanand/following{/other_user}", "gists_url": "https://api.github.com/users/rcanand/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcanand/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcanand/subscriptions", "organizations_url": "https://api.github.com/users/rcanand/orgs", "repos_url": "https://api.github.com/users/rcanand/repos", "events_url": "https://api.github.com/users/rcanand/events{/privacy}", "received_events_url": "https://api.github.com/users/rcanand/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "I came to open the same issue." ]
1,642,287,857,000
1,642,425,282,000
null
NONE
null
## Describe the bug

Loading the `conll2003` dataset fails because its source file was removed (just yesterday, 1/14/2022) from the location the loading script points to.

## Steps to reproduce the bug

```python
from datasets import load_dataset

load_dataset("conll2003")
```

## Expected results

The dataset should load.

## Actual results

The script looks for the dataset at `https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt`, but it was removed from there yesterday (see the [commit](https://github.com/davidsbatista/NER-datasets/commit/9d8f45cc7331569af8eb3422bbe1c97cbebd5690) that removed the file and the related [issue](https://github.com/davidsbatista/NER-datasets/issues/8)).

- We should replace this with an alternate valid location.
- This URL is also referenced in the Hugging Face course, chapter 7 [colab notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section2_pt.ipynb), which is likewise broken.

```python
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-4-27c956bec93c> in <module>()
      1 from datasets import load_dataset
      2 
----> 3 raw_datasets = load_dataset("conll2003")

11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params)
    610                 )
    611             elif response is not None and response.status_code == 404:
--> 612                 raise FileNotFoundError(f"Couldn't find file at {url}")
    613             _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
    614             if head_error is not None:

FileNotFoundError: Couldn't find file at https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt
```

## Environment info

- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
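For reference, a quick way to confirm the 404 outside of `datasets` (a hypothetical check, not part of the original report):

```python
import requests

# The URL hard-coded in the conll2003 loading script.
url = "https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt"
# Prints 404 at the time of this report, confirming the source file is gone.
print(requests.head(url, allow_redirects=True).status_code)
```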
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3582/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3581/comments
https://api.github.com/repos/huggingface/datasets/issues/3581/events
https://github.com/huggingface/datasets/issues/3581
1,104,857,822
I_kwDODunzps5B2sre
3,581
Unable to create a dataset from a parquet file in S3
{ "login": "regCode", "id": 18012903, "node_id": "MDQ6VXNlcjE4MDEyOTAz", "avatar_url": "https://avatars.githubusercontent.com/u/18012903?v=4", "gravatar_id": "", "url": "https://api.github.com/users/regCode", "html_url": "https://github.com/regCode", "followers_url": "https://api.github.com/users/regCode/followers", "following_url": "https://api.github.com/users/regCode/following{/other_user}", "gists_url": "https://api.github.com/users/regCode/gists{/gist_id}", "starred_url": "https://api.github.com/users/regCode/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/regCode/subscriptions", "organizations_url": "https://api.github.com/users/regCode/orgs", "repos_url": "https://api.github.com/users/regCode/repos", "events_url": "https://api.github.com/users/regCode/events{/privacy}", "received_events_url": "https://api.github.com/users/regCode/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,642,282,456,000
1,642,282,456,000
null
NONE
null
## Describe the bug Trying to create a dataset from a parquet file in S3. ## Steps to reproduce the bug ```python import s3fs from datasets import Dataset s3 = s3fs.S3FileSystem(anon=False) with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file: dataset = Dataset.from_parquet(s3file) ``` ## Expected results A new Dataset object ## Actual results ```AttributeError: 'S3File' object has no attribute 'decode'``` ``` AttributeError Traceback (most recent call last) <command-2452877612515691> in <module> 5 6 with s3.open(PATH_LTR_TOY_CLEAN_DATASET, 'rb') as s3file: ----> 7 dataset = Dataset.from_parquet(s3file) /databricks/python/lib/python3.8/site-packages/datasets/arrow_dataset.py in from_parquet(path_or_paths, split, features, cache_dir, keep_in_memory, columns, **kwargs) 907 from .io.parquet import ParquetDatasetReader 908 --> 909 return ParquetDatasetReader( 910 path_or_paths, 911 split=split, /databricks/python/lib/python3.8/site-packages/datasets/io/parquet.py in __init__(self, path_or_paths, split, features, cache_dir, keep_in_memory, **kwargs) 28 path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths} 29 hash = _PACKAGED_DATASETS_MODULES["parquet"][1] ---> 30 self.builder = Parquet( 31 cache_dir=cache_dir, 32 data_files=path_or_paths, /databricks/python/lib/python3.8/site-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, base_path, info, features, use_auth_token, namespace, data_files, data_dir, **config_kwargs) 246 247 if data_files is not None and not isinstance(data_files, DataFilesDict): --> 248 data_files = DataFilesDict.from_local_or_remote( 249 sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token 250 ) /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 576 for key, patterns_for_key in patterns.items(): 577 out[key] = ( --> 578 DataFilesList.from_local_or_remote( 579 patterns_for_key, 580 base_path=base_path, /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 544 ) -> "DataFilesList": 545 base_path = base_path if base_path is not None else str(Path().resolve()) --> 546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) 547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token) 548 return cls(data_files, origin_metadata) /databricks/python/lib/python3.8/site-packages/datasets/data_files.py in resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) 191 data_files = [] 192 for pattern in patterns: --> 193 if is_remote_url(pattern): 194 data_files.append(Url(pattern)) 195 else: /databricks/python/lib/python3.8/site-packages/datasets/utils/file_utils.py in is_remote_url(url_or_filename) 115 116 def is_remote_url(url_or_filename: str) -> bool: --> 117 parsed = urlparse(url_or_filename) 118 return parsed.scheme in ("http", "https", "s3", "gs", "hdfs", "ftp") 119 /usr/lib/python3.8/urllib/parse.py in urlparse(url, scheme, allow_fragments) 370 Note that we don't break the components up in smaller bits 371 (e.g. 
netloc is a single string) and we don't expand % escapes.""" --> 372 url, scheme, _coerce_result = _coerce_args(url, scheme) 373 splitresult = urlsplit(url, scheme, allow_fragments) 374 scheme, netloc, url, query, fragment = splitresult /usr/lib/python3.8/urllib/parse.py in _coerce_args(*args) 122 if str_input: 123 return args + (_noop,) --> 124 return _decode_args(args) + (_encode_result,) 125 126 # Result objects are more helpful than simple tuples /usr/lib/python3.8/urllib/parse.py in _decode_args(args, encoding, errors) 106 def _decode_args(args, encoding=_implicit_encoding, 107 errors=_implicit_errors): --> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args) 109 110 def _coerce_args(*args): /usr/lib/python3.8/urllib/parse.py in <genexpr>(.0) 106 def _decode_args(args, encoding=_implicit_encoding, 107 errors=_implicit_errors): --> 108 return tuple(x.decode(encoding, errors) if x else '' for x in args) 109 110 def _coerce_args(*args): AttributeError: 'S3File' object has no attribute 'decode' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Ubuntu 20.04.3 LTS - Python version: 3.8.10 - PyArrow version: 6.0.1
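A possible workaround sketch (an assumption, not taken from the issue): since `Dataset.from_parquet` expects a path or URL rather than an open file object, read the parquet file with pyarrow through s3fs and build the `Dataset` from the resulting table. `PATH_LTR_TOY_CLEAN_DATASET` is the same placeholder path used in the snippet above.

```python
import pyarrow.parquet as pq
import s3fs
from datasets import Dataset

s3 = s3fs.S3FileSystem(anon=False)
with s3.open(PATH_LTR_TOY_CLEAN_DATASET, "rb") as s3file:
    table = pq.read_table(s3file)  # pyarrow accepts file-like objects

# Build the Dataset from the loaded table (via pandas for simplicity).
dataset = Dataset.from_pandas(table.to_pandas())
```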
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3581/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3580
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3580/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3580/comments
https://api.github.com/repos/huggingface/datasets/issues/3580/events
https://github.com/huggingface/datasets/issues/3580
1,104,663,242
I_kwDODunzps5B19LK
3,580
Bug in wiki bio load
{ "login": "tuhinjubcse", "id": 3104771, "node_id": "MDQ6VXNlcjMxMDQ3NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tuhinjubcse", "html_url": "https://github.com/tuhinjubcse", "followers_url": "https://api.github.com/users/tuhinjubcse/followers", "following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}", "gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}", "starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions", "organizations_url": "https://api.github.com/users/tuhinjubcse/orgs", "repos_url": "https://api.github.com/users/tuhinjubcse/repos", "events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}", "received_events_url": "https://api.github.com/users/tuhinjubcse/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[]
1,642,241,073,000
1,642,425,303,000
null
NONE
null
wiki_bio is failing to load because of a broken Google Drive link. Can someone fix this?

![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png)
![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com/3104771/149617875-ef0e30b0-b76e-48cf-b3eb-93ba8e6e5465.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3580/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3579/comments
https://api.github.com/repos/huggingface/datasets/issues/3579/events
https://github.com/huggingface/datasets/pull/3579
1,103,451,118
PR_kwDODunzps4xBmY4
3,579
Add Text2log Dataset
{ "login": "apergo-ai", "id": 68908804, "node_id": "MDQ6VXNlcjY4OTA4ODA0", "avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apergo-ai", "html_url": "https://github.com/apergo-ai", "followers_url": "https://api.github.com/users/apergo-ai/followers", "following_url": "https://api.github.com/users/apergo-ai/following{/other_user}", "gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions", "organizations_url": "https://api.github.com/users/apergo-ai/orgs", "repos_url": "https://api.github.com/users/apergo-ai/repos", "events_url": "https://api.github.com/users/apergo-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/apergo-ai/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,642,157,101,000
1,642,157,101,000
null
CONTRIBUTOR
null
Adding the text2log dataset, used for training first-order logic (FOL) sentence translation models.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3579/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3578
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3578/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3578/comments
https://api.github.com/repos/huggingface/datasets/issues/3578/events
https://github.com/huggingface/datasets/issues/3578
1,103,403,287
I_kwDODunzps5BxJkX
3,578
label information gets lost after parquet serialization
{ "login": "Tudyx", "id": 56633664, "node_id": "MDQ6VXNlcjU2NjMzNjY0", "avatar_url": "https://avatars.githubusercontent.com/u/56633664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tudyx", "html_url": "https://github.com/Tudyx", "followers_url": "https://api.github.com/users/Tudyx/followers", "following_url": "https://api.github.com/users/Tudyx/following{/other_user}", "gists_url": "https://api.github.com/users/Tudyx/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tudyx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tudyx/subscriptions", "organizations_url": "https://api.github.com/users/Tudyx/orgs", "repos_url": "https://api.github.com/users/Tudyx/repos", "events_url": "https://api.github.com/users/Tudyx/events{/privacy}", "received_events_url": "https://api.github.com/users/Tudyx/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,642,155,038,000
1,642,155,038,000
null
NONE
null
## Describe the bug

In the *dataset_info.json* file, information about the label gets lost after the dataset serialization.

## Steps to reproduce the bug

```python
from datasets import load_dataset

# normal save
dataset = load_dataset('glue', 'sst2', split='train')
dataset.save_to_disk("normal_save")

# save after parquet serialization
dataset.to_parquet("glue-sst2-train.parquet")
dataset = load_dataset("parquet", data_files='glue-sst2-train.parquet')
dataset.save_to_disk("save_after_parquet")
```

## Expected results

I expected to keep the label information in the *dataset_info.json* file even after parquet serialization.

## Actual results

With the normal serialization I got:

```json
"label": {
    "num_classes": 2,
    "names": [
        "negative",
        "positive"
    ],
    "names_file": null,
    "id": null,
    "_type": "ClassLabel"
},
```

And after parquet serialization I got:

```json
"label": {
    "dtype": "int64",
    "id": null,
    "_type": "Value"
},
```

## Environment info

- `datasets` version: 1.17.0
- Platform: ubuntu 20.04
- Python version: 3.8.10
- PyArrow version: 6.0.1
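A possible workaround sketch (an assumption, not confirmed in the issue): re-attach the class names after reloading from parquet by casting the column back to `ClassLabel`.

```python
from datasets import ClassLabel, load_dataset

dataset = load_dataset("parquet", data_files="glue-sst2-train.parquet", split="train")
# Re-attach the label names that the parquet round trip dropped.
dataset = dataset.cast_column("label", ClassLabel(names=["negative", "positive"]))
print(dataset.features["label"])  # ClassLabel(num_classes=2, names=['negative', 'positive'], ...)
```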
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3578/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3577/comments
https://api.github.com/repos/huggingface/datasets/issues/3577/events
https://github.com/huggingface/datasets/issues/3577
1,102,598,241
I_kwDODunzps5BuFBh
3,577
Add The Mexican Emotional Speech Database (MESD)
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
1,642,117,776,000
1,642,117,776,000
null
NONE
null
## Adding a Dataset

- **Name:** *The Mexican Emotional Speech Database (MESD)*
- **Description:** *Contains 864 voice recordings with six different prosodies: anger, disgust, fear, happiness, neutral, and sadness. Furthermore, three voice categories are included: female adult, male adult, and child.*
- **Paper:** *[Paper](https://ieeexplore.ieee.org/abstract/document/9629934/authors#authors)*
- **Data:** *[link to the Github repository or current dataset location](https://data.mendeley.com/datasets/cy34mh68j9/3)*
- **Motivation:** *Would add Spanish speech data to the HF datasets :)*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3577/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3576/comments
https://api.github.com/repos/huggingface/datasets/issues/3576/events
https://github.com/huggingface/datasets/pull/3576
1,102,059,651
PR_kwDODunzps4w8sUm
3,576
Add PASS dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,642,094,167,000
1,642,094,167,000
null
CONTRIBUTOR
null
This PR adds the PASS dataset. Closes #3043
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3576/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3575
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3575/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3575/comments
https://api.github.com/repos/huggingface/datasets/issues/3575/events
https://github.com/huggingface/datasets/pull/3575
1,101,947,955
PR_kwDODunzps4w8Usm
3,575
Add Arrow type casting to struct for Image and Audio + Support nested casting
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Regarding the tests I'm just missing the FixedSizeListType type casting for ListArray objects, will to it tomorrow as well as adding new tests + docstrings\r\n\r\nand also adding soundfile in the CI", "While writing some tests I noticed that the ExtensionArray can't be directly concatenated - maybe we can get rid of the extension types/arrays and only keep their storages in native arrow types.\r\n\r\nIn this case the `cast_storage` functions should be the responsibility of the Image and Audio classes directly. And therefore we would need two never cast to a pyarrow type again but to a HF feature - since they'd end up being the one able to tell what's castable or not. This is fine in my opinion but let me know what you think. I can take care of this on monday I think" ]
1,642,088,219,000
1,642,178,477,000
null
MEMBER
null
## Intro

1. Currently, it's not possible to have nested features containing Audio or Image.
2. Moreover, one can keep an Arrow array as a StringArray to store paths to images, but such arrays can't be directly concatenated to another image array if it's stored as another Arrow type (typically, a StructType).
3. Allowing several Arrow types for a single HF feature type also leads to bugs like this one #3497
4. Issues like #3247 are quite frequent and happen when Arrow fails to reorder StructArrays.
5. Casting the Audio feature type is blocking preparation for the ASR task template: https://github.com/huggingface/datasets/pull/3364

All those issues are linked together by the fact that:
- we are limited by the Arrow type casting, which is lacking features for nested types.
- and especially for Audio and Image: they are not robust enough for concatenation and feature inference.

## Proposed solution

To fix 1 and 4 I implemented nested array type casting (which is missing in PyArrow).

To fix 2, 3 and 5 while having a simple implementation for nested array type casting, I changed the storage type of Audio and Image to always be a StructType. Also, casting from StringType is directly implemented via a new function `cast_storage` that is defined individually for Audio and Image. I also added nested decoding.

## Implementation details

### I. Better Arrow data type casting for nested data structures

I implemented new functions `array_cast` and `table_cast` that do the exact same as `pyarrow.Array.cast` or `pyarrow.Table.cast` but support nested struct casting and array re-ordering. These functions can be used on PyArrow objects, and are already integrated in our own `datasets.table.Table.cast` functions. So one can do `my_dataset.data.cast(pyarrow_schema_with_custom_hf_types)` directly.

### II. New image and audio extension types with custom casting

I used PyArrow extension types to be able to define what casting is allowed or not. For example, both StringType->ImageExtensionType and StructType->ImageExtensionType are allowed, via the `cast_storage` method.

I factorized all the PyArrow + Pandas extension stuff in the `base_extension.py` file. This aims at separating the front-facing API code of `datasets` from the Arrow back-end, which requires advanced knowledge.

### III. Nested feature decoding

I added a new function `decode_nested_example` to decode image and audio data in nested data structures. For optimization's sake, this function is only called if a column has at least one feature that requires decoding.

## Alternative considered

The casting to struct type could have been done directly with python objects using some Audio and Image methods, but bringing arrow data to python objects is expensive. The Audio and Image types could also have been able to convert the arrow data directly, but this is not convenient to use when casting a full Arrow Table with nested fields. Therefore I decided to keep the Arrow data casting logic in Arrow extension types.

## Future work

This work can be used to allow the ArrayND feature types to be nested too (see issue #887)

## TODO

- [ ] fix current tests
- [ ] add new tests
- [ ] docstrings/comments
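As a rough illustration of the nested decoding idea described above (a conceptual sketch only, not the actual `decode_nested_example` implementation), one could walk the feature structure recursively and decode any leaf feature that exposes a `decode_example()` method:

```python
def decode_nested_example_sketch(feature, value):
    # Recurse through dict/list feature structures and decode leaf features
    # (e.g. Audio, Image) that know how to decode themselves.
    if isinstance(feature, dict):
        return {key: decode_nested_example_sketch(feature[key], value[key]) for key in feature}
    if isinstance(feature, list):
        return [decode_nested_example_sketch(feature[0], item) for item in value]
    if hasattr(feature, "decode_example"):
        return feature.decode_example(value)
    return value
```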
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3575/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3575/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3574/comments
https://api.github.com/repos/huggingface/datasets/issues/3574/events
https://github.com/huggingface/datasets/pull/3574
1,101,781,401
PR_kwDODunzps4w7vu6
3,574
Fix qa4mre tags
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,642,082,219,000
1,642,082,582,000
null
MEMBER
null
The YAML tags were invalid. I also fixed the dataset mirroring logging that failed because of this issue [here](https://github.com/huggingface/datasets/actions/runs/1690109581)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3574/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3573/comments
https://api.github.com/repos/huggingface/datasets/issues/3573/events
https://github.com/huggingface/datasets/pull/3573
1,101,157,676
PR_kwDODunzps4w5oE_
3,573
Add Mauve metric
{ "login": "jthickstun", "id": 2321244, "node_id": "MDQ6VXNlcjIzMjEyNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2321244?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jthickstun", "html_url": "https://github.com/jthickstun", "followers_url": "https://api.github.com/users/jthickstun/followers", "following_url": "https://api.github.com/users/jthickstun/following{/other_user}", "gists_url": "https://api.github.com/users/jthickstun/gists{/gist_id}", "starred_url": "https://api.github.com/users/jthickstun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jthickstun/subscriptions", "organizations_url": "https://api.github.com/users/jthickstun/orgs", "repos_url": "https://api.github.com/users/jthickstun/repos", "events_url": "https://api.github.com/users/jthickstun/events{/privacy}", "received_events_url": "https://api.github.com/users/jthickstun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,642,045,968,000
1,642,181,538,000
null
NONE
null
Add support for the [Mauve](https://github.com/krishnap25/mauve) metric introduced in this [paper](https://arxiv.org/pdf/2102.01454.pdf) (NeurIPS 2021).
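A minimal usage sketch, assuming the metric follows the usual `load_metric` interface and returns the `mauve` package's result object (the `.mauve` attribute is an assumption):

```python
from datasets import load_metric

predictions = ["the cat sat on the mat"]        # model generations (toy example)
references = ["a cat was sitting on the mat"]   # human-written texts

mauve = load_metric("mauve")
results = mauve.compute(predictions=predictions, references=references)
print(results.mauve)  # assumed attribute name, following the mauve package
```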
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3573/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3572/comments
https://api.github.com/repos/huggingface/datasets/issues/3572/events
https://github.com/huggingface/datasets/issues/3572
1,100,634,244
I_kwDODunzps5BmliE
3,572
ConnectionError: Couldn't reach https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/wikiann-ner.tar.gz (error 403)
{ "login": "sahoodib", "id": 79107194, "node_id": "MDQ6VXNlcjc5MTA3MTk0", "avatar_url": "https://avatars.githubusercontent.com/u/79107194?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sahoodib", "html_url": "https://github.com/sahoodib", "followers_url": "https://api.github.com/users/sahoodib/followers", "following_url": "https://api.github.com/users/sahoodib/following{/other_user}", "gists_url": "https://api.github.com/users/sahoodib/gists{/gist_id}", "starred_url": "https://api.github.com/users/sahoodib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sahoodib/subscriptions", "organizations_url": "https://api.github.com/users/sahoodib/orgs", "repos_url": "https://api.github.com/users/sahoodib/repos", "events_url": "https://api.github.com/users/sahoodib/events{/privacy}", "received_events_url": "https://api.github.com/users/sahoodib/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[]
1,642,010,376,000
1,642,425,328,000
null
NONE
null
## Adding a Dataset

- **Name:** *IndicGLUE*
- **Description:** *natural language understanding benchmark for Indian languages*
- **Paper:** *https://indicnlp.ai4bharat.org/home/*
- **Data:** *https://huggingface.co/datasets/indic_glue#data-fields*
- **Motivation:** *I am trying to train my model on Indian languages*

While I am trying to load the dataset, it gives me the above error.
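An assumed minimal reproduction (the exact config name is a guess; any `wiki-ner.*` config of `indic_glue` should hit the same download URL):

```python
from datasets import load_dataset

# Fails with ConnectionError (HTTP 403) while fetching
# https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/wikiann-ner.tar.gz
dataset = load_dataset("indic_glue", "wiki-ner.ta")
```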
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3572/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3571
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3571/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3571/comments
https://api.github.com/repos/huggingface/datasets/issues/3571/events
https://github.com/huggingface/datasets/pull/3571
1,100,519,604
PR_kwDODunzps4w3fVQ
3,571
Add missing tasks to MuchoCine dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,642,003,652,000
1,642,003,652,000
null
CONTRIBUTOR
null
Addresses the 2nd bullet point in #2520. I'm also removing the licensing information, because I couldn't verify that it is correct.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3571/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3570
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3570/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3570/comments
https://api.github.com/repos/huggingface/datasets/issues/3570/events
https://github.com/huggingface/datasets/pull/3570
1,100,480,791
PR_kwDODunzps4w3Xez
3,570
Add the KMWP dataset (extension of #3564)
{ "login": "sooftware", "id": 42150335, "node_id": "MDQ6VXNlcjQyMTUwMzM1", "avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sooftware", "html_url": "https://github.com/sooftware", "followers_url": "https://api.github.com/users/sooftware/followers", "following_url": "https://api.github.com/users/sooftware/following{/other_user}", "gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}", "starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sooftware/subscriptions", "organizations_url": "https://api.github.com/users/sooftware/orgs", "repos_url": "https://api.github.com/users/sooftware/repos", "events_url": "https://api.github.com/users/sooftware/events{/privacy}", "received_events_url": "https://api.github.com/users/sooftware/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,642,001,588,000
1,642,174,881,000
null
NONE
null
New pull request of #3564 (Add the KMWP dataset)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3570/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3569/comments
https://api.github.com/repos/huggingface/datasets/issues/3569/events
https://github.com/huggingface/datasets/pull/3569
1,100,478,994
PR_kwDODunzps4w3XGo
3,569
Add the DKTC dataset (Extension of #3564)
{ "login": "sooftware", "id": 42150335, "node_id": "MDQ6VXNlcjQyMTUwMzM1", "avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sooftware", "html_url": "https://github.com/sooftware", "followers_url": "https://api.github.com/users/sooftware/followers", "following_url": "https://api.github.com/users/sooftware/following{/other_user}", "gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}", "starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sooftware/subscriptions", "organizations_url": "https://api.github.com/users/sooftware/orgs", "repos_url": "https://api.github.com/users/sooftware/repos", "events_url": "https://api.github.com/users/sooftware/events{/privacy}", "received_events_url": "https://api.github.com/users/sooftware/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I reflect your comment! @lhoestq ", "Wait, the format of the data just changed, so I'll take it into consideration and commit it.", "I update the code according to the dataset structure change.", "Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).", "> Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).\r\n\r\nHi! @lhoestq There is a problem. \r\n<img src=\"https://user-images.githubusercontent.com/42150335/149804142-3800e635-f5a0-44d9-9694-0c2b0c05f16b.png\" width=500>\r\n \r\nAs shown in the picture above, the conversation is divided into \"\\n\" in the \"conversion\" column. \r\nThat's why there's an error in the file path that only saved only five lines like below. \r\n\r\n```\r\n'idx', 'class', 'conversation'\r\n'0', '협박 대화', '\"지금 너 스스로를 죽여달라고 애원하는 것인가?'\r\n아닙니다. 죄송합니다.'\r\n죽을 거면 혼자 죽지 우리까지 사건에 휘말리게 해? 진짜 죽여버리고 싶게.'\r\n정말 잘못했습니다.\r\n```\r\n \r\nIn fact, these five lines are all one line. \r\n \r\n\r\n" ]
1,642,001,489,000
1,642,436,184,000
null
NONE
null
New pull request of #3564. (for DKTC)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3569/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3568
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3568/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3568/comments
https://api.github.com/repos/huggingface/datasets/issues/3568/events
https://github.com/huggingface/datasets/issues/3568
1,100,380,631
I_kwDODunzps5BlnnX
3,568
Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError
{ "login": "fabianslife", "id": 49265757, "node_id": "MDQ6VXNlcjQ5MjY1NzU3", "avatar_url": "https://avatars.githubusercontent.com/u/49265757?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fabianslife", "html_url": "https://github.com/fabianslife", "followers_url": "https://api.github.com/users/fabianslife/followers", "following_url": "https://api.github.com/users/fabianslife/following{/other_user}", "gists_url": "https://api.github.com/users/fabianslife/gists{/gist_id}", "starred_url": "https://api.github.com/users/fabianslife/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fabianslife/subscriptions", "organizations_url": "https://api.github.com/users/fabianslife/orgs", "repos_url": "https://api.github.com/users/fabianslife/repos", "events_url": "https://api.github.com/users/fabianslife/events{/privacy}", "received_events_url": "https://api.github.com/users/fabianslife/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[]
1,641,996,224,000
1,642,425,341,000
null
NONE
null
I wanted to download the Nedical Dialog Dataset from huggingface, using this github link: https://github.com/huggingface/datasets/tree/master/datasets/medical_dialog After downloading the raw datasets from google drive, i unpacked everything and put it in the same folder as the medical_dialog.py which is: ``` import copy import os import re import datasets _CITATION = """\ @article{chen2020meddiag, title={MedDialog: a large-scale medical dialogue dataset}, author={Chen, Shu and Ju, Zeqian and Dong, Xiangyu and Fang, Hongchao and Wang, Sicheng and Yang, Yue and Zeng, Jiaqi and Zhang, Ruisi and Zhang, Ruoyu and Zhou, Meng and Zhu, Penghui and Xie, Pengtao}, journal={arXiv preprint arXiv:2004.03329}, year={2020} } """ _DESCRIPTION = """\ The MedDialog dataset (English) contains conversations (in English) between doctors and patients.\ It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. \ The raw dialogues are from healthcaremagic.com and icliniq.com.\ All copyrights of the data belong to healthcaremagic.com and icliniq.com. """ _HOMEPAGE = "https://github.com/UCSD-AI4H/Medical-Dialogue-System" _LICENSE = "" class MedicalDialog(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("1.0.0") BUILDER_CONFIGS = [ datasets.BuilderConfig(name="en", description="The dataset of medical dialogs in English.", version=VERSION), datasets.BuilderConfig(name="zh", description="The dataset of medical dialogs in Chinese.", version=VERSION), ] @property def manual_download_instructions(self): return """\ \n For English:\nYou need to go to https://drive.google.com/drive/folders/1g29ssimdZ6JzTST6Y8g6h-ogUNReBtJD?usp=sharing,\ and manually download the dataset from Google Drive. Once it is completed, a file named Medical-Dialogue-Dataset-English-<timestamp-info>.zip will appear in your Downloads folder( or whichever folder your browser chooses to save files to). Unzip the folder to obtain a folder named "Medical-Dialogue-Dataset-English" several text files. Now, you can specify the path to this folder for the data_dir argument in the datasets.load_dataset(...) option. The <path/to/folder> can e.g. be "/Downloads/Medical-Dialogue-Dataset-English". The data can then be loaded using the below command:\ datasets.load_dataset("medical_dialog", name="en", data_dir="/Downloads/Medical-Dialogue-Dataset-English")`. \n For Chinese:\nFollow the above process. Change the 'name' to 'zh'.The download link is https://drive.google.com/drive/folders/1r09_i8nJ9c1nliXVGXwSqRYqklcHd9e2 **NOTE** - A caution while downloading from drive. It is better to download single files since creating a zip might not include files <500 MB. This has been observed mutiple times. - After downloading the files and adding them to the appropriate folder, the path of the folder can be given as input tu the data_dir path. 
""" datasets.load_dataset("medical_dialog", name="en", data_dir="Medical-Dialogue-Dataset-English") def _info(self): if self.config.name == "zh": features = datasets.Features( { "file_name": datasets.Value("string"), "dialogue_id": datasets.Value("int32"), "dialogue_url": datasets.Value("string"), "dialogue_turns": datasets.Sequence( { "speaker": datasets.ClassLabel(names=["病人", "医生"]), "utterance": datasets.Value("string"), } ), } ) if self.config.name == "en": features = datasets.Features( { "file_name": datasets.Value("string"), "dialogue_id": datasets.Value("int32"), "dialogue_url": datasets.Value("string"), "dialogue_turns": datasets.Sequence( { "speaker": datasets.ClassLabel(names=["Patient", "Doctor"]), "utterance": datasets.Value("string"), } ), } ) return datasets.DatasetInfo( # This is the description that will appear on the datasets page. description=_DESCRIPTION, features=features, supervised_keys=None, # Homepage of the dataset for documentation homepage=_HOMEPAGE, # License for the dataset if available license=_LICENSE, # Citation for the dataset citation=_CITATION, ) def _split_generators(self, dl_manager): """Returns SplitGenerators.""" path_to_manual_file = os.path.abspath(os.path.expanduser(dl_manager.manual_dir)) if not os.path.exists(path_to_manual_file): raise FileNotFoundError( f"{path_to_manual_file} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('medical_dialog', data_dir=...)`. Manual download instructions: {self.manual_download_instructions})" ) filepaths = [ os.path.join(path_to_manual_file, txt_file_name) for txt_file_name in sorted(os.listdir(path_to_manual_file)) if txt_file_name.endswith("txt") ] return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepaths": filepaths})] def _generate_examples(self, filepaths): """Yields examples. Iterates over each file and give the creates the corresponding features. NOTE: - The code makes some assumption on the structure of the raw .txt file. - There are some checks to separate different id's. Hopefully, should not cause further issues later when more txt files are added. """ data_lang = self.config.name id_ = -1 for filepath in filepaths: with open(filepath, encoding="utf-8") as f_in: # Parameters to just "sectionize" the raw data last_part = "" last_dialog = {} last_list = [] last_user = "" check_list = [] # These flags are present to have a single function address both chinese and english data # English data is a little hahazard (i.e. the sentences spans multiple different lines), # Chinese is compact with one line for doctor and patient. conv_flag = False des_flag = False while True: line = f_in.readline() if not line: break # Extracting the dialog id if line[:2] == "id": # Hardcode alert! # Handling ID references that may come in the description # These were observed in the Chinese dataset and were not # followed by numbers try: dialogue_id = int(re.findall(r"\d+", line)[0]) except IndexError: continue # Extracting the url if line[:4] == "http": # Hardcode alert! dialogue_url = line.rstrip() # Extracting the patient info from description. if line[:11] == "Description": # Hardcode alert! last_part = "description" last_dialog = {} last_list = [] last_user = "" last_conv = {"speaker": "", "utterance": ""} while True: line = f_in.readline() if (not line) or (line in ["\n", "\n\r"]): break else: if data_lang == "zh": # Condition in chinese if line[:5] == "病情描述:": # Hardcode alert! 
last_user = "病人" sen = f_in.readline().rstrip() des_flag = True if data_lang == "en": last_user = "Patient" sen = line.rstrip() des_flag = True if des_flag: if sen == "": continue if sen in check_list: last_conv["speaker"] = "" last_conv["utterance"] = "" else: last_conv["speaker"] = last_user last_conv["utterance"] = sen check_list.append(sen) des_flag = False break # Extracting the conversation info from dialogue. elif line[:8] == "Dialogue": # Hardcode alert! if last_part == "description" and len(last_conv["utterance"]) > 0: last_part = "dialogue" if data_lang == "zh": last_user = "病人" if data_lang == "en": last_user = "Patient" while True: line = f_in.readline() if (not line) or (line in ["\n", "\n\r"]): conv_flag = False last_user = "" last_list.append(copy.deepcopy(last_conv)) # To ensure close of conversation, only even number of sentences # are extracted last_turn = len(last_list) if int(last_turn / 2) > 0: temp = int(last_turn / 2) id_ += 1 last_dialog["file_name"] = filepath last_dialog["dialogue_id"] = dialogue_id last_dialog["dialogue_url"] = dialogue_url last_dialog["dialogue_turns"] = last_list[: temp * 2] yield id_, last_dialog break if data_lang == "zh": if line[:3] == "病人:" or line[:3] == "医生:": # Hardcode alert! user = line[:2] # Hardcode alert! line = f_in.readline() conv_flag = True # The elif block is to ensure that multi-line sentences are captured. # This has been observed only in english. if data_lang == "en": if line.strip() == "Patient:" or line.strip() == "Doctor:": # Hardcode alert! user = line.replace(":", "").rstrip() line = f_in.readline() conv_flag = True elif line[:2] != "id": # Hardcode alert! conv_flag = True # Continues till the next ID is parsed if conv_flag: sen = line.rstrip() if sen == "": continue if user == last_user: last_conv["utterance"] = last_conv["utterance"] + sen else: last_user = user last_list.append(copy.deepcopy(last_conv)) last_conv["utterance"] = sen last_conv["speaker"] = user ``` running this code gives me the error: ``` File "C:\Users\Fabia\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\utils\info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=292801173, num_examples=229674, dataset_name='medical_dialog')}] ```
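A possible workaround in the meantime (assuming the manually downloaded text files themselves are complete) is to skip the split size verification when loading, e.g.: ```python import datasets # Skipping verification only bypasses the recorded split-size check; # it does not change or validate the generated examples themselves. dataset = datasets.load_dataset( "medical_dialog", name="en", data_dir="Medical-Dialogue-Dataset-English", ignore_verifications=True, ) ```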
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3568/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3567
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3567/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3567/comments
https://api.github.com/repos/huggingface/datasets/issues/3567/events
https://github.com/huggingface/datasets/pull/3567
1,100,296,696
PR_kwDODunzps4w2xDl
3,567
Fix push to hub to allow individual split push
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,641,991,378,000
1,641,994,141,000
null
CONTRIBUTOR
null
# Description of the issue When pushing a single split to a datasets repo, the dataset is uploaded and the config is overwritten, so the splits from the previous config end up being lost even though their data files are still present on the repo. The new flow is the following: - query the old config from the repo - update it into a new config (e.g. add/overwrite the new split) - push the new config # Side fix - `repo_id` in HfFileSystem was wrongly typed. - I've added `indent=2` so the config becomes much easier to read.
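For reference, the user-facing behaviour this enables looks roughly like the following (the repo name is a placeholder): ```python from datasets import load_dataset ds = load_dataset("imdb") # Pushing one split at a time should no longer drop the other splits # from the repo's config. ds["train"].push_to_hub("my-username/imdb-copy", split="train") ds["test"].push_to_hub("my-username/imdb-copy", split="test") ```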
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3567/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3566
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3566/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3566/comments
https://api.github.com/repos/huggingface/datasets/issues/3566/events
https://github.com/huggingface/datasets/pull/3566
1,100,155,902
PR_kwDODunzps4w2Tcc
3,566
Add initial electricity time series dataset
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,641,982,892,000
1,642,189,448,000
null
NONE
null
Here is an initial prototype time series dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3566/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3565
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3565/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3565/comments
https://api.github.com/repos/huggingface/datasets/issues/3565/events
https://github.com/huggingface/datasets/pull/3565
1,099,296,693
PR_kwDODunzps4wzjhH
3,565
Add parameter `preserve_index` to `from_pandas`
{ "login": "Sorrow321", "id": 20703486, "node_id": "MDQ6VXNlcjIwNzAzNDg2", "avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sorrow321", "html_url": "https://github.com/Sorrow321", "followers_url": "https://api.github.com/users/Sorrow321/followers", "following_url": "https://api.github.com/users/Sorrow321/following{/other_user}", "gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions", "organizations_url": "https://api.github.com/users/Sorrow321/orgs", "repos_url": "https://api.github.com/users/Sorrow321/repos", "events_url": "https://api.github.com/users/Sorrow321/events{/privacy}", "received_events_url": "https://api.github.com/users/Sorrow321/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> \r\n\r\nI did `make style` and it affected over 500 files\r\n\r\n```\r\nAll done! ✨ 🍰 ✨\r\n575 files reformatted, 372 files left unchanged.\r\nisort tests src benchmarks datasets/**/*.py metri\r\n```\r\n\r\n(result)\r\n![image](https://user-images.githubusercontent.com/20703486/149166681-2f9d1bc4-116a-4f53-ad42-e54e3b8bd605.png)\r\n", "Nvm I was using wrong black version" ]
1,641,914,797,000
1,642,003,887,000
null
CONTRIBUTOR
null
Added an optional parameter so that the user can avoid preserving a useless index. [Issue](https://github.com/huggingface/datasets/issues/3563)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3565/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3564
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3564/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3564/comments
https://api.github.com/repos/huggingface/datasets/issues/3564/events
https://github.com/huggingface/datasets/pull/3564
1,099,214,403
PR_kwDODunzps4wzSOL
3,564
Add the KMWP & DKTC dataset.
{ "login": "sooftware", "id": 42150335, "node_id": "MDQ6VXNlcjQyMTUwMzM1", "avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sooftware", "html_url": "https://github.com/sooftware", "followers_url": "https://api.github.com/users/sooftware/followers", "following_url": "https://api.github.com/users/sooftware/following{/other_user}", "gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}", "starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sooftware/subscriptions", "organizations_url": "https://api.github.com/users/sooftware/orgs", "repos_url": "https://api.github.com/users/sooftware/repos", "events_url": "https://api.github.com/users/sooftware/events{/privacy}", "received_events_url": "https://api.github.com/users/sooftware/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I reflect your review. cc. @lhoestq ", "Ah sorry, I missed KMWP comment, wait.", "I request 2 new pull requests. #3569 #3570" ]
1,641,910,448,000
1,642,001,629,000
null
NONE
null
Add the DKTC dataset. - https://github.com/tunib-ai/DKTC
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3564/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3563/comments
https://api.github.com/repos/huggingface/datasets/issues/3563/events
https://github.com/huggingface/datasets/issues/3563
1,099,070,368
I_kwDODunzps5Bgnug
3,563
Dataset.from_pandas preserves useless index
{ "login": "Sorrow321", "id": 20703486, "node_id": "MDQ6VXNlcjIwNzAzNDg2", "avatar_url": "https://avatars.githubusercontent.com/u/20703486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sorrow321", "html_url": "https://github.com/Sorrow321", "followers_url": "https://api.github.com/users/Sorrow321/followers", "following_url": "https://api.github.com/users/Sorrow321/following{/other_user}", "gists_url": "https://api.github.com/users/Sorrow321/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sorrow321/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sorrow321/subscriptions", "organizations_url": "https://api.github.com/users/Sorrow321/orgs", "repos_url": "https://api.github.com/users/Sorrow321/repos", "events_url": "https://api.github.com/users/Sorrow321/events{/privacy}", "received_events_url": "https://api.github.com/users/Sorrow321/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! That makes sense. Sure, feel free to open a PR! Just a small suggestion: let's make `preserve_index` a parameter of `Dataset.from_pandas` (which we then pass to `InMemoryTable.from_pandas`) with `None` as a default value to not have this as a breaking change. " ]
1,641,902,827,000
1,642,003,887,000
null
CONTRIBUTOR
null
## Describe the bug Let's say that you want to create a Dataset object from a pandas DataFrame. Most likely you will write something like this: ``` import pandas as pd from datasets import Dataset df = pd.read_csv('some_dataset.csv') # Some DataFrame preprocessing code... dataset = Dataset.from_pandas(df) ``` If your preprocessing code contains indexing operations like this: ``` df = df[df.col1 == some_value] ``` then your df.index can be changed from the default ```RangeIndex(start=0, stop=16590, step=1)``` to something like this ```Int64Index([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 83979, 83980, 83981, 83982, 83983, 83984, 83985, 83986, 83987, 83988], dtype='int64', length=16590)``` In this case, PyArrow (by default) will preserve this non-standard index. As a result, your dataset object will have an extra field that you likely don't want: '__index_level_0__'. You can easily fix this by adding the extra argument ```preserve_index=False``` to the call of ```InMemoryTable.from_pandas``` in ```arrow_dataset.py```. If you agree that this isn't desirable behavior, I can make a PR fixing that. ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-5.11.0-44-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 6.0.1
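For what it's worth, a minimal workaround until such a parameter exists is to drop the index before the conversion: ```python import pandas as pd from datasets import Dataset df = pd.DataFrame({"col1": [0, 1, 1, 0], "col2": ["a", "b", "c", "d"]}) df = df[df.col1 == 1]  # filtering leaves a non-default index behind # Resetting the index avoids the extra '__index_level_0__' column dataset = Dataset.from_pandas(df.reset_index(drop=True)) print(dataset.column_names)  # ['col1', 'col2'] ```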
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3563/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3562/comments
https://api.github.com/repos/huggingface/datasets/issues/3562/events
https://github.com/huggingface/datasets/pull/3562
1,098,341,351
PR_kwDODunzps4wwa44
3,562
Allow multiple task templates of the same type
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,846,727,000
1,641,910,607,000
null
CONTRIBUTOR
null
Add support for multiple task templates of the same type. Fixes (partially) #2520. CC: @lewtun
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3562/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3561/comments
https://api.github.com/repos/huggingface/datasets/issues/3561/events
https://github.com/huggingface/datasets/issues/3561
1,098,328,870
I_kwDODunzps5Bdysm
3,561
Cannot load ‘bookcorpusopen’
{ "login": "HUIYINXUE", "id": 54684403, "node_id": "MDQ6VXNlcjU0Njg0NDAz", "avatar_url": "https://avatars.githubusercontent.com/u/54684403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HUIYINXUE", "html_url": "https://github.com/HUIYINXUE", "followers_url": "https://api.github.com/users/HUIYINXUE/followers", "following_url": "https://api.github.com/users/HUIYINXUE/following{/other_user}", "gists_url": "https://api.github.com/users/HUIYINXUE/gists{/gist_id}", "starred_url": "https://api.github.com/users/HUIYINXUE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HUIYINXUE/subscriptions", "organizations_url": "https://api.github.com/users/HUIYINXUE/orgs", "repos_url": "https://api.github.com/users/HUIYINXUE/repos", "events_url": "https://api.github.com/users/HUIYINXUE/events{/privacy}", "received_events_url": "https://api.github.com/users/HUIYINXUE/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "The host of this copy of the dataset (https://the-eye.eu) is down and has been down for a good amount of time ([potentially months](https://www.reddit.com/r/Roms/comments/q82s15/theeye_downdied/))\r\n\r\nFinding this dataset is a little esoteric, as the original authors took down the official BookCorpus dataset some time ago.\r\n\r\nThere are community-created versions of BookCorpus, such as the files hosted in the link below.\r\nhttps://battle.shawwn.com/sdb/bookcorpus/\r\n\r\nAnd more discussion here:\r\nhttps://github.com/soskek/bookcorpus\r\n\r\nDo we want to remove this dataset entirely? There's a fair argument for this, given that the official BookCorpus dataset was taken down by the authors. If not, perhaps can open a PR with the link to the community-created tar above and updated dataset description." ]
1,641,845,838,000
1,642,425,361,000
null
NONE
null
## Describe the bug Cannot load 'bookcorpusopen' ## Steps to reproduce the bug ```python dataset = load_dataset('bookcorpusopen') ``` or ```python dataset = load_dataset('bookcorpusopen',script_version='master') ``` ## Actual results ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz ## Environment info - `datasets` version: 1.9.0 - Platform: Linux version 3.10.0-1160.45.1.el7.x86_64 - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3561/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3560
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3560/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3560/comments
https://api.github.com/repos/huggingface/datasets/issues/3560/events
https://github.com/huggingface/datasets/pull/3560
1,098,280,652
PR_kwDODunzps4wwOMf
3,560
Run pyupgrade for Python 3.6+
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! Thanks for the change :)\r\nCould it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.", "> Hi ! Thanks for the change :)\r\n> Could it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.\r\n\r\nI reverted the changes in `datasets/` instead of changing only `src/`. Does it sound good?" ]
1,641,842,453,000
1,642,000,307,000
null
CONTRIBUTOR
null
Run the command: ```bash pyupgrade $(find . -name "*.py" -type f) --py36-plus ``` This mainly avoids unnecessary list creations and also removes code that is no longer needed for Python 3.6+. It was originally part of #3489. Tip for reviewing faster: use the CLI (`git diff`) and scroll.
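To give an idea of the kind of rewrites `--py36-plus` applies (illustrative examples, not lines taken from this diff): ```python split, name = "train", "squad" pairs = [("a", 1), ("b", 2)] old_style = "{}_{}".format(split, name)    # rewritten to f"{split}_{name}" old_set = set([1, 2, 3])                   # rewritten to the literal {1, 2, 3} old_dict = dict((k, v) for k, v in pairs)  # rewritten to {k: v for k, v in pairs} assert old_style == f"{split}_{name}" ```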
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3560/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3559
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3559/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3559/comments
https://api.github.com/repos/huggingface/datasets/issues/3559/events
https://github.com/huggingface/datasets/pull/3559
1,098,178,222
PR_kwDODunzps4wv420
3,559
Fix `DuplicatedKeysError` and improve card in `tweet_qa`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,835,660,000
1,642,000,438,000
null
CONTRIBUTOR
null
Fix #3555
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3559/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3558
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3558/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3558/comments
https://api.github.com/repos/huggingface/datasets/issues/3558/events
https://github.com/huggingface/datasets/issues/3558
1,098,025,866
I_kwDODunzps5BcouK
3,558
Integrate Milvus (pymilvus) library
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,641,828,029,000
1,641,828,029,000
null
CONTRIBUTOR
null
Milvus is a popular open-source vector database. We should add a new vector index to support this project.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3558/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3557
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3557/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3557/comments
https://api.github.com/repos/huggingface/datasets/issues/3557/events
https://github.com/huggingface/datasets/pull/3557
1,097,946,034
PR_kwDODunzps4wvIHl
3,557
Fix bug in `ImageClassification` task template
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The CI failures are unrelated to the changes in this PR.", "> The CI failures are unrelated to the changes in this PR.\r\n\r\nIt seems that some of the failures are due to the tests on the dataset cards (e.g. CIFAR, MNIST, FASHION_MNIST). Perhaps it's worth addressing those in this PR to avoid confusing downstream developers who branch off `master` and suddenly have a failing CI?", "@lewtun We only run these tests against the modified datasets on the PR branch, so this will not lead to errors after merging." ]
1,641,823,799,000
1,641,916,072,000
null
CONTRIBUTOR
null
Fixes a bug in the `ImageClassification` task template which requires specifying class labels twice in dataset scripts. Additionally, this PR refactors the API around the classification task templates for cleaner `labels` handling. CC: @lewtun @nateraw
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3557/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3556
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3556/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3556/comments
https://api.github.com/repos/huggingface/datasets/issues/3556/events
https://github.com/huggingface/datasets/pull/3556
1,097,907,724
PR_kwDODunzps4wvALx
3,556
Preserve encoding/decoding with features in `Iterable.map` call
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,641,821,540,000
1,642,436,133,000
null
CONTRIBUTOR
null
As described in https://github.com/huggingface/datasets/issues/3505#issuecomment-1004755657, this PR uses a generator expression to encode/decode examples with `features` (which are set to None in `map`) before applying a map transform. Fix #3505
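Schematically, the idea looks something like the following (a sketch under assumed names, not the exact implementation): ```python from datasets import ClassLabel, Features, Value features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])}) examples_iterable = [("0", {"text": "hello", "label": "pos"})] # Re-apply the feature encoding (here: string label -> class id) before the # user's map function sees the example; the real code wraps the dataset's # underlying iterable of (key, example) pairs in a similar generator expression. encoded_examples = ( (key, features.encode_example(example)) for key, example in examples_iterable ) print(list(encoded_examples))  # [('0', {'text': 'hello', 'label': 1})] ```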
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3556/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3555
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3555/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3555/comments
https://api.github.com/repos/huggingface/datasets/issues/3555/events
https://github.com/huggingface/datasets/issues/3555
1,097,736,982
I_kwDODunzps5BbiMW
3,555
DuplicatedKeysError when loading tweet_qa dataset
{ "login": "LeonieWeissweiler", "id": 30300891, "node_id": "MDQ6VXNlcjMwMzAwODkx", "avatar_url": "https://avatars.githubusercontent.com/u/30300891?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LeonieWeissweiler", "html_url": "https://github.com/LeonieWeissweiler", "followers_url": "https://api.github.com/users/LeonieWeissweiler/followers", "following_url": "https://api.github.com/users/LeonieWeissweiler/following{/other_user}", "gists_url": "https://api.github.com/users/LeonieWeissweiler/gists{/gist_id}", "starred_url": "https://api.github.com/users/LeonieWeissweiler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LeonieWeissweiler/subscriptions", "organizations_url": "https://api.github.com/users/LeonieWeissweiler/orgs", "repos_url": "https://api.github.com/users/LeonieWeissweiler/repos", "events_url": "https://api.github.com/users/LeonieWeissweiler/events{/privacy}", "received_events_url": "https://api.github.com/users/LeonieWeissweiler/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi, we've just merged the PR with the fix. The fixed version of the dataset can be downloaded as follows:\r\n```python\r\nimport datasets\r\ndset = datasets.load_dataset(\"tweet_qa\", revision=\"master\")\r\n```" ]
1,641,811,991,000
1,642,000,653,000
null
NONE
null
When loading the tweet_qa dataset with `load_dataset('tweet_qa')`, the following error occurs: `DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 2a167f9e016ba338e1813fed275a6a1e Keys should be unique and deterministic in nature ` Might be related to issues #2433 and #2333 - `datasets` version: 1.17.0 - Python version: 3.8.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3555/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3554/comments
https://api.github.com/repos/huggingface/datasets/issues/3554/events
https://github.com/huggingface/datasets/issues/3554
1,097,711,367
I_kwDODunzps5Bbb8H
3,554
ImportError: cannot import name 'is_valid_waiter_error'
{ "login": "danielbellhv", "id": 84714841, "node_id": "MDQ6VXNlcjg0NzE0ODQx", "avatar_url": "https://avatars.githubusercontent.com/u/84714841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danielbellhv", "html_url": "https://github.com/danielbellhv", "followers_url": "https://api.github.com/users/danielbellhv/followers", "following_url": "https://api.github.com/users/danielbellhv/following{/other_user}", "gists_url": "https://api.github.com/users/danielbellhv/gists{/gist_id}", "starred_url": "https://api.github.com/users/danielbellhv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danielbellhv/subscriptions", "organizations_url": "https://api.github.com/users/danielbellhv/orgs", "repos_url": "https://api.github.com/users/danielbellhv/repos", "events_url": "https://api.github.com/users/danielbellhv/events{/privacy}", "received_events_url": "https://api.github.com/users/danielbellhv/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,641,810,724,000
1,641,810,724,000
null
NONE
null
Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along to this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: ``` Requirement already satisfied: datasets in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.17.0) Requirement already satisfied: transformers in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (4.15.0) Requirement already satisfied: optimum[intel] in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (0.1.3) Requirement already satisfied: numpy>=1.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.19.5) Requirement already satisfied: dill in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.3.4) Requirement already satisfied: tqdm>=4.62.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.62.3) Requirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.2.1) Requirement already satisfied: packaging in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (21.3) Requirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (6.0.1) Requirement already satisfied: pandas in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.1.5) Requirement already satisfied: xxhash in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.0.2) Requirement already satisfied: aiohttp in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (3.8.1) Requirement already satisfied: fsspec[http]>=2021.05.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2021.11.1) Requirement already satisfied: dataclasses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.8) Requirement already satisfied: multiprocess in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.70.12.2) Requirement already satisfied: importlib-metadata in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.5.0) Requirement already satisfied: requests>=2.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.25.1) Requirement already satisfied: pyyaml>=5.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (5.4.1) Requirement already satisfied: regex!=2019.12.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (2021.4.4) Requirement already satisfied: tokenizers<0.11,>=0.10.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.10.3) Requirement already satisfied: filelock in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (3.0.12) Requirement already satisfied: sacremoses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.0.46) Requirement already satisfied: torch>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.10.1) Requirement already satisfied: sympy in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.8) Requirement already satisfied: coloredlogs in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (15.0.1) Requirement already satisfied: pycocotools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (2.0.3) Requirement already satisfied: neural-compressor>=1.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.9) Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (3.10.0.0) Requirement already satisfied: sigopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.2.0) Requirement already satisfied: opencv-python in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (4.5.1.48) Requirement already satisfied: cryptography in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.4.7) Requirement already satisfied: py-cpuinfo in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.0.0) Requirement already satisfied: gevent in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (21.1.2) Requirement already satisfied: schema in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.7.5) Requirement already satisfied: psutil in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.8.0) Requirement already satisfied: gevent-websocket in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.10.1) Requirement already satisfied: hyperopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.2.7) Requirement already satisfied: Flask in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: prettytable in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.5.0) Requirement already satisfied: Flask-SocketIO in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.1.1) Requirement already satisfied: scikit-learn in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.24.2) Requirement already satisfied: Pillow in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.4.0) Requirement already satisfied: Flask-Cors in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.0.10) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging->datasets) (2.4.7) Requirement already satisfied: chardet<5,>=3.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages 
(from requests>=2.19.0->datasets) (4.0.0) Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2021.5.30) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (1.26.5) Requirement already satisfied: idna<3,>=2.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2.10) Requirement already satisfied: yarl<2.0,>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.6.3) Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (2.0.9) Requirement already satisfied: attrs>=17.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (21.2.0) Requirement already satisfied: asynctest==0.13.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (0.13.0) Requirement already satisfied: idna-ssl>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.1.0) Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (4.0.1) Requirement already satisfied: aiosignal>=1.1.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0) Requirement already satisfied: frozenlist>=1.1.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0) Requirement already satisfied: multidict<7.0,>=4.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (5.1.0) Requirement already satisfied: humanfriendly>=9.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from coloredlogs->optimum[intel]) (10.0) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata->datasets) (3.4.1) Requirement already satisfied: python-dateutil>=2.7.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2021.1) Requirement already satisfied: matplotlib>=2.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (3.3.4) Requirement already satisfied: cython>=0.27.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (0.29.23) Requirement already satisfied: setuptools>=18.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (52.0.0.post20210125) Requirement already satisfied: joblib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.0.1) Requirement already satisfied: click in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (8.0.1) Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.16.0) Requirement already 
satisfied: mpmath>=0.19 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sympy->optimum[intel]) (1.2.1) Requirement already satisfied: kiwisolver>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (1.3.1) Requirement already satisfied: cycler>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (0.10.0) Requirement already satisfied: cffi>=1.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cryptography->neural-compressor>=1.7->optimum[intel]) (1.14.5) Requirement already satisfied: Werkzeug>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.2) Requirement already satisfied: Jinja2>=3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (3.0.1) Requirement already satisfied: itsdangerous>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: python-socketio>=5.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (5.5.0) Requirement already satisfied: zope.event in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (4.5.0) Requirement already satisfied: greenlet<2.0,>=0.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (1.1.0) Requirement already satisfied: zope.interface in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (5.4.0) Requirement already satisfied: future in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.18.2) Requirement already satisfied: cloudpickle in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.6.0) Requirement already satisfied: networkx>=2.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (2.5) Requirement already satisfied: scipy in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.5.3) Requirement already satisfied: py4j in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.10.7) Requirement already satisfied: wcwidth in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from prettytable->neural-compressor>=1.7->optimum[intel]) (0.2.5) Requirement already satisfied: contextlib2>=0.5.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from schema->neural-compressor>=1.7->optimum[intel]) (0.6.0.post1) Requirement already satisfied: threadpoolctl>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from scikit-learn->neural-compressor>=1.7->optimum[intel]) (2.1.0) Requirement already satisfied: pyOpenSSL>=20.0.0 in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (20.0.1) Requirement already satisfied: pypng>=0.0.20 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.0.21) Requirement already satisfied: kubernetes<13.0.0,>=12.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (12.0.1) Requirement already satisfied: rsa<5.0.0,>=4.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.7.2) Requirement already satisfied: boto3<2.0.0,==1.16.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.16.34) Requirement already satisfied: Pint<0.17.0,>=0.16.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.16.1) Requirement already satisfied: GitPython>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.18) Requirement already satisfied: backoff<2.0.0,>=1.10.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.11.1) Requirement already satisfied: ipython>=5.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (7.16.1) Requirement already satisfied: docker<5.0.0,>=4.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.4.4) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.10.0) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.3.7) Requirement already satisfied: botocore<1.20.0,>=1.19.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (1.19.63) Requirement already satisfied: pycparser in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cffi>=1.12->cryptography->neural-compressor>=1.7->optimum[intel]) (2.20) Requirement already satisfied: websocket-client>=0.32.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from docker<5.0.0,>=4.4.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.58.0) Requirement already satisfied: gitdb<5,>=4.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.0.9) Requirement already satisfied: traitlets>=4.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.3.3) Requirement already satisfied: jedi>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.17.2) Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (3.0.19) Requirement already satisfied: backcall in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0) Requirement already satisfied: pygments in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (2.9.0) Requirement already satisfied: pexpect in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.8.0) Requirement already satisfied: decorator in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.9) Requirement already satisfied: pickleshare in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.5) Requirement already satisfied: MarkupSafe>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Jinja2>=3.0->Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: google-auth>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.30.2) Requirement already satisfied: requests-oauthlib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.3.0) Requirement already satisfied: importlib-resources in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Pint<0.17.0,>=0.16.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.4.0) Requirement already satisfied: python-engineio>=4.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (4.3.0) Requirement already satisfied: bidict>=0.21.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (0.21.4) Requirement already satisfied: pyasn1>=0.1.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from rsa<5.0.0,>=4.7->sigopt->neural-compressor>=1.7->optimum[intel]) (0.4.8) Requirement already satisfied: smmap<6,>=3.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gitdb<5,>=4.0.1->GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.0) Requirement already satisfied: pyasn1-modules>=0.2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.8) Requirement already satisfied: cachetools<5.0,>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (4.2.2) Requirement already satisfied: parso<0.8.0,>=0.7.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from jedi>=0.10->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.1) Requirement already satisfied: ipython-genutils in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from traitlets>=4.2->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0) Requirement already satisfied: ptyprocess>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pexpect->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.0) Requirement already satisfied: oauthlib>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests-oauthlib->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.1) ``` --- **Cell:** ```python from datasets import load_dataset, load_metric ``` OR ```python import datasets ``` **Traceback:** ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-7-34fb7ba3338d> in <module> ----> 1 from datasets import load_dataset, load_metric ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/__init__.py in <module> 32 ) 33 ---> 34 from .arrow_dataset import Dataset, concatenate_datasets 35 from .arrow_reader import ArrowReader, ReadInstruction 36 from .arrow_writer import ArrowWriter ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_dataset.py in <module> 59 from . import config, utils 60 from .arrow_reader import ArrowReader ---> 61 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 62 from .features import ClassLabel, Features, FeatureType, Sequence, Value, _ArrayXD, pandas_types_mapper 63 from .filesystems import extract_path_from_uri, is_remote_filesystem ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_writer.py in <module> 26 27 from . import config, utils ---> 28 from .features import ( 29 Features, 30 ImageExtensionType, ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/__init__.py in <module> 1 # flake8: noqa ----> 2 from .audio import Audio 3 from .features import * 4 from .features import ( 5 _ArrayXD, ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/audio.py in <module> 5 import pyarrow as pa 6 ----> 7 from ..utils.streaming_download_manager import xopen 8 9 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/utils/streaming_download_manager.py in <module> 16 17 from .. 
import config ---> 18 from ..filesystems import COMPRESSION_FILESYSTEMS 19 from .download_manager import DownloadConfig, map_nested 20 from .file_utils import ( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/__init__.py in <module> 11 12 if _has_s3fs: ---> 13 from .s3filesystem import S3FileSystem # noqa: F401 14 15 COMPRESSION_FILESYSTEMS: List[compression.BaseCompressedFileFileSystem] = [ ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py in <module> ----> 1 import s3fs 2 3 4 class S3FileSystem(s3fs.S3FileSystem): 5 """ ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/__init__.py in <module> ----> 1 from .core import S3FileSystem, S3File 2 from .mapping import S3Map 3 4 from ._version import get_versions 5 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/core.py in <module> 12 from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper 13 ---> 14 import aiobotocore 15 import botocore 16 import aiobotocore.session ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/__init__.py in <module> ----> 1 from .session import get_session, AioSession 2 3 __all__ = ['get_session', 'AioSession'] 4 __version__ = '1.3.0' ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/session.py in <module> 4 from botocore import retryhandler, translate 5 from botocore.exceptions import PartialCredentialsError ----> 6 from .client import AioClientCreator, AioBaseClient 7 from .hooks import AioHierarchicalEmitter 8 from .parsers import AioResponseParserFactory ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/client.py in <module> 11 from .args import AioClientArgsCreator 12 from .utils import AioS3RegionRedirector ---> 13 from . import waiter 14 15 history_recorder = get_global_history_recorder() ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/waiter.py in <module> 4 from botocore.exceptions import ClientError 5 from botocore.waiter import WaiterModel # noqa: F401, lgtm[py/unused-import] ----> 6 from botocore.waiter import Waiter, xform_name, logger, WaiterError, \ 7 NormalizedOperationMethod as _NormalizedOperationMethod, is_valid_waiter_error 8 from botocore.docs.docstring import WaiterDocstring ImportError: cannot import name 'is_valid_waiter_error' ``` Please let me know if there's anything else I can add to post. [1]: https://github.com/huggingface/notebooks/blob/master/examples/text_classification_quantization_inc.ipynb
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3554/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3553/comments
https://api.github.com/repos/huggingface/datasets/issues/3553/events
https://github.com/huggingface/datasets/issues/3553
1,097,252,275
I_kwDODunzps5BZr2z
3,553
set_format("np") no longer works for Image data
{ "login": "cgarciae", "id": 5862228, "node_id": "MDQ6VXNlcjU4NjIyMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5862228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cgarciae", "html_url": "https://github.com/cgarciae", "followers_url": "https://api.github.com/users/cgarciae/followers", "following_url": "https://api.github.com/users/cgarciae/following{/other_user}", "gists_url": "https://api.github.com/users/cgarciae/gists{/gist_id}", "starred_url": "https://api.github.com/users/cgarciae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cgarciae/subscriptions", "organizations_url": "https://api.github.com/users/cgarciae/orgs", "repos_url": "https://api.github.com/users/cgarciae/repos", "events_url": "https://api.github.com/users/cgarciae/events{/privacy}", "received_events_url": "https://api.github.com/users/cgarciae/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "A quick fix for now is doing this:\r\n\r\n```python\r\nX_train = np.stack(dataset[\"train\"][\"image\"])[..., None]", "This error also propagates to jax and is even trickier to fix, since `.with_format(type='jax')` will use numpy conversion internally (and fail). For a three line failure:\r\n\r\n```python\r\ndataset = datasets.load_dataset(\"mnist\")\r\ndataset.set_format(\"jax\")\r\nX_train = dataset[\"train\"][\"image\"]\r\n```", "Hi! We've recently introduced a new Image feature that yields PIL Images (and caches transforms on them) instead of arrays.\r\n\r\nHowever, this feature requires a custom transform to yield np arrays directly:\r\n```python\r\nddict = datasets.load_dataset(\"mnist\")\r\n\r\ndef pil_image_to_array(batch):\r\n return {\"image\": [np.array(img) for img in batch[\"image\"]]} # or jnp.array(img) for Jax\r\n\r\nddict.set_transform(pil_image_to_array, columns=\"image\", output_all_columns=True)\r\n```\r\n\r\n[Docs](https://huggingface.co/docs/datasets/master/process.html#format-transform) on `set_transform`.\r\n\r\nAlso, the approach proposed by @cgarciae is not the best because it loads the entire column in memory.\r\n\r\n@albertvillanova @lhoestq WDYT? The Audio and the Image feature currently don't support the TF/Jax/PT Formatters, but for the Numpy Formatter maybe it makes more sense to return np arrays (and not a dict in the case of the Audio feature or a PIL Image object in the case of the Image feature).", "Yes I agree it should return arrays and not a PIL image (and possible an array instead of a dict for audio data).\r\nI'm currently finishing some code refactoring of the image and audio and opening a PR today. Maybe we can look into that after the refactoring" ]
1,641,748,693,000
1,642,081,166,000
null
NONE
null
## Describe the bug `dataset.set_format("np")` no longer works for image data. Previously you could load MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array ``` but now it doesn't work: `set_format("np")` seems to have no effect and the dataset just returns a list/array of PIL images instead of NumPy arrays as requested.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3553/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3553/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3552
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3552/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3552/comments
https://api.github.com/repos/huggingface/datasets/issues/3552/events
https://github.com/huggingface/datasets/pull/3552
1,096,985,204
PR_kwDODunzps4wsM29
3,552
Add the KMWP & DKTC dataset.
{ "login": "sooftware", "id": 42150335, "node_id": "MDQ6VXNlcjQyMTUwMzM1", "avatar_url": "https://avatars.githubusercontent.com/u/42150335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sooftware", "html_url": "https://github.com/sooftware", "followers_url": "https://api.github.com/users/sooftware/followers", "following_url": "https://api.github.com/users/sooftware/following{/other_user}", "gists_url": "https://api.github.com/users/sooftware/gists{/gist_id}", "starred_url": "https://api.github.com/users/sooftware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sooftware/subscriptions", "organizations_url": "https://api.github.com/users/sooftware/orgs", "repos_url": "https://api.github.com/users/sooftware/repos", "events_url": "https://api.github.com/users/sooftware/events{/privacy}", "received_events_url": "https://api.github.com/users/sooftware/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,641,661,934,000
1,641,910,410,000
null
NONE
null
Add the KMWP & DKTC datasets. Additional notes: - Both datasets will be released on January 10 through the GitHub links below. - https://github.com/tunib-ai/DKTC - https://github.com/tunib-ai/KMWP - So the links don't work at the moment, but the code will work soon (after the datasets are released on January 10).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3552/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3551/comments
https://api.github.com/repos/huggingface/datasets/issues/3551/events
https://github.com/huggingface/datasets/pull/3551
1,096,561,111
PR_kwDODunzps4wq_AO
3,551
Add more compression types for `to_json`
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq, I looked into how to compress with `zipfile` for which few methods exist, let me know which one looks good:\r\n1. create the file in normal `wb` mode and then zip it separately\r\n2. use `ZipFile.write_str` to write file into the archive. For this we'll need to change how we're writing files from `_write` method \r\n\r\nHow `pandas` handles it is that they have created a wrapper for standard library class `ZipFile` and allow the returned file-like handle to accept byte strings via `write` method instead of `write_str` (purpose was to change the name of function by creating that wrapper)", "1. sounds not ideal since it creates an intermediary file.\r\nI like pandas' approach. Is it possible to implement 2. using the pandas class ? Or maybe we can have something similar ?" ]
1,641,579,902,000
1,642,168,983,000
null
CONTRIBUTOR
null
This PR adds `bz2`, `xz`, and `zip` (WIP) compression types for `to_json`. I also plan to add `infer`, like `pandas` does.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3551/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3550
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3550/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3550/comments
https://api.github.com/repos/huggingface/datasets/issues/3550/events
https://github.com/huggingface/datasets/issues/3550
1,096,522,377
I_kwDODunzps5BW5qJ
3,550
Bug in `openbookqa` dataset
{ "login": "lucadiliello", "id": 23355969, "node_id": "MDQ6VXNlcjIzMzU1OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucadiliello", "html_url": "https://github.com/lucadiliello", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "repos_url": "https://api.github.com/users/lucadiliello/repos", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,576,777,000
1,642,425,393,000
null
CONTRIBUTOR
null
## Describe the bug Dataset entries contains a typo. ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> obqa = load_dataset('openbookqa', 'main') >>> obqa['train'][0] ``` ## Expected results ```python {'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['A', 'B', 'C', 'D']}, 'answerKey': 'D'} ``` ## Actual results ```python {'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting']}, 'answerKey': 'D'} ``` The bug is present in all configs and all splits. ## Environment info - `datasets` version: 1.17.0 - Platform: Linux-5.4.0-1057-aws-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3550/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3549
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3549/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3549/comments
https://api.github.com/repos/huggingface/datasets/issues/3549/events
https://github.com/huggingface/datasets/pull/3549
1,096,426,996
PR_kwDODunzps4wqkGt
3,549
Fix sem_eval_2018_task_1 download location
{ "login": "maxpel", "id": 31095360, "node_id": "MDQ6VXNlcjMxMDk1MzYw", "avatar_url": "https://avatars.githubusercontent.com/u/31095360?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maxpel", "html_url": "https://github.com/maxpel", "followers_url": "https://api.github.com/users/maxpel/followers", "following_url": "https://api.github.com/users/maxpel/following{/other_user}", "gists_url": "https://api.github.com/users/maxpel/gists{/gist_id}", "starred_url": "https://api.github.com/users/maxpel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maxpel/subscriptions", "organizations_url": "https://api.github.com/users/maxpel/orgs", "repos_url": "https://api.github.com/users/maxpel/repos", "events_url": "https://api.github.com/users/maxpel/events{/privacy}", "received_events_url": "https://api.github.com/users/maxpel/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,641,569,872,000
1,641,569,872,000
null
CONTRIBUTOR
null
This changes the download location of sem_eval_2018_task_1 files to include the test set labels as discussed in https://github.com/huggingface/datasets/issues/2745#issuecomment-954588500_ with @lhoestq.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3549/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3549/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3548/comments
https://api.github.com/repos/huggingface/datasets/issues/3548/events
https://github.com/huggingface/datasets/issues/3548
1,096,409,512
I_kwDODunzps5BWeGo
3,548
Specify the feature types of a dataset on the Hub without needing a dataset script
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "abidlabs", "id": 1778297, "node_id": "MDQ6VXNlcjE3NzgyOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abidlabs", "html_url": "https://github.com/abidlabs", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "repos_url": "https://api.github.com/users/abidlabs/repos", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "type": "User", "site_admin": false }
[ { "login": "abidlabs", "id": 1778297, "node_id": "MDQ6VXNlcjE3NzgyOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1778297?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abidlabs", "html_url": "https://github.com/abidlabs", "followers_url": "https://api.github.com/users/abidlabs/followers", "following_url": "https://api.github.com/users/abidlabs/following{/other_user}", "gists_url": "https://api.github.com/users/abidlabs/gists{/gist_id}", "starred_url": "https://api.github.com/users/abidlabs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abidlabs/subscriptions", "organizations_url": "https://api.github.com/users/abidlabs/orgs", "repos_url": "https://api.github.com/users/abidlabs/repos", "events_url": "https://api.github.com/users/abidlabs/events{/privacy}", "received_events_url": "https://api.github.com/users/abidlabs/received_events", "type": "User", "site_admin": false } ]
null
[]
1,641,568,626,000
1,641,568,677,000
null
MEMBER
null
**Is your feature request related to a problem? Please describe.** Currently if I upload a CSV with paths to audio files, the column type is string instead of Audio. **Describe the solution you'd like** I'd like to be able to specify the types of the columns, so that when loading the dataset I directly get the feature types I want. The feature types could be read from the `dataset_infos.json`, for example. **Describe alternatives you've considered** Create a dataset script to specify the features, but that seems complicated for a simple thing. cc @abidlabs
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3548/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3548/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3547
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3547/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3547/comments
https://api.github.com/repos/huggingface/datasets/issues/3547/events
https://github.com/huggingface/datasets/issues/3547
1,096,405,515
I_kwDODunzps5BWdIL
3,547
Datasets created with `push_to_hub` can't be accessed in offline mode
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting. I think this can be fixed by improving the `CachedDatasetModuleFactory` and making it look into the `parquet` cache directory (datasets from push_to_hub are loaded with the parquet dataset builder). I'll look into it" ]
1,641,568,345,000
1,641,811,484,000
null
MEMBER
null
## Describe the bug In offline mode, one can still access previously-cached datasets. This fails with datasets created with `push_to_hub`. ## Steps to reproduce the bug in Python: ``` import datasets mpwiki = datasets.load_dataset("teven/matched_passages_wikidata") ``` in bash: ``` export HF_DATASETS_OFFLINE=1 ``` in Python: ``` import datasets mpwiki = datasets.load_dataset("teven/matched_passages_wikidata") ``` ## Expected results `datasets` should find the previously-cached dataset. ## Actual results ConnectionError: Couln't reach the Hugging Face Hub for dataset 'teven/matched_passages_wikidata': Offline mode is enabled ## Environment info - `datasets` version: 1.16.2.dev0 - Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3547/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3546
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3546/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3546/comments
https://api.github.com/repos/huggingface/datasets/issues/3546/events
https://github.com/huggingface/datasets/pull/3546
1,096,367,684
PR_kwDODunzps4wqYIV
3,546
Remove print statements in datasets
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The CI failures are unrelated to the changes." ]
1,641,565,824,000
1,641,578,956,000
null
CONTRIBUTOR
null
This is the second time I'm removing print statements in our datasets, so I've added a test to avoid these issues in the future.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3546/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3545
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3545/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3545/comments
https://api.github.com/repos/huggingface/datasets/issues/3545/events
https://github.com/huggingface/datasets/pull/3545
1,096,189,889
PR_kwDODunzps4wpziv
3,545
fix: 🐛 pass token when retrieving the split names
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Currently, it does not work with https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0/blob/main/common_voice_7_0.py#L146 (which was the goal), because `dl_manager.download_config.use_auth_token` is ignored, and the authentication is required to be use `huggingface-cli login`.\r\nIn my use case (dataset viewer), I'd prefer to use a specific \"User Token Access\", with only the \"read\" role (https://huggingface.co/settings/token).\r\n\r\nSee https://github.com/huggingface/datasets-preview-backend/issues/74#issuecomment-1007316853 for the context", "> Simply passing download_config is ok :)\r\n\r\nhmm, I prefer only passing use_auth_token. But the question is more: is it correct, in the (convoluted) case if `download_config.use_auth_token` exists and is different from `use_auth_token`? Which one should be used?", "If both are passed, `use_auth_token` should have the priority (more specific parameters have the higher priority)" ]
1,641,551,362,000
1,641,811,907,000
null
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3545/timeline
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3544
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3544/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3544/comments
https://api.github.com/repos/huggingface/datasets/issues/3544/events
https://github.com/huggingface/datasets/issues/3544
1,095,784,681
I_kwDODunzps5BUFjp
3,544
Ability to split a dataset in multiple files.
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,641,510,145,000
1,641,510,145,000
null
CONTRIBUTOR
null
Hello, **Is your feature request related to a problem? Please describe.** My use case is that I have one writer that adds columns and multiple workers reading the same `Dataset`. Each worker should have access to columns added by the writer when they reload the dataset. I understand that we shouldn't overwrite an arrow file as this could cause a segfault and so on. Before 1.16, I was able to overwrite the dataset and that would work most of the time with some retries. **Describe the solution you'd like** I was thinking that if we could append to `Dataset._data_files`, when the workers reload the Dataset, they would get the new columns. **Describe alternatives you've considered** I currently need to 1. Save multiple "versions" of the dataset and load the latest. 2. Try working with cache files to get the latest columns. **Additional context** I think this would be a great addition to HFDataset as Parquet supports multi-file input out of the box! I can make a PR myself with some pointers as needed :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3544/timeline
null
null
null
false
