| Column | Type | Values |
|---|---|---|
| url | string | lengths 58 to 61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72 to 75 |
| comments_url | string | lengths 67 to 70 |
| events_url | string | lengths 65 to 68 |
| html_url | string | lengths 46 to 51 |
| id | int64 | 599M to 1.09B |
| node_id | string | lengths 18 to 32 |
| number | int64 | 1 to 3.5k |
| title | string | lengths 1 to 276 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | null | |
| assignees | sequence | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,587B to 1,641B |
| updated_at | int64 | 1,587B to 1,641B |
| closed_at | int64 | 1,587B to 1,641B |
| author_association | string | 3 values |
| active_lock_reason | null | |
| body | string | lengths 0 to 228k |
| reactions | dict | |
| timeline_url | string | lengths 67 to 70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
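For orientation, a minimal sketch of loading a JSON Lines dump with this schema through the `datasets` JSON builder; the file name below is hypothetical.

```python
from datasets import load_dataset

# Hypothetical local dump of the issues, one JSON object per line,
# with the columns listed in the table above.
issues = load_dataset("json", data_files="github-issues.jsonl", split="train")
print(issues.features)
```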
https://api.github.com/repos/huggingface/datasets/issues/3504
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3504/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3504/comments
https://api.github.com/repos/huggingface/datasets/issues/3504/events
https://github.com/huggingface/datasets/issues/3504
1,090,682,230
I_kwDODunzps5BAn12
3,504
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst
{ "login": "ToddMorrill", "id": 12600692, "node_id": "MDQ6VXNlcjEyNjAwNjky", "avatar_url": "https://avatars.githubusercontent.com/u/12600692?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ToddMorrill", "html_url": "https://github.com/ToddMorrill", "followers_url": "https://api.github.com/users/ToddMorrill/followers", "following_url": "https://api.github.com/users/ToddMorrill/following{/other_user}", "gists_url": "https://api.github.com/users/ToddMorrill/gists{/gist_id}", "starred_url": "https://api.github.com/users/ToddMorrill/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ToddMorrill/subscriptions", "organizations_url": "https://api.github.com/users/ToddMorrill/orgs", "repos_url": "https://api.github.com/users/ToddMorrill/repos", "events_url": "https://api.github.com/users/ToddMorrill/events{/privacy}", "received_events_url": "https://api.github.com/users/ToddMorrill/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,640,802,200,000
1,640,802,200,000
null
NONE
null
## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset # This takes a few minutes to run, so go grab a tea or coffee while you wait :) data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst" pubmed_dataset = load_dataset("json", data_files=data_files, split="train") pubmed_dataset ``` I also tried with `wget` as follows. ``` wget https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ``` ## Expected results I expect to be able to download this file. ## Actual results Traceback ``` --------------------------------------------------------------------------- timeout Traceback (most recent call last) /usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self) 158 try: --> 159 conn = connection.create_connection( 160 (self._dns_host, self.port), self.timeout, **extra_kw /usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 /usr/lib/python3/dist-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock timeout: timed out During handling of the above exception, another exception occurred: ConnectTimeoutError Traceback (most recent call last) /usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 664 # Make the request on the httplib connection object. --> 665 httplib_response = self._make_request( 666 conn, /usr/lib/python3/dist-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 375 try: --> 376 self._validate_conn(conn) 377 except (SocketTimeout, BaseSSLError) as e: /usr/lib/python3/dist-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 995 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 996 conn.connect() 997 /usr/lib/python3/dist-packages/urllib3/connection.py in connect(self) 313 # Add certificate verification --> 314 conn = self._new_conn() 315 hostname = self.host /usr/lib/python3/dist-packages/urllib3/connection.py in _new_conn(self) 163 except SocketTimeout: --> 164 raise ConnectTimeoutError( 165 self, ConnectTimeoutError: (<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. 
(connect timeout=10.0)') During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) /usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 438 if not chunked: --> 439 resp = conn.urlopen( 440 method=request.method, /usr/lib/python3/dist-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 718 --> 719 retries = retries.increment( 720 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] /usr/lib/python3/dist-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 435 if new_retry.is_exhausted(): --> 436 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 437 MaxRetryError: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)')) During handling of the above exception, another exception occurred: ConnectTimeout Traceback (most recent call last) /tmp/ipykernel_15104/606583593.py in <module> 3 # This takes a few minutes to run, so go grab a tea or coffee while you wait :) 4 data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst" ----> 5 pubmed_dataset = load_dataset("json", data_files=data_files, split="train") 6 pubmed_dataset ~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1655 1656 # Create a dataset builder -> 1657 builder_instance = load_dataset_builder( 1658 path=path, 1659 name=name, ~/.local/lib/python3.8/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs) 1492 download_config = download_config.copy() if download_config else DownloadConfig() 1493 download_config.use_auth_token = use_auth_token -> 1494 dataset_module = dataset_module_factory( 1495 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files 1496 ) ~/.local/lib/python3.8/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs) 1116 # Try packaged 1117 if path in _PACKAGED_DATASETS_MODULES: -> 1118 return PackagedDatasetModuleFactory( 1119 path, data_files=data_files, download_config=download_config, download_mode=download_mode 1120 ).get_module() ~/.local/lib/python3.8/site-packages/datasets/load.py in get_module(self) 773 else get_patterns_locally(str(Path().resolve())) 774 ) --> 775 data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token) 776 module_path, hash = _PACKAGED_DATASETS_MODULES[self.name] 777 builder_kwargs = {"hash": hash, "data_files": data_files} ~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, 
allowed_extensions, use_auth_token) 576 for key, patterns_for_key in patterns.items(): 577 out[key] = ( --> 578 DataFilesList.from_local_or_remote( 579 patterns_for_key, 580 base_path=base_path, ~/.local/lib/python3.8/site-packages/datasets/data_files.py in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token) 545 base_path = base_path if base_path is not None else str(Path().resolve()) 546 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) --> 547 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token) 548 return cls(data_files, origin_metadata) 549 ~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_origin_metadata_locally_or_by_urls(data_files, max_workers, use_auth_token) 492 data_files: List[Union[Path, Url]], max_workers=64, use_auth_token: Optional[Union[bool, str]] = None 493 ) -> Tuple[str]: --> 494 return thread_map( 495 partial(_get_single_origin_metadata_locally_or_by_urls, use_auth_token=use_auth_token), 496 data_files, ~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in thread_map(fn, *iterables, **tqdm_kwargs) 92 """ 93 from concurrent.futures import ThreadPoolExecutor ---> 94 return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) 95 96 ~/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py in _executor_map(PoolExecutor, fn, *iterables, **tqdm_kwargs) 74 map_args.update(chunksize=chunksize) 75 with PoolExecutor(**pool_kwargs) as ex: ---> 76 return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs)) 77 78 ~/.local/lib/python3.8/site-packages/tqdm/notebook.py in __iter__(self) 252 def __iter__(self): 253 try: --> 254 for obj in super(tqdm_notebook, self).__iter__(): 255 # return super(tqdm...) 
will not catch exception 256 yield obj ~/.local/lib/python3.8/site-packages/tqdm/std.py in __iter__(self) 1171 # (note: keep this check outside the loop for performance) 1172 if self.disable: -> 1173 for obj in iterable: 1174 yield obj 1175 return /usr/lib/python3.8/concurrent/futures/_base.py in result_iterator() 617 # Careful not to keep a reference to the popped future 618 if timeout is None: --> 619 yield fs.pop().result() 620 else: 621 yield fs.pop().result(end_time - time.monotonic()) /usr/lib/python3.8/concurrent/futures/_base.py in result(self, timeout) 442 raise CancelledError() 443 elif self._state == FINISHED: --> 444 return self.__get_result() 445 else: 446 raise TimeoutError() /usr/lib/python3.8/concurrent/futures/_base.py in __get_result(self) 387 if self._exception: 388 try: --> 389 raise self._exception 390 finally: 391 # Break a reference cycle with the exception in self._exception /usr/lib/python3.8/concurrent/futures/thread.py in run(self) 55 56 try: ---> 57 result = self.fn(*self.args, **self.kwargs) 58 except BaseException as exc: 59 self.future.set_exception(exc) ~/.local/lib/python3.8/site-packages/datasets/data_files.py in _get_single_origin_metadata_locally_or_by_urls(data_file, use_auth_token) 483 if isinstance(data_file, Url): 484 data_file = str(data_file) --> 485 return (request_etag(data_file, use_auth_token=use_auth_token),) 486 else: 487 data_file = str(data_file.resolve()) ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in request_etag(url, use_auth_token) 489 def request_etag(url: str, use_auth_token: Optional[Union[str, bool]] = None) -> Optional[str]: 490 headers = get_authentication_headers_for_url(url, use_auth_token=use_auth_token) --> 491 response = http_head(url, headers=headers, max_retries=3) 492 response.raise_for_status() 493 etag = response.headers.get("ETag") if response.ok else None ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries) 474 headers = copy.deepcopy(headers) or {} 475 headers["user-agent"] = get_datasets_user_agent(user_agent=headers.get("user-agent")) --> 476 response = _request_with_retry( 477 method="HEAD", 478 url=url, ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params) 407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err: 408 if tries > max_retries: --> 409 raise err 410 else: 411 logger.info(f"{method} request to {url} timed out, retrying... [{tries/max_retries}]") ~/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params) 403 tries += 1 404 try: --> 405 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) 406 success = True 407 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err: /usr/lib/python3/dist-packages/requests/api.py in request(method, url, **kwargs) 58 # cases, and look like a memory leak in others. 
59 with sessions.Session() as session: ---> 60 return session.request(method=method, url=url, **kwargs) 61 62 /usr/lib/python3/dist-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 531 } 532 send_kwargs.update(settings) --> 533 resp = self.send(prep, **send_kwargs) 534 535 return resp /usr/lib/python3/dist-packages/requests/sessions.py in send(self, request, **kwargs) 644 645 # Send the request --> 646 r = adapter.send(request, **kwargs) 647 648 # Total elapsed time of the request (approximately) /usr/lib/python3/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 502 # TODO: Remove this in 3.0.0: see #2811 503 if not isinstance(e.reason, NewConnectionError): --> 504 raise ConnectTimeout(e, request=request) 505 506 if isinstance(e.reason, ResponseError): ConnectTimeout: HTTPSConnectionPool(host='the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f06dd698850>, 'Connection to the-eye.eu timed out. (connect timeout=10.0)')) ``` ## Environment info - `datasets` version: 1.17.0 - Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3504/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3504/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3503/comments
https://api.github.com/repos/huggingface/datasets/issues/3503/events
https://github.com/huggingface/datasets/issues/3503
1,090,472,735
I_kwDODunzps5A_0sf
3,503
Batched in filter throws error
{ "login": "gpucce", "id": 32967787, "node_id": "MDQ6VXNlcjMyOTY3Nzg3", "avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gpucce", "html_url": "https://github.com/gpucce", "followers_url": "https://api.github.com/users/gpucce/followers", "following_url": "https://api.github.com/users/gpucce/following{/other_user}", "gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}", "starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpucce/subscriptions", "organizations_url": "https://api.github.com/users/gpucce/orgs", "repos_url": "https://api.github.com/users/gpucce/repos", "events_url": "https://api.github.com/users/gpucce/events{/privacy}", "received_events_url": "https://api.github.com/users/gpucce/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,640,779,264,000
1,640,779,264,000
null
NONE
null
I hope this is really a bug; I could not find it among the open issues. ## Describe the bug Using `batched=False` in `Dataset.filter` throws an error ```python TypeError: filter() got an unexpected keyword argument 'batched' ``` but in the docs it is listed as an argument. ## Steps to reproduce the bug ```python task = "mnli" max_length = 128 tokenizer = AutoTokenizer.from_pretrained("./pretrained_models/pretrained_models_drozd/sl250.m.gsic.titech.ac.jp:8000/21.11.17_06.30.32_roberta-base_a0057/checkpoints/smpl_400M/hf/") dataset = load_dataset("glue", task) task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mnli-mm": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } ##### tokenization_parameters sentence1_key, sentence2_key = task_to_keys[task] def preprocess_function(examples, max_length): if sentence2_key is None: return tokenizer( examples[sentence1_key], truncation=True, max_length=max_length ) return tokenizer( examples[sentence1_key], examples[sentence2_key], truncation=False, padding="max_length", max_length=max_length, ) encoded_dataset = dataset.map( lambda x: preprocess_function(x, max_length=max_length), batched=False ) encoded_dataset.filter(lambda x: len(x['input_ids']) <= max_length, batched=False) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1, 1.17.0 - Platform: ubuntu - Python version: 3.8.12
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3503/timeline
null
null
null
false
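A workaround sketch for the issue above: on datasets 1.16/1.17 the predicate passed to `filter` receives a single example by default, so the unsupported `batched` keyword can simply be dropped. The toy dataset below is illustrative, not taken from the report.

```python
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[1, 2, 3], list(range(200))]})
max_length = 128

# `filter` in datasets 1.16/1.17 takes no `batched` argument; by default the
# predicate receives one example at a time, so omit the keyword entirely.
short = ds.filter(lambda x: len(x["input_ids"]) <= max_length)
print(short.num_rows)  # 1
```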
https://api.github.com/repos/huggingface/datasets/issues/3502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3502/comments
https://api.github.com/repos/huggingface/datasets/issues/3502/events
https://github.com/huggingface/datasets/pull/3502
1,090,438,558
PR_kwDODunzps4wXSLi
3,502
Add QuALITY
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,640,775,526,000
1,640,775,526,000
null
CONTRIBUTOR
null
Fixes #3441.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3502/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3502", "html_url": "https://github.com/huggingface/datasets/pull/3502", "diff_url": "https://github.com/huggingface/datasets/pull/3502.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3502.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3501/comments
https://api.github.com/repos/huggingface/datasets/issues/3501/events
https://github.com/huggingface/datasets/pull/3501
1,090,413,758
PR_kwDODunzps4wXM8H
3,501
Update pib dataset card
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,772,880,000
1,640,776,401,000
1,640,776,401,000
MEMBER
null
Related to #3496
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3501/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3501", "html_url": "https://github.com/huggingface/datasets/pull/3501", "diff_url": "https://github.com/huggingface/datasets/pull/3501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3501.patch", "merged_at": 1640776401000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3500/comments
https://api.github.com/repos/huggingface/datasets/issues/3500/events
https://github.com/huggingface/datasets/pull/3500
1,090,406,133
PR_kwDODunzps4wXLTB
3,500
Docs: Add VCTK dataset description
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,640,772,125,000
1,640,772,125,000
null
CONTRIBUTOR
null
This PR is a very minor followup to #1837, with only docs changes (single comment string).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3500/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3500", "html_url": "https://github.com/huggingface/datasets/pull/3500", "diff_url": "https://github.com/huggingface/datasets/pull/3500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3500.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3499/comments
https://api.github.com/repos/huggingface/datasets/issues/3499/events
https://github.com/huggingface/datasets/issues/3499
1,090,132,618
I_kwDODunzps5A-hqK
3,499
Adjusting chunk size for streaming datasets
{ "login": "JoelNiklaus", "id": 3775944, "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoelNiklaus", "html_url": "https://github.com/JoelNiklaus", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,640,726,273,000
1,640,726,273,000
null
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** I want to use mc4, which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?) I hit a performance bottleneck because of the frequent decompression. **Describe the solution you'd like** I would appreciate a parameter in the load_dataset function that allows me to set the chunk size myself (to a value like 100,000 in my case). That way, I hope to improve the processing time.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3499/timeline
null
null
null
false
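Pending such a parameter, a user-side sketch that pulls the stream in large chunks to amortize per-document Python overhead; it does not change the underlying decompression buffer, and it assumes the mc4 config streams in your `datasets` version.

```python
from itertools import islice

from datasets import load_dataset


def iter_chunks(iterable, chunk_size=100_000):
    # Yield lists of up to chunk_size examples from any iterable.
    it = iter(iterable)
    while chunk := list(islice(it, chunk_size)):
        yield chunk


streamed = load_dataset("mc4", "en", split="train", streaming=True)
for chunk in iter_chunks(streamed, chunk_size=100_000):
    kept = [doc for doc in chunk if len(doc["text"]) > 100]  # illustrative filter
    print(f"kept {len(kept)} of {len(chunk)} documents in this chunk")
```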
https://api.github.com/repos/huggingface/datasets/issues/3498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3498/comments
https://api.github.com/repos/huggingface/datasets/issues/3498/events
https://github.com/huggingface/datasets/pull/3498
1,090,096,332
PR_kwDODunzps4wWL5U
3,498
update `pretty_name` for all datasets
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,640,721,007,000
1,640,765,478,000
null
CONTRIBUTOR
null
I made a script some time back to fetch `pretty_names` from the `papers_with_code` dataset, plus some rules. Updating them in the `README` of `datasets`. I took only the first 200 datasets into consideration, and after some eyeballing most of them looked good to me!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3498/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3498", "html_url": "https://github.com/huggingface/datasets/pull/3498", "diff_url": "https://github.com/huggingface/datasets/pull/3498.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3498.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3497/comments
https://api.github.com/repos/huggingface/datasets/issues/3497/events
https://github.com/huggingface/datasets/issues/3497
1,090,050,148
I_kwDODunzps5A-Nhk
3,497
Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Same error occures when using max samples with https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py" ]
1,640,714,629,000
1,640,725,234,000
null
MEMBER
null
Running: ```python from datasets import load_dataset, DatasetDict import datasets from transformers import AutoFeatureExtractor raw_datasets = DatasetDict() raw_datasets["train"] = load_dataset("common_voice", "ab", split="train") feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") raw_datasets = raw_datasets.cast_column( "audio", datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate) ) num_workers = 16 def prepare_dataset(batch): sample = batch["audio"] inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"]) batch["input_values"] = inputs.input_values[0] batch["input_length"] = len(batch["input_values"]) return batch raw_datasets.map( prepare_dataset, remove_columns=next(iter(raw_datasets.values())).column_names, num_proc=16, desc="preprocess datasets", ) ``` gives ```bash File "/home/patrick/experiments/run_bug.py", line 25, in <module> raw_datasets.map( File "/home/patrick/python_bin/datasets/dataset_dict.py", line 492, in map { File "/home/patrick/python_bin/datasets/dataset_dict.py", line 493, in <dictcomp> k: dataset.map( File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2139, in map shards = [ File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2140, in <listcomp> self.shard(num_shards=num_proc, index=rank, contiguous=True, keep_in_memory=keep_in_memory) File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 3164, in shard return self.select( File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 485, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/patrick/python_bin/datasets/fingerprint.py", line 411, in wrapper out = func(self, *args, **kwargs) File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2756, in select return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint) File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2667, in _new_dataset_with_indices return Dataset( File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 659, in __init__ raise ValueError( ValueError: External features info don't match the dataset: Got {'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)} with type struct<client_id: string, path: string, audio: string, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string> but expected something like {'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)} with type struct<client_id: string, path: string, audio: struct<path: string, bytes: binary>, 
sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string> ``` Versions: ```python - `datasets` version: 1.16.2.dev0 - Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 6.0.1 ``` and `transformers`: ``` - `transformers` version: 4.16.0.dev0 - Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33 - Python version: 3.9.7 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3497/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3496/comments
https://api.github.com/repos/huggingface/datasets/issues/3496/events
https://github.com/huggingface/datasets/pull/3496
1,089,989,155
PR_kwDODunzps4wV1_w
3,496
Update version of pib dataset and make it streamable
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It seems like there is still an error: `Message: 'TarContainedFile' object has no attribute 'readable'`\r\n\r\nhttps://huggingface.co/datasets/pib/viewer", "@severo I was wondering about that...\r\n\r\nIt works fine when I run it in streaming mode in my terminal:\r\n```python\r\nIn [3]: from datasets import load_dataset; ds = load_dataset(\"pib\", \"gu-pa\", split=\"train\", streaming=True); item = next(iter(ds))\r\n\r\nIn [4]: item\r\nOut[4]: \r\n{'translation': {'gu': 'એવો નિર્ણય લેવાયો હતો કે ખંતપૂર્વકની કામગીરી હાથ ધરવા, કાયદેસર અને ટેકનિકલ મૂલ્યાંકન કરવા, વેન્ચર કેપિટલ ઇન્વેસ્ટમેન્ટ સમિતિની બેઠક યોજવા વગેરે એઆઇએફને કરવામાં આવેલ પ્રતિબદ્ધતાના 0.50 ટકા સુધી અને બાકીની રકમ એફએફએસને પૂર્ણ કરવામાં આવશે.',\r\n 'pa': 'ਇਹ ਵੀ ਫੈਸਲਾ ਕੀਤਾ ਗਿਆ ਕਿ ਐੱਫਆਈਆਈ ਅਤੇ ਬਕਾਏ ਲਈ ਕੀਤੀਆਂ ਗਈਆਂ ਵਚਨਬੱਧਤਾਵਾਂ ਦੇ 0.50 % ਦੀ ਸੀਮਾ ਤੱਕ ਐੱਫਈਐੱਸ ਨੂੰ ਮਿਲਿਆ ਜਾਏਗਾ, ਇਸ ਨਾਲ ਉੱਦਮ ਪੂੰਜੀ ਨਿਵੇਸ਼ ਕਮੇਟੀ ਦੀ ਬੈਠਕ ਦਾ ਆਯੋਜਨ ਉਚਿਤ ਸਾਵਧਾਨੀ, ਕਾਨੂੰਨੀ ਅਤੇ ਤਕਨੀਕੀ ਮੁੱਲਾਂਕਣ ਲਈ ਸੰਚਾਲਨ ਖਰਚ ਆਦਿ ਦੀ ਪੂਰਤੀ ਹੋਵੇਗੀ।'}}\r\n```" ]
1,640,707,315,000
1,640,795,865,000
1,640,767,377,000
MEMBER
null
This PR: - Updates version of pib dataset: from 0.0.0 to 1.3.0 - Makes the dataset streamable Fix #3491. CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3496/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3496", "html_url": "https://github.com/huggingface/datasets/pull/3496", "diff_url": "https://github.com/huggingface/datasets/pull/3496.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3496.patch", "merged_at": 1640767377000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3495/comments
https://api.github.com/repos/huggingface/datasets/issues/3495/events
https://github.com/huggingface/datasets/issues/3495
1,089,983,632
I_kwDODunzps5A99SQ
3,495
Add VoxLingua107
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
1,640,706,703,000
1,640,706,703,000
null
CONTRIBUTOR
null
## Adding a Dataset - **Name:** VoxLingua107 - **Description:** VoxLingua107 is a speech dataset for training spoken language identification models. - **Paper:** https://arxiv.org/abs/2011.12998 - **Data:** http://bark.phon.ioc.ee/voxlingua107/ - **Motivation:** 107 languages, totaling 6628 hours for the train split. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3495/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3494/comments
https://api.github.com/repos/huggingface/datasets/issues/3494/events
https://github.com/huggingface/datasets/pull/3494
1,089,983,103
PR_kwDODunzps4wV0vB
3,494
Clone full repo to detect new tags when mirroring datasets on the Hub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Good catch !!", "The CI fail is unrelated to this PR and fixed on master, merging :)" ]
1,640,706,647,000
1,640,707,641,000
1,640,707,640,000
MEMBER
null
The new releases of `datasets` were not detected because the shallow clone in the CI wasn't getting the git tags. By cloning the full repository we can properly detect a new release, and tag all the dataset repositories accordingly. cc @SBrandeis
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3494/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3494/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3494", "html_url": "https://github.com/huggingface/datasets/pull/3494", "diff_url": "https://github.com/huggingface/datasets/pull/3494.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3494.patch", "merged_at": 1640707640000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3493/comments
https://api.github.com/repos/huggingface/datasets/issues/3493/events
https://github.com/huggingface/datasets/pull/3493
1,089,967,286
PR_kwDODunzps4wVxfr
3,493
Fix VCTK encoding
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,705,016,000
1,640,706,498,000
1,640,706,497,000
MEMBER
null
UTF-8 encoding was missing in the VCTK dataset builder added in #3351.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3493/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3493", "html_url": "https://github.com/huggingface/datasets/pull/3493", "diff_url": "https://github.com/huggingface/datasets/pull/3493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3493.patch", "merged_at": 1640706497000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3492/comments
https://api.github.com/repos/huggingface/datasets/issues/3492/events
https://github.com/huggingface/datasets/pull/3492
1,089,952,943
PR_kwDODunzps4wVufr
3,492
Add `gzip` for `to_json`
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,640,703,671,000
1,640,712,204,000
null
CONTRIBUTOR
null
(Partially) closes #3480. I have added `gzip` compression for `to_json`. I realised we can run into this compression problem with `to_csv` as well. `IOHandler` can be used for `to_csv` too. Please let me know if any changes are required.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3492/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3492", "html_url": "https://github.com/huggingface/datasets/pull/3492", "diff_url": "https://github.com/huggingface/datasets/pull/3492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3492.patch", "merged_at": null }
true
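For readers on a `datasets` version without this PR, a stdlib-only sketch of the same effect: writing rows as gzip-compressed JSON Lines.

```python
import gzip
import json

rows = [{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}]

# Equivalent effect with the standard library: gzip-compressed JSON Lines.
with gzip.open("dump.jsonl.gz", "wt", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```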
https://api.github.com/repos/huggingface/datasets/issues/3491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3491/comments
https://api.github.com/repos/huggingface/datasets/issues/3491/events
https://github.com/huggingface/datasets/issues/3491
1,089,918,018
I_kwDODunzps5A9tRC
3,491
Update version of pib dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,640,700,238,000
1,640,767,377,000
1,640,767,377,000
MEMBER
null
On the Hub we have v0, while v1.3 already exists. Related to bigscience-workshop/data_tooling#130
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3491/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3490/comments
https://api.github.com/repos/huggingface/datasets/issues/3490/events
https://github.com/huggingface/datasets/issues/3490
1,089,730,181
I_kwDODunzps5A8_aF
3,490
Does datasets support load text from HDFS?
{ "login": "dancingpipi", "id": 20511825, "node_id": "MDQ6VXNlcjIwNTExODI1", "avatar_url": "https://avatars.githubusercontent.com/u/20511825?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dancingpipi", "html_url": "https://github.com/dancingpipi", "followers_url": "https://api.github.com/users/dancingpipi/followers", "following_url": "https://api.github.com/users/dancingpipi/following{/other_user}", "gists_url": "https://api.github.com/users/dancingpipi/gists{/gist_id}", "starred_url": "https://api.github.com/users/dancingpipi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dancingpipi/subscriptions", "organizations_url": "https://api.github.com/users/dancingpipi/orgs", "repos_url": "https://api.github.com/users/dancingpipi/repos", "events_url": "https://api.github.com/users/dancingpipi/events{/privacy}", "received_events_url": "https://api.github.com/users/dancingpipi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,640,681,762,000
1,640,682,973,000
null
NONE
null
The raw text data is stored on HDFS because the dataset is too large to store on my development machine, so I wonder: does datasets support reading data from HDFS?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3490/timeline
null
null
null
false
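One possible approach to the question above, sketched under the assumption that an fsspec HDFS backend (e.g. via pyarrow) is installed and the namenode is reachable; the path is hypothetical, and data that does not fit in memory would have to be consumed incrementally instead of collected into a list.

```python
import fsspec

from datasets import Dataset

# Hypothetical HDFS path; requires an fsspec HDFS backend (e.g. pyarrow).
with fsspec.open("hdfs://namenode:8020/data/corpus.txt", "rt") as f:
    lines = [line.rstrip("\n") for line in f]

ds = Dataset.from_dict({"text": lines})
print(ds)
```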
https://api.github.com/repos/huggingface/datasets/issues/3489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3489/comments
https://api.github.com/repos/huggingface/datasets/issues/3489/events
https://github.com/huggingface/datasets/pull/3489
1,089,401,926
PR_kwDODunzps4wT97d
3,489
Avoid unnecessary list creations
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,640,629,256,000
1,640,681,995,000
null
CONTRIBUTOR
null
Like in `join([... for s in ...])`. Also changed other things that I noticed: * Use a `with` statement for many `open` calls that were missing one, so the files don't remain open. * Remove unused variables. * Many HTTP links converted into HTTPS (verified). * Remove unnecessary "r" mode arg in `open` (double-checked it was actually the default in each case). * Remove Python 2 style of using `super`. * Run `pyupgrade $(find . -name "*.py" -type f) --py36-plus` (which already does some of the previous points). * Run `dos2unix $(find . -name "*.py" -type f)` (CRLF to LF line endings). * Fix typos.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3489/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3489", "html_url": "https://github.com/huggingface/datasets/pull/3489", "diff_url": "https://github.com/huggingface/datasets/pull/3489.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3489.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3488/comments
https://api.github.com/repos/huggingface/datasets/issues/3488/events
https://github.com/huggingface/datasets/issues/3488
1,089,345,653
I_kwDODunzps5A7hh1
3,488
URL query parameters are set as path in the compression hop for fsspec
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,640,622,540,000
1,640,622,540,000
null
MEMBER
null
## Describe the bug There is an issue with `StreamingDownloadManager._extract`. I don't know how the test `test_streaming_gg_drive_gzipped` passes: for ```python TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz" urlpath = StreamingDownloadManager().download_and_extract(TEST_GG_DRIVE_GZIPPED_URL) ``` it gives this `urlpath`: ```python 'gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz::https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz' ``` The gzip path makes no sense: `gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz` ## Steps to reproduce the bug ```python from datasets.utils.streaming_download_manager import StreamingDownloadManager dl_manager = StreamingDownloadManager() urlpath = dl_manager.extract("https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz") print(urlpath) ``` ## Expected results The query parameters should not be set as part of the path.
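A sketch of the kind of fix one might expect here (hypothetical, not the actual patch): derive the inner path of the compression hop from the URL path only, discarding the query string.

```python
from urllib.parse import urlparse

url = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz"
inner = urlparse(url).path.split("/")[-1]  # "uc": the query string is dropped
print(f"gzip://{inner}::{url}")
```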
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3488/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3487/comments
https://api.github.com/repos/huggingface/datasets/issues/3487/events
https://github.com/huggingface/datasets/pull/3487
1,089,209,031
PR_kwDODunzps4wTVeN
3,487
Update ADD_NEW_DATASET.md
{ "login": "apergo-ai", "id": 68908804, "node_id": "MDQ6VXNlcjY4OTA4ODA0", "avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apergo-ai", "html_url": "https://github.com/apergo-ai", "followers_url": "https://api.github.com/users/apergo-ai/followers", "following_url": "https://api.github.com/users/apergo-ai/following{/other_user}", "gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions", "organizations_url": "https://api.github.com/users/apergo-ai/orgs", "repos_url": "https://api.github.com/users/apergo-ai/repos", "events_url": "https://api.github.com/users/apergo-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/apergo-ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,607,891,000
1,640,617,245,000
1,640,617,245,000
CONTRIBUTOR
null
Fixed the `make style` prompt for Windows Terminal.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3487/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3487", "html_url": "https://github.com/huggingface/datasets/pull/3487", "diff_url": "https://github.com/huggingface/datasets/pull/3487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3487.patch", "merged_at": 1640617245000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3486/comments
https://api.github.com/repos/huggingface/datasets/issues/3486/events
https://github.com/huggingface/datasets/pull/3486
1,089,171,551
PR_kwDODunzps4wTNd1
3,486
Fix weird spacing in ManualDownloadError message
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,604,036,000
1,640,682,206,000
1,640,682,028,000
CONTRIBUTOR
null
`textwrap.dedent` strips the leading whitespace that all lines have in common. Before this change, the first line of the message had no leading spaces, so nothing was dedented.
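A quick illustration of that behavior with the standard library:

```python
import textwrap

# A line with no leading whitespace makes the common prefix empty,
# so dedent leaves everything as-is:
print(textwrap.dedent("no leading space\n    still indented\n"))

# With a consistent prefix, dedent strips it from every line:
print(textwrap.dedent("    first line\n    second line\n"))
```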
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3486/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3486", "html_url": "https://github.com/huggingface/datasets/pull/3486", "diff_url": "https://github.com/huggingface/datasets/pull/3486.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3486.patch", "merged_at": 1640682028000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3485/comments
https://api.github.com/repos/huggingface/datasets/issues/3485/events
https://github.com/huggingface/datasets/issues/3485
1,089,027,581
I_kwDODunzps5A6T39
3,485
skip columns which cannot set to specific format when set_format
{ "login": "tshu-w", "id": 13161779, "node_id": "MDQ6VXNlcjEzMTYxNzc5", "avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tshu-w", "html_url": "https://github.com/tshu-w", "followers_url": "https://api.github.com/users/tshu-w/followers", "following_url": "https://api.github.com/users/tshu-w/following{/other_user}", "gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}", "starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions", "organizations_url": "https://api.github.com/users/tshu-w/orgs", "repos_url": "https://api.github.com/users/tshu-w/repos", "events_url": "https://api.github.com/users/tshu-w/events{/privacy}", "received_events_url": "https://api.github.com/users/tshu-w/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "You can add columns that you wish to set into `torch` format using `dataset.set_format(\"torch\", ['id', 'abc'])` so that input batch of the transform only contains those columns", "Sorry, I miss `output_all_columns` args and thought after `dataset.set_format(\"torch\", columns=columns)` I can only get specific columns I assigned." ]
1,640,589,595,000
1,640,596,027,000
1,640,596,027,000
NONE
null
**Is your feature request related to a problem? Please describe.** When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns. **Describe the solution you'd like** Skip the columns that cannot be set to the specific format in `set_format`, instead of raising an error.
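As the comments point out, the current API already covers this use case; a minimal sketch with made-up toy columns:

```python
from datasets import Dataset

dataset = Dataset.from_dict({"id": [0, 1], "text": ["a", "b"]})
# Only "id" is converted to tensors; "text" is still returned as a plain
# Python string thanks to output_all_columns=True.
dataset.set_format("torch", columns=["id"], output_all_columns=True)
print(dataset[0])  # {"id": tensor(0), "text": "a"}
```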
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3485/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3484/comments
https://api.github.com/repos/huggingface/datasets/issues/3484/events
https://github.com/huggingface/datasets/issues/3484
1,088,910,402
I_kwDODunzps5A53RC
3,484
make shape verification to use ArrayXD instead of nested lists for map
{ "login": "tshu-w", "id": 13161779, "node_id": "MDQ6VXNlcjEzMTYxNzc5", "avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tshu-w", "html_url": "https://github.com/tshu-w", "followers_url": "https://api.github.com/users/tshu-w/followers", "following_url": "https://api.github.com/users/tshu-w/following{/other_user}", "gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}", "starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions", "organizations_url": "https://api.github.com/users/tshu-w/orgs", "repos_url": "https://api.github.com/users/tshu-w/repos", "events_url": "https://api.github.com/users/tshu-w/events{/privacy}", "received_events_url": "https://api.github.com/users/tshu-w/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,640,571,362,000
1,640,571,362,000
null
NONE
null
As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in the [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO making the shape verification use ArrayXD instead of nested lists for `map` can help users avoid unnecessary casts. I notice datasets has done something special for `input_ids` and `attention_mask`, which will also be unnecessary once this feature is added.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3484/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3483/comments
https://api.github.com/repos/huggingface/datasets/issues/3483/events
https://github.com/huggingface/datasets/pull/3483
1,088,784,157
PR_kwDODunzps4wSAW4
3,483
Remove unused phony rule from Makefile
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,640,529,433,000
1,640,529,433,000
null
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3483/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3483", "html_url": "https://github.com/huggingface/datasets/pull/3483", "diff_url": "https://github.com/huggingface/datasets/pull/3483.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3483.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3482/comments
https://api.github.com/repos/huggingface/datasets/issues/3482/events
https://github.com/huggingface/datasets/pull/3482
1,088,317,921
PR_kwDODunzps4wQqE1
3,482
Fix duplicate keys in NewsQA
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Flaky tests?" ]
1,640,343,719,000
1,640,536,193,000
null
CONTRIBUTOR
null
* Fix duplicate keys in NewsQA when loading from CSV files. * Fix s/narqa/newsqa/ in the download-manually error message. * Make the download-manually error message display nicely when printed; otherwise, it is hard to read due to spacing issues. * Fix the format of the license text. * Reformat the code to make it simpler.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3482/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3482", "html_url": "https://github.com/huggingface/datasets/pull/3482", "diff_url": "https://github.com/huggingface/datasets/pull/3482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3482.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3481/comments
https://api.github.com/repos/huggingface/datasets/issues/3481/events
https://github.com/huggingface/datasets/pull/3481
1,088,308,343
PR_kwDODunzps4wQoJu
3,481
Fix overriding of filesystem info
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,342,551,000
1,640,344,139,000
1,640,344,139,000
MEMBER
null
Previously, `BaseCompressedFileFileSystem.info` was overridden and transformed from a method into a dict. This caused a bug in filesystem methods that use `self.info()`, e.g. `fs.isfile()`. This PR: - Adds tests for `fs.isfile` (which uses `fs.info`). - Fixes the custom `BaseCompressedFileFileSystem.info` by removing the overriding.
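For readers unfamiliar with this failure mode, here is a toy reproduction of the pattern (not the datasets code): shadowing a method with an instance attribute breaks every other method that still calls it.

```python
class ToyFileSystem:
    def info(self, path):
        return {"name": path, "type": "file"}

    def isfile(self, path):
        # Breaks if `info` has been replaced by a plain dict.
        return self.info(path)["type"] == "file"

fs = ToyFileSystem()
fs.info = {"name": "archive", "type": "directory"}  # shadows the method
fs.isfile("data.txt")  # TypeError: 'dict' object is not callable
```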
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3481/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3481", "html_url": "https://github.com/huggingface/datasets/pull/3481", "diff_url": "https://github.com/huggingface/datasets/pull/3481.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3481.patch", "merged_at": 1640344139000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3480
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3480/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3480/comments
https://api.github.com/repos/huggingface/datasets/issues/3480/events
https://github.com/huggingface/datasets/issues/3480
1,088,267,110
I_kwDODunzps5A3aNm
3,480
the compression format requested when saving a dataset in json format is not respected
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Thanks for reporting @SaulLu.\r\n\r\nAt first sight I think the problem is caused because `pandas` only takes into account the `compression` parameter if called with a non-null file path or buffer. And in our implementation, we call pandas `to_json` with `None` `path_or_buf`.\r\n\r\nWe should fix this:\r\n- either handling directly the `compression` parameter ourselves\r\n- or refactoring to pass non-null path or buffer to pandas\r\n\r\nCC: @lhoestq", "I was thinking if we can handle the `compression` parameter by ourselves? Compression types will be similar to what `pandas` offer. Initially, we can try this with 2-3 compression types and see how good/bad it is? Let me know if it sounds good, I can raise a PR for this next week", "Hi ! Thanks for your help @bhavitvyamalik :)\r\nMaybe let's start with `gzip` ? I think it's the most common use case, then if we're fine with it we can add other compression methods" ]
1,640,337,831,000
1,640,622,318,000
null
NONE
null
## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs – Parameters to pass to pandas’s pandas.DataFrame.to_json. However, when we pass for example `compression="gzip"`, the saved file is not compressed. Would you also have expected compression to be applied? :relaxed: ## Steps to reproduce the bug ```python my_dict = {"a": [1, 2, 3], "b": [1, 2, 3]} ``` ### Result with datasets ```python from datasets import Dataset dataset = Dataset.from_dict(my_dict) dataset.to_json("dic_with_datasets.jsonl.gz", compression="gzip") !cat dic_with_datasets.jsonl.gz ``` output ``` {"a":1,"b":1} {"a":2,"b":2} {"a":3,"b":3} ``` Note: I would have expected to see binary data here ### Result with pandas ```python import pandas as pd df = pd.DataFrame(my_dict) df.to_json("dic_with_pandas.jsonl.gz", lines=True, orient="records", compression="gzip") !cat dic_with_pandas.jsonl.gz ``` output ``` 4��a�dic_with_pandas.jsonl��VJT�2�QJ��\� ��g��yƵ���������)��� ``` Note: It looks like binary data ## Expected results I would have expected the saved result with datasets to also be a binary file ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.11 - PyArrow version: 5.0.0
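Until this is fixed, a hedged workaround sketch is to handle the compression ourselves; it assumes `to_json` accepts a file-like `path_or_buf`, mirroring its pandas counterpart.

```python
import gzip

from datasets import Dataset

dataset = Dataset.from_dict({"a": [1, 2, 3], "b": [1, 2, 3]})
# Open the gzip stream ourselves and let to_json write into it.
with gzip.open("dic_with_datasets.jsonl.gz", "wb") as f:
    dataset.to_json(f)
```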
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3480/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3479/comments
https://api.github.com/repos/huggingface/datasets/issues/3479/events
https://github.com/huggingface/datasets/issues/3479
1,088,232,880
I_kwDODunzps5A3R2w
3,479
Dataset preview is not available (I think for all Hugging Face datasets)
{ "login": "Abirate", "id": 66887439, "node_id": "MDQ6VXNlcjY2ODg3NDM5", "avatar_url": "https://avatars.githubusercontent.com/u/66887439?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Abirate", "html_url": "https://github.com/Abirate", "followers_url": "https://api.github.com/users/Abirate/followers", "following_url": "https://api.github.com/users/Abirate/following{/other_user}", "gists_url": "https://api.github.com/users/Abirate/gists{/gist_id}", "starred_url": "https://api.github.com/users/Abirate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abirate/subscriptions", "organizations_url": "https://api.github.com/users/Abirate/orgs", "repos_url": "https://api.github.com/users/Abirate/repos", "events_url": "https://api.github.com/users/Abirate/events{/privacy}", "received_events_url": "https://api.github.com/users/Abirate/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "You're right, we have an issue today with the datasets preview. We're investigating.", "It should be fixed now. Thanks for reporting.", "Down again. ", "Fixed for good." ]
1,640,333,928,000
1,640,356,066,000
1,640,356,066,000
NONE
null
## Dataset viewer issue for '*french_book_reviews*' **Link:** https://huggingface.co/datasets/Abirate/french_book_reviews **short description of the issue** For my dataset, the dataset preview is no longer functional (it used to work: The dataset had been added the day before and it was fine...) And, after looking over the datasets, I discovered that this issue affects all Hugging Face datasets (as of yesterday, December 23, 2021, around 10 p.m. (CET)). **Am I the one who added this dataset** : Yes **Note**: here a screenshot showing the issue ![Dataset preview is not available for my dataset](https://user-images.githubusercontent.com/66887439/147333078-60734578-420d-4e91-8691-a90afeaa8948.jpg) **And here for glue dataset :** ![Dataset preview is not available for other Hugging Face datasets(glue)](https://user-images.githubusercontent.com/66887439/147333492-26fa530c-befd-4992-8361-70c51397a25a.jpg)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3479/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3479/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3478/comments
https://api.github.com/repos/huggingface/datasets/issues/3478/events
https://github.com/huggingface/datasets/pull/3478
1,087,860,180
PR_kwDODunzps4wPMWq
3,478
Extend support for streaming datasets that use os.walk
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Nice. I'll update the dataset viewer once merged, and test on these four datasets" ]
1,640,277,775,000
1,640,343,020,000
1,640,343,019,000
MEMBER
null
This PR extends the support in streaming mode for datasets that use `os.walk`, by patching that function. This PR adds support for streaming mode to datasets: 1. autshumato 1. code_x_glue_cd_code_to_text 1. code_x_glue_tc_nl_code_search_adv 1. nchlt CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3478/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3478", "html_url": "https://github.com/huggingface/datasets/pull/3478", "diff_url": "https://github.com/huggingface/datasets/pull/3478.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3478.patch", "merged_at": 1640343019000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3477/comments
https://api.github.com/repos/huggingface/datasets/issues/3477/events
https://github.com/huggingface/datasets/pull/3477
1,087,850,253
PR_kwDODunzps4wPKPX
3,477
Use `iter_files` instead of `str(Path(...)` in image dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "`iter_archive` is about to support ZIP archives. I think we should use this no ?\r\n\r\nsee #3347 https://github.com/huggingface/datasets/pull/3379", "I was interested in the support for isfile/dir in remote.\r\n\r\nAnyway, `iter_files` will be available for community users.", "I'm not a big fan of having two functions that do the same thing. What do you think ?", "They do not do the same thing:\r\n- One iterates over files in a directory\r\n- The other I guess will iterate over the members of an archive", "Makes sense ! Sounds good then - sorry for my misunderstanding\r\n\r\nNote that `iter_archive` will be more performant for data streaming that `iter_files` thanks to the buffering so maybe in the future we can `iter_archive` for some of these datasets", "Yes, @lhoestq I agree with you: once `iter_archive` supports zip files, it will be more suitable than `iter_files` for these 2 datasets.\r\n\r\nAnyway, this PR also implements `isfile`/`isdir` in streaming mode, besides fixing `iter_files`. And I'm interested in having those in master.\r\n\r\nMaybe, could we merge this PR into master and take note to refactor the datasets to use `iter_archive` once zip is supported?\r\nOther option could be to split this PR into 2..." ]
1,640,276,815,000
1,640,704,502,000
1,640,704,502,000
CONTRIBUTOR
null
Use `iter_files` in the `beans` and the `cats_vs_dogs` dataset scripts as suggested by @albertvillanova. Additional changes: * Fix `iter_files` in `MockDownloadManager` (see this [CI error](https://app.circleci.com/pipelines/github/huggingface/datasets/9247/workflows/2657ff8a-b531-4fd9-a9fc-6541a72e8d83/jobs/57028)) * Add support for `os.path.isdir` and `os.path.isfile` in streaming (`os.path.isfile` is needed in `StreamingDownloadManager`'s `iter_files` to make `cats_vs_dogs` streamable) TODO: - [ ] add tests for `xisdir` and `xisfile`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3477/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3477", "html_url": "https://github.com/huggingface/datasets/pull/3477", "diff_url": "https://github.com/huggingface/datasets/pull/3477.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3477.patch", "merged_at": 1640704502000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3476/comments
https://api.github.com/repos/huggingface/datasets/issues/3476/events
https://github.com/huggingface/datasets/pull/3476
1,087,622,872
PR_kwDODunzps4wOZ8a
3,476
Extend support for streaming datasets that use ET.parse
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,258,326,000
1,640,273,670,000
1,640,273,670,000
MEMBER
null
This PR extends the support in streaming mode for datasets that use `ET.parse`, by patching the function. This PR adds support for streaming mode to datasets: 1. ami 1. assin 1. assin2 1. counter 1. enriched_web_nlg 1. europarl_bilingual 1. hyperpartisan_news_detection 1. polsum 1. qa4mre 1. quail 1. ted_talks_iwslt 1. udhr 1. web_nlg 1. winograd_wsc CC: @severo
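A rough sketch of how this kind of patching can work (hypothetical code, not the actual implementation): route `ET.parse` through a streaming-aware opener so XML files inside remote archives can be parsed without downloading everything first.

```python
import xml.etree.ElementTree as ET

def streaming_et_parse(filepath, opener=open):
    # `opener` stands in for the download manager's extended open; the real
    # code patches ET.parse itself so dataset scripts need no changes.
    with opener(filepath, "rb") as f:
        return ET.parse(f)
```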
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3476/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3476/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3476", "html_url": "https://github.com/huggingface/datasets/pull/3476", "diff_url": "https://github.com/huggingface/datasets/pull/3476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3476.patch", "merged_at": 1640273670000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3475/comments
https://api.github.com/repos/huggingface/datasets/issues/3475/events
https://github.com/huggingface/datasets/issues/3475
1,087,352,041
I_kwDODunzps5Az6zp
3,475
The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish
{ "login": "puzzler10", "id": 17426779, "node_id": "MDQ6VXNlcjE3NDI2Nzc5", "avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/puzzler10", "html_url": "https://github.com/puzzler10", "followers_url": "https://api.github.com/users/puzzler10/followers", "following_url": "https://api.github.com/users/puzzler10/following{/other_user}", "gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}", "starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions", "organizations_url": "https://api.github.com/users/puzzler10/orgs", "repos_url": "https://api.github.com/users/puzzler10/repos", "events_url": "https://api.github.com/users/puzzler10/events{/privacy}", "received_events_url": "https://api.github.com/users/puzzler10/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @puzzler10, thanks for reporting.\r\n\r\nPlease note this dataset is not hosted on Hugging Face Hub. See: \r\nhttps://github.com/huggingface/datasets/blob/c8f914473b041833fd47178fa4373cdcb56ac522/datasets/rotten_tomatoes/rotten_tomatoes.py#L42\r\n\r\nIf there are issues with the source data of a dataset, you should contact the data owners/creators instead. In the homepage associated with this dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/), you can find the authors of the dataset and how to contact them:\r\n> If you have any questions or comments regarding this site, please send email to Bo Pang or Lillian Lee.\r\n\r\nP.S.: Please also note that the example you gave of non-English review is in Portuguese (not Spanish). ;)", "Maybe best to just put a quick sentence in the dataset description that highlights this? " ]
1,640,231,803,000
1,640,305,383,000
null
NONE
null
## Describe the bug See title. I don't think this is intentional, and they probably should be removed. If they stay, the dataset description should at least be updated to make this clear to the user. ## Steps to reproduce the bug Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomatoes) for the dataset, set the offset to 4160 for the train dataset, and scroll through the results. I found ones at index 4166 and 4173. There are others too (e.g. index 2888), but those two are easy to find that way. ## Expected results English movie reviews only. ## Actual results Example of a Spanish movie review (4173): > "É uma pena que , mais tarde , o próprio filme abandone o tom de paródia e passe a utilizar os mesmos clichês que havia satirizado "
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3475/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3474
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3474/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3474/comments
https://api.github.com/repos/huggingface/datasets/issues/3474/events
https://github.com/huggingface/datasets/pull/3474
1,086,945,384
PR_kwDODunzps4wMMt0
3,474
Decode images when iterating
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,187,289,000
1,640,707,690,000
1,640,707,690,000
MEMBER
null
If I iterate over a vision dataset, the images are not decoded, and the dictionary with the bytes is returned. This PR enables image decoding in `Dataset.__iter__`. Close https://github.com/huggingface/datasets/issues/3473
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3474/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3474", "html_url": "https://github.com/huggingface/datasets/pull/3474", "diff_url": "https://github.com/huggingface/datasets/pull/3474.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3474.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3473/comments
https://api.github.com/repos/huggingface/datasets/issues/3473/events
https://github.com/huggingface/datasets/issues/3473
1,086,937,610
I_kwDODunzps5AyVoK
3,473
Iterating over a vision dataset doesn't decode the images
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
closed
false
null
[]
null
[ "As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed.", "> I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed\r\n\r\nhttps://github.com/huggingface/datasets/pull/3430 will add more control to decoding, so I think it's OK to enable decoding in `__iter__` for now. After we merge the linked PR, the user can easily disable it again.", "@mariosasko I wonder why there is no issue in `Audio` feature with decoding disabled in `__iter__`, whereas there is in `Image` feature.\r\n\r\nEnabling decoding in `__iter__` will make fail Audio regressions tests: https://github.com/huggingface/datasets/runs/4608657230?check_suite_focus=true\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_not_decoded\r\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded\r\n========================= 2 failed, 15 passed in 8.37s =========================", "Please also note that the regression tests were implemented in accordance with the specifications:\r\n- when doing a `map` (wich calls `__iter__`) of a function that doesn't access the audio field, the decoding should be disabled; this is why the decoding is disabled in `__iter__` (and only enabled in `__getitem__`).", "> I wonder why there is no issue in Audio feature with decoding disabled in __iter__, whereas there is in Image feature.\r\n\r\n@albertvillanova Not sure if I understand this part. Currently, both the Image and the Audio feature don't decode data in `__iter__`, so their behavior is aligned there.\r\n", "Therefore, this is not an issue, neither for Audio nor Image feature.\r\n\r\nCould you please elaborate more on the expected use case? @lhoestq @NielsRogge \r\n\r\nThe expected use cases (in accordance with the specs: see #2324):\r\n- decoding should be enabled when accessing a specific item (`__getitem__`)\r\n- decoding should be disabled while iterating (`__iter__`) to allow preprocessing of non-audio/image features (like label or text, for example) using `.map`\r\n- decoding should be enabled in a `.map` only if the `.map` function accesses the audio/image feature (implemented using `LazyDict`)", "For me it's not an issue, actually. I just (mistakenly) tried to iterate over a PyTorch Dataset instead of a PyTorch DataLoader, \r\n\r\ni.e. I did this:\r\n\r\n`batch = next(iter(train_ds)) `\r\n\r\nwhereas I actually wanted to do\r\n\r\n`batch = next(iter(train_dataloader))`\r\n\r\nand then it turned out that in the first case, the image was a string of bytes rather than a Pillow image, hence Quentin opened an issue.", "Thanks @NielsRogge for the context.\r\n\r\nSo IMO everything is working as expected.\r\n\r\nI'm closing this issue. 
Feel free to reopen it again if further changes of the specs should be addressed.", "Thanks for the details :)\r\n\r\nI still think that it's unexpected to get different results when doing\r\n```python\r\nfor i in range(len(dataset)):\r\n sample = dataset[i]\r\n```\r\nand\r\n```python\r\nfor sample in dataset:\r\n pass\r\n```\r\neven though I understand that if you don't need to decode the data, then decoding image or audio data when iterating is a waste of time and resources.\r\n\r\nBut in this case users can still drop the column that need decoding to get the full speed back no ?" ]
1,640,186,792,000
1,640,614,401,000
1,640,272,917,000
MEMBER
null
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # passes first_image = next(iter(mnist))["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # fails ``` ## Expected results The image should be decoded, as a PIL Image ## Actual results We get a dictionary ``` {'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None} ``` ## Environment info - `datasets` version: 1.17.1.dev0 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.2 - PyArrow version: 6.0.0 The bug also exists in 1.17.0 ## Investigation I think the issue is that decoding is disabled in `__iter__`: https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661 Do you remember why it was disabled in the first place @albertvillanova ? Also cc @mariosasko @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3473/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3472/comments
https://api.github.com/repos/huggingface/datasets/issues/3472/events
https://github.com/huggingface/datasets/pull/3472
1,086,908,508
PR_kwDODunzps4wMEwA
3,472
Fix `str(Path(...))` conversion in streaming on Linux
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,185,563,000
1,640,191,973,000
1,640,191,972,000
CONTRIBUTOR
null
Fix `str(Path(...))` conversion in streaming on Linux. This should fix the streaming of the `beans` and `cats_vs_dogs` datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3472/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3472/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3472", "html_url": "https://github.com/huggingface/datasets/pull/3472", "diff_url": "https://github.com/huggingface/datasets/pull/3472.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3472.patch", "merged_at": 1640191972000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3471/comments
https://api.github.com/repos/huggingface/datasets/issues/3471/events
https://github.com/huggingface/datasets/pull/3471
1,086,588,074
PR_kwDODunzps4wLAk6
3,471
Fix Tashkeela dataset to yield stripped text
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,162,490,000
1,640,167,928,000
1,640,167,927,000
MEMBER
null
This PR: - Yields stripped text - Fixes the path for Windows - Adds the license - Adds more info to the dataset card Close bigscience-workshop/data_tooling#279
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3471/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3471", "html_url": "https://github.com/huggingface/datasets/pull/3471", "diff_url": "https://github.com/huggingface/datasets/pull/3471.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3471.patch", "merged_at": 1640167927000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3470/comments
https://api.github.com/repos/huggingface/datasets/issues/3470/events
https://github.com/huggingface/datasets/pull/3470
1,086,049,888
PR_kwDODunzps4wJO8t
3,470
Fix rendering of docs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,107,021,000
1,640,165,027,000
1,640,165,027,000
MEMBER
null
Minor fix in docs: currently, the `ClassLabel` docstring is not rendered properly.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3470/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3470", "html_url": "https://github.com/huggingface/datasets/pull/3470", "diff_url": "https://github.com/huggingface/datasets/pull/3470.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3470.patch", "merged_at": 1640165027000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3469/comments
https://api.github.com/repos/huggingface/datasets/issues/3469/events
https://github.com/huggingface/datasets/pull/3469
1,085,882,664
PR_kwDODunzps4wIrOV
3,469
Fix METEOR missing NLTK's omw-1.4
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I also modified the doctest call to raise the exception that doctest may catch, instead of `doctest.UnexpectedException`.\r\nThis will make debugging easier if it happens again" ]
1,640,096,351,000
1,640,098,348,000
1,640,098,168,000
MEMBER
null
NLTK 3.6.6 now requires `omw-1.4` to be downloaded for METEOR to work. This should fix the CI on master.
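For context, here is a minimal sketch of the situation from a user's perspective — an editorial illustration, not code from this PR, assuming NLTK >= 3.6.6 and network access to fetch the resources:

```python
# Pre-download the NLTK resources that METEOR needs under NLTK >= 3.6.6.
import nltk

nltk.download("wordnet")
nltk.download("omw-1.4")  # newly required by NLTK 3.6.6

from datasets import load_metric

meteor = load_metric("meteor")
score = meteor.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat is sitting on the mat"],
)
print(score["meteor"])
```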
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3469/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3469", "html_url": "https://github.com/huggingface/datasets/pull/3469", "diff_url": "https://github.com/huggingface/datasets/pull/3469.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3469.patch", "merged_at": 1640098168000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3468/comments
https://api.github.com/repos/huggingface/datasets/issues/3468/events
https://github.com/huggingface/datasets/pull/3468
1,085,871,301
PR_kwDODunzps4wIozO
3,468
Add COCO dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The CI failures other than a missing dummy data file and missing fields in the card are unrelated to this PR. ", "Thanks a lot for this great work and fixing TFDS based script @mariosasko 🤗 will generate the dummy dataset and write the model card tomorrow!", "@mariosasko I added the dataset card, I'm on the dummy data rn. " ]
1,640,095,670,000
1,640,183,236,000
null
CONTRIBUTOR
null
This PR adds the MS COCO dataset. Compared to the [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/object_detection/coco.py) script, this implementation adds 8 additional configs to cover the tasks other than object detection. Some notes: * the data exposed by TFDS is contained in the `2014`, `2015`, `2017` and `2017_panoptic_segmentation` configs here * I've updated `encode_nested_example` for easier handling of missing values (cc @lhoestq @albertvillanova; will add tests if you are OK with the changes in `features.py`) * this implementation should fix https://github.com/huggingface/datasets/pull/3377#issuecomment-985559427 TODOs: - [x] dataset card - [ ] dummy data cc @merveenoyan Closes #2526
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3468/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3468/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3468", "html_url": "https://github.com/huggingface/datasets/pull/3468", "diff_url": "https://github.com/huggingface/datasets/pull/3468.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3468.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3467/comments
https://api.github.com/repos/huggingface/datasets/issues/3467/events
https://github.com/huggingface/datasets/pull/3467
1,085,870,665
PR_kwDODunzps4wIoqd
3,467
Push dataset infos.json to Hub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The change from `___` to `--` was allowed by https://github.com/huggingface/moon-landing/pull/1657" ]
1,640,095,633,000
1,640,106,010,000
1,640,106,009,000
MEMBER
null
When doing `push_to_hub`, the feature types are lost (see issue https://github.com/huggingface/datasets/issues/3394). This PR fixes this by also pushing a `dataset_infos.json` file to the Hub that stores the feature types. Other minor changes: - renamed the `___` separator to `--`, since `___` is now disallowed in a name in the back-end. I tested this feature with datasets like conll2003 that have feature types like `ClassLabel` that were previously lost. Close https://github.com/huggingface/datasets/issues/3394 I would like to include this in today's release (though not mandatory), so feel free to comment/suggest changes
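As an editorial illustration of the round trip this PR is meant to preserve (the repository id below is a placeholder, not one from the PR):

```python
# Hedged sketch: feature types such as ClassLabel should survive a
# push_to_hub / load_dataset round trip once dataset_infos.json is pushed.
from datasets import load_dataset

ds = load_dataset("conll2003", split="train")
print(ds.features["ner_tags"])  # Sequence of ClassLabel

ds.push_to_hub("user/my-dataset")  # placeholder repo id

reloaded = load_dataset("user/my-dataset", split="train")
print(reloaded.features["ner_tags"])  # same ClassLabel names, not plain ints
```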
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3467/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3467/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3467", "html_url": "https://github.com/huggingface/datasets/pull/3467", "diff_url": "https://github.com/huggingface/datasets/pull/3467.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3467.patch", "merged_at": 1640106009000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3466/comments
https://api.github.com/repos/huggingface/datasets/issues/3466/events
https://github.com/huggingface/datasets/pull/3466
1,085,722,837
PR_kwDODunzps4wII3w
3,466
Add CRASS dataset
{ "login": "apergo-ai", "id": 68908804, "node_id": "MDQ6VXNlcjY4OTA4ODA0", "avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apergo-ai", "html_url": "https://github.com/apergo-ai", "followers_url": "https://api.github.com/users/apergo-ai/followers", "following_url": "https://api.github.com/users/apergo-ai/following{/other_user}", "gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions", "organizations_url": "https://api.github.com/users/apergo-ai/orgs", "repos_url": "https://api.github.com/users/apergo-ai/repos", "events_url": "https://api.github.com/users/apergo-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/apergo-ai/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi Albert,\r\nThank you for your comments.\r\nI hope I have uploaded my local git repo to include the dummy files and style reworkings.\r\nAdded YAML in Readme as well.\r\n\r\nPlease check again.\r\n\r\nHope it works now :)" ]
1,640,085,442,000
1,640,789,625,000
null
CONTRIBUTOR
null
Added crass dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3466/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3466", "html_url": "https://github.com/huggingface/datasets/pull/3466", "diff_url": "https://github.com/huggingface/datasets/pull/3466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3466.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3465/comments
https://api.github.com/repos/huggingface/datasets/issues/3465/events
https://github.com/huggingface/datasets/issues/3465
1,085,400,432
I_kwDODunzps5AseVw
3,465
Unable to load 'cnn_dailymail' dataset
{ "login": "talha1503", "id": 42352729, "node_id": "MDQ6VXNlcjQyMzUyNzI5", "avatar_url": "https://avatars.githubusercontent.com/u/42352729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/talha1503", "html_url": "https://github.com/talha1503", "followers_url": "https://api.github.com/users/talha1503/followers", "following_url": "https://api.github.com/users/talha1503/following{/other_user}", "gists_url": "https://api.github.com/users/talha1503/gists{/gist_id}", "starred_url": "https://api.github.com/users/talha1503/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/talha1503/subscriptions", "organizations_url": "https://api.github.com/users/talha1503/orgs", "repos_url": "https://api.github.com/users/talha1503/repos", "events_url": "https://api.github.com/users/talha1503/events{/privacy}", "received_events_url": "https://api.github.com/users/talha1503/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
open
false
null
[]
null
[ "Hi @talha1503, thanks for reporting.\r\n\r\nIt seems there is an issue with one of the data files hosted at Google Drive:\r\n```\r\nGoogle Drive - Quota exceeded\r\n\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nAs you probably know, Hugging Face does not host the data, and in this case the data owner decided to host their data at Google Drive, which has quota limits.\r\n\r\nIs there anything we could do, @lhoestq @mariosasko?", "This looks related to https://github.com/huggingface/datasets/issues/996" ]
1,640,057,541,000
1,640,097,343,000
null
NONE
null
## Describe the bug I wanted to load the cnn_dailymail dataset from Hugging Face datasets on Google Colab, but I am getting an error while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications=True) ``` ## Expected results Expecting to load the 'cnn_dailymail' dataset. ## Actual results `NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
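A possible workaround, sketched editorially (an assumption, not a confirmed fix from the thread): once the Google Drive quota mentioned in the comments has reset, discard the truncated cache and force a fresh download:

```python
# Hedged workaround: re-download from scratch instead of reusing the
# incomplete cached archive that triggers the NotADirectoryError.
from datasets import load_dataset

dataset = load_dataset(
    "cnn_dailymail",
    "3.0.0",
    download_mode="force_redownload",
)
```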
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3465/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3464/comments
https://api.github.com/repos/huggingface/datasets/issues/3464/events
https://github.com/huggingface/datasets/issues/3464
1,085,399,097
I_kwDODunzps5AseA5
3,464
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
{ "login": "koukoulala", "id": 30341159, "node_id": "MDQ6VXNlcjMwMzQxMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/30341159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/koukoulala", "html_url": "https://github.com/koukoulala", "followers_url": "https://api.github.com/users/koukoulala/followers", "following_url": "https://api.github.com/users/koukoulala/following{/other_user}", "gists_url": "https://api.github.com/users/koukoulala/gists{/gist_id}", "starred_url": "https://api.github.com/users/koukoulala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/koukoulala/subscriptions", "organizations_url": "https://api.github.com/users/koukoulala/orgs", "repos_url": "https://api.github.com/users/koukoulala/repos", "events_url": "https://api.github.com/users/koukoulala/events{/privacy}", "received_events_url": "https://api.github.com/users/koukoulala/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,640,057,341,000
1,640,057,341,000
null
NONE
null
## Describe the bug Using the latest datasets release (datasets-1.16.1-py3-none-any.whl), I process my own multilingual dataset with the following code; the dataset has 306,000 rows in total and the max_length of each sentence is 256: ![image](https://user-images.githubusercontent.com/30341159/146865779-3d25d011-1f42-4026-9e1b-76f6e1d172e9.png) Then I get this error: ![image](https://user-images.githubusercontent.com/30341159/146865844-e60a404c-5f3a-403c-b2f1-acd943b5cdb8.png) I have seen issues #2134 and #2150, so I don't understand why the latest repo still can't deal with a big dataset. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: linux docker - Python version: 3.6
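A commonly suggested mitigation for this class of overflow, sketched here as an assumption rather than a confirmed fix for this exact report: keep each serialized chunk well below the 2 GiB limit behind the `struct.error` by shrinking the batch sizes used in `map`:

```python
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")

def truncate(batch):
    # Hypothetical per-batch processing standing in for the reporter's code.
    return {"text": [t[:256] for t in batch["text"]]}

processed = dataset.map(
    truncate,
    batched=True,
    batch_size=100,         # fewer examples per function call
    writer_batch_size=100,  # fewer rows per Arrow write
)
```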
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3464/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3463/comments
https://api.github.com/repos/huggingface/datasets/issues/3463/events
https://github.com/huggingface/datasets/pull/3463
1,085,078,795
PR_kwDODunzps4wGB4P
3,463
Update swahili_news dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,024,420,000
1,640,067,843,000
1,640,067,842,000
MEMBER
null
Update dataset with the latest version of the data files. Fix #3462. Close bigscience-workshop/data_tooling#107
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3463/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3463", "html_url": "https://github.com/huggingface/datasets/pull/3463", "diff_url": "https://github.com/huggingface/datasets/pull/3463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3463.patch", "merged_at": 1640067841000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3462/comments
https://api.github.com/repos/huggingface/datasets/issues/3462/events
https://github.com/huggingface/datasets/issues/3462
1,085,049,661
I_kwDODunzps5ArIs9
3,462
Update swahili_news dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,640,022,241,000
1,640,067,842,000
1,640,067,841,000
MEMBER
null
Please note also: the HuggingFace version at https://huggingface.co/datasets/swahili_news is outdated. An updated version, with deduplicated text and official splits, can be found at https://zenodo.org/record/5514203. ## Adding a Dataset - **Name:** swahili_news Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Related to: - bigscience-workshop/data_tooling#107
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3462/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3461/comments
https://api.github.com/repos/huggingface/datasets/issues/3461/events
https://github.com/huggingface/datasets/pull/3461
1,085,007,346
PR_kwDODunzps4wFzDP
3,461
Fix links in metrics description
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,019,379,000
1,640,020,492,000
1,640,020,491,000
MEMBER
null
Remove Markdown syntax for links in metrics description, as it is not properly rendered. Related to #3437.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3461/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3461", "html_url": "https://github.com/huggingface/datasets/pull/3461", "diff_url": "https://github.com/huggingface/datasets/pull/3461.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3461.patch", "merged_at": 1640020491000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3460/comments
https://api.github.com/repos/huggingface/datasets/issues/3460/events
https://github.com/huggingface/datasets/pull/3460
1,085,002,469
PR_kwDODunzps4wFyCf
3,460
Don't encode lists as strings when using `Value("string")`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,640,019,049,000
1,640,019,891,000
null
MEMBER
null
Following https://github.com/huggingface/datasets/pull/3456#event-5792250497 it looks like `datasets` can silently convert lists to strings using `str()`, instead of raising an error. This PR fixes that behavior, and should also fix the issue of WER reporting misleading values when the input format is not right.
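A minimal sketch of the stricter behavior described above (an assumption about its shape — the actual change lives in `Value("string").encode_example` inside `features.py`):

```python
# Reject non-string inputs instead of silently calling str() on them.
def encode_string_example(obj):
    if obj is not None and not isinstance(obj, str):
        raise TypeError(
            f"Expected a string, got {type(obj).__name__}: {obj!r}. "
            "Lists are no longer silently converted with str()."
        )
    return obj

print(encode_string_example("hello it's nice"))  # passes through unchanged

try:
    encode_string_example(["hello it's nice"])   # previously became a string
except TypeError as err:
    print(err)
```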
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3460/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3460", "html_url": "https://github.com/huggingface/datasets/pull/3460", "diff_url": "https://github.com/huggingface/datasets/pull/3460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3460.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3459/comments
https://api.github.com/repos/huggingface/datasets/issues/3459/events
https://github.com/huggingface/datasets/issues/3459
1,084,969,672
I_kwDODunzps5Aq1LI
3,459
dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected.
{ "login": "mmajurski", "id": 9354454, "node_id": "MDQ6VXNlcjkzNTQ0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9354454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mmajurski", "html_url": "https://github.com/mmajurski", "followers_url": "https://api.github.com/users/mmajurski/followers", "following_url": "https://api.github.com/users/mmajurski/following{/other_user}", "gists_url": "https://api.github.com/users/mmajurski/gists{/gist_id}", "starred_url": "https://api.github.com/users/mmajurski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmajurski/subscriptions", "organizations_url": "https://api.github.com/users/mmajurski/orgs", "repos_url": "https://api.github.com/users/mmajurski/repos", "events_url": "https://api.github.com/users/mmajurski/events{/privacy}", "received_events_url": "https://api.github.com/users/mmajurski/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I think this is a duplicate of [#3190](https://github.com/huggingface/datasets/issues/3190)?", "Upgrading the datasets version as per #3190 fixes this bug. \r\nI'm Marking as closed." ]
1,640,017,009,000
1,640,018,097,000
1,640,018,097,000
NONE
null
## Describe the bug When using dataset.select to select a subset of a dataset, dataset._indices are set to indicate which elements are now considered in the dataset. The same thing happens when you shuffle the dataset; dataset._indices are set to indicate what the new order of the data is. However, if you then use dataset.filter, that filter interacts with those dataset._indices values in a non-intuitive manner. https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset.filter Effectively, it looks like the original set of _indices was discarded and overwritten by the set created during the filter operation. I think this is actually an issue with how the map function handles dataset._indices. Ideally it should use the _indices it gets passed, and then return an updated _indices which reflects the map transformation applied to the starting _indices. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('imdb', split='train', keep_in_memory=True) dataset = dataset.shuffle(keep_in_memory=True) dataset = dataset.select(range(0, 10), keep_in_memory=True) print("initial 10 elements") print(dataset['label']) # -> [1, 1, 0, 1, 0, 0, 0, 1, 0, 0] dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True) print("filtered 10 elements looking for label 0") print(dataset['label']) # -> [1, 1, 1, 1, 1, 1] ``` ## Actual results ``` $ python indices_bug.py initial 10 elements [1, 1, 0, 1, 0, 0, 0, 1, 0, 0] filtered 10 elements looking for label 0 [1, 1, 1, 1, 1, 1] ``` This code block first shuffles the dataset (to get a mix of label 0 and label 1). Then it selects just the first 10 elements (the number of elements does not matter; 10 is just easy to visualize). The important part is that you select some subset of the dataset. Finally, a filter is applied to pull out just the elements with `label == 0`. The bug is that you cannot combine any dataset operation which sets dataset._indices with filter. In this case I have two: shuffle and select. If you just use a single dataset._indices operation (in this case shuffle) the bug still shows up. The shuffle sets dataset._indices, then filter uses those indices in the map, then overwrites dataset._indices with the filter results. ```python from datasets import load_dataset dataset = load_dataset('imdb', split='train', keep_in_memory=True) dataset = dataset.shuffle(keep_in_memory=True) dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True) dataset = dataset.select(range(0, 10), keep_in_memory=True) print(dataset['label']) # -> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` ## Expected results In an ideal world, the dataset filter would respect any dataset._indices values which had previously been set. If you use dataset.filter with the base dataset (where dataset._indices has not been set) then the filter command works as expected. ## Environment info Here are the commands required to rebuild the conda environment from scratch. ``` # create a virtual environment conda create -n dataset_indices python=3.8 -y # activate the virtual environment conda activate dataset_indices # install huggingface datasets conda install datasets ``` <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-41-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 3.0.0 ### Full Conda Environment ``` $ conda env export name: dasaset_indices channels: - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=4.5=1_gnu - abseil-cpp=20210324.2=h2531618_0 - aiohttp=3.8.1=py38h7f8727e_0 - aiosignal=1.2.0=pyhd3eb1b0_0 - arrow-cpp=3.0.0=py38h6b21186_4 - attrs=21.2.0=pyhd3eb1b0_0 - aws-c-common=0.4.57=he6710b0_1 - aws-c-event-stream=0.1.6=h2531618_5 - aws-checksums=0.1.9=he6710b0_0 - aws-sdk-cpp=1.8.185=hce553d0_0 - bcj-cffi=0.5.1=py38h295c915_0 - blas=1.0=mkl - boost-cpp=1.73.0=h27cfd23_11 - bottleneck=1.3.2=py38heb32a55_1 - brotli=1.0.9=he6710b0_2 - brotli-python=1.0.9=py38heb0550a_2 - brotlicffi=1.0.9.2=py38h295c915_0 - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.17.1=h27cfd23_0 - ca-certificates=2021.10.26=h06a4308_2 - certifi=2021.10.8=py38h06a4308_0 - cffi=1.14.6=py38h400218f_0 - conllu=4.4.1=pyhd3eb1b0_0 - cryptography=36.0.0=py38h9ce1e76_0 - dataclasses=0.8=pyh6d0b6a4_7 - dill=0.3.4=pyhd3eb1b0_0 - double-conversion=3.1.5=he6710b0_1 - et_xmlfile=1.1.0=py38h06a4308_0 - filelock=3.4.0=pyhd3eb1b0_0 - frozenlist=1.2.0=py38h7f8727e_0 - gflags=2.2.2=he6710b0_0 - glog=0.5.0=h2531618_0 - gmp=6.2.1=h2531618_2 - grpc-cpp=1.39.0=hae934f6_5 - huggingface_hub=0.0.17=pyhd3eb1b0_0 - icu=58.2=he6710b0_3 - idna=3.3=pyhd3eb1b0_0 - importlib-metadata=4.8.2=py38h06a4308_0 - importlib_metadata=4.8.2=hd3eb1b0_0 - intel-openmp=2021.4.0=h06a4308_3561 - krb5=1.19.2=hac12032_0 - ld_impl_linux-64=2.35.1=h7274673_9 - libboost=1.73.0=h3ff78a5_11 - libcurl=7.80.0=h0b77cf5_0 - libedit=3.1.20210910=h7f8727e_0 - libev=4.33=h7f8727e_1 - libevent=2.1.8=h1ba5d50_1 - libffi=3.3=he6710b0_2 - libgcc-ng=9.3.0=h5101ec6_17 - libgomp=9.3.0=h5101ec6_17 - libnghttp2=1.46.0=hce63b2e_0 - libprotobuf=3.17.2=h4ff587b_1 - libssh2=1.9.0=h1ba5d50_1 - libstdcxx-ng=9.3.0=hd4cf53a_17 - libthrift=0.14.2=hcc01f38_0 - libxml2=2.9.12=h03d6c58_0 - libxslt=1.1.34=hc22bd24_0 - lxml=4.6.3=py38h9120a33_0 - lz4-c=1.9.3=h295c915_1 - mkl=2021.4.0=h06a4308_640 - mkl-service=2.4.0=py38h7f8727e_0 - mkl_fft=1.3.1=py38hd3c417c_0 - mkl_random=1.2.2=py38h51133e4_0 - multiprocess=0.70.12.2=py38h7f8727e_0 - multivolumefile=0.2.3=pyhd3eb1b0_0 - ncurses=6.3=h7f8727e_2 - numexpr=2.7.3=py38h22e1b3c_1 - numpy=1.21.2=py38h20f2e39_0 - numpy-base=1.21.2=py38h79a1101_0 - openpyxl=3.0.9=pyhd3eb1b0_0 - openssl=1.1.1l=h7f8727e_0 - orc=1.6.9=ha97a36c_3 - packaging=21.3=pyhd3eb1b0_0 - pip=21.2.4=py38h06a4308_0 - py7zr=0.16.1=pyhd3eb1b0_1 - pycparser=2.21=pyhd3eb1b0_0 - pycryptodomex=3.10.1=py38h27cfd23_1 - pyopenssl=21.0.0=pyhd3eb1b0_1 - pyparsing=3.0.4=pyhd3eb1b0_0 - pyppmd=0.16.1=py38h295c915_0 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.12=h12debd9_0 - python-dateutil=2.8.2=pyhd3eb1b0_0 - python-xxhash=2.0.2=py38h7f8727e_0 - pyzstd=0.14.4=py38h7f8727e_3 - re2=2020.11.01=h2531618_1 - readline=8.1=h27cfd23_0 - requests=2.26.0=pyhd3eb1b0_0 - setuptools=58.0.4=py38h06a4308_0 - six=1.16.0=pyhd3eb1b0_0 - snappy=1.1.8=he6710b0_0 - sqlite=3.36.0=hc218d9a_0 - texttable=1.6.4=pyhd3eb1b0_0 - tk=8.6.11=h1ccaba5_0 - typing_extensions=3.10.0.2=pyh06a4308_0 - uriparser=0.9.3=he6710b0_1 - utf8proc=2.6.1=h27cfd23_0 - wheel=0.37.0=pyhd3eb1b0_1 - xxhash=0.8.0=h7f8727e_3 - xz=5.2.5=h7b6447c_0 - zipp=3.6.0=pyhd3eb1b0_0 - zlib=1.2.11=h7f8727e_4 - zstd=1.4.9=haebb681_0 - pip: - async-timeout==4.0.2 - charset-normalizer==2.0.9 - datasets==1.16.1 - fsspec==2021.11.1 - huggingface-hub==0.2.1 - 
multidict==5.2.0 - pandas==1.3.5 - pyarrow==6.0.1 - pytz==2021.3 - pyyaml==6.0 - tqdm==4.62.3 - typing-extensions==4.0.1 - urllib3==1.26.7 - yarl==1.7.2 ```
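As an editorial aside, a hedged workaround for versions that still carry this bug — assuming `flatten_indices` materializes the shuffled/selected view into a new table so that `filter` no longer mixes up indices; upgrading `datasets`, as the comments note, is the proper fix:

```python
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")
dataset = dataset.shuffle(seed=0).select(range(10))
dataset = dataset.flatten_indices()  # bake the current view into a new table
filtered = dataset.filter(lambda x: x["label"] == 0)
print(filtered["label"])  # only zeros, as expected
```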
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3459/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3458/comments
https://api.github.com/repos/huggingface/datasets/issues/3458/events
https://github.com/huggingface/datasets/pull/3458
1,084,926,025
PR_kwDODunzps4wFiRb
3,458
Fix duplicated tag in wikicorpus dataset card
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "CI is failing just because of empty sections - merging" ]
1,640,014,456,000
1,640,016,205,000
1,640,016,204,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3458/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3458/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3458", "html_url": "https://github.com/huggingface/datasets/pull/3458", "diff_url": "https://github.com/huggingface/datasets/pull/3458.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3458.patch", "merged_at": 1640016204000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3457/comments
https://api.github.com/repos/huggingface/datasets/issues/3457/events
https://github.com/huggingface/datasets/issues/3457
1,084,862,121
I_kwDODunzps5Aqa6p
3,457
Add CMU Graphics Lab Motion Capture dataset
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
open
false
null
[]
null
[]
1,640,010,879,000
1,640,013,736,000
null
NONE
null
## Adding a Dataset - **Name:** CMU Graphics Lab Motion Capture database - **Description:** The database contains free motions which you can download and use. - **Data:** http://mocap.cs.cmu.edu/ - **Motivation:** Nice motion capture dataset Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3457/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3456/comments
https://api.github.com/repos/huggingface/datasets/issues/3456/events
https://github.com/huggingface/datasets/pull/3456
1,084,687,973
PR_kwDODunzps4wEwXz
3,456
[WER] Better error message for wer
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! I don't think this would solve this issue.\r\nCurrently it looks like there's a bug that converts the list `[\"hello it's nice\"]` to a string `'[\"hello it's nice\"]'` since this is what the metric expects as input. The conversion is done before the data are passed to `_compute()`.\r\n\r\nThis is `Value(\"string\").encode_example` that is called to do the conversion. Since `str()` encoding is too permissive we should consider raising an error if the example is not a string (even though it can be converted to string). ", "> called\r\n\r\nAh yeah you're right", "I just opened https://github.com/huggingface/datasets/pull/3460 to fix that. It now raises an error instead of computing the wrong WER", "Thank you - that should be good enough!" ]
1,640,000,320,000
1,640,019,217,000
1,640,019,216,000
MEMBER
null
Currently we have the following problem when using WER: when the input format to the WER metric is wrong, instead of raising an error, an incorrect word error rate is computed. E.g. when doing the following: ```python from datasets import load_metric wer = load_metric("wer") target_str = ["hello this is nice", "hello the weather is bloomy"] pred_str = [["hello it's nice"], ["hello it's the weather"]] print("Wrong:", wer.compute(predictions=pred_str, references=target_str)) print("Correct", wer.compute(predictions=[x[0] for x in pred_str], references=target_str)) ``` We get: ``` Wrong: 1.0 Correct 0.5555555555555556 ``` meaning that a word error rate is computed even for incorrectly formatted inputs. We should raise an error here instead, so that people don't spend hours debugging a model when their incorrectly formatted metric inputs are the real cause of the bad WER score.
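A sketch of the kind of guard this PR title suggests (an assumption about its final shape — per the comments, the fix ultimately landed in #3460 at the encoding level instead):

```python
# Validate that WER inputs are flat lists of strings before computing.
def validate_wer_inputs(predictions, references):
    for name, seq in (("predictions", predictions), ("references", references)):
        for item in seq:
            if not isinstance(item, str):
                raise ValueError(
                    f"Each element of `{name}` must be a string, "
                    f"got {type(item).__name__}: {item!r}"
                )

validate_wer_inputs(
    predictions=["hello it's nice", "hello it's the weather"],
    references=["hello this is nice", "hello the weather is bloomy"],
)  # passes; the nested-list inputs from the snippet above would raise
```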
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3456/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3456", "html_url": "https://github.com/huggingface/datasets/pull/3456", "diff_url": "https://github.com/huggingface/datasets/pull/3456.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3456.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3455
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3455/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3455/comments
https://api.github.com/repos/huggingface/datasets/issues/3455/events
https://github.com/huggingface/datasets/issues/3455
1,084,599,650
I_kwDODunzps5Apa1i
3,455
Easier information editing
{ "login": "borgr", "id": 6416600, "node_id": "MDQ6VXNlcjY0MTY2MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borgr", "html_url": "https://github.com/borgr", "followers_url": "https://api.github.com/users/borgr/followers", "following_url": "https://api.github.com/users/borgr/following{/other_user}", "gists_url": "https://api.github.com/users/borgr/gists{/gist_id}", "starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borgr/subscriptions", "organizations_url": "https://api.github.com/users/borgr/orgs", "repos_url": "https://api.github.com/users/borgr/repos", "events_url": "https://api.github.com/users/borgr/events{/privacy}", "received_events_url": "https://api.github.com/users/borgr/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "Hi ! I guess you are talking about the dataset cards that are in this repository on github ?\r\n\r\nI think github allows to submit a PR even for 1 line though the `Edit file` button on the page of the dataset card.\r\n\r\nMaybe let's mention this in `CONTRIBUTING.md` ?" ]
1,639,995,043,000
1,640,011,739,000
null
NONE
null
**Is your feature request related to a problem? Please describe.** It requires a lot of effort to improve a datasheet. **Describe the solution you'd like** A UI, or at least a link to the place where the code that needs to be edited lives (and an easy way to edit this code directly from the site, without cloning, branching, makefiles, etc.). **Describe alternatives you've considered** The current UX requires the 8 contribution steps even when one just wishes to change a line, fix a typo, etc.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3455/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3454/comments
https://api.github.com/repos/huggingface/datasets/issues/3454/events
https://github.com/huggingface/datasets/pull/3454
1,084,519,107
PR_kwDODunzps4wENam
3,454
Fix iter_archive generator
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,639,990,215,000
1,639,994,700,000
1,639,994,699,000
MEMBER
null
This PR: - Adds tests to DownloadManager and StreamingDownloadManager `iter_archive` for both path and file inputs - Fixes bugs in `iter_archive` introduced in: - #3443 Fix #3453.
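For reference, a hedged sketch of the loading-script pattern these tests cover — a typical `GeneratorBasedBuilder` consuming `iter_archive`, with a placeholder archive URL:

```python
import datasets

class MyArchiveDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"path": datasets.Value("string"), "text": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download("https://example.com/data.tar.gz")  # placeholder
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path_inside_archive, file_object) pairs,
        # in both regular and streaming mode.
        for idx, (path, f) in enumerate(files):
            yield idx, {"path": path, "text": f.read().decode("utf-8")}
```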
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3454/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3454", "html_url": "https://github.com/huggingface/datasets/pull/3454", "diff_url": "https://github.com/huggingface/datasets/pull/3454.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3454.patch", "merged_at": 1639994699000 }
true

Dataset Card for GitHub Issues

Dataset Summary

GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
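A short usage sketch (the repository id is a placeholder — substitute wherever this dataset is actually hosted):

```python
from datasets import load_dataset

issues = load_dataset("username/github-issues", split="train")

# Keep genuine issues only, dropping pull requests, e.g. for semantic search.
issues_only = issues.filter(lambda x: not x["is_pull_request"])
print(issues_only)
```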
