LOTSA Data
The Large-scale Open Time Series Archive (LOTSA) is a collection of open time series datasets for time series forecasting, assembled for pre-training Large Time Series Models.
See the paper (arXiv:2402.02592) and the accompanying codebase for more information.
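The data is stored in Arrow files and can be loaded with the Hugging Face `datasets` library. Below is a minimal sketch; the repository id and the subset name (`BEIJING_SUBWAY_30MIN`) are assumptions used for illustration, since LOTSA is organized as one config per source dataset. Check the repository's file listing for the subsets actually available.

```python
from datasets import load_dataset

# Hypothetical example: the repo id and config name below are
# assumptions, not confirmed by this card. Each LOTSA subset is
# exposed as its own config.
ds = load_dataset(
    "Salesforce/lotsa_data",      # assumed repository id
    "BEIJING_SUBWAY_30MIN",       # assumed subset/config name
    split="train",
)

# Inspect one series; fields typically include the target values
# and timestamp metadata.
print(ds[0])
```

For the full archive, streaming (`load_dataset(..., streaming=True)`) avoids downloading every subset up front.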
Citation
If you're using LOTSA data in your research or applications, please cite it using this BibTeX:
```bibtex
@article{woo2024unified,
  title={Unified Training of Universal Time Series Forecasting Transformers},
  author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Xiong, Caiming and Savarese, Silvio and Sahoo, Doyen},
  journal={arXiv preprint arXiv:2402.02592},
  year={2024}
}
```