Update README.md
README.md
CHANGED
@@ -4706,7 +4706,7 @@ configs:
 
 Time series datasets used for training and evaluation of the [Chronos](https://github.com/amazon-science/chronos-forecasting) forecasting models.
 
-Note that some Chronos datasets that rely on a custom builder script are available in the companion repo [`autogluon/chronos_datasets_extra`](https://huggingface.co/datasets/autogluon/chronos_datasets_extra).
+Note that some Chronos datasets (`ETTh`, `ETTm`, `brazilian_cities_temperature` and `spanish_energy_and_weather`) that rely on a custom builder script are available in the companion repo [`autogluon/chronos_datasets_extra`](https://huggingface.co/datasets/autogluon/chronos_datasets_extra).
 
 See the [paper](https://arxiv.org/abs/2403.07815) for more information.
 
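Since the datasets in the companion repo are built by a custom builder script, loading them generally requires an explicit opt-in to remote code execution in recent versions of 🤗 `datasets`. A minimal sketch, assuming the `ETTh` config named in the note above and the `split="train"` convention used elsewhere in this README:

```python
import datasets

# chronos_datasets_extra ships a custom builder script, so recent versions
# of the `datasets` library require explicitly allowing remote code execution.
# "ETTh" is one of the configs listed in the note above.
ds = datasets.load_dataset(
    "autogluon/chronos_datasets_extra",
    "ETTh",
    split="train",
    trust_remote_code=True,
)
```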
@@ -4715,9 +4715,9 @@ See the [paper](https://arxiv.org/abs/2403.07815) for more information.
 All datasets satisfy the following high-level schema:
 - Each dataset row corresponds to a single (univariate or multivariate) time series.
 - There exists one column with name `id` and type `string` that contains the unique identifier of each time series.
-- There exists one column of type `Sequence` with dtype `timestamp[ms]`. This column contains
+- There exists one column of type `Sequence` with dtype `timestamp[ms]`. This column contains the timestamps of the observations. Timestamps are guaranteed to have a regular frequency that can be obtained with [`pandas.infer_freq`](https://pandas.pydata.org/docs/reference/api/pandas.infer_freq.html).
 - There exists at least one column of type `Sequence` with numeric (`float`, `double`, or `int`) dtype. These columns can be interpreted as target time series.
--
+- For each row, all columns of type `Sequence` have the same length.
 - Remaining columns of types other than `Sequence` (e.g., `string` or `float`) can be interpreted as static covariates.
 
 Datasets can be loaded using the 🤗 [`datasets`](https://huggingface.co/docs/datasets/en/index) library
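The schema above, including the regular-frequency guarantee added in this change, is easy to verify after loading any config. A minimal sketch, assuming a config named `m4_hourly` and a timestamp column named `timestamp` (the schema only fixes the column's dtype, not its name):

```python
import datasets
import pandas as pd

# Load one config and inspect the schema described above.
ds = datasets.load_dataset("autogluon/chronos_datasets", "m4_hourly", split="train")
print(ds.features)  # `id` (string), a timestamp Sequence, numeric Sequence(s)

# Per the updated schema, timestamps have a regular, inferable frequency.
row = ds[0]
timestamps = pd.to_datetime(row["timestamp"])
print(pd.infer_freq(timestamps))  # e.g. "H"/"h" for an hourly series
```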
@@ -4766,11 +4766,11 @@ Example output
 ### Dealing with large datasets
 Note that some datasets, such as subsets of WeatherBench, are extremely large (~100GB). To work with them efficiently, we recommend either loading them from disk (files will be downloaded to disk, but won't all be loaded into memory)
 ```python
-ds = datasets.load_dataset("autogluon/chronos_datasets", "weatherbench_daily", keep_in_memory=False)
+ds = datasets.load_dataset("autogluon/chronos_datasets", "weatherbench_daily", keep_in_memory=False, split="train")
 ```
 or, for the largest datasets like `weatherbench_hourly_temperature`, reading them in streaming format (chunks will be downloaded one at a time)
 ```python
-ds = datasets.load_dataset("autogluon/chronos_datasets", "weatherbench_hourly_temperature", streaming=True)
+ds = datasets.load_dataset("autogluon/chronos_datasets", "weatherbench_hourly_temperature", streaming=True, split="train")
 ```
 
 ## License
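One caveat worth keeping in mind: with `streaming=True`, `load_dataset` returns an `IterableDataset`, which cannot be indexed like the in-memory case; records arrive lazily as you iterate. A short usage sketch, under the same `timestamp` column-name assumption as above:

```python
import itertools

import datasets

ds = datasets.load_dataset(
    "autogluon/chronos_datasets",
    "weatherbench_hourly_temperature",
    streaming=True,
    split="train",
)

# An IterableDataset cannot be indexed with ds[0]; iterate instead.
# itertools.islice downloads only as many chunks as two records require.
for row in itertools.islice(ds, 2):
    print(row["id"], len(row["timestamp"]))
```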