Abdul Fatir Ansari committed on
Commit
e149d4e
1 Parent(s): c5c013d

Add model, update readme

Files changed (3)
  1. README.md +67 -0
  2. config.json +51 -0
  3. model.safetensors +3 -0
README.md CHANGED
@@ -1,3 +1,70 @@
  ---
  license: apache-2.0
+ pipeline_tag: time-series-forecasting
+ tags:
+ - time series
+ - forecasting
+ - pretrained models
+ - foundation models
+ - time series foundation models
+ - time-series
  ---
+
+ # Chronos⚡️-Tiny
+
+ Pre-release of Chronos⚡️ (read: Chronos-Bolt) pretrained time series forecasting models. Chronos⚡️ models are based on the [T5 architecture](https://arxiv.org/abs/1910.10683) and are available in the following sizes.
+
+
+ <div align="center">
+
+ | Model | Parameters | Based on |
+ | ----- | ---------- | -------- |
+ | [**chronos-bolt-tiny**](https://huggingface.co/autogluon/chronos-bolt-tiny) | 9M | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny) |
+ | [**chronos-bolt-mini**](https://huggingface.co/autogluon/chronos-bolt-mini) | 21M | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini) |
+ | [**chronos-bolt-small**](https://huggingface.co/autogluon/chronos-bolt-small) | 48M | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small) |
+ | [**chronos-bolt-base**](https://huggingface.co/autogluon/chronos-bolt-base) | 205M | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base) |
+
+ </div>
+
+
+ ## Usage
+
+ A minimal example showing how to perform inference using Chronos⚡️ with AutoGluon:
+
+ ```shell
+ pip install --pre autogluon
+ ```
+
+ ```python
+ from autogluon.timeseries import TimeSeriesPredictor, TimeSeriesDataFrame
+
+ df = TimeSeriesDataFrame("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_hourly/train.csv")
+
+ predictions = TimeSeriesPredictor().fit(
+     df,
+     hyperparameters={
+         "Chronos": [
+             {"model_path": "autogluon/chronos-bolt-tiny"},
+         ]
+     },
+ ).predict(df)
+ ```
+
+ ## Citation
+
+ If you find Chronos or Chronos⚡️ models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):
+
+ ```bibtex
+ @article{ansari2024chronos,
+   author  = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
+   title   = {Chronos: Learning the Language of Time Series},
+   journal = {arXiv preprint arXiv:2403.07815},
+   year    = {2024}
+ }
+ ```
+
+ ## License
+
+ This project is licensed under the Apache-2.0 License.
config.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "_name_or_path": "autogluon/chronos-bolt-tiny",
+   "architectures": [
+     "ChronosBoltModelForForecasting"
+   ],
+   "chronos_config": {
+     "context_length": 2048,
+     "input_patch_size": 16,
+     "input_patch_stride": 16,
+     "prediction_length": 64,
+     "quantiles": [
+       0.1,
+       0.2,
+       0.3,
+       0.4,
+       0.5,
+       0.6,
+       0.7,
+       0.8,
+       0.9
+     ],
+     "use_reg_token": true
+   },
+   "chronos_pipeline_class": "ChronosBoltPipeline",
+   "classifier_dropout": 0.0,
+   "d_ff": 1024,
+   "d_kv": 64,
+   "d_model": 256,
+   "decoder_start_token_id": 0,
+   "dense_act_fn": "relu",
+   "dropout_rate": 0.1,
+   "eos_token_id": 1,
+   "feed_forward_proj": "relu",
+   "initializer_factor": 0.05,
+   "is_encoder_decoder": true,
+   "is_gated_act": false,
+   "layer_norm_epsilon": 1e-06,
+   "model_type": "t5",
+   "n_positions": 512,
+   "num_decoder_layers": 4,
+   "num_heads": 4,
+   "num_layers": 4,
+   "pad_token_id": 0,
+   "reg_token_id": 1,
+   "relative_attention_max_distance": 128,
+   "relative_attention_num_buckets": 32,
+   "torch_dtype": "float32",
+   "transformers_version": "4.39.3",
+   "use_cache": true,
+   "vocab_size": 2
+ }
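A quick sanity check of the arithmetic implied by `chronos_config` (a sketch using the values from the config above; that Chronos-Bolt splits the context into patches this way is an assumption about its internals, not something the config file states):

```python
# Values copied from the chronos_config section of config.json.
chronos_config = {
    "context_length": 2048,
    "input_patch_size": 16,
    "input_patch_stride": 16,
    "prediction_length": 64,
    "quantiles": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
}

# With stride equal to patch size, the context splits into
# non-overlapping patches of 16 time steps each.
num_patches = (
    (chronos_config["context_length"] - chronos_config["input_patch_size"])
    // chronos_config["input_patch_stride"]
    + 1
)
print(num_patches)  # 128

# One forecast value per (quantile level, future time step) pair.
num_outputs = len(chronos_config["quantiles"]) * chronos_config["prediction_length"]
print(num_outputs)  # 576
```

Note also the internal consistency of the T5 fields: `d_model` (256) divided by `num_heads` (4) gives 64, matching `d_kv`.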
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75068728d376d2bec670379eeef4bfb4d24c0cfe24d957451f8d19b447030a32
+ size 34622352
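The file above is a Git LFS pointer, not the weights themselves: it records the SHA-256 digest and byte size of the real `model.safetensors` blob. A minimal sketch of reading its key/value fields:

```python
# A git-lfs v1 pointer file is plain text: one "key value" pair per line.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:75068728d376d2bec670379eeef4bfb4d24c0cfe24d957451f8d19b447030a32
size 34622352
"""

# Split each line on the first space to recover the fields.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
algo, digest = fields["oid"].split(":", 1)

print(algo)                 # sha256
print(len(digest))          # 64 hex characters
print(int(fields["size"]))  # 34622352
```

After downloading the actual weights, one could compare `hashlib.sha256` of the file's bytes against `digest` and its length against `size` to verify integrity.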