Update README.md
README.md
TTM, also known as TinyTimeMixer, is a family of compact pre-trained models for Time-Series Forecasting.
**With less than 1 Million parameters, TTM introduces the notion of the first-ever “tiny” pre-trained models for Time-Series Forecasting.**

TTM outperforms several popular benchmarks demanding billions of parameters in zero-shot and few-shot forecasting. TTM is pre-trained on diverse public time-series datasets and can be easily fine-tuned to your target data. Refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf) for more details.

**The current open-source version supports point forecasting use-cases ranging from minutely to hourly resolutions (e.g. 10 min, 15 min, 1 hour).**
**Note that zero-shot, fine-tuning and inference tasks using TTM can easily be executed on a single GPU machine, or even on a laptop!**
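To make this concrete, here is a minimal zero-shot forecasting sketch. It assumes the `tsfm_public` package from the [tsfm](https://github.com/IBM/tsfm) repository and loads the TTM weights from the Hub; the checkpoint id, tensor shapes, and output field below are illustrative assumptions based on the repository's notebooks, not a verbatim excerpt.

```python
# Minimal zero-shot sketch. Assumptions: the `tsfm_public` package from
# https://github.com/IBM/tsfm provides TinyTimeMixerForPrediction, and the
# TTM 1024-96 weights are loadable from the Hub under "ibm/TTM".
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

model = TinyTimeMixerForPrediction.from_pretrained("ibm/TTM")
model.eval()

# Dummy history: (batch_size, context_length, num_input_channels).
past_values = torch.randn(8, 1024, 1)

with torch.no_grad():
    outputs = model(past_values=past_values)

# Point forecasts for the next 96 steps of each channel:
# shape (batch_size, prediction_length, num_input_channels).
print(outputs.prediction_outputs.shape)  # expected: torch.Size([8, 96, 1])
```

Replace the random tensor with the last 1024 observations of your own series to obtain real forecasts.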
… PatchTST (ICLR 23), PatchTSMixer (KDD 23), TimesNet (ICLR 23), DLinear (AAAI 23) and FEDFormer (ICML 22).

- TTM (1024-96, released in this model card with 1M parameters) outperforms pre-trained MOIRAI-Small (14M parameters) by 10%, MOIRAI-Base (91M parameters) by 2% and MOIRAI-Large (311M parameters) by 3% on zero-shot forecasting (fl = 96). [[notebook]](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_1024_96.ipynb)
- TTM quick fine-tuning also outperforms the competitive statistical baselines (Statistical Ensemble and S-Naive) on the M4-hourly dataset, which existing pre-trained TS models find difficult to outperform. [[notebook]](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_m4_hourly.ipynb)
- TTM takes only a *few seconds for zero-shot/inference* and a *few minutes for fine-tuning* on a single GPU machine, as opposed to the long running times and heavy compute infrastructure that other existing pre-trained models require (see the fine-tuning sketch below).
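For the fine-tuning path, the sketch below uses the Hugging Face `Trainer` with a dummy dataset standing in for real context/forecast windows of the target series; the checkpoint id, dataset interface, and hyperparameters are assumptions modeled on the tsfm notebooks, not the exact recipe from the paper.

```python
# Illustrative quick fine-tuning sketch. DummyTSDataset is a hypothetical
# stand-in: replace it with real (context, horizon) windows of your series.
import torch
from torch.utils.data import Dataset
from transformers import Trainer, TrainingArguments
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

class DummyTSDataset(Dataset):
    """Yields random context/forecast windows with the shapes TTM expects."""
    def __len__(self):
        return 256

    def __getitem__(self, idx):
        return {
            "past_values": torch.randn(1024, 1),  # context window, 1 channel
            "future_values": torch.randn(96, 1),  # forecast-horizon targets
        }

model = TinyTimeMixerForPrediction.from_pretrained("ibm/TTM")  # assumed id

args = TrainingArguments(
    output_dir="ttm_finetuned",       # checkpoint directory (placeholder)
    num_train_epochs=10,              # placeholder hyperparameters
    per_device_train_batch_size=64,
    learning_rate=1e-3,
)

# The model returns a loss when `future_values` is supplied, so the
# standard Trainer loop applies unchanged.
trainer = Trainer(model=model, args=args, train_dataset=DummyTSDataset())
trainer.train()  # completes in minutes on a single GPU, per the claims above
```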
## Model Description