---
license: gpl-3.0
viewer: false
---
# ChaosBench

We propose ChaosBench, a large-scale, multi-channel, physics-based benchmark for subseasonal-to-seasonal (S2S) climate prediction. It is framed as a high-dimensional video regression task built on 45 years of 60-channel observations, used to validate physics-based and data-driven models and to train the latter. Physics-based forecasts are generated by 4 national weather agencies with a 44-day lead time and serve as baselines for data-driven forecasts. Our benchmark is one of the first to incorporate physics-based metrics to ensure physically consistent and explainable models. We establish two tasks: full and sparse dynamics prediction.

Homepage: [https://leap-stc.github.io/ChaosBench/](https://leap-stc.github.io/ChaosBench/)

Paper: [https://arxiv.org/abs/2402.00712](https://arxiv.org/abs/2402.00712)
## Getting Started

**Step 1**: Clone the [ChaosBench](https://github.com/leap-stc/ChaosBench) GitHub repository and install the requirements

```
git clone https://github.com/leap-stc/ChaosBench.git
pip install -r ChaosBench/requirements.txt
```
**Step 2**: Create a local directory to store your data, e.g.,

```
cd ChaosBench
mkdir data
```
**Step 3**: Navigate to `chaosbench/config.py` and change the field `DATA_DIR` so that it points to the data directory you just created (e.g., `ChaosBench/data`)
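
For reference, the edited line in `chaosbench/config.py` might look like the sketch below; the exact style (plain string vs. `pathlib.Path`) and the path itself depend on your setup, so treat this as illustrative only.

```
# chaosbench/config.py (illustrative sketch -- match the file's existing style)
DATA_DIR = '/path/to/ChaosBench/data'  # the directory created in Step 2
```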

**Step 4**: Initialize the data directory by running

```
cd ChaosBench/data/
wget https://huggingface.co/datasets/LEAP/ChaosBench/resolve/main/process.sh
chmod +x process.sh
```

**Step 5**: Download the data

```
# NOTE: you can also run each line one at a time to retrieve individual datasets

./process.sh era5         # Required: For input ERA5 data
./process.sh climatology  # Required: For climatology
./process.sh ukmo         # Optional: For simulation from UKMO
./process.sh ncep         # Optional: For simulation from NCEP
./process.sh cma          # Optional: For simulation from CMA
./process.sh ecmwf        # Optional: For simulation from ECMWF
```

## Dataset Overview

- __Input:__ ERA5 Reanalysis (1979-2023)

- __Target:__ The following table indicates the 48 variables (channels) that are available for physics-based models. Note that the __Input__ ERA5 observations contain __ALL__ fields, including the unchecked boxes (a minimal sketch for inspecting the downloaded files follows the baseline list below):

Parameters/Levels (hPa) | 1000 | 925 | 850 | 700 | 500 | 300 | 200 | 100 | 50 | 10 |
:---------------------- | :----| :---| :---| :---| :---| :---| :---| :---| :--| :--|
Geopotential height, z ($gpm$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Specific humidity, q ($kg\,kg^{-1}$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | | |
Temperature, t ($K$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
U component of wind, u ($m\,s^{-1}$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
V component of wind, v ($m\,s^{-1}$) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Vertical velocity, w ($Pa\,s^{-1}$) | | | | | ✓ | | | | | |

- __Baselines:__
    - Physics-based models:
        - [x] UKMO: UK Meteorological Office
        - [x] NCEP: National Centers for Environmental Prediction
        - [x] CMA: China Meteorological Administration
        - [x] ECMWF: European Centre for Medium-Range Weather Forecasts
    - Data-driven models:
        - [x] Lagged-Autoencoder
        - [x] Fourier Neural Operator (FNO)
        - [x] ResNet
        - [x] UNet
        - [x] ViT/ClimaX
        - [x] PanguWeather
        - [x] FourCastNetV2
        - [x] GraphCast
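
Once the ERA5 input has been downloaded (Step 5), it can be inspected with standard Python tooling. The sketch below assumes the files are NetCDF/Zarr stores readable by `xarray`; the actual directory layout and filenames are determined by `process.sh`, so pass in whichever downloaded file you want to inspect.

```
# Minimal sketch: inspect one downloaded ERA5 file with xarray.
# Usage: python inspect_era5.py <path-to-a-downloaded-file>
import sys
import xarray as xr

ds = xr.open_dataset(sys.argv[1])  # for a Zarr store, use xr.open_zarr(...) instead
print(ds)                          # dimensions, coordinates, and data variables
print(list(ds.data_vars))          # the physical channels listed in the table above
```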

## Evaluation Metrics

We divide our metrics into 2 classes: (1) vision-based, which covers evaluations used in conventional computer vision and forecasting tasks, and (2) physics-based, which aims at more physically faithful and explainable data-driven forecasts. An illustrative sketch of both classes follows the list below.

- __Vision-based:__
    - [x] RMSE
    - [x] Bias
    - [x] Anomaly Correlation Coefficient (ACC)
    - [x] Multiscale Structural Similarity Index (MS-SSIM)
- __Physics-based:__
    - [x] Spectral Divergence (SpecDiv)
    - [x] Spectral Residual (SpecRes)
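
As a rough illustration of the two classes, the sketch below computes RMSE and a spectral-divergence-style score on single 2D fields. The SpecDiv here is only a plausible reading (a KL divergence between the normalized power spectra of target and prediction); consult the paper and the `chaosbench` code for the exact formulations used on the leaderboard.

```
# Illustrative metric sketches on single 2D fields (not the benchmark implementation).
import numpy as np

def rmse(pred, target):
    """Root-mean-square error over all grid points."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def specdiv(pred, target, eps=1e-12):
    """Spectral divergence sketch: KL divergence between the normalized
    power spectra of target and prediction."""
    def norm_power_spectrum(x):
        p = np.abs(np.fft.rfft2(x)) ** 2  # 2D power spectrum
        p = p.ravel()
        return p / (p.sum() + eps)        # normalize into a distribution
    p, q = norm_power_spectrum(target), norm_power_spectrum(pred)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy usage with random fields standing in for prediction and ground truth
rng = np.random.default_rng(0)
pred, target = rng.standard_normal((64, 128)), rng.standard_normal((64, 128))
print(rmse(pred, target), specdiv(pred, target))
```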

## Leaderboard

You can access the full scores and model checkpoints in `logs/<MODEL_NAME>` within the following subdirectories (a minimal loading sketch follows the list):

- Scores: `eval/<METRIC>.csv`
- Model checkpoints: `lightning_logs/`
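
For example, a score table can be loaded with pandas; `unet` and `rmse` below are hypothetical placeholders for `<MODEL_NAME>` and `<METRIC>`, so substitute whichever directories and files actually exist under `logs/`.

```
# Load one evaluation table; the model name and metric are placeholders.
import pandas as pd

scores = pd.read_csv("logs/unet/eval/rmse.csv")
print(scores.head())
```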