---
license: cc-by-sa-4.0
metrics:
  - mse
pipeline_tag: graph-ml
---

# AIFS Single - v0.2.1

Here we introduce the Artificial Intelligence Forecasting System (AIFS), a data-driven forecast model developed by the European Centre for Medium-Range Weather Forecasts (ECMWF).

*AIFS 10-day forecast*

We show that AIFS produces highly skilful forecasts for upper-air variables, surface weather parameters and tropical cyclone tracks. AIFS is run four times daily alongside ECMWF's physics-based NWP model, and forecasts are available to the public under [ECMWF's open data policy](https://www.ecmwf.int/en/forecasts/datasets/open-data).

## Model Details

### Model Description

AIFS is based on a graph neural network (GNN) encoder and decoder, and a sliding window transformer processor, and is trained on ECMWF’s ERA5 re-analysis and ECMWF’s operational numerical weather prediction (NWP) analyses.

*Encoder graph and decoder graph*

It has a flexible and modular design and supports several levels of parallelism to enable training on high resolution input data. AIFS forecast skill is assessed by comparing its forecasts to NWP analyses and direct observational data.

- **Developed by:** ECMWF
- **Model type:** Encoder-processor-decoder model
- **License:** CC BY-SA 4.0

### Model Sources

- **Repository:** Anemoi, an open-source framework for creating machine learning (ML) weather forecasting systems, co-developed by ECMWF and a range of national meteorological services across Europe.
- **Paper:** https://arxiv.org/pdf/2406.01465

## How to Get Started with the Model

To run AIFS and generate a new forecast, you can use [ai-models](https://github.com/ecmwf-lab/ai-models). The `ai-models` command can run different models; since here we are using AIFS, we need to specify `anemoi` as the model name and then pass the path to the checkpoint (`aifs_single_v0.2.1.ckpt`) and the initial conditions. You can find an example set of initial conditions in the GRIB file `example_20241107_12_n320.grib`.

Use the code below to get started with the model.

```shell
# First, create and activate the conda environment
export CONDA_ENV=aifs-env
conda create -n ${CONDA_ENV} python=3.10
conda activate ${CONDA_ENV}

# Install the dependencies
pip install torch==2.4
pip install anemoi-inference[plugin] anemoi-models==0.2
pip install ninja
pip install flash-attn --no-build-isolation

# Run ai-models to produce a forecast from the example initial conditions
ai-models anemoi --checkpoint aifs_single_v0.2.1.ckpt --file example_20241107_12_n320.grib
```

> **Note:** There is a known issue with PyTorch 2.5, flash-attention and CUDA 12.4. For now, keep PyTorch at version 2.4.

The above command will write the forecast results to `anemoi.grib`.
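As an illustration of how the output file could be opened and plotted (this is a sketch, not part of the official instructions; it assumes the optional dependencies `xarray`, `cfgrib` and `matplotlib` are installed):

```python
# Illustrative sketch only: open anemoi.grib and plot one field.
# Assumes `pip install xarray cfgrib matplotlib` has been run.

def nearest_level(levels, target):
    """Pick the pressure level (hPa) closest to `target`."""
    return min(levels, key=lambda lev: abs(lev - target))

if __name__ == "__main__":
    import xarray as xr
    import matplotlib.pyplot as plt

    # Read only the pressure-level fields from the forecast output
    ds = xr.open_dataset(
        "anemoi.grib",
        engine="cfgrib",
        backend_kwargs={"filter_by_keys": {"typeOfLevel": "isobaricInhPa"}},
    )
    # Plot temperature near 850 hPa for the last forecast step
    level = nearest_level(list(ds.isobaricInhPa.values), 850)
    ds["t"].sel(isobaricInhPa=level).isel(step=-1).plot()
    plt.show()
```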

## Training Details

### Training Data

AIFS is trained to produce 6-hour forecasts. It receives as input a representation of the atmospheric state at times t<sub>−6h</sub> and t<sub>0</sub>, and then forecasts the state at time t<sub>+6h</sub>.
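Longer forecasts are produced autoregressively: each 6-hour prediction is fed back in as input. This stepping can be sketched as follows (`model_step` is a hypothetical stand-in for the trained model):

```python
# Illustrative sketch of 6-hour autoregressive forecasting: the model maps
# the states at t-6h and t0 to the state at t+6h, and each new prediction
# is fed back in as input.
def rollout(model_step, state_m6h, state_0, n_steps):
    """Roll the model forward by n_steps increments of 6 hours."""
    prev, curr = state_m6h, state_0
    trajectory = []
    for _ in range(n_steps):
        nxt = model_step(prev, curr)   # forecast t+6h from (t-6h, t0)
        trajectory.append(nxt)
        prev, curr = curr, nxt         # slide the input window forward
    return trajectory
```

A 10-day forecast then corresponds to 40 such 6-hour steps.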

The full list of input and output fields is shown below:

| Field | Level type | Input/Output |
|---|---|---|
| Geopotential, horizontal and vertical wind components, specific humidity, temperature | Pressure level: 50, 100, 150, 200, 250, 300, 400, 500, 600, 700, 850, 925, 1000 | Both |
| Surface pressure, mean sea-level pressure, skin temperature, 2 m temperature, 2 m dewpoint temperature, 10 m horizontal wind components, total column water | Surface | Both |
| Total precipitation, convective precipitation | Surface | Output |
| Land-sea mask, orography, standard deviation of sub-grid orography, slope of sub-scale orography, insolation, latitude/longitude, time of day/day of year | Surface | Input |

Input and output states are normalised to unit variance and zero mean for each level. Some of the forcing variables, like orography, are min-max normalised.
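As an illustration of the two normalisations described above (pure-Python sketch, not the actual AIFS preprocessing code):

```python
from statistics import mean, pstdev

def standardise(values):
    """Normalise a field to zero mean and unit variance (applied per level)."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def min_max(values):
    """Min-max normalise a forcing field such as orography to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```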

### Training Procedure

- **Pre-training:** Performed on ERA5 for the years 1979 to 2020 with a cosine learning-rate (LR) schedule and a total of 260,000 steps. The LR is increased from 0 to 10<sup>−4</sup> during the first 1000 steps, then annealed to a minimum of 3 × 10<sup>−7</sup>.
- **Fine-tuning I:** Pre-training is then followed by rollout training on ERA5 for the years 1979 to 2018, this time with a LR of 6 × 10<sup>−7</sup>. As in Lam et al. [2023], we increase the rollout every 1000 training steps up to a maximum of 72 h (12 auto-regressive steps).
- **Fine-tuning II:** Finally, to further improve forecast performance, we fine-tune the model on operational real-time IFS NWP analyses. This is done via another round of rollout training, this time using IFS operational analysis data from 2019 and 2020.
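The pre-training LR schedule can be sketched as follows (an illustrative reconstruction from the numbers quoted above, not the actual training code):

```python
import math

PEAK, FLOOR = 1e-4, 3e-7        # LR bounds quoted above
WARMUP, TOTAL = 1_000, 260_000  # warmup steps and total training steps

def learning_rate(step):
    """Linear warmup from 0 to PEAK, then cosine annealing down to FLOOR."""
    if step < WARMUP:
        return PEAK * step / WARMUP
    progress = (step - WARMUP) / (TOTAL - WARMUP)
    return FLOOR + 0.5 * (PEAK - FLOOR) * (1 + math.cos(math.pi * progress))
```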

#### Training Hyperparameters

- **Optimizer:** We use AdamW (Loshchilov and Hutter [2019]) with the β-coefficients set to 0.9 and 0.95.
- **Loss function:** The loss function is an area-weighted mean squared error (MSE) between the target atmospheric state and prediction.
- **Loss scaling:** A loss scaling is applied for each output variable. The scaling was chosen empirically such that all prognostic variables have roughly equal contributions to the loss, with the exception of the vertical velocities, for which the weight was reduced. The loss weights also decrease linearly with height, which means that levels in the upper atmosphere (e.g., 50 hPa) contribute relatively little to the total loss value.
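As a sketch of the area-weighting idea (assuming the common cosine-of-latitude weighting; the exact AIFS weighting may differ):

```python
import math

def area_weighted_mse(pred, target, lats_deg):
    """MSE where each grid point is weighted by the cosine of its latitude,
    so that points near the poles (smaller grid-cell area) count for less."""
    weights = [math.cos(math.radians(lat)) for lat in lats_deg]
    num = sum(w * (p - t) ** 2 for w, p, t in zip(weights, pred, target))
    return num / sum(weights)
```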

#### Speeds, Sizes, Times

Data parallelism is used for training, with a batch size of 16. One model instance is split across four 40GB A100 GPUs within one node. Training is done using mixed precision (Micikevicius et al. [2018]), and the entire process takes about one week, with 64 GPUs in total. The checkpoint size is 1.19 GB and it does not include the optimizer state.

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

[More Information Needed]

## Model Examination

[More Information Needed]

## Technical Specifications

### Hardware

We acknowledge PRACE for awarding us access to Leonardo, CINECA, Italy. In particular, this version of the AIFS has been trained on 64 A100 GPUs (40GB).

### Software

The model was developed and trained using the AnemoI framework. AnemoI is a framework for developing machine learning weather forecasting models. It comprises components and packages for preparing training datasets, conducting ML model training, and a registry for datasets and trained models. AnemoI provides tools for operational inference, including interfacing with verification software. As a framework, it seeks to handle many of the complexities that meteorological organisations share, allowing them to easily train models from existing recipes with their own data.

## Citation

If you use this model in your work, please cite it as follows:

**BibTeX:**

```bibtex
@article{lang2024aifs,
  title={AIFS-ECMWF's data-driven forecasting system},
  author={Lang, Simon and Alexe, Mihai and Chantry, Matthew and Dramsch, Jesper and Pinault, Florian and Raoult, Baudouin and Clare, Mariana CA and Lessig, Christian and Maier-Gerber, Michael and Magnusson, Linus and others},
  journal={arXiv preprint arXiv:2406.01465},
  year={2024}
}
```

**APA:**

Lang, S., Alexe, M., Chantry, M., Dramsch, J., Pinault, F., Raoult, B., ... & Rabier, F. (2024). AIFS-ECMWF's data-driven forecasting system. arXiv preprint arXiv:2406.01465.

## More Information

[More Information Needed]