
CAM-ML-YOG-v0: Machine Learning-based Convection Parameterization

This repository contains a machine-learning-based convection parameterization for CAM (the Community Atmosphere Model), implemented in PyTorch.

Model Overview

The model is a PyTorch adaptation of the original Fortran-based convection parameterization in CAM. It represents subgrid-scale convection processes within the climate model and was developed to improve both accuracy and computational efficiency.

Key Features:

  • Architecture: The model is built from custom neural network layers defined in torch_nets/models.py.
  • Data Handling: Weights extracted from the original Fortran implementation are converted to PyTorch tensors, keeping the model compatible with existing CAM configurations.
  • Training and Fine-tuning: The model can be retrained or fine-tuned on custom climate data.

Model Conversion and Upload

This model was converted from the original CAM implementation by extracting the weights from Fortran and converting them into a PyTorch-compatible format. This conversion process involved the following steps:

  1. Extract Fortran Weights: Weights were extracted from the original CAM Fortran implementation.
  2. Convert to PyTorch: Custom scripts converted the weights into a format compatible with PyTorch models (a minimal sketch of the idea follows this list).
  3. Upload to Hugging Face: The model was validated and uploaded to the Hugging Face Model Hub.
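
The actual conversion scripts live in the GitHub repository; the snippet below is only a minimal sketch of the idea, assuming the Fortran weights have been dumped to a NumPy .npz archive (the file name and array layout here are hypothetical):

import numpy as np
import torch

# Hypothetical dump of the Fortran weights as named NumPy arrays
fortran_weights = np.load("fortran_weights.npz")

state_dict = {}
for name, array in fortran_weights.items():
    tensor = torch.from_numpy(np.ascontiguousarray(array)).float()
    # Fortran stores matrices column-major, so 2-D weights are transposed
    # to match PyTorch's (out_features, in_features) convention.
    if tensor.ndim == 2:
        tensor = tensor.t().contiguous()
    state_dict[name] = tensor

torch.save(state_dict, "model.pth")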

Usage

To use the model in PyTorch, first install the necessary dependencies:

pip install torch huggingface_hub

Then, download and load the model as follows:

from huggingface_hub import hf_hub_download
import torch

# Download model weights
model_path = hf_hub_download(repo_id="ICCS/cam-ml-yog-v0", filename="model.pth")

# Load model (replace 'YourModel' with the appropriate model class from torch_nets/models.py)
model = YourModel()
model.load_state_dict(torch.load(model_path, map_location="cpu"))
model.eval()

# Use the model for predictions
input_data = ...  # Prepare your climate input data
output = model(input_data)
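
The expected input shape depends on the network class you load from torch_nets/models.py. As a quick smoke test, you could pass a random batch of the right size (the feature count below is a placeholder, not the model's real input dimension):

# Smoke test with random data; replace n_features with the model's true input size
n_features = 64
dummy_input = torch.randn(1, n_features)
with torch.no_grad():
    print(model(dummy_input).shape)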

Fine-tuning

The model can be fine-tuned on domain-specific climate data:

# Fine-tune the model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

# Example training loop (a single batch is shown for brevity);
# input_data and target_data are your prepared training inputs and targets
model.train()
for epoch in range(epochs):
    optimizer.zero_grad()
    output = model(input_data)
    loss = loss_fn(output, target_data)
    loss.backward()
    optimizer.step()
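
After fine-tuning, the updated weights can be saved in the same format as the published checkpoint:

# Persist the fine-tuned weights for later use
torch.save(model.state_dict(), "model_finetuned.pth")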

Weight Updates

Please note that weight updates may be necessary as improvements are made to the model (see Issue #66).
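
To pick up a newer version of the published weights, you can bypass the local cache when downloading, for example:

from huggingface_hub import hf_hub_download

# Fetch the latest weights from the Hub, ignoring any cached copy
model_path = hf_hub_download(
    repo_id="ICCS/cam-ml-yog-v0",
    filename="model.pth",
    force_download=True,
)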

References

For more details on the original implementation and how to contribute to the model’s development, see the GitHub repository.
