
AbiToa3DCloud Dataset

A dataset for 3D cloud mask detection using 14-band ABI TOA (Top of Atmosphere) imagery, designed for SatVision-TOA downstream applications that make predictions along the CloudSat-CALIPSO curtain.

Overview

Three-dimensional cloud structures dominate cloud radiative feedback at the top of the atmosphere [31, 32] and influence precipitation initiation time, intensity, and duration at the surface [33, 34]. As such, the international GEWEX cloud assessment program recommends that cloud retrieval algorithms focus on the vertical structure of clouds.

Unfortunately, at present, operational cloud masks are still reported only as 2D products for passive space-borne instruments such as MODIS and GOES-ABI. These limited masks are then used in the decision trees of many other products affected by cloud obscuration (e.g., aerosol, land-surface Bidirectional Reflectance Distribution Function (BRDF), active fire) to prevent downstream retrievals from occurring when a cloud is reported for a pixel. This workflow results in a significant under-utilization of information for retrieving features below clouds.

The AbiToa3DCloud dataset consists of 14-band multispectral satellite imagery captured by the GOES Advanced Baseline Imager (ABI) at the Top of Atmosphere (TOA) and CALIPSO curtain. The dataset is designed for 3D cloud mask retrieval and is suitable for applications in atmospheric research, remote sensing, and weather forecasting. Each image is accompanied by a cloud mask label, making it ideal for training deep learning models for 3D cloud retrieval.

In this downstream task of SatVision-TOA, the aim is to learn and predict the CloudSat+CALIPSO vertical cloud masks. We face two challenges. First, we are applying a model pre-trained on MODIS TOA data to ABI observations. Although the 14 ABI channels that best match the MODIS channel frequencies are selected, and ABI chips near the peripheries of the full-disk image are excluded because of strong slantwise-view distortion, we applied no further calibration or image re-gridding to match the MODIS footprint size. Second, ABI scans a fixed disk at a 15-minute refresh rate, covering the entire diurnal cycle, whereas the MODIS-TOA model was trained on Terra MODIS images acquired at a fixed 9 AM local time with global coverage. This means that some ABI images, embedding strong cloud or precipitation diurnal cycles, were never seen by SatVision-TOA during pre-training.

Transfer learning skill is a sub-task we intentionally designed to evaluate the foundation model's application range across similar but distinct instruments on different platforms. Outcomes of this downstream task will therefore inform the applicability of this foundation model to other satellite measurements from MODIS-like instruments. These include, but are not limited to, NOAA's VIIRS, NASA's PACE Ocean Color Instrument (OCI), Japan's Himawari Advanced Himawari Imager (AHI), Europe's Meteosat Third Generation (MTG), and the future NASA-NOAA collaborative Geostationary Extended Observations (GeoXO) mission.

Dataset Overview

The training dataset consists of 128×128-pixel ABI image chips from 14-channel measurements, with the CloudSat/CALIPSO overpass crossing the center of each cropped image. Because CloudSat/CALIPSO is in a sun-synchronous orbit with equator crossing times of 1:30 AM/1:30 PM, the selected training and testing samples are confined to 1:30 PM local time so that all 14 channel measurements, including the solar-reflective bands, can be used. A total of 7,000 ABI chips were used for training, and another 1,300 randomly selected chips were kept for independent validation. The baseline deep learning model used a fully convolutional network (FCN) structure.
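The FCN baseline mentioned above might be sketched as a stack of convolutions that preserves the chip's spatial dimensions and emits one output channel per vertical level. This is an illustrative sketch only: the layer sizes and the number of vertical levels (`n_levels=90`) are assumptions, not the actual baseline architecture.

```python
import torch
import torch.nn as nn


class SimpleFCN(nn.Module):
    """Minimal fully convolutional sketch (illustrative only): maps a
    14-band 128x128 ABI chip to per-pixel logits, with a hypothetical
    number of vertical cloud levels as output channels."""

    def __init__(self, in_bands=14, n_levels=90):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, n_levels, kernel_size=1),  # logits per vertical level
        )

    def forward(self, x):
        return self.net(x)


model = SimpleFCN()
chip = torch.randn(2, 14, 128, 128)  # batch of two synthetic chips
logits = model(chip)
print(logits.shape)  # torch.Size([2, 90, 128, 128])
```

Because the network contains no downsampling layers, the output resolution matches the input chip, which is convenient for per-pixel mask prediction.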

The dataset included here is a subsample of the full dataset due to storage constraints. Please reach out if you are interested in the full dataset.

  • Number of samples: Varies based on dataset split.
  • Image format: .npz, containing:
    • "chip": The 14-band multispectral satellite imagery.
    • "data": A dictionary that includes:
      • "Cloud_mask": The binary or categorical cloud mask (labels).
  • Image resolution: Configurable via config.DATA.IMG_SIZE.
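The .npz layout described above can be reproduced with synthetic data, for example to smoke-test a loading pipeline before downloading real chips. The key names below follow this card; the 14×128×128 chip shape matches the training description, while the mask shape is a placeholder assumption.

```python
import numpy as np

# Write a synthetic .npz mimicking the documented layout:
# "chip" holds the 14-band imagery; "data" is a dict with "Cloud_mask".
chip = np.random.rand(14, 128, 128).astype(np.float32)
data = {"Cloud_mask": np.zeros((128, 128), dtype=np.uint8)}  # hypothetical mask shape

np.savez("synthetic_chip.npz", chip=chip, data=data)

# Read it back the same way the real files are read.
loaded = np.load("synthetic_chip.npz", allow_pickle=True)
print(loaded["chip"].shape)          # (14, 128, 128)
print(loaded["data"].item().keys())  # dict_keys(['Cloud_mask'])
```

Note that the dict is pickled inside a 0-d object array by `np.savez`, which is why `allow_pickle=True` and `.item()` are needed when reading.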

Dataset Loading

A PyTorch dataset implementation is available at pytorch-caney/3dclouds-dataset. A simple example of inspecting the data is shown below:

```python
import numpy as np

data = np.load("example.npz", allow_pickle=True)
image = data["chip"]  # shape: (14, H, W)
mask = data["data"].item()["Cloud_mask"]  # corresponding cloud mask
```

Using PyTorch:

```python
from torch.utils.data import DataLoader

from pytorch_caney.datasets.abi_3dcloud_dataset import AbiToa3DCloudDataset

dataset = AbiToa3DCloudDataset(config, data_paths=["path/to/data"])
dataloader = DataLoader(dataset, batch_size=8, shuffle=True)
```
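Once constructed, the dataloader can be iterated in the usual PyTorch fashion. The loop below uses a synthetic `TensorDataset` as a stand-in for `AbiToa3DCloudDataset`, assuming each sample is a (chip, mask) pair; the mask shape is again a placeholder, and the real dataset's return format may differ.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for AbiToa3DCloudDataset: synthetic (chip, mask) pairs with
# the 14-band 128x128 chip shape described above (mask shape hypothetical).
images = torch.randn(16, 14, 128, 128)
masks = torch.zeros(16, 128, 128, dtype=torch.long)
dataset = TensorDataset(images, masks)

dataloader = DataLoader(dataset, batch_size=8, shuffle=True)
for chips, cloud_masks in dataloader:
    print(chips.shape, cloud_masks.shape)
    # torch.Size([8, 14, 128, 128]) torch.Size([8, 128, 128])
    break
```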