OPTIMUS Dataset
This dataset contains approximately 600K image time series, each consisting of 40-50 Sentinel-2 satellite images captured between January 2016 and December 2023.
It also includes 300 time series labeled with binary "change" or "no change" labels.
It is used to train and evaluate OPTIMUS (TODO - paper link).
The time series are distributed globally, with half of the time series selected at random locations covered by Sentinel-2, and the other half sampled specifically within urban areas.
Each image is 512x512 pixels at roughly 10 m/pixel (the source imagery is 10 m/pixel, but it is re-projected to WebMercator). Within each time series, the images are aligned, so they cover the same location at different timestamps.
The dataset is released under Apache License 2.0.
Dataset Details
Images
The bulk of the dataset is stored in tar files in the "images" directory. Once extracted, these images follow this directory structure:
```
2016-01/
    tci/
        1234_5678.png
        2345_6789.png
        ...
2016-03/
    tci/
        1234_5678.png
        ...
2016-05/
    ...
...
2023-11/
```
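As a rough sketch of getting started with one shard (the shard name 1234.tar and the extraction path are assumptions, not fixed names), the timestamp folders can be listed after extraction:

```python
import pathlib
import tarfile

# Extract one shard from the "images" directory; the shard name and the
# extraction path are placeholders.
with tarfile.open("images/1234.tar") as tar:
    tar.extractall("extracted")

# List the timestamp folders that were extracted.
timestamps = sorted(p.name for p in pathlib.Path("extracted").iterdir() if p.is_dir())
print(timestamps)  # e.g. ['2016-01', '2016-03', ..., '2023-11']
```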
Here, the top-level folders are different timestamps, so one time series consists of the images with the same filename (like 1234_5678.png) across the different timestamp folders.
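For example, one time series could be stacked into a single array along these lines. This is a minimal sketch: the root directory and tile name are placeholders, and it assumes that timestamps missing a given tile are simply skipped.

```python
import pathlib

import numpy as np
from PIL import Image

def load_time_series(root, tile_name="1234_5678.png"):
    # Stack every available timestamp of one tile into an array of shape
    # (T, 512, 512, 3), in chronological order. Timestamps that do not
    # contain this tile are skipped.
    frames = []
    for ts_dir in sorted(pathlib.Path(root).iterdir()):  # 2016-01, 2016-03, ...
        path = ts_dir / "tci" / tile_name
        if ts_dir.is_dir() and path.exists():
            frames.append(np.array(Image.open(path)))
    return np.stack(frames)

series = load_time_series("extracted")
print(series.shape)  # (T, 512, 512, 3), with T around 40-50
```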
The filename identifies a position in the WebMercator grid at zoom level 13 (where the world is split into 2^13 tiles vertically and 2^13 tiles horizontally). This matches the grid system used in Satlas; see https://github.com/allenai/satlas/blob/main/SatlasPretrain.md#coordinates for how to get the corner longitude/latitude coordinates from the tile.
For example, here are the corners of 1234_5678.png:
```python
import math

def mercator_to_geo(p, zoom=13, pixels=512):
    # Convert a WebMercator (column, row) position at the given zoom level to
    # (longitude, latitude). With pixels=1, p is interpreted as tile indices.
    n = 2**zoom
    x = p[0] / pixels
    y = p[1] / pixels
    x = x * 360.0 / n - 180
    y = math.atan(math.sinh(math.pi * (1 - 2.0 * y / n)))
    y = y * 180 / math.pi
    return (x, y)

# Print the longitude/latitude of the four corners of tile 1234_5678.
for offset in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(mercator_to_geo((1234 + offset[0], 5678 + offset[1]), pixels=1))
```
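The inverse lookup (finding the zoom-13 tile that covers a given longitude/latitude) is not part of the card, but a sketch using standard WebMercator tiling math, continuing from the snippet above, looks like this:

```python
def geo_to_mercator(lon, lat, zoom=13):
    # Inverse of mercator_to_geo: return the (column, row) of the tile at the
    # given zoom level that contains the longitude/latitude point.
    n = 2**zoom
    x = (lon + 180.0) / 360.0 * n
    y = (1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n
    return (int(x), int(y))

# Round-trip check on the center of tile 1234_5678: should print (1234, 5678).
lon, lat = mercator_to_geo((1234.5, 5678.5), pixels=1)
print(geo_to_mercator(lon, lat))
```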
Each image is cropped from a Sentinel-2 L1C scene, using B04/B03/B02 only. See https://dataspace.copernicus.eu/explore-data/data-collections/sentinel-data/sentinel-2 for details about the Sentinel-2 mission.
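In other words, the three channels of each PNG correspond to B04 (red), B03 (green), and B02 (blue). A quick sketch for accessing them (the path is a placeholder):

```python
import numpy as np
from PIL import Image

# The TCI PNGs are 8-bit RGB renderings; the channels correspond to Sentinel-2
# bands B04 (red), B03 (green), and B02 (blue).
img = np.array(Image.open("extracted/2016-01/tci/1234_5678.png"))
b04, b03, b02 = img[..., 0], img[..., 1], img[..., 2]
print(img.shape)  # expected (512, 512, 3)
```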
Other Files
Besides the images, there are additional files:

- index.json identifies which tar files contain which tiles. It is a list of groups of files, and groups[1234] corresponds to the files present in 1234.tar (see the sketch after this list).
- 2024_dataset_tiles_random.json and 2024_dataset_tiles_urban.json differentiate which tiles were selected by random global sampling and which were selected by targeted sampling of urban areas.
- forest_loss_dataset.tar contains additional image time series that exhibit forest loss.
- 2024_dataset_evaluation.json contains the annotations for the 300 time series in the evaluation set. It maps each image ID to a binary label of 0 or 1, where 1 indicates "change" and 0 indicates "no change".
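A minimal sketch for reading these metadata files, assuming the key formats implied by the descriptions above (the exact structure inside the JSON files may differ):

```python
import json

# index.json: groups[i] lists the files packed into i.tar.
with open("index.json") as f:
    groups = json.load(f)

def find_shard(tile_name):
    # Return the tar file containing the given tile image, if any.
    for i, files in enumerate(groups):
        if tile_name in files:
            return f"{i}.tar"
    return None

# 2024_dataset_evaluation.json: image ID -> 1 ("change") or 0 ("no change").
with open("2024_dataset_evaluation.json") as f:
    labels = json.load(f)

num_change = sum(1 for v in labels.values() if v == 1)
print(f"{num_change} of {len(labels)} evaluation series are labeled as change")
```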
Authors
- Raymond Yu (University of Washington)
- Paul Han (University of Washington)
- Josh Myers-Dean (Allen Institute for AI)
- Piper Wolters (Allen Institute for AI)
- Favyen Bastani (Allen Institute for AI)