---
license: cc-by-nc-nd-4.0
size_categories:
  - 100K<n<1M
task_categories:
  - image-to-image
---

# PhysicsGen: Can Generative Models Learn from Images to Predict Complex Physical Relations?

## Paper

Accepted at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025.

Preprint is available here: https://arxiv.org/abs/2503.05333

Website: https://www.physics-gen.org/
GitHub: https://github.com/physicsgen/physicsgen

## Overview

PhysicsGen is a synthetic dataset collection generated via simulation for physics-guided generative modeling, focusing on tasks such as sound propagation. The dataset includes multiple variants that simulate different physical phenomena, each accompanied by corresponding metadata and images.

## Variants

- **Urban Sound Propagation:** [sound_baseline, sound_reflection, sound_diffraction, sound_combined]

  Each sound example includes:

  - Geographic coordinates: `lat`, `long`
  - Sound intensity: `db`
  - Images: `soundmap`, `osm`, `soundmap_512`
  - Additional metadata: `temperature`, `humidity`, `yaw`, `sample_id`

- **Lens Distortion:** [lens_p1, lens_p2]

  Each lens example includes:

  - Calibration parameters: `fx`, `k1`, `k2`, `k3`, `p1`, `p2`, `cx`
  - Label file path: `label_path`
  - Note: The script for applying the distortion to the CelebA dataset is located here.

- **Dynamics of rolling and bouncing movements:** [ball_roll, ball_bounce]

  Each ball example includes:

  - Metadata: `ImgName`, `StartHeight`, `GroundIncli`, `InputTime`, `TargetTime`
  - Images: `input_image`, `target_image`

Data is divided into train, test, and eval splits. For efficient storage and faster uploads, the data is converted and stored as Parquet files with image data stored as binary blobs.
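If you read the Parquet files directly (e.g. with pandas or pyarrow) rather than through the `datasets` library, the image columns arrive as raw bytes. A minimal sketch of decoding such a blob with Pillow (the `decode_image` helper is ours, not part of the dataset; the demonstration uses a synthetic blob in place of a real sample):

```python
import io

from PIL import Image


def decode_image(blob: bytes) -> Image.Image:
    """Decode a binary image blob (as stored in the Parquet files) into a PIL image."""
    return Image.open(io.BytesIO(blob))


# Demonstration with a synthetic PNG blob; a real blob would come from a Parquet column.
buf = io.BytesIO()
Image.new("L", (8, 8), color=128).save(buf, format="PNG")
img = decode_image(buf.getvalue())
print(img.size)
```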

## Usage

You can load and use the dataset with the Hugging Face `datasets` library. For example, to load the `sound_combined` variant:

```python
from datasets import load_dataset
import matplotlib.pyplot as plt

dataset = load_dataset("mspitzna/physicsgen", name="sound_combined", trust_remote_code=True)

# Access a sample from the training split.
sample = dataset["train"][0]

input_img = sample["osm"]
target_img = sample["soundmap_512"]

# Plot input vs. target image for a single sample.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(input_img)
ax2.imshow(target_img)
plt.show()
```
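`plt.show()` requires an interactive display; in headless environments (e.g. a remote server) you can write the comparison to a file instead. A sketch using synthetic arrays in place of the dataset images (the file name and figure layout are our choices):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for headless environments
import matplotlib.pyplot as plt

# Synthetic stand-ins for sample["osm"] and sample["soundmap_512"].
input_img = np.random.rand(256, 256)
target_img = np.random.rand(512, 512)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(input_img, cmap="gray")
ax1.set_title("osm (input)")
ax2.imshow(target_img, cmap="gray")
ax2.set_title("soundmap_512 (target)")
fig.savefig("sample_comparison.png", dpi=150)
```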


## Results

PhysicsGen includes baseline results for several models across the three tasks. See the paper for a complete evaluation.

## License

This dataset is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.

## Funding Acknowledgement

We gratefully acknowledge the financial support provided by the German Federal Ministry of Education and Research (BMBF). This project is part of the "Forschung an Fachhochschulen in Kooperation mit Unternehmen (FH-Kooperativ)" program within the joint project KI-Bohrer and is funded under grant number 13FH525KX1.
