---
pretty_name: Wind Tunnel dataset
language:
- en
size_categories:
- 10K<n<100K
---
# Wind Tunnel Dataset
The Wind Tunnel Dataset contains 20,000 wind tunnel simulations, split into training (70%), validation (20%), and test (10%) subsets. The simulations were generated using OpenFOAM and Inductiva, and are based on 1,000 unique objects, each simulated under 20 variations (4 wind speeds × 5 rotation angles), with each simulation running for 300 iterations. The input object meshes were generated from the Stanford Cars Dataset using Instant Meshes.
## Dataset Structure
```
data
├── train
│   ├── <SIMULATION_ID>
│   │   ├── input_mesh.obj
│   │   ├── openfoam_mesh.obj
│   │   ├── pressure_field_mesh.vtk
│   │   ├── simulation_metadata.json
│   │   └── streamlines_mesh.ply
│   └── ...
├── validation
│   └── ...
└── test
    └── ...
```
## Dataset Files
- `input_mesh.obj`: OBJ file with the input mesh.
- `openfoam_mesh.obj`: OBJ file with the mesh as processed by OpenFOAM.
- `pressure_field_mesh.vtk`: VTK file with the pressure field data.
- `simulation_metadata.json`: JSON file with metadata such as the input parameters (e.g., wind speed and rotation angle) and selected output results.
- `streamlines_mesh.ply`: PLY file with the flow streamlines.
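As a quick orientation, below is a minimal sketch of how one downloaded simulation directory could be inspected locally. It assumes the files live under a `local_folder` directory (as in the download example further down) and uses `trimesh` and `pyvista` purely as illustrative reader libraries; any OBJ/VTK/PLY-capable tools work equally well:

```python
import json
from pathlib import Path

import pyvista as pv  # assumed reader for the VTK/PLY files
import trimesh        # assumed reader for the OBJ files

# Hypothetical path to one simulation directory (replace <SIMULATION_ID>)
sim_dir = Path("local_folder/data/train/<SIMULATION_ID>")

# Input parameters and selected output results
with open(sim_dir / "simulation_metadata.json") as f:
    metadata = json.load(f)
print(metadata)

# Input mesh and the mesh as processed by OpenFOAM
input_mesh = trimesh.load(sim_dir / "input_mesh.obj")
openfoam_mesh = trimesh.load(sim_dir / "openfoam_mesh.obj")

# Pressure field on the mesh surface and the flow streamlines
pressure_field = pv.read(sim_dir / "pressure_field_mesh.vtk")
print(pressure_field.point_data.keys())  # inspect which data arrays are stored
streamlines = pv.read(sim_dir / "streamlines_mesh.ply")
```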
## Downloading the Dataset
### 1. Using `snapshot_download()`
```python
from huggingface_hub import snapshot_download

dataset_name = "inductiva/windtunnel"

# Download the entire dataset (repo_type="dataset" is required for dataset repos)
snapshot_download(repo_id=dataset_name, repo_type="dataset")

# Download to a specific local directory
snapshot_download(repo_id=dataset_name, repo_type="dataset", local_dir="local_folder")

# Download only the input mesh files across all simulations
snapshot_download(
    repo_id=dataset_name,
    repo_type="dataset",
    allow_patterns=["*/*/*/input_mesh.obj"],
)
```
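The same pattern mechanism also works for narrower selections. For example, a sketch for fetching only the training split, assuming the `data/train/...` layout shown above:

```python
from huggingface_hub import snapshot_download

# Download only the training split (pattern assumes the directory layout above)
snapshot_download(
    repo_id="inductiva/windtunnel",
    repo_type="dataset",
    allow_patterns=["data/train/*"],
)
```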
### 2. Using `load_dataset()`
```python
from datasets import load_dataset

# Load the dataset (streaming is also supported; see below)
dataset = load_dataset("inductiva/windtunnel", streaming=False)

# Display dataset information
print(dataset)

# Access a sample from the training set
sample = dataset["train"][0]
print("Sample from training set:", sample)
```