---
pretty_name: Wind Tunnel dataset
size_categories:
- 10K<n<100K
---
# Wind Tunnel Dataset
The Wind Tunnel Dataset contains 20,000 [OpenFOAM](https://www.openfoam.com/) simulations of 1,000 unique automobile-like objects placed in a virtual wind tunnel.
Each object is simulated under 20 distinct conditions: combinations of 4 random wind speeds ranging from 10 to 50 m/s and 5 rotation angles (0°, 180°, and 3 random angles).
To ensure stable and reliable results, each simulation runs for 300 iterations.
The meshes for these automobile-like objects were generated using the [Instant Mesh model](https://github.com/TencentARC/InstantMesh) and sourced from the [Stanford Cars Dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-cars-dataset).
The entire dataset of 20,000 simulations is organized into three subsets: 70% for training, 20% for validation, and 10% for testing.
The data generation process itself was orchestrated using the [Inductiva API](https://inductiva.ai/), which allowed us to run hundreds of OpenFOAM simulations in parallel on the cloud.
<p align="center">
<img src="https://huggingface.co/datasets/inductiva/windtunnel/resolve/main/example.png" width="500px">
</p>
### Dataset Structure
```
data
├── train
│ ├── <SIMULATION_ID>
│ │ ├── input_mesh.obj
│ │ ├── openfoam_mesh.obj
│ │ ├── pressure_field_mesh.vtk
│ │ ├── simulation_metadata.json
│ │ └── streamlines_mesh.ply
│ └── ...
├── validation
│ └── ...
└── test
└── ...
```
### Dataset Files
Each simulation in the Wind Tunnel Dataset is accompanied by several key files that provide both input and output data.
Here’s a breakdown of the files included in each simulation:
- **input_mesh.obj**: OBJ file with the input mesh.
- **openfoam_mesh.obj**: OBJ file with the OpenFOAM mesh.
- **pressure_field_mesh.vtk**: VTK file with the pressure field data.
- **streamlines_mesh.ply**: PLY file with the streamlines.
- **simulation_metadata.json**: JSON file with metadata about the input parameters and selected output results, such as the force coefficients obtained from the simulation, along with the paths of the output files.
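Once a simulation folder has been downloaded, its files can be inspected with standard Python tooling. The snippet below is a minimal sketch, assuming the third-party `trimesh` and `pyvista` packages are installed; it is illustrative only and not part of the dataset.
```python
import json

import trimesh   # reads the OBJ meshes
import pyvista   # reads the VTK / PLY output meshes

# Placeholder path to one downloaded simulation folder
sim_dir = "data/train/<SIMULATION_ID>"

# Input and OpenFOAM meshes (OBJ)
input_mesh = trimesh.load(f"{sim_dir}/input_mesh.obj")
openfoam_mesh = trimesh.load(f"{sim_dir}/openfoam_mesh.obj")

# Pressure field and streamlines (VTK / PLY)
pressure_field = pyvista.read(f"{sim_dir}/pressure_field_mesh.vtk")
streamlines = pyvista.read(f"{sim_dir}/streamlines_mesh.ply")

# Simulation metadata (input parameters, force coefficients, output paths)
with open(f"{sim_dir}/simulation_metadata.json") as f:
    metadata = json.load(f)

print(input_mesh, pressure_field, sorted(metadata.keys()))
```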
## Downloading the Dataset
### 1. Using snapshot_download()
```python
from huggingface_hub import snapshot_download
dataset_name = "inductiva/windtunnel"
# Download the entire dataset
snapshot_download(repo_id=dataset_name)
# Download to a specific local directory
snapshot_download(repo_id=dataset_name, local_dir="local_folder")
# Download only the input mesh files across all simulations
snapshot_download(allow_patterns=["*/*/*/input_mesh.obj"], repo_id=dataset_name)
```
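The same `allow_patterns` mechanism can restrict the download to a single split. The pattern below is a sketch that assumes the `data/<split>/<SIMULATION_ID>/` layout shown in the directory tree above; adjust it if the layout differs.
```python
from huggingface_hub import snapshot_download

# Download only the test split into a local folder
# (assumes files live under data/test/<SIMULATION_ID>/ as shown above)
snapshot_download(
    repo_id="inductiva/windtunnel",
    allow_patterns=["*/test/*"],
    local_dir="windtunnel_test",
)
```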
### 2. Using load_dataset()
```python
from datasets import load_dataset
# Load the dataset (streaming is supported)
dataset = load_dataset("inductiva/windtunnel", streaming=False)
# Display dataset information
print(dataset)
# Access a sample from the training set
sample = dataset["train"][0]
print("Sample from training set:", sample)
```
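Because streaming is supported, samples can also be iterated without downloading all 20,000 simulations up front. A minimal sketch:
```python
from datasets import load_dataset

# Stream the training split instead of materializing it locally
streamed = load_dataset("inductiva/windtunnel", split="train", streaming=True)

# Take the first sample from the stream and inspect its fields
first_sample = next(iter(streamed))
print(first_sample.keys())
```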
## What's next?
If you have any issues using this dataset, feel free to reach out to us at [support@inductiva.ai](mailto:support@inductiva.ai).
To learn more about how we created this dataset, or how you can generate synthetic datasets for Physics-AI models, visit [Inductiva.AI](https://inductiva.ai/) or check out our blog post on [transforming complex simulation workflows into easy-to-use Python classes](https://inductiva.ai/blog/article/transform-complex-simulations).