Modalities: Image
Size: < 1K
Libraries: Datasets

Rene committed on
Commit a1c6013
1 Parent(s): c7d2fb3

Added visualizer and updated readme file

Files changed (4):
  1. README.md +71 -2
  2. Visualizer.ipynb +0 -0
  3. data.png +3 -0
  4. training.png +3 -0
README.md CHANGED
@@ -8,7 +8,8 @@ This dataset contains the data for the first test case (1D compressible SPH) for

 You can find the full paper [here](https://arxiv.org/abs/2403.16680).

- The source code repository is available [here](https://github.com/tum-pbs/SFBC/) and also contains information on the data generation
+ The source code repository is available [here](https://github.com/tum-pbs/SFBC/) and also contains information on the data generation. You can install our BasisConvolution framework simply by running
+ `pip install BasisConvolution`

 For the other test case datasets look here:

@@ -22,4 +23,72 @@ For the other test case datasets look here:

 ## File Layout

+ The datasets are stored as hdf5 files with a single file per experiment. Within each file there is a set of configuration parameters, and each frame of the simulation is stored separately as a group. Each frame contains the information for all fluid particles and all potentially relevant quantities. For the 2D test cases there is a pre-defined test/train split on a simulation level, whereas the 1D and 3D cases do not contain such a split.
+
+
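To peek at this layout directly, you can open a single experiment file with h5py. This is a minimal sketch: the file name is a placeholder, and the actual attribute and group names should be read from the file itself (e.g. via `f.keys()`):

```py
# Minimal sketch: inspect one experiment file of the dataset with h5py.
# 'experiment.hdf5' is a placeholder name, not an actual file in this repository.
import h5py

with h5py.File('experiment.hdf5', 'r') as f:
    print(dict(f.attrs))           # configuration parameters (assuming they are stored as attributes)
    for name, entry in f.items():  # each simulation frame is stored as its own group
        if isinstance(entry, h5py.Group):
            print(name, list(entry.keys()))
```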
+ ## Demonstration
+
+ This repository contains a simple Jupyter notebook (Visualizer.ipynb) that loads the dataset from its current folder and first visualizes it:
+
+ ![alt text](data.png)
+
+ It then runs a simple training on it to learn the SPH summation-based density for different basis functions:
+
+ ![alt text](training.png)
+
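For reference, the summation-based density being learned here is the standard SPH estimate

$$\rho_i = \sum_j m_j \, W(\mathbf{x}_i - \mathbf{x}_j, h),$$

where $m_j$ are the particle masses and $W$ is the smoothing kernel with support radius $h$.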
+ ## Minimum Working Example
+
+ Below you can find a fully working but simple example of loading our dataset, building a network (based on our SFBC framework) and doing a single network step. This relies on our SFBC/BasisConvolution framework, which you can find [here](https://github.com/tum-pbs/SFBC/) or simply install via pip (`pip install BasisConvolution`).
+
+ ```py
+ from BasisConvolution.util.hyperparameters import parseHyperParameters, finalizeHyperParameters
+ from BasisConvolution.util.network import buildModel, runInference
+ from BasisConvolution.util.augment import loadAugmentedBatch
+ from BasisConvolution.util.arguments import parser
+ import shlex
+ import torch
+ from torch.utils.data import DataLoader
+ from BasisConvolution.util.dataloader import datasetLoader, processFolder
+
+ # Example arguments
+ args = parser.parse_args(shlex.split(f'--fluidFeatures constant:1 --boundaryFeatures constant:1 --groundTruth compute[rho]:constant:1/constant:rho0 --basisFunctions ffourier --basisTerms 4 --windowFunction "None" --maxUnroll 0 --frameDistance 0 --epochs 1'))
+ # Parse the arguments
+ hyperParameterDict = parseHyperParameters(args, None)
+ hyperParameterDict['device'] = 'cuda' # make sure to use a GPU if you can
+ hyperParameterDict['iterations'] = 2**10 # works well enough for this toy problem
+ hyperParameterDict['batchSize'] = 4 # automatic batched loading is supported
+ hyperParameterDict['boundary'] = False # make sure the data loader does not expect boundary data (this yields a warning if not set)
+
+ # Build the dataset
+ datasetPath = 'dataset'
+ train_ds = datasetLoader(processFolder(hyperParameterDict, datasetPath))
+ # And its respective loader/iterator combo as a batch sampler (this is our preferred method)
+ train_loader = DataLoader(train_ds, shuffle=True, batch_size = hyperParameterDict['batchSize']).batch_sampler
+ train_iter = iter(train_loader)
+ # Align the hyperparameters with the dataset, e.g., dimensionality
+ finalizeHyperParameters(hyperParameterDict, train_ds)
+ # Build a model for the given hyperparameters
+ model, optimizer, scheduler = buildModel(hyperParameterDict, verbose = False)
+
+ # Get a batch of data
+ try:
+     bdata = next(train_iter)
+ except StopIteration:
+     train_iter = iter(train_loader)
+     bdata = next(train_iter)
+ # Load the data; the data loader does augmentation and neighbor searching automatically
+ configs, attributes, currentStates, priorStates, trajectoryStates = loadAugmentedBatch(bdata, train_ds, hyperParameterDict)
+ # Run the forward pass
+ optimizer.zero_grad()
+ predictions = runInference(currentStates, configs, model, verbose = False)
+ # Compute the loss
+ gts = [traj[0]['fluid']['target'] for traj in trajectoryStates]
+ losses = [torch.nn.functional.mse_loss(prediction, gt) for prediction, gt in zip(predictions, gts)]
+ # Run the backward pass
+ loss = torch.stack(losses).mean()
+ loss.backward()
+ optimizer.step()
+ # Print the loss
+ print(loss.item())
+ print('Done')
+ ```
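The example above performs exactly one optimization step. One plausible way to turn it into a short training run is to repeat that step for the configured number of iterations; this is an untested sketch that reuses only the objects defined in the example:

```py
# Sketch: repeat the single step from the example above for a full (toy) training run.
# Everything used here (train_loader, train_iter, model, optimizer, ...) is defined in the example.
for it in range(hyperParameterDict['iterations']):
    try:
        bdata = next(train_iter)
    except StopIteration:  # restart the iterator once the loader is exhausted
        train_iter = iter(train_loader)
        bdata = next(train_iter)
    configs, attributes, currentStates, priorStates, trajectoryStates = loadAugmentedBatch(bdata, train_ds, hyperParameterDict)
    optimizer.zero_grad()
    predictions = runInference(currentStates, configs, model, verbose = False)
    gts = [traj[0]['fluid']['target'] for traj in trajectoryStates]
    losses = [torch.nn.functional.mse_loss(prediction, gt) for prediction, gt in zip(predictions, gts)]
    loss = torch.stack(losses).mean()
    loss.backward()
    optimizer.step()
    if it % 128 == 0:
        print(it, loss.item())
```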
Visualizer.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
data.png ADDED

Git LFS Details

  • SHA256: 287683cc8f26303e06c51f38b884760101df989d53e8fd0bc748bc31f79f5444
  • Pointer size: 132 Bytes
  • Size of remote file: 1.64 MB
training.png ADDED

Git LFS Details

  • SHA256: 5e9eb9db6d951f66f3fe1c58b551cd120bb00346adb3fe9eaeb70b4457f54736
  • Pointer size: 131 Bytes
  • Size of remote file: 200 kB