---
task_categories:
- audio-classification
- text-to-video
language:
- en
tags:
- audio-visual
- physical-properties
- pitch-estimation
pretty_name: Sound-of-Water 50
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: "splits/train.csv"
  - split: test_I
    path: "splits/test_I.csv"
  - split: test_II
    path: "splits/test_II.csv"
  - split: test_III
    path: "splits/test_III.csv"
---


<!-- # <img src="./assets/pouring-water-logo5.png" alt="Logo" width="40">  -->
# 🚰 The Sound of Water: Inferring Physical Properties from Pouring Liquids

<!-- <p align="center">
  <a href="https://arxiv.org/abs/2411.11222" target="_blank">
    <img src="https://img.shields.io/badge/arXiv-Paper-red" alt="arXiv">
  </a>
  &nbsp;&nbsp;&nbsp;
  <a target="_blank" href="https://colab.research.google.com/github/bpiyush/SoundOfWater/blob/main/playground.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
  </a>
  &nbsp;&nbsp;&nbsp;
  <a href="https://your_gradio_demo_link" target="_blank">
    <img src="https://img.shields.io/badge/Gradio-Demo-orange" alt="Gradio">
  </a>
</p> -->


This dataset is associated with the paper "The Sound of Water: Inferring Physical Properties from Pouring Liquids".

arXiv link: https://arxiv.org/abs/2411.11222


<!-- Add a teaser image. -->
<p align="center">
  <img src="./assets/pitch_on_spectrogram-compressed.gif" alt="Teaser" width="100%">
</p>

*Key insight*: As water is poured, the fundamental frequency that we hear changes predictably over time as a function of physical properties (e.g., container dimensions).

**TL;DR**: We present a method to infer physical properties of liquids from *just* the sound of pouring. We show in theory how *pitch* can be used to derive various physical properties such as container height, flow rate, etc. Then, we train a pitch detection network (`wav2vec2`) using simulated and real data. The resulting model can predict the physical properties of pouring liquids with high accuracy. The latent representations learned also encode information about liquid mass and container shape.
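
For intuition, here is a minimal, simplified sketch of the idealised physics behind this (not the model used in the paper): if the air column above the water is treated as a quarter-wave resonator, closed at the water surface and open at the top, the fundamental frequency rises as the container fills. The container height, radius, and constant fill rate below are illustrative values only.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20°C


def fundamental_frequency(air_column_length, radius):
    """Quarter-wave resonator with a simple open-end correction.

    The column is closed at the bottom (water surface) and open at the
    top, so f0 ~ c / (4 * (L + 0.6 * R)).
    """
    effective_length = air_column_length + 0.6 * radius
    return SPEED_OF_SOUND / (4.0 * effective_length)


# Illustrative cylinder: 20 cm tall, 3 cm radius, filled over 15 seconds.
H, R, T = 0.20, 0.03, 15.0
t = np.linspace(0.0, T, 200)
water_level = (t / T) * H                   # assumes a constant flow rate
air_column = np.clip(H - water_level, 1e-3, None)
f0 = fundamental_frequency(air_column, R)   # rises as the container fills

print(f"f0 at the start: {f0[0]:.0f} Hz; near the end: {f0[-1]:.0f} Hz")
```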


## 📑 Table of Contents

- [🚰 The Sound of Water: Inferring Physical Properties from Pouring Liquids](#-the-sound-of-water-inferring-physical-properties-from-pouring-liquids)
  - [📑 Table of Contents](#-table-of-contents)
  - [📚 Dataset Overview](#-dataset-overview)
  - [Demo](#demo)
  - [🎥 Video and 🎧 audio samples](#-video-and--audio-samples)
  - [🗂️ Splits](#️-splits)
  - [📝 Annotations](#-annotations)
      - [Container measurements and other metadata](#container-measurements-and-other-metadata)
      - [Container bounding boxes](#container-bounding-boxes)
  - [🎬 YouTube samples](#-youtube-samples)
  - [📜 Citation](#-citation)
  - [🙏 Acknowledgements](#-acknowledgements)
  - [🙅🏻 Potential Biases](#-potential-biases)


## 📚 Dataset Overview

We collect a dataset of 805 clean videos that show the action of pouring water into a container. The dataset spans 50 unique containers made of 5 different materials, in 4 different shapes, filled with both hot and cold water. Some example containers are shown below.

<p align="center">
  <img width="650" alt="image" src="./assets/containers-v2.png">
</p>

Download the dataset with:

```python
# Note: this may take 5-10 minutes.

# Optionally, disable progress bars:
# import os
# os.environ["HF_HUB_DISABLE_PROGRESS_BARS"] = "1"

from huggingface_hub import snapshot_download
snapshot_download(
    repo_id="bpiyush/sound-of-water",
    repo_type="dataset",
    local_dir="/path/to/dataset/SoundOfWater",
)
```
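
Since the split CSV files are declared in this card's YAML config, the split metadata tables (not the media files themselves) should also be loadable directly with the `datasets` library:

```python
from datasets import load_dataset

# Loads the CSV files declared under `configs` in the card header,
# giving one tabular split each for train, test_I, test_II, test_III.
ds = load_dataset("bpiyush/sound-of-water")
print(ds)              # DatasetDict with the four splits
print(ds["train"][0])  # first row of the train split
```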


The dataset is stored in the following directory structure:
```sh
SoundOfWater/
|-- annotations
|-- assets
|-- audios
|-- README.md
|-- splits
|-- videos
`-- youtube_samples

6 directories, 1 file
```


## Demo

Check out the demo [here](https://huggingface.co/spaces/bpiyush/SoundOfWater). You can upload a video of water being poured, and the model estimates the pitch and physical properties.

## 🎥 Video and 🎧 audio samples

The video and audio samples are stored in the `./videos/` and `./audios/` directories, respectively.
Note that the videos are trimmed to the precise start and end of the pouring action.
If you need untrimmed videos, please contact us separately and we may be able to help.

The metadata for each video is stored as a row in `./annotations/localisation.csv`.
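
As a quick start, an audio clip can be inspected with `librosa`; the path below is a placeholder, since the exact file naming inside `./audios/` is not documented here:

```python
import librosa
import numpy as np

# Placeholder: substitute a real file from ./audios/.
audio_path = "SoundOfWater/audios/<item_id>.wav"

waveform, sample_rate = librosa.load(audio_path, sr=16000, mono=True)

# Log-magnitude spectrogram, handy for visualising the rising pitch.
spectrogram = np.abs(librosa.stft(waveform, n_fft=1024, hop_length=256))
log_spec = librosa.amplitude_to_db(spectrogram, ref=np.max)
print(log_spec.shape)  # (frequency bins, time frames)
```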

## 🗂️ Splits

We create four splits of the dataset.
All of the splits can be found in the `./splits/` directory.
The splits are as follows:
<table>
<style>
    table td:nth-child(n+2), table th:nth-child(n+2) {
      text-align: center;
    }
</style>
  <tr>
    <th>Split</th>
    <th colspan="2">Opacity</th>
    <th colspan="3">Shapes</th>
    <th>Containers</th>
    <th>Videos</th>
    <th>Description</th>
  </tr>
  <tr>
    <td></td>
    <td><i>Transparent</i></td>
    <td><i>Opaque</i></td>
    <td><i>Cylinder</i></td>
    <td><i>Semi-cone</i></td>
    <td><i>Bottle</i></td>
    <td></td>
    <td></td>
    <td></td>
  </tr>
  <tr>
    <td>Train</td>
    <td>✓</td>
    <td>✗</td>
    <td>✓</td>
    <td>✓</td>
    <td>✗</td>
    <td>18</td>
    <td>195</td>
    <td>Transparent cylinder-like containers</td>
  </tr>
  <tr>
    <td>Test I</td>
    <td>✓</td>
    <td>✗</td>
    <td>✓</td>
    <td>✓</td>
    <td>✗</td>
    <td>13</td>
    <td>54</td>
    <td>Test set with seen containers</td>
  </tr>
  <tr>
    <td>Test II</td>
    <td>✗</td>
    <td>✓</td>
    <td>✓</td>
    <td>✓</td>
    <td>✗</td>
    <td>19</td>
    <td>327</td>
    <td>Test set with unseen containers</td>
  </tr>
  <tr>
    <td>Test III</td>
    <td>✓</td>
    <td>✓</td>
    <td>✓</td>
    <td>✓</td>
    <td>✓</td>
    <td>25</td>
    <td>434</td>
    <td>Shape classification with unseen containers</td>
  </tr>
</table>
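
Each split file is a plain CSV and can be inspected directly with `pandas`; the snippet below makes no assumption about the column names and simply prints whatever each file contains:

```python
import pandas as pd

# Split files as declared in the card config.
splits = {
    name: pd.read_csv(f"SoundOfWater/splits/{name}.csv")
    for name in ["train", "test_I", "test_II", "test_III"]
}

for name, df in splits.items():
    print(name, len(df), list(df.columns))
```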


## 📝 Annotations

An example row with metadata for a video looks like:
```json
{
    "video_id": "VID_20240116_230040",
    "start_time": 2.057,
    "end_time": 16.71059,
    "setting": "ws-kitchen",
    "bg-noise": "no",
    "water_temperature": "normal",
    "liquid": "water_normal",
    "container_id": "container_1",
    "flow_rate_appx": "constant",
    "comment": null,
    "clean": "yes",
    "time_annotation_mode": "manual",
    "shape": "cylindrical",
    "material": "plastic",
    "visibility": "transparent",
    "example_video_id": "VID_20240116_230040",
    "measurements": {
        "diameter_bottom": 5.7,
        "diameter_top": 6.3,
        "net_height": 19.7,
        "thickness": 0.32
    },
    "hyperparameters": {
        "beta": 0.0
    },
    "physical_parameters": null,
    "item_id": "VID_20240116_230040_2.1_16.7"
}
```
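
These fields can also be accessed programmatically from `localisation.csv`. A small sketch is below; note that nested fields such as `measurements` may be serialised as strings in the CSV, hence the defensive parsing:

```python
import ast
import pandas as pd

df = pd.read_csv("SoundOfWater/annotations/localisation.csv")

# Keep only the clips flagged as clean in the metadata.
clean = df[df["clean"] == "yes"]


def maybe_parse(value):
    """Parse dict-like strings (e.g. the measurements field) if possible."""
    try:
        return ast.literal_eval(value)
    except (ValueError, SyntaxError, TypeError):
        return value


if "measurements" in clean.columns:
    print(clean["measurements"].apply(maybe_parse).iloc[0])
```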

#### Container measurements and other metadata

All metadata for the containers is stored in the `./annotations/` directory, in the following files:

| **File** | **Description** |
| --- | --- |
| `localisation.csv` | Each row contains the metadata (e.g., container, timings) for one video. |
| `containers.yaml` | Metadata for each container. |
| `liquids.yaml` | Metadata for each liquid. |
| `materials.yaml` | Metadata for each material. |
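
The YAML files can be read with `PyYAML`; their exact schema (e.g. whether `containers.yaml` is keyed by `container_id`) is not documented here, so inspect the structure first:

```python
import yaml

with open("SoundOfWater/annotations/containers.yaml") as f:
    containers = yaml.safe_load(f)

# Check the top-level structure before relying on a particular schema.
print(type(containers))
if isinstance(containers, dict):
    first_key = next(iter(containers))
    print(first_key, containers[first_key])
```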


#### Container bounding boxes

The bounding box annotations for containers are stored here: `./annotations/container_bboxes/`. 
These are generated in a zero-shot manner using [LangSAM](https://github.com/luca-medeiros/lang-segment-anything).


## 🎬 YouTube samples

We also provide 4 sample videos sourced from YouTube, which are used for qualitative evaluation.


<!-- Add a citation -->
## 📜 Citation

If you find this repository useful, please consider giving it a star ⭐ and a citation:

```bibtex
@article{sound_of_water_bagad,
  title={The Sound of Water: Inferring Physical Properties from Pouring Liquids},
  author={Bagad, Piyush and Tapaswi, Makarand and Snoek, Cees G. M. and Zisserman, Andrew},
  journal={arXiv},
  year={2024}
}
```

<!-- Add acknowledgements, license, etc. here. -->
## 🙏 Acknowledgements

* We thank Ashish Thandavan for support with infrastructure and Sindhu
Hegde, Ragav Sachdeva, Jaesung Huh, Vladimir Iashin, Prajwal KR, and Aditya Singh for useful
discussions.
* This research is funded by EPSRC Programme Grant VisualAI EP/T028572/1, and a Royal Society Research Professorship RP/R1/191132.

We also want to highlight closely related work that could be of interest:

* [Analyzing Liquid Pouring Sequences via Audio-Visual Neural Networks](https://gamma.cs.unc.edu/PSNN/). IROS (2019).
* [Human sensitivity to acoustic information from vessel filling](https://psycnet.apa.org/record/2000-13210-019). Journal of Experimental Psychology (2000).
* [See the Glass Half Full: Reasoning About Liquid Containers, Their Volume and Content](https://arxiv.org/abs/1701.02718). ICCV (2017).
* [CREPE: A Convolutional Representation for Pitch Estimation](https://arxiv.org/abs/1802.06182). ICASSP (2018).

## 🙅🏻 Potential Biases

The dataset was recorded on a standard mobile phone by the authors themselves, in an indoor setting. As far as possible, we have tried not to include any personal information in the videos, so the dataset is unlikely to contain harmful biases. Moreover, the dataset is small in scale and is unlikely to be used for training large models.