seismic network update (#2), opened by kylewhy

This view is limited to 50 files because it contains too many changes.
.gitattributes CHANGED
@@ -52,4 +52,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
- *.csv filter=lfs diff=lfs merge=lfs -text
+ ncedc_eventid.h5 filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -5,58 +5,66 @@ license: mit
  # Quakeflow_NC
 
  ## Introduction
- This dataset is part of the data (1970-2020) from [NCEDC (Northern California Earthquake Data Center)](https://ncedc.org/index.html) and is organized as several HDF5 files. The dataset structure is shown below, and you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/))
-
- Cite the NCEDC and PhaseNet:
-
- Zhu, W., & Beroza, G. C. (2018). PhaseNet: A Deep-Neural-Network-Based Seismic Arrival Time Picking Method. arXiv preprint arXiv:1803.03211.
-
- NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC.
-
- Acknowledge the NCEDC:
-
- Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.
 
  ```
- Group: / len:16227
- |- Group: /nc71111584 len:2
- | |-* begin_time = 2020-01-02T07:01:19.620
- | |-* depth_km = 3.69
- | |-* end_time = 2020-01-02T07:03:19.620
- | |-* event_id = nc71111584
- | |-* event_time = 2020-01-02T07:01:48.240
- | |-* event_time_index = 2862
- | |-* latitude = 37.6545
- | |-* longitude = -118.8798
- | |-* magnitude = -0.15
  | |-* magnitude_type = D
- | |-* num_stations = 2
- | |- Dataset: /nc71111584/NC.MCB..HH (shape:(3, 12000))
  | | |- (dtype=float32)
- | | | |-* azimuth = 233.0
- | | | |-* component = ['E' 'N' 'Z']
- | | | |-* distance_km = 1.9
  | | | |-* dt_s = 0.01
- | | | |-* elevation_m = 2391.0
- | | | |-* emergence_angle = 159.0
- | | | |-* event_id = ['nc71111584' 'nc71111584']
- | | | |-* latitude = 37.6444
  | | | |-* location =
- | | | |-* longitude = -118.8968
  | | | |-* network = NC
- | | | |-* phase_index = [3000 3101]
  | | | |-* phase_polarity = ['U' 'N']
- | | | |-* phase_remark = ['IP' 'ES']
- | | | |-* phase_score = [1 2]
- | | | |-* phase_time = ['2020-01-02T07:01:49.620' '2020-01-02T07:01:50.630']
  | | | |-* phase_type = ['P' 'S']
- | | | |-* snr = [2.82143 3.055604 1.8412642]
- | | | |-* station = MCB
  | | | |-* unit = 1e-6m/s
- | |- Dataset: /nc71111584/NC.MCB..HN (shape:(3, 12000))
  | | |- (dtype=float32)
- | | | |-* azimuth = 233.0
- | | | |-* component = ['E' 'N' 'Z']
  ......
  ```
 
@@ -65,96 +73,61 @@ Waveform data, metadata, or data products for this study were accessed through t
  ### Requirements
  - datasets
  - h5py
- - fsspec
- - pytorch
 
  ### Usage
- Import the necessary packages:
  ```python
  import h5py
  import numpy as np
  import torch
  from datasets import load_dataset
- ```
- We have 6 configurations for the dataset:
- - "station"
- - "event"
- - "station_train"
- - "event_train"
- - "station_test"
- - "event_test"
-
- "station" yields station-based samples one by one, while "event" yields event-based samples one by one. The configurations with no suffix are the full dataset, while the configurations with suffix "_train" and "_test" only have corresponding split of the full dataset. Train split contains data from 1970 to 2019, while test split contains data in 2020.
-
- The sample of `station` is a dictionary with the following keys:
- - `data`: the waveform with shape `(3, nt)`, the default time length is 8192
- - `begin_time`: the begin time of the waveform data
- - `end_time`: the end time of the waveform data
- - `phase_time`: the phase arrival time
- - `phase_index`: the time point index of the phase arrival time
- - `phase_type`: the phase type
- - `phase_polarity`: the phase polarity in ('U', 'D', 'N')
- - `event_time`: the event time
- - `event_time_index`: the time point index of the event time
- - `event_location`: the event location with shape `(3,)`, including latitude, longitude, depth
- - `station_location`: the station location with shape `(3,)`, including latitude, longitude and depth
 
- The sample of `event` is a dictionary with the following keys:
- - `data`: the waveform with shape `(n_station, 3, nt)`, the default time length is 8192
- - `begin_time`: the begin time of the waveform data
- - `end_time`: the end time of the waveform data
- - `phase_time`: the phase arrival time with shape `(n_station,)`
- - `phase_index`: the time point index of the phase arrival time with shape `(n_station,)`
- - `phase_type`: the phase type with shape `(n_station,)`
- - `phase_polarity`: the phase polarity in ('U', 'D', 'N') with shape `(n_station,)`
- - `event_time`: the event time
- - `event_time_index`: the time point index of the event time
- - `event_location`: the space-time coordinates of the event with shape `(n_staion, 3)`
- - `station_location`: the space coordinates of the station with shape `(n_station, 3)`, including latitude, longitude and depth
-
- The default configuration is `station_test`. You can specify the configuration by argument `name`. For example:
- ```python
  # load dataset
  # ATTENTION: Streaming(Iterable Dataset) is difficult to support because of the feature of HDF5
  # So we recommend to directly load the dataset and convert it into iterable later
  # The dataset is very large, so you need to wait for some time at the first time
-
- # to load "station_test" with test split
- quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="test")
- # or
- quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
-
- # to load "event" with train split
- quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="event", split="train")
  ```
-
- #### Example loading the dataset
- ```python
- quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
 
  # print the first sample of the iterable dataset
  for example in quakeflow_nc:
      print("\nIterable test\n")
      print(example.keys())
      for key in example.keys():
-         if key == "data":
-             print(key, np.array(example[key]).shape)
-         else:
-             print(key, example[key])
      break
 
- # %%
- quakeflow_nc = quakeflow_nc.with_format("torch")
- dataloader = DataLoader(quakeflow_nc, batch_size=8, num_workers=0, collate_fn=lambda x: x)
 
  for batch in dataloader:
      print("\nDataloader test\n")
-     print(f"Batch size: {len(batch)}")
-     print(batch[0].keys())
-     for key in batch[0].keys():
-         if key == "data":
-             print(key, np.array(batch[0][key]).shape)
-         else:
-             print(key, batch[0][key])
      break
  ```
 
  # Quakeflow_NC
 
  ## Introduction
+ This dataset is part of the data from NCEDC (Northern California Earthquake Data Center) and is organized as several HDF5 files. The dataset structure is shown below. (File [ncedc_event_dataset_000.h5.txt](./ncedc_event_dataset_000.h5.txt) shows the structure of the first shard of the dataset, and you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/))
 
  ```
+ Group: / len:10000
+ |- Group: /nc100012 len:5
+ | |-* begin_time = 1987-05-08T00:15:48.890
+ | |-* depth_km = 7.04
+ | |-* end_time = 1987-05-08T00:17:48.890
+ | |-* event_id = nc100012
+ | |-* event_time = 1987-05-08T00:16:14.700
+ | |-* event_time_index = 2581
+ | |-* latitude = 37.5423
+ | |-* longitude = -118.4412
+ | |-* magnitude = 1.1
  | |-* magnitude_type = D
+ | |-* num_stations = 5
+ | |- Dataset: /nc100012/NC.MRS..EH (shape:(3, 12000))
  | | |- (dtype=float32)
+ | | | |-* azimuth = 265.0
+ | | | |-* component = ['Z']
+ | | | |-* distance_km = 39.1
  | | | |-* dt_s = 0.01
+ | | | |-* elevation_m = 3680.0
+ | | | |-* emergence_angle = 93.0
+ | | | |-* event_id = ['nc100012' 'nc100012']
+ | | | |-* latitude = 37.5107
  | | | |-* location =
+ | | | |-* longitude = -118.8822
  | | | |-* network = NC
+ | | | |-* phase_index = [3274 3802]
  | | | |-* phase_polarity = ['U' 'N']
+ | | | |-* phase_remark = ['IP' 'S']
+ | | | |-* phase_score = [1 1]
+ | | | |-* phase_time = ['1987-05-08T00:16:21.630' '1987-05-08T00:16:26.920']
  | | | |-* phase_type = ['P' 'S']
+ | | | |-* snr = [0. 0. 1.98844361]
+ | | | |-* station = MRS
  | | | |-* unit = 1e-6m/s
+ | |- Dataset: /nc100012/NN.BEN.N1.EH (shape:(3, 12000))
  | | |- (dtype=float32)
+ | | | |-* azimuth = 329.0
+ | | | |-* component = ['Z']
+ | | | |-* distance_km = 22.5
+ | | | |-* dt_s = 0.01
+ | | | |-* elevation_m = 2476.0
+ | | | |-* emergence_angle = 102.0
+ | | | |-* event_id = ['nc100012' 'nc100012']
+ | | | |-* latitude = 37.7154
+ | | | |-* location = N1
+ | | | |-* longitude = -118.5741
+ | | | |-* network = NN
+ | | | |-* phase_index = [3010 3330]
+ | | | |-* phase_polarity = ['U' 'N']
+ | | | |-* phase_remark = ['IP' 'S']
+ | | | |-* phase_score = [0 0]
+ | | | |-* phase_time = ['1987-05-08T00:16:18.990' '1987-05-08T00:16:22.190']
+ | | | |-* phase_type = ['P' 'S']
+ | | | |-* snr = [0. 0. 7.31356192]
+ | | | |-* station = BEN
+ | | | |-* unit = 1e-6m/s
  ......
  ```
 
 
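For reviewers who want to inspect a raw shard directly with `h5py` (outside of the `datasets` loader), here is a minimal sketch of walking the tree above; the shard filename and attribute names come from the structure shown, everything else is illustrative:

```python
import h5py

# Sketch: print the first event of one shard, mirroring the tree above.
# Assumes ncedc_event_dataset_000.h5 has been downloaded locally.
with h5py.File("ncedc_event_dataset_000.h5", "r") as fp:
    for event_id in list(fp.keys())[:1]:  # first event only
        event = fp[event_id]
        print(f"Group: /{event_id} len:{len(event)}")
        for key, value in event.attrs.items():  # event-level metadata
            print(f"|-* {key} = {value}")
        for sta_id in event.keys():  # one dataset per station channel
            ds = event[sta_id]  # waveform array, shape (3, 12000), float32
            print(f"|- Dataset: /{event_id}/{sta_id} shape:{ds.shape}")
            print(f"   phase_type = {ds.attrs['phase_type']}")
            print(f"   phase_index = {ds.attrs['phase_index']}")
```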
  ### Requirements
  - datasets
  - h5py
+ - torch (for PyTorch)
 
  ### Usage
  ```python
  import h5py
  import numpy as np
  import torch
+ from torch.utils.data import Dataset, IterableDataset, DataLoader
  from datasets import load_dataset
 
  # load dataset
  # ATTENTION: Streaming(Iterable Dataset) is difficult to support because of the feature of HDF5
  # So we recommend to directly load the dataset and convert it into iterable later
  # The dataset is very large, so you need to wait for some time at the first time
+ quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="train")
+ print(quakeflow_nc)
  ```
+ If you want to use only the first several shards of the dataset, you can download the script `quakeflow_nc.py` and change the code below:
  ```python
+ # change the 37 to the number of shards you want
+ _URLS = {
+     "NCEDC": [f"{_REPO}/ncedc_event_dataset_{i:03d}.h5" for i in range(37)]
+ }
+ ```
+ Then you can use the dataset like this:
+ ```python
+ quakeflow_nc = load_dataset("./quakeflow_nc.py", split="train")
+ print(quakeflow_nc)
+ ```
+ Then you can convert the dataset into a PyTorch-format iterable dataset and view the first sample:
+ ```python
+ quakeflow_nc = quakeflow_nc.to_iterable_dataset()
+ quakeflow_nc = quakeflow_nc.with_format("torch")
+ # formatting examples as tensors under the "torch" format has not been implemented
+ # yet for iterable datasets, so we add the conversion manually
+ quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
+ assert isinstance(quakeflow_nc, torch.utils.data.IterableDataset), "quakeflow_nc is not an IterableDataset"
 
  # print the first sample of the iterable dataset
  for example in quakeflow_nc:
      print("\nIterable test\n")
      print(example.keys())
      for key in example.keys():
+         print(key, example[key].shape, example[key].dtype)
      break
 
+ dataloader = DataLoader(quakeflow_nc, batch_size=4)
 
  for batch in dataloader:
      print("\nDataloader test\n")
+     print(batch.keys())
+     for key in batch.keys():
+         print(key, batch[key].shape, batch[key].dtype)
      break
  ```
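Since the batches above are plain dictionaries of tensors, they can feed a model directly. Below is a minimal sketch of one training step with a toy convolutional head; the model is a stand-in for illustration only (not the QuakeFlow picker), and the shapes assume the defaults `nt = 8192` and `num_stations = 10` from `quakeflow_nc.py`:

```python
import torch
import torch.nn as nn

# Toy per-sample classifier over (channel, time, station); illustration only.
model = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=(7, 1), padding=(3, 0))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()  # accepts class-probability targets

for batch in dataloader:  # the dataloader built in the snippet above
    x = batch["waveform"]    # (batch, 3, nt, num_stations)
    y = batch["phase_pick"]  # (batch, 3, nt, num_stations): noise/P/S scores
    loss = criterion(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    break  # one step is enough to check that shapes flow through
```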
events.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:84166f6a0be6a02caeb8d11ed3495e5256db698c795dbb3db4d45d8b863313d8
- size 46863258

events_test.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:74b5bf132e23763f851035717a1baa92ab8fb73253138b640103390dce33e154
- size 1602217

events_train.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ef579400d9354ecaf142bdc7023291c952dbfc20d6bafab4715dff1774b3f7a5
- size 45261178
example.py DELETED
@@ -1,54 +0,0 @@
- # %%
- import datasets
- import numpy as np
- from torch.utils.data import DataLoader
-
- quakeflow_nc = datasets.load_dataset(
-     "AI4EPS/quakeflow_nc",
-     name="station",
-     split="train",
-     # name="station_test",
-     # split="test",
-     # download_mode="force_redownload",
-     trust_remote_code=True,
-     num_proc=36,
- )
- # quakeflow_nc = datasets.load_dataset(
- #     "./quakeflow_nc.py",
- #     name="station",
- #     split="train",
- #     # name="statoin_test",
- #     # split="test",
- #     num_proc=36,
- # )
-
- print(quakeflow_nc)
-
- # print the first sample of the iterable dataset
- for example in quakeflow_nc:
-     print("\nIterable dataset\n")
-     print(example)
-     print(example.keys())
-     for key in example.keys():
-         if key == "waveform":
-             print(key, np.array(example[key]).shape)
-         else:
-             print(key, example[key])
-     break
-
- # %%
- quakeflow_nc = quakeflow_nc.with_format("torch")
- dataloader = DataLoader(quakeflow_nc, batch_size=8, num_workers=0, collate_fn=lambda x: x)
-
- for batch in dataloader:
-     print("\nDataloader dataset\n")
-     print(f"Batch size: {len(batch)}")
-     print(batch[0].keys())
-     for key in batch[0].keys():
-         if key == "waveform":
-             print(key, np.array(batch[0][key]).shape)
-         else:
-             print(key, batch[0][key])
-     break
-
- # %%
merge_hdf5.py DELETED
@@ -1,65 +0,0 @@
- # %%
- import os
-
- import h5py
- import matplotlib.pyplot as plt
- from tqdm import tqdm
-
- # %%
- h5_dir = "waveform_h5"
- h5_out = "waveform.h5"
- h5_train = "waveform_train.h5"
- h5_test = "waveform_test.h5"
-
- # # %%
- # h5_dir = "waveform_h5"
- # h5_out = "waveform.h5"
- # h5_train = "waveform_train.h5"
- # h5_test = "waveform_test.h5"
-
- h5_files = sorted(os.listdir(h5_dir))
- train_files = h5_files[:-1]
- test_files = h5_files[-1:]
- # train_files = h5_files
- # train_files = [x for x in train_files if (x != "2014.h5") and (x not in [])]
- # test_files = []
- print(f"train files: {train_files}")
- print(f"test files: {test_files}")
-
- # %%
- with h5py.File(h5_out, "w") as fp:
-     # external linked file
-     for h5_file in h5_files:
-         with h5py.File(os.path.join(h5_dir, h5_file), "r") as f:
-             for event in tqdm(f.keys(), desc=h5_file, total=len(f.keys())):
-                 if event not in fp:
-                     fp[event] = h5py.ExternalLink(os.path.join(h5_dir, h5_file), event)
-                 else:
-                     print(f"{event} already exists")
-                     continue
-
- # %%
- with h5py.File(h5_train, "w") as fp:
-     # external linked file
-     for h5_file in train_files:
-         with h5py.File(os.path.join(h5_dir, h5_file), "r") as f:
-             for event in tqdm(f.keys(), desc=h5_file, total=len(f.keys())):
-                 if event not in fp:
-                     fp[event] = h5py.ExternalLink(os.path.join(h5_dir, h5_file), event)
-                 else:
-                     print(f"{event} already exists")
-                     continue
-
- # %%
- with h5py.File(h5_test, "w") as fp:
-     # external linked file
-     for h5_file in test_files:
-         with h5py.File(os.path.join(h5_dir, h5_file), "r") as f:
-             for event in tqdm(f.keys(), desc=h5_file, total=len(f.keys())):
-                 if event not in fp:
-                     fp[event] = h5py.ExternalLink(os.path.join(h5_dir, h5_file), event)
-                 else:
-                     print(f"{event} already exists")
-                     continue
-
- # %%
models/phasenet_picks.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:b51df5987a2a05e44e0949b42d00a28692109da521911c55d2692ebfad0c54d7
- size 9355127

models/phasenet_plus_events.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f686ebf8da632b71a947e4ee884c76f30a313ae0e9d6e32d1f675828884a95f7
- size 7381331

models/phasenet_plus_picks.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:83d241a54477f722cd032efe8368a653bba170e1abebf3d9097d7756cfd54b23
- size 9987053

models/phasenet_pt_picks.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:bb7ea98484b5e6e1c4c79ea5eb1e38bce43e87b546fc6d29c72d187a6d8b1d00
- size 8715799
ncedc_event_dataset_000.h5.txt ADDED
The diff for this file is too large to render.
picks.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:52f077ae9f94481d4b80f37c9f15038ee1e3636d5da2da3b1d4aaa2991879cc3
- size 422247029

picks_test.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:bb09f0ac169bf451cfcfb4547359756cb1a53828bf4074971d9160a3aa171f38
- size 21850235

picks_train.csv DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d22c5d5eb1c27a723525c657c1308a3b643d6f3e716eb1c43e064b7a87bb0819
- size 400397230
quakeflow_nc.py CHANGED
@@ -17,21 +17,27 @@
  """QuakeFlow_NC: A dataset of earthquake waveforms organized by earthquake events and based on the HDF5 format."""
 
 
- from typing import Dict, List, Optional, Tuple, Union
-
- import datasets
- import fsspec
  import h5py
  import numpy as np
  import torch
 
  # TODO: Add BibTeX citation
  # Find for instance the citation on arxiv or on the dataset repo/website
  _CITATION = """\
  @InProceedings{huggingface:dataset,
-     title = {NCEDC dataset for QuakeFlow},
-     author={Zhu et al.},
-     year={2023}
  }
  """
 
@@ -50,74 +56,18 @@ _LICENSE = ""
  # TODO: Add link to the official dataset URLs here
  # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
  # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
- _REPO = "https://huggingface.co/datasets/AI4EPS/quakeflow_nc/resolve/main/waveform_h5"
- _FILES = [
-     "1987.h5",
-     "1988.h5",
-     "1989.h5",
-     "1990.h5",
-     "1991.h5",
-     "1992.h5",
-     "1993.h5",
-     "1994.h5",
-     "1995.h5",
-     "1996.h5",
-     "1997.h5",
-     "1998.h5",
-     "1999.h5",
-     "2000.h5",
-     "2001.h5",
-     "2002.h5",
-     "2003.h5",
-     "2004.h5",
-     "2005.h5",
-     "2006.h5",
-     "2007.h5",
-     "2008.h5",
-     "2009.h5",
-     "2010.h5",
-     "2011.h5",
-     "2012.h5",
-     "2013.h5",
-     "2014.h5",
-     "2015.h5",
-     "2016.h5",
-     "2017.h5",
-     "2018.h5",
-     "2019.h5",
-     "2020.h5",
-     "2021.h5",
-     "2022.h5",
-     "2023.h5",
- ]
  _URLS = {
-     "station": [f"{_REPO}/{x}" for x in _FILES],
-     "event": [f"{_REPO}/{x}" for x in _FILES],
-     "station_train": [f"{_REPO}/{x}" for x in _FILES[:-1]],
-     "event_train": [f"{_REPO}/{x}" for x in _FILES[:-1]],
-     "station_test": [f"{_REPO}/{x}" for x in _FILES[-1:]],
-     "event_test": [f"{_REPO}/{x}" for x in _FILES[-1:]],
  }
 
 
- class BatchBuilderConfig(datasets.BuilderConfig):
-     """
-     yield a batch of event-based sample, so the number of sample stations can vary among batches
-     Batch Config for QuakeFlow_NC
-     """
-
-     def __init__(self, **kwargs):
-         super().__init__(**kwargs)
-
-
  # TODO: Name of the dataset usually matches the script name with CamelCase instead of snake_case
  class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
      """QuakeFlow_NC: A dataset of earthquake waveforms organized by earthquake events and based on the HDF5 format."""
 
      VERSION = datasets.Version("1.1.0")
 
-     nt = 8192
-
      # This is an example of a dataset with multiple configurations.
      # If you don't want/need to define several sub-sets in your dataset,
      # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
@@ -129,80 +79,22 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
      # You will be able to load one or the other configurations in the following list with
      # data = datasets.load_dataset('my_dataset', 'first_domain')
      # data = datasets.load_dataset('my_dataset', 'second_domain')
-
-     # default config, you can change batch_size and num_stations_list when use `datasets.load_dataset`
      BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name="station", version=VERSION, description="yield station-based samples one by one of whole dataset"
-         ),
-         datasets.BuilderConfig(
-             name="event", version=VERSION, description="yield event-based samples one by one of whole dataset"
-         ),
-         datasets.BuilderConfig(
-             name="station_train",
-             version=VERSION,
-             description="yield station-based samples one by one of training dataset",
-         ),
-         datasets.BuilderConfig(
-             name="event_train", version=VERSION, description="yield event-based samples one by one of training dataset"
-         ),
-         datasets.BuilderConfig(
-             name="station_test", version=VERSION, description="yield station-based samples one by one of test dataset"
-         ),
-         datasets.BuilderConfig(
-             name="event_test", version=VERSION, description="yield event-based samples one by one of test dataset"
-         ),
      ]
 
-     DEFAULT_CONFIG_NAME = (
-         "station_test"  # It's not mandatory to have a default configuration. Just use one if it make sense.
-     )
 
      def _info(self):
          # TODO: This method specifies the datasets.DatasetInfo object which contains informations and typings for the dataset
-         if (
-             (self.config.name == "station")
-             or (self.config.name == "station_train")
-             or (self.config.name == "station_test")
-         ):
-             features = datasets.Features(
-                 {
-                     "id": datasets.Value("string"),
-                     "event_id": datasets.Value("string"),
-                     "station_id": datasets.Value("string"),
-                     "waveform": datasets.Array2D(shape=(3, self.nt), dtype="float32"),
-                     "phase_time": datasets.Sequence(datasets.Value("string")),
-                     "phase_index": datasets.Sequence(datasets.Value("int32")),
-                     "phase_type": datasets.Sequence(datasets.Value("string")),
-                     "phase_polarity": datasets.Sequence(datasets.Value("string")),
-                     "begin_time": datasets.Value("string"),
-                     "end_time": datasets.Value("string"),
-                     "event_time": datasets.Value("string"),
-                     "event_time_index": datasets.Value("int32"),
-                     "event_location": datasets.Sequence(datasets.Value("float32")),
-                     "station_location": datasets.Sequence(datasets.Value("float32")),
-                 },
-             )
-         elif (self.config.name == "event") or (self.config.name == "event_train") or (self.config.name == "event_test"):
-             features = datasets.Features(
-                 {
-                     "event_id": datasets.Value("string"),
-                     "waveform": datasets.Array3D(shape=(None, 3, self.nt), dtype="float32"),
-                     "phase_time": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
-                     "phase_index": datasets.Sequence(datasets.Sequence(datasets.Value("int32"))),
-                     "phase_type": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
-                     "phase_polarity": datasets.Sequence(datasets.Sequence(datasets.Value("string"))),
-                     "begin_time": datasets.Value("string"),
-                     "end_time": datasets.Value("string"),
-                     "event_time": datasets.Value("string"),
-                     "event_time_index": datasets.Value("int32"),
-                     "event_location": datasets.Sequence(datasets.Value("float32")),
-                     "station_location": datasets.Sequence(datasets.Sequence(datasets.Value("float32"))),
-                 },
-             )
-         else:
-             raise ValueError(f"config.name = {self.config.name} is not in BUILDER_CONFIGS")
-
          return datasets.DatasetInfo(
              # This is the description that will appear on the datasets page.
              description=_DESCRIPTION,
@@ -228,135 +120,102 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
          # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
          urls = _URLS[self.config.name]
          # files = dl_manager.download(urls)
-         if "bucket" not in self.storage_options:
-             files = dl_manager.download_and_extract(urls)
-         else:
-             files = [f"{self.storage_options['bucket']}/{x}" for x in _FILES]
-             # files = [f"/nfs/quakeflow_dataset/NC/quakeflow_nc/waveform_h5/{x}" for x in _FILES][-3:]
-         print("Files:\n", "\n".join(sorted(files)))
-         print(self.storage_options)
-
-         if self.config.name == "station" or self.config.name == "event":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     # These kwargs will be passed to _generate_examples
-                     gen_kwargs={"filepath": files[:-1], "split": "train"},
-                 ),
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TEST,
-                     gen_kwargs={"filepath": files[-1:], "split": "test"},
-                 ),
-             ]
-         elif self.config.name == "station_train" or self.config.name == "event_train":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TRAIN,
-                     gen_kwargs={"filepath": files, "split": "train"},
-                 ),
-             ]
-         elif self.config.name == "station_test" or self.config.name == "event_test":
-             return [
-                 datasets.SplitGenerator(
-                     name=datasets.Split.TEST,
-                     gen_kwargs={"filepath": files, "split": "test"},
-                 ),
-             ]
-         else:
-             raise ValueError("config.name is not in BUILDER_CONFIGS")
 
      # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
      def _generate_examples(self, filepath, split):
          # TODO: This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
          # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
-
          for file in filepath:
-             print(f"\nReading {file}")
-             with fsspec.open(file, "rb") as fs:
-                 with h5py.File(fs, "r") as fp:
-                     event_ids = list(fp.keys())
-                     for event_id in event_ids:
-                         event = fp[event_id]
-                         event_attrs = event.attrs
-                         begin_time = event_attrs["begin_time"]
-                         end_time = event_attrs["end_time"]
-                         event_location = [
-                             event_attrs["longitude"],
-                             event_attrs["latitude"],
-                             event_attrs["depth_km"],
-                         ]
-                         event_time = event_attrs["event_time"]
-                         event_time_index = event_attrs["event_time_index"]
-                         station_ids = list(event.keys())
-                         if len(station_ids) == 0:
-                             continue
-                         if (
-                             (self.config.name == "station")
-                             or (self.config.name == "station_train")
-                             or (self.config.name == "station_test")
-                         ):
-                             waveform = np.zeros([3, self.nt], dtype="float32")
-
-                             for i, station_id in enumerate(station_ids):
-                                 waveform[:, : self.nt] = event[station_id][:, : self.nt]
-                                 attrs = event[station_id].attrs
-                                 phase_type = attrs["phase_type"]
-                                 phase_time = attrs["phase_time"]
-                                 phase_index = attrs["phase_index"]
-                                 phase_polarity = attrs["phase_polarity"]
-                                 station_location = [attrs["longitude"], attrs["latitude"], -attrs["elevation_m"] / 1e3]
-
-                                 yield f"{event_id}/{station_id}", {
-                                     "id": f"{event_id}/{station_id}",
-                                     "event_id": event_id,
-                                     "station_id": station_id,
-                                     "waveform": waveform,
-                                     "phase_time": phase_time,
-                                     "phase_index": phase_index,
-                                     "phase_type": phase_type,
-                                     "phase_polarity": phase_polarity,
-                                     "begin_time": begin_time,
-                                     "end_time": end_time,
-                                     "event_time": event_time,
-                                     "event_time_index": event_time_index,
-                                     "event_location": event_location,
-                                     "station_location": station_location,
-                                 }
-
-                         elif (
-                             (self.config.name == "event")
-                             or (self.config.name == "event_train")
-                             or (self.config.name == "event_test")
-                         ):
-
-                             waveform = np.zeros([len(station_ids), 3, self.nt], dtype="float32")
-                             phase_type = []
-                             phase_time = []
-                             phase_index = []
-                             phase_polarity = []
-                             station_location = []
-
-                             for i, station_id in enumerate(station_ids):
-                                 waveform[i, :, : self.nt] = event[station_id][:, : self.nt]
-                                 attrs = event[station_id].attrs
-                                 phase_type.append(list(attrs["phase_type"]))
-                                 phase_time.append(list(attrs["phase_time"]))
-                                 phase_index.append(list(attrs["phase_index"]))
-                                 phase_polarity.append(list(attrs["phase_polarity"]))
-                                 station_location.append(
-                                     [attrs["longitude"], attrs["latitude"], -attrs["elevation_m"] / 1e3]
-                                 )
-                             yield event_id, {
-                                 "event_id": event_id,
-                                 "waveform": waveform,
-                                 "phase_time": phase_time,
-                                 "phase_index": phase_index,
-                                 "phase_type": phase_type,
-                                 "phase_polarity": phase_polarity,
-                                 "begin_time": begin_time,
-                                 "end_time": end_time,
-                                 "event_time": event_time,
-                                 "event_time_index": event_time_index,
-                                 "event_location": event_location,
-                                 "station_location": station_location,
-                             }
  """QuakeFlow_NC: A dataset of earthquake waveforms organized by earthquake events and based on the HDF5 format."""
 
 
+ import csv
+ import json
+ import os
 
  import h5py
  import numpy as np
  import torch
+ import fsspec
+ from glob import glob
+ from typing import Dict, List, Optional, Tuple, Union
+
+ import datasets
+
 
  # TODO: Add BibTeX citation
  # Find for instance the citation on arxiv or on the dataset repo/website
  _CITATION = """\
  @InProceedings{huggingface:dataset,
+     title = {A great new dataset},
+     author={huggingface, Inc.
+     },
+     year={2020}
  }
  """
 
  # TODO: Add link to the official dataset URLs here
  # The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
  # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
+ _REPO = "https://huggingface.co/datasets/AI4EPS/quakeflow_nc/resolve/main/data"
  _URLS = {
+     "NCEDC": [f"{_REPO}/ncedc_event_dataset_{i:03d}.h5" for i in range(37)]
  }
 
 
  # TODO: Name of the dataset usually matches the script name with CamelCase instead of snake_case
  class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
      """QuakeFlow_NC: A dataset of earthquake waveforms organized by earthquake events and based on the HDF5 format."""
 
      VERSION = datasets.Version("1.1.0")
 
      # This is an example of a dataset with multiple configurations.
      # If you don't want/need to define several sub-sets in your dataset,
      # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
      # You will be able to load one or the other configurations in the following list with
      # data = datasets.load_dataset('my_dataset', 'first_domain')
      # data = datasets.load_dataset('my_dataset', 'second_domain')
      BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="NCEDC", version=VERSION, description="This part of my dataset covers a first domain"),
      ]
 
+     DEFAULT_CONFIG_NAME = "NCEDC"  # It's not mandatory to have a default configuration. Just use one if it make sense.
 
      def _info(self):
          # TODO: This method specifies the datasets.DatasetInfo object which contains informations and typings for the dataset
+         features = datasets.Features(
+             {
+                 "waveform": datasets.Array3D(shape=(3, self.nt, self.num_stations), dtype="float32"),
+                 "phase_pick": datasets.Array3D(shape=(3, self.nt, self.num_stations), dtype="float32"),
+                 "event_location": [datasets.Value("float32")],
+                 "station_location": datasets.Array2D(shape=(self.num_stations, 3), dtype="float32"),
+             }
+         )
          return datasets.DatasetInfo(
              # This is the description that will appear on the datasets page.
              description=_DESCRIPTION,
 
          # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
          urls = _URLS[self.config.name]
          # files = dl_manager.download(urls)
+         files = dl_manager.download_and_extract(urls)
+         # files = ["./data/ncedc_event_dataset_000.h5"]
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={
+                     "filepath": files,
+                     "split": "train",
+                 },
+             ),
+             # datasets.SplitGenerator(
+             #     name=datasets.Split.VALIDATION,
+             #     # These kwargs will be passed to _generate_examples
+             #     gen_kwargs={
+             #         "filepath": os.path.join(data_dir, "dev.jsonl"),
+             #         "split": "dev",
+             #     },
+             # ),
+             # datasets.SplitGenerator(
+             #     name=datasets.Split.TEST,
+             #     # These kwargs will be passed to _generate_examples
+             #     gen_kwargs={
+             #         "filepath": os.path.join(data_dir, "test.jsonl"),
+             #         "split": "test"
+             #     },
+             # ),
+         ]
+
+     degree2km = 111.32
+     nt = 8192
+     feature_nt = 512
+     feature_scale = int(nt / feature_nt)
+     sampling_rate = 100.0
+     num_stations = 10
 
      # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
      def _generate_examples(self, filepath, split):
          # TODO: This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
          # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
+         num_stations = self.num_stations
+
          for file in filepath:
+             with h5py.File(file, "r") as fp:
+                 # for event_id in sorted(list(fp.keys())):
+                 for event_id in fp.keys():
+                     event = fp[event_id]
+                     station_ids = list(event.keys())
+                     if len(station_ids) < num_stations:
+                         continue
+                     else:
+                         station_ids = np.random.choice(station_ids, num_stations, replace=False)
+
+                     waveforms = np.zeros([3, self.nt, len(station_ids)])
+                     phase_pick = np.zeros_like(waveforms)
+                     attrs = event.attrs
+                     event_location = [attrs["longitude"], attrs["latitude"], attrs["depth_km"], attrs["event_time_index"]]
+                     station_location = []
+
+                     for i, sta_id in enumerate(station_ids):
+                         # trace_id = event_id + "/" + sta_id
+                         waveforms[:, :, i] = event[sta_id][:, : self.nt]
+                         attrs = event[sta_id].attrs
+                         p_picks = attrs["phase_index"][attrs["phase_type"] == "P"]
+                         s_picks = attrs["phase_index"][attrs["phase_type"] == "S"]
+                         phase_pick[:, :, i] = generate_label([p_picks, s_picks], nt=self.nt)
+
+                         station_location.append([attrs["longitude"], attrs["latitude"], -attrs["elevation_m"] / 1e3])
+
+                     std = np.std(waveforms, axis=1, keepdims=True)
+                     std[std == 0] = 1.0
+                     waveforms = (waveforms - np.mean(waveforms, axis=1, keepdims=True)) / std
+                     waveforms = waveforms.astype(np.float32)
+
+                     yield event_id, {
+                         "waveform": torch.from_numpy(waveforms).float(),
+                         "phase_pick": torch.from_numpy(phase_pick).float(),
+                         "event_location": event_location,
+                         "station_location": station_location,
+                     }
+
+
+ def generate_label(phase_list, label_width=[150, 150], nt=8192):
+     target = np.zeros([len(phase_list) + 1, nt], dtype=np.float32)
+
+     for i, (picks, w) in enumerate(zip(phase_list, label_width)):
+         for phase_time in picks:
+             t = np.arange(nt) - phase_time
+             gaussian = np.exp(-(t**2) / (2 * (w / 6) ** 2))
+             gaussian[gaussian < 0.1] = 0.0
+             target[i + 1, :] += gaussian
+
+     target[0:1, :] = np.maximum(0, 1 - np.sum(target[1:, :], axis=0, keepdims=True))
+
+     return target
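`generate_label` converts pick indices into per-class target traces: row 0 is the noise class, and each later row carries truncated Gaussians (standard deviation `w/6`, clipped below 0.1) centered on the picks, so the class scores sum to about 1 near an isolated pick. A minimal sketch of checking that contract, assuming `generate_label` (and its numpy import) from the script above; the pick indices are hypothetical:

```python
# Hypothetical pick indices: two P picks and one S pick (in samples).
target = generate_label([[3000, 5000], [3300]], nt=8192)

print(target.shape)           # (3, 8192): rows are noise, P, S
print(target[1, 3000])        # 1.0 exactly at a P pick
print(target[:, 3000].sum())  # classes sum to ~1.0 near an isolated pick
```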
upload.py DELETED
@@ -1,11 +0,0 @@
- from huggingface_hub import HfApi
-
- api = HfApi()
-
- # Upload all the content from the local folder to your remote Space.
- # By default, files are uploaded at the root of the repo
- api.upload_folder(
-     folder_path="./",
-     repo_id="AI4EPS/quakeflow_nc",
-     repo_type="space",
- )
waveform.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:77fb8b0bb040e1412a183a217dcbc1aa03ceb86b42db39ac62afe922a1673889
- size 20016390

waveform_h5/1987.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:8afb94aafbf79db2848ae9c2006385c782493a97e6c71c1b8abf97c5d53bfc9d
- size 7744528

waveform_h5/1988.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c1398baca3f539e52744f83625b1dbb6f117a32b8d7e97f6af02a1f452f0dedd
- size 46126800

waveform_h5/1989.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:533cd50fe365de8c050f0ffd4a90b697dc6b90cb86c8199ec0172316eab2ddaa
- size 48255208

waveform_h5/1990.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f5a282a9a8c47cf65d144368085470940660faeb0e77cea59fff16af68020d26
- size 60092656

waveform_h5/1991.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5ba897d96eb92e8684b52a206e94a500abfe0192930f971ce7b1319c0638d452
- size 62332336

waveform_h5/1992.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d00021f46956bf43192f8c59405e203f823f1f4202c720efa52c5029e8e880b8
- size 67360896

waveform_h5/1993.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:eec41dd0aa7b88c81fa9f9b5dbcaab80e1c7bc8f6c144bd81761941278c57b4f
- size 706087936

waveform_h5/1994.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:b1cd002f20573636eaf101a30c5bac477edda201aba3af68be358756543ed48a
- size 609524864

waveform_h5/1995.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:948f19d71520a0dd25574be300f70e62c383e319b07a7d7182fca1dcfa9d61ee
- size 1728452872

waveform_h5/1996.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:23654b6f9c3a4c5a0aa56ed13ba04e943a94b458a51ac80ec1d418e9aa132840
- size 1752242680

waveform_h5/1997.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d1c0f4c8146fc8ff27c8a47a942b967a97bd2835346203e6de74ca55dd522616
- size 2661543208

waveform_h5/1998.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:1afac9c1a33424b739d26261ac2e9a4520be9c86c57bae4c8fe1a7a422356e45
- size 2070489120

waveform_h5/1999.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:2f2595a1919a5435148cdcf2cfa1501ce5edb53878d471500b13936f0f6f558c
- size 2300297608

waveform_h5/2000.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:250fd52d9f8dd17a8bfb58a3ecfef25d62b0a1adf67f6fe6f2b446e9f72caf7a
- size 434865160

waveform_h5/2001.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d70dea6156b32057760f91742f7a05a336e4f63b1f793408b5e7aad6a15551e5
- size 919203704

waveform_h5/2002.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f88c4c5960741a8d354db4a7324d56ef8750ab93aa1d9b11fc80d0c497d8d6ae
- size 2445812792

waveform_h5/2003.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:943d649f1a8a0e3989d2458be68fbf041058a581c4c73f8de39f1d50d3e7b35c
- size 3618485352

waveform_h5/2004.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ed1ba66e10ba5c165568ac13950a1728927ba49b33903a0df42c3d9965a16807
- size 6158740712

waveform_h5/2005.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c816d75b172148763b19e60c1469c106c1af1f906843c3d6d94e603e02c2b6cb
- size 2994468240

waveform_h5/2006.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:521e6b0ce262461f87b4b0a78ac6403cfbb597d6ace36e17f92354c456a30447
- size 2189511664

waveform_h5/2007.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ae6654c213fb4838d6a732b2c8d936bd799005b2a189d64f2d74e3767c0c503a
- size 4393926088

waveform_h5/2008.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d8163aee689448c260032df9b0ab9132a5b46f0fee88a4c1ca8f4492ec5534d6
- size 3964283536

waveform_h5/2009.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6702c2d3951ddf1034f1886a79e8c5a00dfa47c88c84048edc528f047a2337b5
- size 4162296168

waveform_h5/2010.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:2f2de7c07f088a32ea7ae71c2107dfd121780a47d3e3f23e5c98ddb482c6ce71
- size 4547184704

waveform_h5/2011.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:520d62f3a94f1b4889f583196676fe2eccb6452807461afc93432dca930d6052
- size 5633641952

waveform_h5/2012.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:98b90529df4cbff7f21cd233d482454eaeac77b81117720ca7fe6c2697819071
- size 9520058832

waveform_h5/2013.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:e6f1030ff4ebe488ef9072ec984c91024a8be4ecdbe7e9af47c6e65de942c2fe
- size 8380878704

waveform_h5/2014.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a63f5e6d7d5bca552dcc99053753603dfa3109a6a080f8402f843ef688927d4c
- size 12088815344

waveform_h5/2015.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:42be6994ad27eb8aee241f5edfb4ed0ee69aa3460397325cc858224ba9dd9721
- size 8536767520

waveform_h5/2016.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6e706aefd38170da41196974fc92e457d0dc56948a63640a37cea4a86a297843
- size 9287201016

waveform_h5/2017.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:e20f8e5a3f5ec8927e5d44e722987461ef08c9ceb33ab982038528e9000d5323
- size 8627205152

waveform_h5/2018.h5 DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ad6e83734ff1e24ad91b17cb6656766861ae9fb30413948579d762acc092e66a
- size 7158598240