zhuwq0 and kylewhy committed de7f72c (parent: d489a12)

update EQNet event example generator and readme (#4)

- update event examples and readme (29c9ad14402ae5818e14f7c1317b7825d3952c83)

Co-authored-by: kylewhy <kylewhy@users.noreply.huggingface.co>

Files changed (2):
  1. README.md +78 -108
  2. quakeflow_nc.py +102 -30
README.md CHANGED
@@ -5,7 +5,7 @@ license: mit
  # Quakeflow_NC
 
  ## Introduction
- This dataset is part of the data from [NCEDC (Northern California Earthquake Data Center)](https://ncedc.org/index.html) and is organized as several HDF5 files. The dataset structure is shown below: (File [ncedc_event_dataset_000.h5.txt](./ncedc_event_dataset_000.h5.txt) shows the structure of the first shard of the dataset, and you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/))
 
  Cite the NCEDC:
  "NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC."
@@ -14,63 +14,45 @@ Acknowledge the NCEDC:
  "Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC."
 
  ```
- Group: / len:10000
- |- Group: /nc100012 len:5
- | |-* begin_time = 1987-05-08T00:15:48.890
- | |-* depth_km = 7.04
- | |-* end_time = 1987-05-08T00:17:48.890
- | |-* event_id = nc100012
- | |-* event_time = 1987-05-08T00:16:14.700
- | |-* event_time_index = 2581
- | |-* latitude = 37.5423
- | |-* longitude = -118.4412
- | |-* magnitude = 1.1
  | |-* magnitude_type = D
- | |-* num_stations = 5
- | |- Dataset: /nc100012/NC.MRS..EH (shape:(3, 12000))
  | | |- (dtype=float32)
- | | | |-* azimuth = 265.0
- | | | |-* component = ['Z']
- | | | |-* distance_km = 39.1
  | | | |-* dt_s = 0.01
- | | | |-* elevation_m = 3680.0
- | | | |-* emergence_angle = 93.0
- | | | |-* event_id = ['nc100012' 'nc100012']
- | | | |-* latitude = 37.5107
  | | | |-* location =
- | | | |-* longitude = -118.8822
  | | | |-* network = NC
- | | | |-* phase_index = [3274 3802]
  | | | |-* phase_polarity = ['U' 'N']
- | | | |-* phase_remark = ['IP' 'S']
- | | | |-* phase_score = [1 1]
- | | | |-* phase_time = ['1987-05-08T00:16:21.630' '1987-05-08T00:16:26.920']
  | | | |-* phase_type = ['P' 'S']
- | | | |-* snr = [0. 0. 1.98844361]
- | | | |-* station = MRS
  | | | |-* unit = 1e-6m/s
- | |- Dataset: /nc100012/NN.BEN.N1.EH (shape:(3, 12000))
  | | |- (dtype=float32)
- | | | |-* azimuth = 329.0
- | | | |-* component = ['Z']
- | | | |-* distance_km = 22.5
- | | | |-* dt_s = 0.01
- | | | |-* elevation_m = 2476.0
- | | | |-* emergence_angle = 102.0
- | | | |-* event_id = ['nc100012' 'nc100012']
- | | | |-* latitude = 37.7154
- | | | |-* location = N1
- | | | |-* longitude = -118.5741
- | | | |-* network = NN
- | | | |-* phase_index = [3010 3330]
- | | | |-* phase_polarity = ['U' 'N']
- | | | |-* phase_remark = ['IP' 'S']
- | | | |-* phase_score = [0 0]
- | | | |-* phase_time = ['1987-05-08T00:16:18.990' '1987-05-08T00:16:22.190']
- | | | |-* phase_type = ['P' 'S']
- | | | |-* snr = [0. 0. 7.31356192]
- | | | |-* station = BEN
- | | | |-* unit = 1e-6m/s
  ......
  ```
 
@@ -79,6 +61,7 @@ Group: / len:10000
  ### Requirements
  - datasets
  - h5py
  - torch (for PyTorch)
 
  ### Usage
@@ -90,57 +73,57 @@ import torch
  from torch.utils.data import Dataset, IterableDataset, DataLoader
  from datasets import load_dataset
  ```
- We have 2 configurations for the dataset: `NCEDC` and `NCEDC_full_size`. They all return event-based samples one by one. But `NCEDC` returns samples with 10 stations each, while `NCEDC_full_size` return samples with stations same as the original data.
-
- The sample of `NCEDC` is a dictionary with the following keys:
- - `waveform`: the waveform with shape `(3, nt, n_sta)`, the first dimension is 3 components, the second dimension is the number of time samples, the third dimension is the number of stations
- - `phase_pick`: the probability of the phase pick with shape `(3, nt, n_sta)`, the first dimension is noise, P and S
  - `event_location`: the event location with shape `(4,)`, including latitude, longitude, depth and time
- - `station_location`: the station location with shape `(n_sta, 3)`, the first dimension is latitude, longitude and depth
 
- Because Huggingface datasets only support dynamic size on first dimension, so the sample of `NCEDC_full_size` is a dictionary with the following keys:
- - `waveform`: the waveform with shape `(n_sta, 3, nt)`,
- - `phase_pick`: the probability of the phase pick with shape `(n_sta, 3, nt)`
- - `event_location`: the event location with shape `(4,)`
- - `station_location`: the station location with shape `(n_sta, 3)`, the first dimension is latitude, longitude and depth
 
- The default configuration is `NCEDC`. You can specify the configuration by argument `name`. For example:
  ```python
  # load dataset
  # ATTENTION: Streaming(Iterable Dataset) is difficult to support because of the feature of HDF5
  # So we recommend to directly load the dataset and convert it into iterable later
  # The dataset is very large, so you need to wait for some time at the first time
 
- # to load "NCEDC"
- quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="train")
  # or
- quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC", split="train")
 
- # to load "NCEDC_full_size"
- quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC_full_size", split="train")
  ```
 
- If you want to use the first several shards of the dataset, you can download the script `quakeflow_nc.py` and change the code as below:
- ```python
- # change the 37 to the number of shards you want
- _URLS = {
-     "NCEDC": [f"{_REPO}/ncedc_event_dataset_{i:03d}.h5" for i in range(37)]
- }
- ```
- Then you can use the dataset like this (Don't forget to specify the argument `name`):
- ```python
- # don't forget to specify the script path
- quakeflow_nc = datasets.load_dataset("path_to_script/quakeflow_nc.py", split="train")
- quakeflow_nc
- ```
-
- #### Usage for `NCEDC`
  Then you can change the dataset into PyTorch format iterable dataset, and view the first sample:
  ```python
- quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="NCEDC", split="train")
- quakeflow_nc = quakeflow_nc.to_iterable_dataset()
  # because add examples formatting to get tensors when using the "torch" format
- # has not been implemented yet, we need to manually add the formatting
  quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
  try:
      isinstance(quakeflow_nc, torch.utils.data.IterableDataset)
@@ -155,7 +138,7 @@ for example in quakeflow_nc:
      print(key, example[key].shape, example[key].dtype)
      break
 
- dataloader = DataLoader(quakeflow_nc, batch_size=4)
 
  for batch in dataloader:
      print("\nDataloader test\n")
@@ -165,48 +148,35 @@ for batch in dataloader:
      break
  ```
 
- #### Usage for `NCEDC_full_size`
 
  Then you can change the dataset into PyTorch format dataset, and view the first sample (Don't forget to reorder the keys):
  ```python
- quakeflow_nc = datasets.load_dataset("AI4EPS/quakeflow_nc", split="train", name="NCEDC_full_size")
 
  # for PyTorch DataLoader, we need to divide the dataset into several shards
  num_workers=4
  quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
- # because add examples formatting to get tensors when using the "torch" format
- # has not been implemented yet, we need to manually add the formatting
  quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
- def reorder_keys(example):
-     example["waveform"] = example["waveform"].permute(1,2,0).contiguous()
-     example["phase_pick"] = example["phase_pick"].permute(1,2,0).contiguous()
-     return example
-
- quakeflow_nc = quakeflow_nc.map(reorder_keys)
-
  try:
      isinstance(quakeflow_nc, torch.utils.data.IterableDataset)
  except:
      raise Exception("quakeflow_nc is not an IterableDataset")
 
- data_loader = DataLoader(
-     quakeflow_nc,
-     batch_size=1,
-     num_workers=num_workers,
- )
-
- for batch in quakeflow_nc:
      print("\nIterable test\n")
-     print(batch.keys())
-     for key in batch.keys():
-         print(key, batch[key].shape, batch[key].dtype)
      break
 
- for batch in data_loader:
      print("\nDataloader test\n")
      print(batch.keys())
      for key in batch.keys():
-         batch[key] = batch[key].squeeze(0)
          print(key, batch[key].shape, batch[key].dtype)
      break
  ```
 
  # Quakeflow_NC
 
  ## Introduction
+ This dataset is part of the data (1970-2020) from [NCEDC (Northern California Earthquake Data Center)](https://ncedc.org/index.html) and is organized as several HDF5 files. The dataset structure is shown below, and you can find more information about the format at [AI4EPS](https://ai4eps.github.io/homepage/ml4earth/seismic_event_format1/)
 
  Cite the NCEDC:
  "NCEDC (2014), Northern California Earthquake Data Center. UC Berkeley Seismological Laboratory. Dataset. doi:10.7932/NCEDC."
  "Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC."
 
  ```
+ Group: / len:16227
+ |- Group: /nc71111584 len:2
+ | |-* begin_time = 2020-01-02T07:01:19.620
+ | |-* depth_km = 3.69
+ | |-* end_time = 2020-01-02T07:03:19.620
+ | |-* event_id = nc71111584
+ | |-* event_time = 2020-01-02T07:01:48.240
+ | |-* event_time_index = 2862
+ | |-* latitude = 37.6545
+ | |-* longitude = -118.8798
+ | |-* magnitude = -0.15
  | |-* magnitude_type = D
+ | |-* num_stations = 2
+ | |- Dataset: /nc71111584/NC.MCB..HH (shape:(3, 12000))
  | | |- (dtype=float32)
+ | | | |-* azimuth = 233.0
+ | | | |-* component = ['E' 'N' 'Z']
+ | | | |-* distance_km = 1.9
  | | | |-* dt_s = 0.01
+ | | | |-* elevation_m = 2391.0
+ | | | |-* emergence_angle = 159.0
+ | | | |-* event_id = ['nc71111584' 'nc71111584']
+ | | | |-* latitude = 37.6444
  | | | |-* location =
+ | | | |-* longitude = -118.8968
  | | | |-* network = NC
+ | | | |-* phase_index = [3000 3101]
  | | | |-* phase_polarity = ['U' 'N']
+ | | | |-* phase_remark = ['IP' 'ES']
+ | | | |-* phase_score = [1 2]
+ | | | |-* phase_time = ['2020-01-02T07:01:49.620' '2020-01-02T07:01:50.630']
  | | | |-* phase_type = ['P' 'S']
+ | | | |-* snr = [2.82143 3.055604 1.8412642]
+ | | | |-* station = MCB
  | | | |-* unit = 1e-6m/s
+ | |- Dataset: /nc71111584/NC.MCB..HN (shape:(3, 12000))
  | | |- (dtype=float32)
+ | | | |-* azimuth = 233.0
+ | | | |-* component = ['E' 'N' 'Z']
  ......
  ```
 
  ### Requirements
  - datasets
  - h5py
+ - fsspec
  - torch (for PyTorch)
 
  ### Usage
  from torch.utils.data import Dataset, IterableDataset, DataLoader
  from datasets import load_dataset
  ```
+ We have 6 configurations for the dataset:
+ - "station"
+ - "event"
+ - "station_train"
+ - "event_train"
+ - "station_test"
+ - "event_test"
+
+ "station" yields station-based samples one by one, while "event" yields event-based samples one by one. Configurations without a suffix cover the full dataset, while those with the "_train" or "_test" suffix contain only the corresponding split: the train split covers 1970-2019, and the test split covers 2020.
+
+ The sample of `station` is a dictionary with the following keys:
+ - `data`: the waveform with shape `(3, nt)`, the default time length is 8192
+ - `phase_pick`: the probability of the phase pick with shape `(3, nt)`, the first dimension is noise, P and S
  - `event_location`: the event location with shape `(4,)`, including latitude, longitude, depth and time
+ - `station_location`: the station location with shape `(3,)`, including latitude, longitude and depth
 
+ The sample of `event` is a dictionary with the following keys:
+ - `data`: the waveform with shape `(n_station, 3, nt)`, the default time length is 8192
+ - `phase_pick`: the probability of the phase pick with shape `(n_station, 3, nt)`, the first dimension is noise, P and S
+ - `event_center`: the probability of the event time with shape `(n_station, feature_nt)`, the default feature time length is 512
+ - `event_location`: the space-time coordinates of the event with shape `(n_station, 4, feature_nt)`
+ - `event_location_mask`: the probability mask of the event time with shape `(n_station, feature_nt)`
+ - `station_location`: the space coordinates of the station with shape `(n_station, 3)`, including latitude, longitude and depth
 
+ The default configuration is `station_test`. You can specify the configuration with the `name` argument. For example:
  ```python
  # load dataset
  # ATTENTION: streaming (IterableDataset) is difficult to support because of the HDF5 format,
  # so we recommend loading the dataset directly and converting it into an iterable later
  # the dataset is very large, so the first load may take a while
 
+ # to load "station_test" with the test split
+ quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", split="test")
  # or
+ quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
 
+ # to load "event" with the train split
+ quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="event", split="train")
  ```
 
+ #### Usage for `station`
  Then you can convert the dataset into a PyTorch-style iterable dataset and view the first sample:
  ```python
+ quakeflow_nc = load_dataset("AI4EPS/quakeflow_nc", name="station_test", split="test")
+ # for PyTorch DataLoader, we need to divide the dataset into several shards
+ num_workers=4
+ quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
  # because example formatting to get tensors with the "torch" format
+ # has not been implemented yet for iterable datasets, we need to add the formatting manually
+ # if you want to use the dataset directly, just use
+ # quakeflow_nc.with_format("torch")
  quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
  try:
      isinstance(quakeflow_nc, torch.utils.data.IterableDataset)
      print(key, example[key].shape, example[key].dtype)
      break
 
+ dataloader = DataLoader(quakeflow_nc, batch_size=4, num_workers=num_workers)
 
  for batch in dataloader:
      print("\nDataloader test\n")
      break
  ```
 
+ #### Usage for `event`
 
  Then you can convert the dataset into a PyTorch-style dataset and view the first sample:
  ```python
+ quakeflow_nc = datasets.load_dataset("AI4EPS/quakeflow_nc", split="test", name="event_test")
 
  # for PyTorch DataLoader, we need to divide the dataset into several shards
  num_workers=4
  quakeflow_nc = quakeflow_nc.to_iterable_dataset(num_shards=num_workers)
  quakeflow_nc = quakeflow_nc.map(lambda x: {key: torch.from_numpy(np.array(value, dtype=np.float32)) for key, value in x.items()})
  try:
      isinstance(quakeflow_nc, torch.utils.data.IterableDataset)
  except:
      raise Exception("quakeflow_nc is not an IterableDataset")
 
+ # print the first sample of the iterable dataset
+ for example in quakeflow_nc:
      print("\nIterable test\n")
+     print(example.keys())
+     for key in example.keys():
+         print(key, example[key].shape, example[key].dtype)
      break
 
+ dataloader = DataLoader(quakeflow_nc, batch_size=1, num_workers=num_workers)
+
+ for batch in dataloader:
      print("\nDataloader test\n")
      print(batch.keys())
      for key in batch.keys():
          print(key, batch[key].shape, batch[key].dtype)
      break
  ```
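The `phase_pick` target described in the README (channel 0 = noise, 1 = P, 2 = S) is built from the pick indices with a Gaussian-shaped label function (`generate_label` in `quakeflow_nc.py`, not shown in full here). A minimal self-contained sketch of the idea, assuming a Gaussian bump per pick and noise as the complement of P and S; the function names and the width parameter are illustrative, not the repository's exact implementation:

```python
import numpy as np

def gaussian_picks(picks, nt, width=30.0):
    # one Gaussian bump per pick index; width is an assumed label width in samples
    t = np.arange(nt)
    sigma = width / 6.0
    return np.exp(-((t - np.asarray(picks, dtype=float)[:, None]) ** 2) / (2 * sigma**2)).sum(axis=0)

def make_phase_pick(p_picks, s_picks, nt):
    # channel 0 = noise, 1 = P, 2 = S; noise is the clipped complement of P and S
    label = np.zeros((3, nt), dtype=np.float32)
    label[1] = np.clip(gaussian_picks(p_picks, nt), 0.0, 1.0)
    label[2] = np.clip(gaussian_picks(s_picks, nt), 0.0, 1.0)
    label[0] = np.clip(1.0 - label[1] - label[2], 0.0, 1.0)
    return label

# pick indices taken from the example event in the tree above (phase_index = [3000 3101])
label = make_phase_pick([3000], [3101], nt=8192)
print(label.shape)
```

Each pick produces a probability that peaks at 1 at its sample index, so the three channels stay in [0, 1] and sum to roughly 1 away from overlapping picks.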
quakeflow_nc.py CHANGED
@@ -153,25 +153,30 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
      or (self.config.name == "station_train")
      or (self.config.name == "station_test")
  ):
-     features = datasets.Features(
          {
-             "waveform": datasets.Array2D(shape=(3, self.nt), dtype="float32"),
-             "phase_pick": datasets.Array2D(shape=(3, self.nt), dtype="float32"),
              "event_location": datasets.Sequence(datasets.Value("float32")),
              "station_location": datasets.Sequence(datasets.Value("float32")),
-         }
-     )
-
- elif (self.config.name == "event") or (self.config.name == "event_train") or (self.config.name == "event_test"):
-     features = datasets.Features(
          {
-             "waveform": datasets.Array3D(shape=(None, 3, self.nt), dtype="float32"),
-             "phase_pick": datasets.Array3D(shape=(None, 3, self.nt), dtype="float32"),
-             "event_location": datasets.Sequence(datasets.Value("float32")),
              "station_location": datasets.Array2D(shape=(None, 3), dtype="float32"),
          }
      )
-
  return datasets.DatasetInfo(
      # This is the description that will appear on the datasets page.
      description=_DESCRIPTION,
@@ -262,9 +267,9 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
      attrs["depth_km"],
      attrs["event_time_index"],
  ]
-
  for i, sta_id in enumerate(station_ids):
-     waveforms[:, : self.nt] = event[sta_id][:, : self.nt]
      # waveforms[:, : self.nt] = event[sta_id][: self.nt, :].T
      attrs = event[sta_id].attrs
      p_picks = attrs["phase_index"][attrs["phase_type"] == "P"]
@@ -273,44 +278,111 @@ class QuakeFlow_NC(datasets.GeneratorBasedBuilder):
      station_location = [attrs["longitude"], attrs["latitude"], -attrs["elevation_m"] / 1e3]
 
      yield f"{event_id}/{sta_id}", {
-         "waveform": torch.from_numpy(waveforms).float(),
          "phase_pick": torch.from_numpy(phase_pick).float(),
          "event_location": torch.from_numpy(np.array(event_location)).float(),
          "station_location": torch.from_numpy(np.array(station_location)).float(),
      }
 
  elif (
      (self.config.name == "event")
      or (self.config.name == "event_train")
      or (self.config.name == "event_test")
  ):
      waveforms = np.zeros([len(station_ids), 3, self.nt], dtype="float32")
      phase_pick = np.zeros_like(waveforms)
-     attrs = event.attrs
-     event_location = [
-         attrs["longitude"],
-         attrs["latitude"],
-         attrs["depth_km"],
-         attrs["event_time_index"],
-     ]
-     station_location = []
 
      for i, sta_id in enumerate(station_ids):
-         waveforms[i, :, : self.nt] = event[sta_id][:, : self.nt]
-         # waveforms[i, :, : self.nt] = event[sta_id][: self.nt, :].T
          attrs = event[sta_id].attrs
          p_picks = attrs["phase_index"][attrs["phase_type"] == "P"]
          s_picks = attrs["phase_index"][attrs["phase_type"] == "S"]
          phase_pick[i, :, :] = generate_label([p_picks, s_picks], nt=self.nt)
-         station_location.append(
-             [attrs["longitude"], attrs["latitude"], -attrs["elevation_m"] / 1e3]
          )
 
      yield event_id, {
-         "waveform": torch.from_numpy(waveforms).float(),
          "phase_pick": torch.from_numpy(phase_pick).float(),
-         "event_location": torch.from_numpy(np.array(event_location)).float(),
-         "station_location": torch.from_numpy(np.array(station_location)).float(),
      }
 
      or (self.config.name == "station_train")
      or (self.config.name == "station_test")
  ):
+     features = datasets.Features(
          {
+             "data": datasets.Array2D(shape=(3, self.nt), dtype='float32'),
+             "phase_pick": datasets.Array2D(shape=(3, self.nt), dtype='float32'),
              "event_location": datasets.Sequence(datasets.Value("float32")),
              "station_location": datasets.Sequence(datasets.Value("float32")),
+         })
+
+ elif (
+     (self.config.name == "event")
+     or (self.config.name == "event_train")
+     or (self.config.name == "event_test")
+ ):
+     features = datasets.Features(
          {
+             "data": datasets.Array3D(shape=(None, 3, self.nt), dtype='float32'),
+             "phase_pick": datasets.Array3D(shape=(None, 3, self.nt), dtype='float32'),
+             "event_center": datasets.Array2D(shape=(None, self.feature_nt), dtype='float32'),
+             "event_location": datasets.Array3D(shape=(None, 4, self.feature_nt), dtype='float32'),
+             "event_location_mask": datasets.Array2D(shape=(None, self.feature_nt), dtype='float32'),
              "station_location": datasets.Array2D(shape=(None, 3), dtype="float32"),
          }
      )
+
  return datasets.DatasetInfo(
      # This is the description that will appear on the datasets page.
      description=_DESCRIPTION,
      attrs["depth_km"],
      attrs["event_time_index"],
  ]
+
  for i, sta_id in enumerate(station_ids):
+     waveforms[:, : self.nt] = event[sta_id][:, :self.nt]
      # waveforms[:, : self.nt] = event[sta_id][: self.nt, :].T
      attrs = event[sta_id].attrs
      p_picks = attrs["phase_index"][attrs["phase_type"] == "P"]
      station_location = [attrs["longitude"], attrs["latitude"], -attrs["elevation_m"] / 1e3]
 
      yield f"{event_id}/{sta_id}", {
+         "data": torch.from_numpy(waveforms).float(),
          "phase_pick": torch.from_numpy(phase_pick).float(),
          "event_location": torch.from_numpy(np.array(event_location)).float(),
          "station_location": torch.from_numpy(np.array(station_location)).float(),
      }
 
+
  elif (
      (self.config.name == "event")
      or (self.config.name == "event_train")
      or (self.config.name == "event_test")
  ):
+     event_attrs = event.attrs
+
+     # skip events that contain a station whose P arrival equals its S arrival
+     is_sick = False
+     for sta_id in station_ids:
+         attrs = event[sta_id].attrs
+         if attrs["phase_index"][attrs["phase_type"] == "P"] == attrs["phase_index"][attrs["phase_type"] == "S"]:
+             is_sick = True
+             break
+     if is_sick:
+         continue
+
      waveforms = np.zeros([len(station_ids), 3, self.nt], dtype="float32")
      phase_pick = np.zeros_like(waveforms)
+     event_center = np.zeros([len(station_ids), self.nt])
+     event_location = np.zeros([len(station_ids), 4, self.nt])
+     event_location_mask = np.zeros([len(station_ids), self.nt])
+     station_location = np.zeros([len(station_ids), 3])
 
      for i, sta_id in enumerate(station_ids):
+         # trace_id = event_id + "/" + sta_id
+         waveforms[i, :, :] = event[sta_id][:, :self.nt]
          attrs = event[sta_id].attrs
          p_picks = attrs["phase_index"][attrs["phase_type"] == "P"]
          s_picks = attrs["phase_index"][attrs["phase_type"] == "S"]
          phase_pick[i, :, :] = generate_label([p_picks, s_picks], nt=self.nt)
+
+         ## TODO: how to deal with multiple phases
+         # center = (attrs["phase_index"][::2] + attrs["phase_index"][1::2])/2.0
+         ## assuming only one event with both P and S picks
+         c0 = ((p_picks) + (s_picks)) / 2.0  # phase center
+         c0_width = ((s_picks - p_picks) * self.sampling_rate / 200.0).max() if p_picks != s_picks else 50
+         dx = round(
+             (event_attrs["longitude"] - attrs["longitude"])
+             * np.cos(np.radians(event_attrs["latitude"]))
+             * self.degree2km,
+             2,
          )
+         dy = round(
+             (event_attrs["latitude"] - attrs["latitude"])
+             * self.degree2km,
+             2,
+         )
+         dz = round(
+             event_attrs["depth_km"] + attrs["elevation_m"] / 1e3,
+             2,
+         )
+
+         event_center[i, :] = generate_label(
+             [
+                 # [c0 / self.feature_scale],
+                 c0,
+             ],
+             label_width=[
+                 c0_width,
+             ],
+             # label_width=[
+             #     10,
+             # ],
+             # nt=self.feature_nt,
+             nt=self.nt,
+         )[1, :]
+         mask = event_center[i, :] >= 0.5
+         event_location[i, 0, :] = (
+             np.arange(self.nt) - event_attrs["event_time_index"]
+         ) / self.sampling_rate
+         # event_location[0, :, i] = (np.arange(self.feature_nt) - 3000 / self.feature_scale) / self.sampling_rate
+         # print(event_location[i, 1:, mask].shape, event_location.shape, event_location[i][1:, mask].shape)
+         event_location[i][1:, mask] = np.array([dx, dy, dz])[:, np.newaxis]
+         event_location_mask[i, :] = mask
+
+         ## station location
+         station_location[i, 0] = round(
+             attrs["longitude"]
+             * np.cos(np.radians(attrs["latitude"]))
+             * self.degree2km,
+             2,
+         )
+         station_location[i, 1] = round(attrs["latitude"] * self.degree2km, 2)
+         station_location[i, 2] = round(-attrs["elevation_m"] / 1e3, 2)
+
+     std = np.std(waveforms, axis=1, keepdims=True)
+     std[std == 0] = 1.0
+     waveforms = (waveforms - np.mean(waveforms, axis=1, keepdims=True)) / std
+     waveforms = waveforms.astype(np.float32)
 
      yield event_id, {
+         "data": torch.from_numpy(waveforms).float(),
          "phase_pick": torch.from_numpy(phase_pick).float(),
+         "event_center": torch.from_numpy(event_center[:, ::self.feature_scale]).float(),
+         "event_location": torch.from_numpy(event_location[:, :, ::self.feature_scale]).float(),
+         "event_location_mask": torch.from_numpy(event_location_mask[:, ::self.feature_scale]).float(),
+         "station_location": torch.from_numpy(station_location).float(),
      }
 
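The `dx`/`dy`/`dz` terms added in the generator encode the event's offset from each station in kilometres, with a cos(latitude) correction for the east-west direction and station elevation folded into depth. A standalone sketch of the same arithmetic; the `DEGREE2KM` constant is an assumed value (the script's `self.degree2km` attribute is not shown in this diff) and the helper name is hypothetical:

```python
import numpy as np

DEGREE2KM = 111.19  # assumed km per degree of latitude (self.degree2km is not shown in this diff)

def event_station_offset_km(ev_lon, ev_lat, ev_depth_km, sta_lon, sta_lat, sta_elev_m):
    # east-west degrees shrink with cos(latitude); depth is positive downward,
    # so station elevation enters with the opposite sign, as in dz above
    dx = round((ev_lon - sta_lon) * np.cos(np.radians(ev_lat)) * DEGREE2KM, 2)
    dy = round((ev_lat - sta_lat) * DEGREE2KM, 2)
    dz = round(ev_depth_km + sta_elev_m / 1e3, 2)
    return dx, dy, dz

# values from the example event nc71111584 / station NC.MCB in the README tree
print(event_station_offset_km(-118.8798, 37.6545, 3.69, -118.8968, 37.6444, 2391.0))
```

These per-station offsets are what get written into channels 1-3 of `event_location` wherever the `event_center` probability exceeds 0.5.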