lukawskikacper committed on
Commit
8be024f
1 Parent(s): 901bf7c

First version of the dataset

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. README.md +98 -0
  2. buoy-python/README.md +9 -0
  3. buoy-python/StandardizeAndClean.py +168 -0
  4. buoy-python/YearsLessThan2StdDev.py +64 -0
  5. buoy-python/ZScore2023.py +58 -0
  6. buoy-python/requirements.txt +8 -0
  7. full_2023_remove_flawed.parquet +3 -0
  8. full_2023_remove_flawed_rows.csv +3 -0
  9. full_years_remove_flawed.parquet +3 -0
  10. full_years_remove_flawed_rows.csv +3 -0
  11. orig_downloads/2023/42002_Apr.txt +0 -0
  12. orig_downloads/2023/42002_Aug.txt +0 -0
  13. orig_downloads/2023/42002_Feb.txt +0 -0
  14. orig_downloads/2023/42002_Jan.txt +0 -0
  15. orig_downloads/2023/42002_Jul.txt +0 -0
  16. orig_downloads/2023/42002_Jun.txt +0 -0
  17. orig_downloads/2023/42002_Mar.txt +0 -0
  18. orig_downloads/2023/42002_May.txt +0 -0
  19. orig_downloads/2023/42002_Sep.txt +0 -0
  20. orig_downloads/2023/csv/42002_Apr.csv +0 -0
  21. orig_downloads/2023/csv/42002_Aug.csv +0 -0
  22. orig_downloads/2023/csv/42002_Feb.csv +0 -0
  23. orig_downloads/2023/csv/42002_Jan.csv +0 -0
  24. orig_downloads/2023/csv/42002_Jul.csv +0 -0
  25. orig_downloads/2023/csv/42002_Jun.csv +0 -0
  26. orig_downloads/2023/csv/42002_Mar.csv +0 -0
  27. orig_downloads/2023/csv/42002_May.csv +0 -0
  28. orig_downloads/2023/csv/42002_Sep.csv +0 -0
  29. orig_downloads/2023/csv/fixed_42002_Apr.csv +0 -0
  30. orig_downloads/2023/csv/fixed_42002_Aug.csv +0 -0
  31. orig_downloads/2023/csv/fixed_42002_Feb.csv +0 -0
  32. orig_downloads/2023/csv/fixed_42002_Jan.csv +0 -0
  33. orig_downloads/2023/csv/fixed_42002_Jul.csv +0 -0
  34. orig_downloads/2023/csv/fixed_42002_Jun.csv +0 -0
  35. orig_downloads/2023/csv/fixed_42002_Mar.csv +0 -0
  36. orig_downloads/2023/csv/fixed_42002_May.csv +0 -0
  37. orig_downloads/2023/csv/fixed_42002_Sep.csv +0 -0
  38. orig_downloads/42002_1980.txt +0 -0
  39. orig_downloads/42002_1981.txt +0 -0
  40. orig_downloads/42002_1982.txt +0 -0
  41. orig_downloads/42002_1983.txt +0 -0
  42. orig_downloads/42002_1984.txt +0 -0
  43. orig_downloads/42002_1985.txt +0 -0
  44. orig_downloads/42002_1986.txt +0 -0
  45. orig_downloads/42002_1987.txt +0 -0
  46. orig_downloads/42002_1988.txt +0 -0
  47. orig_downloads/42002_1989.txt +0 -0
  48. orig_downloads/42002_1990.txt +0 -0
  49. orig_downloads/42002_1991.txt +0 -0
  50. orig_downloads/42002_1992.txt +0 -0
README.md ADDED
@@ -0,0 +1,98 @@
+ ---
+ language:
+ - en
+ license:
+ - cc-by-4.0
+ multilinguality:
+ - monolingual
+ pretty_name: NOAA Buoy meteorological data
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - original
+ tags: []
+ task_categories:
+ - feature-extraction
+ - tabular-classification
+ - time-series-forecasting
+ ---
+
+ # Dataset Card for NOAA Buoy Meteorological Data
+
+ NOAA buoy data was downloaded, processed, and cleaned for tasks pertaining to tabular data. The data consists of meteorological measurements. There are two datasets:
+
+ 1. From 1980 through 2022 (denoted with "years" in file names)
+ 1. From Jan 2023 through the end of Sept 2023 (denoted with "2023" in file names)
+
+ The original intended use is for anomaly detection in tabular data.
+
+ ## Dataset Details
+
+ ### Dataset Description
+
+ This dataset contains weather buoy data to be used in tabular embedding scenarios.
+ Buoy 42002 was chosen because it had many years of historical data and was still actively collecting information.
+
+ Here are the buoy's station page and its historical data page:
+ https://www.ndbc.noaa.gov/station_page.php?station=42002
+ https://www.ndbc.noaa.gov/station_history.php?station=42002
+
+ Only standard meteorological data and ocean data were downloaded. Downloads start at 1980, the first full year of collecting wave information.
+
+ ### Data Fields
+
+ | Field | Unit |
+ |-------|------|
+ | TSTMP | timestamp |
+ | #YY   | #yr  |
+ | MM    | mo   |
+ | DD    | dy   |
+ | hh    | hr   |
+ | mm    | mn   |
+ | WDIR  | degT |
+ | WSPD  | m/s  |
+ | GST   | m/s  |
+ | WVHT  | m    |
+ | DPD   | sec  |
+ | APD   | sec  |
+ | MWD   | degT |
+ | PRES  | hPa  |
+ | ATMP  | degC |
+ | WTMP  | degC |
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The original data has inconsistent delimiters, different and inappropriate missing-data values, and was not harmonized across years. The 2023 data was edited in the same way as the pre-2023 data
+ but kept separate to allow for separate training and inference sets.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+ Data downloaded on Oct 12, 2023.
+
+ All code used to transform the data can be found in the buoy-python directory. This is NOT production code; the focus was on correct results and on minimizing time spent writing cleaning code.
+
+ 1. #YY, MM, DD, hh, mm were concatenated to create a timestamp and stored in a new column (see the sketch below).
+ 1. From 1980 until 2005 there was no recording of minutes. Minutes for those years were set to 00.
+ 1. All missing data was set to a blank value rather than a sentinel number.
+ 1. All rows without wave data (a missing value in WVHT and DPD) were removed from all the data sets.
+ 1. Columns DEWP, VIS, and TIDE were removed because of consistently missing values.
+ 1. From 2005 to 2006, wind direction goes from being called WD to WDIR.
+ 1. From 2006 to 2007, the header goes from one line with variable names to two lines, the second line being units.
+
+ These steps were used to create full_2023_remove_flawed_rows (the 2023 months) and full_years_remove_flawed_rows (the previous data going back to 1980).
+
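+ A minimal sketch of the timestamp construction in step 1, mirroring the strptime call in buoy-python/StandardizeAndClean.py (the row values are illustrative):
+
+ ```python
+ from datetime import datetime
+
+ # Illustrative row after standardization
+ row = {"#YY": "1995", "MM": "06", "DD": "14", "hh": "10", "mm": "00"}
+
+ # Concatenate the date parts and attach the buoy's UTC-5 offset
+ ts = row["#YY"] + "-" + row["MM"] + "-" + row["DD"] + " " + row["hh"] + ":" + row["mm"] + "-" + "-0500"
+ print(datetime.strptime(ts, "%Y-%m-%d %H:%M-%z"))
+ # 1995-06-14 10:00:00-05:00
+ ```
+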
+ Since the original purpose of this data was anomaly detection, the two data sets above received further processing (a condensed pandas sketch follows the list):
+
+ 1. All data values were converted to Z-scores (file named zscore_2023)
+ 1. For 1980 - 2022, all rows with 2 or more fields with Z-scores > 2 were removed from the dataset (file named trimmed_zscores_years)
+
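+ A condensed sketch of the z-score conversion and trimming rule, following buoy-python/YearsLessThan2StdDev.py; the input is the CSV produced above:
+
+ ```python
+ import numpy as np
+ import pandas as pd
+
+ NUMERIC_FIELDS = ["WSPD", "GST", "WVHT", "DPD", "APD", "PRES", "ATMP", "WTMP"]
+
+ df = pd.read_csv("full_years_remove_flawed_rows.csv")
+
+ # Convert each numeric column to z-scores (np.std defaults to ddof=0)
+ for col in NUMERIC_FIELDS:
+     df[col] = (df[col] - np.mean(df[col])) / np.std(df[col])
+
+ # Keep a row only if fewer than 2 of its columns have a z-score above 2
+ trimmed = df[df[NUMERIC_FIELDS].gt(2).sum(axis=1).lt(2)]
+ ```
+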
+ ## Uses
+
+ ### Direct Use
+
+ The primary use is working with tabular data and embeddings, particularly for anomaly detection. A sketch of loading the Parquet files follows.
+
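+ A minimal loading sketch, assuming pandas with a Parquet engine (pyarrow or fastparquet) installed; the file names are the Parquet files in this repository:
+
+ ```python
+ import pandas as pd
+
+ # Historical data (1980-2022), e.g. for training
+ train = pd.read_parquet("full_years_remove_flawed.parquet")
+
+ # 2023 data, kept separate for inference
+ inference = pd.read_parquet("full_2023_remove_flawed.parquet")
+
+ print(train.shape, inference.shape)
+ ```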
buoy-python/README.md ADDED
@@ -0,0 +1,9 @@
+ np.mean
+ https://numpy.org/doc/stable/reference/generated/numpy.mean.html
+
+ np.std
+ https://numpy.org/doc/stable/reference/generated/numpy.std.html
+ (Note: np.std defaults to the population standard deviation, ddof=0, whereas pandas' Series.std defaults to the sample estimate, ddof=1.)
+
+ Var Name: #YY  MM  DD  hh  mm  WDIR  WSPD  GST  WVHT  DPD  APD  MWD   PRES  ATMP  WTMP  DEWP  VIS  TIDE
+ Units:    #yr  mo  dy  hr  mn  degT  m/s   m/s  m     sec  sec  degT  hPa   degC  degC  degC  mi   ft
+
+ DEWP, VIS, & TIDE are always missing, so they were removed from the data set
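+
+ A tiny sketch of that ddof default difference, with illustrative values:
+
+ ```python
+ import numpy as np
+ import pandas as pd
+
+ x = [1.0, 2.0, 3.0, 4.0]
+
+ print(np.std(np.array(x)))  # 1.118..., population std (ddof=0)
+ print(pd.Series(x).std())   # 1.290..., sample std (ddof=1)
+ ```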
buoy-python/StandardizeAndClean.py ADDED
@@ -0,0 +1,168 @@
+ import csv
+ from datetime import datetime
+ from pathlib import Path
+
+ # UGLY - the non-2023 functions should be more generic, given a certain start
+ # location, so that we don't have to repeat logic
+
+ YEARS_LOCATION = "../orig_downloads/csv"
+ LOCATION_2023 = "../orig_downloads/2023/csv"
+
+ YEARS_PATH = Path(YEARS_LOCATION)
+ YEARS_PATH_2023 = Path(LOCATION_2023)
+
+ FINAL_BIG_FILE = "../full_years_remove_flawed_rows.csv"
+ FINAL_BIG_FILE_2023 = "../full_2023_remove_flawed_rows.csv"
+
+ HEADER = "#YY,MM,DD,hh,mm,WDIR,WSPD,GST,WVHT,DPD,APD,MWD,PRES,ATMP,WTMP,DEWP,VIS,TIDE\n"
+ FINAL_HEADER = ["TSTMP", "#YY", "MM", "DD", "hh", "mm", "WDIR", "WSPD", "GST",
+                 "WVHT", "DPD", "APD", "MWD", "PRES", "ATMP", "WTMP"]
+
+
+ # Deal with the differences between the yearly files and get them standardized
+ def standardize():
+     for read_path in YEARS_PATH.rglob('*.csv'):
+         out_file_name = "fixed_" + read_path.name
+         write_path = str(read_path).replace(read_path.name, out_file_name)
+         with open(read_path, newline='') as read_file, open(write_path, 'w', newline='\n') as write_file:
+             # File names look like 42002_YYYY.csv, so the year starts at offset 6
+             year = int(read_path.name[6:10])
+             if year <= 2006:
+                 # Skip the original header line and write the standardized one
+                 read_file.readline()
+                 write_file.write(HEADER)
+                 for line in read_file:
+                     line = line.strip()
+                     if line.endswith(","):
+                         line_array = line[:-1].split(',')
+                     else:
+                         line_array = line.split(',')
+
+                     # Pre-1999 we need to make the year 4 digits
+                     if year <= 1998:
+                         line_array[0] = "19" + line_array[0]
+
+                     # Add TIDE with a missing value of 99.0 for all years pre-2000
+                     if year < 2000:
+                         line_array.append('99.0')
+
+                     # Add 0 in for mm pre-2005 (minutes were not recorded)
+                     if year < 2005:
+                         line_array.insert(4, '0')
+
+                     # Changes are done, write the line
+                     write_file.write(','.join(line_array) + "\n")
+             if year > 2006:
+                 # Remove both header lines (names + units) used from 2007 onwards
+                 read_file.readline()
+                 read_file.readline()
+
+                 # Write the standardized header back and copy the remaining lines
+                 write_file.write(HEADER)
+                 for line in read_file:
+                     line = line.strip()
+                     if line.endswith(","):
+                         line = line[:-1]
+                     write_file.write(line + "\n")
+
+
+ # Now remove the columns we don't want and erase rows with too many missing
+ # values in the columns we care about
+ def winnow_down(big_file_name, read_location):
+     # Sentinel values that need to become missing data (a blank)
+     nine9_0 = {"WVHT", "WSPD", "GST", "DPD", "APD"}
+     nine99_0 = {"ATMP", "WTMP"}
+     nine99 = {"WDIR", "MWD"}
+     if_all_missing = {"DPD", "APD"}
+     remove_me = {"DEWP", "VIS", "TIDE"}
+
+     # Set up the file to write to
+     with open(big_file_name, 'w', newline='') as file:
+         output_csvfile = csv.DictWriter(file, fieldnames=FINAL_HEADER)
+         output_csvfile.writeheader()
+         for read_path in read_location.rglob('fixed_*.csv'):
+             print(read_path)
+             with open(read_path, newline='') as csv_file:
+                 csv_reader = csv.DictReader(csv_file)
+                 for row in csv_reader:
+                     # Check whether we are missing key data - if so, drop the
+                     # row and move along. WSPD, WVHT, and WTMP each count as a
+                     # full strike; DPD and APD count half a strike each, so
+                     # both missing together make one strike.
+                     delete_row = 0.0
+                     if row["WSPD"] == "99.0":
+                         delete_row += 1.0
+                     if row["WVHT"] in ("99.0", "99.00"):
+                         delete_row += 1.0
+                     if row["WTMP"] == "999.0":
+                         delete_row += 1.0
+                     for key in if_all_missing:
+                         if row[key] in ("99.0", "99.00"):
+                             delete_row += 0.5
+
+                     if delete_row >= 2.0:
+                         # Two strikes and you are out; on to the next row
+                         continue
+
+                     # For the rows we keep, convert the sentinels to missing
+                     # (just a blank): WDIR MWD = 999, WSPD GST WVHT DPD APD = 99.0,
+                     # PRES = 9999.0, ATMP WTMP = 999.0
+                     for key in nine99:
+                         if row[key] == '999':
+                             row[key] = ''
+                     for key in nine9_0:
+                         if row[key] in ('99.0', '99.00'):
+                             row[key] = ''
+                     for key in nine99_0:
+                         if row[key] == '999.0':
+                             row[key] = ''
+                     if row["PRES"] == '9999.0':
+                         row["PRES"] = ''
+
+                     # Remove columns DEWP, VIS, TIDE
+                     for key in remove_me:
+                         del row[key]
+
+                     # Finally convert #YY, MM, DD, hh, mm into a timestamp that
+                     # becomes the key. Buoy 42002 is off the Louisiana coast, UTC-5.
+                     timestamp_string = (row["#YY"] + "-" + row["MM"] + "-" + row["DD"]
+                                         + " " + row["hh"] + ":" + row["mm"] + "-" + "-0500")
+                     row["TSTMP"] = datetime.strptime(timestamp_string, "%Y-%m-%d %H:%M-%z")
+
+                     # OK, we are ready to write the cleaned row
+                     output_csvfile.writerow(row)
+
+
+ # Same standardization for the 2023 monthly files, which all use the
+ # post-2007 two-line header
+ def standardize2023():
+     for read_path in YEARS_PATH_2023.rglob('*.csv'):
+         out_file_name = "fixed_" + read_path.name
+         write_path = str(read_path).replace(read_path.name, out_file_name)
+         with open(read_path, newline='') as read_file, open(write_path, 'w', newline='\n') as write_file:
+             # Remove both header lines (names + units)
+             read_file.readline()
+             read_file.readline()
+
+             # Write the standardized header back and copy the remaining lines
+             write_file.write(HEADER)
+             for line in read_file:
+                 line = line.strip()
+                 if line.endswith(","):
+                     line = line[:-1]
+                 write_file.write(line + "\n")
+
+
+ if __name__ == '__main__':
+     print("start")
+     # standardize() and standardize2023() only need to run once per download;
+     # uncomment them when starting from freshly downloaded files.
+     # standardize()
+     winnow_down(FINAL_BIG_FILE, YEARS_PATH)
+     # standardize2023()
+     winnow_down(FINAL_BIG_FILE_2023, YEARS_PATH_2023)
+     print("finished")
buoy-python/YearsLessThan2StdDev.py ADDED
@@ -0,0 +1,64 @@
+ from pathlib import Path
+
+ import numpy as np
+ import pandas as pd
+
+ FULL_DATA_SET_STRING = "../full_years_remove_flawed_rows.csv"
+ FULL_DATA_SET_PATH = Path(FULL_DATA_SET_STRING)
+ OUT_DATASET_STRING = "../trimmed_full_years_for_db.parquet"
+ OUT_DATASET_PATH = Path(OUT_DATASET_STRING)
+ OUT_FULL_DATASET_STRING = "../full_years_remove_flawed.parquet"
+ OUT_FULL_DATASET_PATH = Path(OUT_FULL_DATASET_STRING)
+
+ NUMERIC_FIELDS = ["WSPD", "GST", "WVHT", "DPD", "APD", "PRES", "ATMP", "WTMP"]
+
+
+ def load_data(data_path):
+     print("Loading data")
+     with open(data_path, newline='') as csv_file:
+         loaded_data = pd.read_csv(csv_file)
+
+     print("Writing out the full Parquet file")
+     loaded_data.to_parquet(OUT_FULL_DATASET_PATH)
+
+     # WDIR and MWD are circular (degrees), so take sin() to put them on a
+     # continuous scale before computing statistics
+     print("Applying sin() to the two degree columns")
+     loaded_data["WDIR"] = np.sin(np.deg2rad(loaded_data["WDIR"]))
+     loaded_data["MWD"] = np.sin(np.deg2rad(loaded_data["MWD"]))
+
+     # Calculate mean and std dev for each numeric column and convert the
+     # values to z-scores
+     print("Calculating z-scores")
+     for var in NUMERIC_FIELDS:
+         var_mean = np.mean(loaded_data[var])
+         var_std = np.std(loaded_data[var])
+         loaded_data[var] = (loaded_data[var] - var_mean) / var_std
+
+     print("Finding outlier rows")
+     # For each column, check whether the z-score is larger than 2 with
+     # loaded_data[NUMERIC_FIELDS].gt(2); keep the row only if fewer than 2
+     # columns meet that condition
+     output_data = loaded_data[loaded_data[NUMERIC_FIELDS].gt(2).sum(axis=1).lt(2)]
+
+     print("Exporting to Parquet")
+     # set_index returns a new frame, so the result has to be assigned back
+     output_data = output_data.set_index("TSTMP")
+     output_data.to_parquet(OUT_DATASET_PATH)
+     return output_data
+
+
+ if __name__ == '__main__':
+     print("Start")
+     all_data = load_data(FULL_DATA_SET_PATH)
+     print("finished")
buoy-python/ZScore2023.py ADDED
@@ -0,0 +1,58 @@
+ from pathlib import Path
+
+ import numpy as np
+ import pandas as pd
+
+ FULL_DATA_SET_STRING = "../full_2023_remove_flawed_rows.csv"
+ FULL_DATA_SET_PATH = Path(FULL_DATA_SET_STRING)
+ OUT_DATASET_STRING = "../zscore_2023.parquet"
+ OUT_DATASET_PATH = Path(OUT_DATASET_STRING)
+ OUT_FULL_DATASET_STRING = "../full_2023_remove_flawed.parquet"
+ OUT_FULL_DATASET_PATH = Path(OUT_FULL_DATASET_STRING)
+
+ NUMERIC_FIELDS = ["WSPD", "GST", "WVHT", "DPD", "APD", "PRES", "ATMP", "WTMP"]
+
+
+ def load_data(data_path):
+     print("Loading data")
+     with open(data_path, newline='') as csv_file:
+         loaded_data = pd.read_csv(csv_file)
+
+     print("Writing out the full Parquet file")
+     loaded_data.to_parquet(OUT_FULL_DATASET_PATH)
+
+     # WDIR and MWD are circular (degrees), so take sin() to put them on a
+     # continuous scale before computing statistics
+     print("Applying sin() to the two degree columns")
+     loaded_data["WDIR"] = np.sin(np.deg2rad(loaded_data["WDIR"]))
+     loaded_data["MWD"] = np.sin(np.deg2rad(loaded_data["MWD"]))
+
+     # Calculate mean and std dev for each numeric column and convert the
+     # values to z-scores; unlike the years script, no rows are removed here
+     print("Calculating z-scores")
+     for var in NUMERIC_FIELDS:
+         var_mean = np.mean(loaded_data[var])
+         var_std = np.std(loaded_data[var])
+         loaded_data[var] = (loaded_data[var] - var_mean) / var_std
+
+     print("Exporting to Parquet")
+     # set_index returns a new frame, so the result has to be assigned back
+     loaded_data = loaded_data.set_index("TSTMP")
+     loaded_data.to_parquet(OUT_DATASET_PATH)
+     return loaded_data
+
+
+ if __name__ == '__main__':
+     print("Start")
+     all_data = load_data(FULL_DATA_SET_PATH)
+     print("finished")
buoy-python/requirements.txt ADDED
@@ -0,0 +1,8 @@
+ pip~=21.3.1
+ wheel~=0.37.1
+ pytz~=2023.3.post1
+ numpy~=1.26.0
+ setuptools~=60.2.0
+ pandas~=2.1.1
+ MarkupSafe~=2.1.1
+ python-dateutil~=2.8.2
full_2023_remove_flawed.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9ac086d30766c6a26530bf6f067cd4e9ea2c4066cee3eaf851739d3ba805b718
+ size 225573
full_2023_remove_flawed_rows.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46f800b7108bd4c9e24b36d9e038188f679553d8c0feb37d2d5e90ad24ee13cb
+ size 1173803
full_years_remove_flawed.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b859b371b89f0591319ad85675de2ef1be9a5ecbaaf91887eb7bd0fbe3cce82
+ size 4730863
full_years_remove_flawed_rows.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98043c07483b90f409831eafd758d5f125440118520ec140a02b0c1a95eb3c94
+ size 30354345
The diffs for the following raw data files are too large to render:

orig_downloads/2023/42002_Apr.txt ADDED
orig_downloads/2023/42002_Aug.txt ADDED
orig_downloads/2023/42002_Feb.txt ADDED
orig_downloads/2023/42002_Jan.txt ADDED
orig_downloads/2023/42002_Jul.txt ADDED
orig_downloads/2023/42002_Jun.txt ADDED
orig_downloads/2023/42002_Mar.txt ADDED
orig_downloads/2023/42002_May.txt ADDED
orig_downloads/2023/42002_Sep.txt ADDED
orig_downloads/2023/csv/42002_Apr.csv ADDED
orig_downloads/2023/csv/42002_Aug.csv ADDED
orig_downloads/2023/csv/42002_Feb.csv ADDED
orig_downloads/2023/csv/42002_Jan.csv ADDED
orig_downloads/2023/csv/42002_Jul.csv ADDED
orig_downloads/2023/csv/42002_Jun.csv ADDED
orig_downloads/2023/csv/42002_Mar.csv ADDED
orig_downloads/2023/csv/42002_May.csv ADDED
orig_downloads/2023/csv/42002_Sep.csv ADDED
orig_downloads/2023/csv/fixed_42002_Apr.csv ADDED
orig_downloads/2023/csv/fixed_42002_Aug.csv ADDED
orig_downloads/2023/csv/fixed_42002_Feb.csv ADDED
orig_downloads/2023/csv/fixed_42002_Jan.csv ADDED
orig_downloads/2023/csv/fixed_42002_Jul.csv ADDED
orig_downloads/2023/csv/fixed_42002_Jun.csv ADDED
orig_downloads/2023/csv/fixed_42002_Mar.csv ADDED
orig_downloads/2023/csv/fixed_42002_May.csv ADDED
orig_downloads/2023/csv/fixed_42002_Sep.csv ADDED
orig_downloads/42002_1980.txt ADDED
orig_downloads/42002_1981.txt ADDED
orig_downloads/42002_1982.txt ADDED
orig_downloads/42002_1983.txt ADDED
orig_downloads/42002_1984.txt ADDED
orig_downloads/42002_1985.txt ADDED
orig_downloads/42002_1986.txt ADDED
orig_downloads/42002_1987.txt ADDED
orig_downloads/42002_1988.txt ADDED
orig_downloads/42002_1989.txt ADDED
orig_downloads/42002_1990.txt ADDED
orig_downloads/42002_1991.txt ADDED
orig_downloads/42002_1992.txt ADDED