Regarding putting unzipped data on Amazon Open Data
Hi - I have been using this dataset along with two other datasets. To make it easier to use, I converted all the files to .npy, and I save everything from .meta into one parquet entry and everything from metadata.json into another parquet entry. I am using this together with other datasets, so I had a question:
For ease of use, and to avoid having to download the .gz files every time on different systems, I am thinking of uploading my files to Amazon Open Data. Is it okay if I refer users back to your GitHub page for the documentation of the data? Documentation is required on Amazon Open Data, and since you curated the dataset, I thought this would be the easiest way to give correct attribution. I will add the code for accessing the data myself.
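For reference, the conversion I did looks roughly like this (a minimal sketch, not the exact code I ran; the root path and output names are just examples, and the .meta files could be collected into a second table the same way):

```python
# Sketch: convert each all_bands.tif to .npy and collect metadata.json files
# into a single parquet table. Assumes the standard SSL4EO-L layout of
# <root>/<location>/<scene>/all_bands.tif with metadata.json alongside it.
import glob
import json
import os

import numpy as np
import pandas as pd
import rasterio

root = "ssl4eo_l_oli_sr"          # assumed dataset root
out_dir = "ssl4eo_l_oli_sr_npy"   # example output directory
os.makedirs(out_dir, exist_ok=True)

records = []
for tif in sorted(glob.glob(os.path.join(root, "*", "*", "all_bands.tif"))):
    scene_dir = os.path.dirname(tif)
    name = "_".join(scene_dir.split(os.sep)[-2:])  # e.g. 0947063_LC08_...
    with rasterio.open(tif) as src:
        np.save(os.path.join(out_dir, name + ".npy"), src.read())
    with open(os.path.join(scene_dir, "metadata.json")) as f:
        records.append({"scene": name, "metadata_json": json.dumps(json.load(f))})

pd.DataFrame(records).to_parquet(os.path.join(out_dir, "metadata.parquet"))
```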
Hi Crinistad, glad you're finding SSL4EO-L useful!
We would be very happy if you would link to our paper for a documentation source and ask people who use your version of the dataset to cite our paper.
Of course, the dataset is released under CC0, so technically, you aren't even legally obligated to do that.
Yeah - I think it's just creating indexes via the parquet files.
"We would be very happy if you would link to our paper for a documentation source and ask people who use your version of the dataset to cite our paper." - Thank you for agreeing - I will keep you posted. It is only right and I can't provide a better documentation than what you already have.
Hi - I had an additional question regarding metadata.tar.gz in oli_sr. Could you confirm whether the metadata file is complete? I get this error when untarring the file:
gzip: stdin: unexpected end of file
-rw-rw---- ssehgal4/dali 7952 2023-05-10 00:38 ssl4eo_l_oli_sr/0947063/LC08_161074_20220413/metadata.json
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
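In case it helps, this is how I'm checking whether the archive is complete without fully extracting it (a minimal sketch, assuming the file is just the downloaded metadata.tar.gz):

```python
# Sketch: verify that metadata.tar.gz can be read to the end.
# A truncated gzip stream raises EOFError / tarfile.ReadError partway through.
import tarfile

try:
    with tarfile.open("metadata.tar.gz", "r:gz") as tf:
        members = tf.getnames()
    print(f"archive OK, {len(members)} members")
except (tarfile.ReadError, EOFError) as e:
    print("archive is truncated or corrupt:", e)
```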
I get the same error; the file must have gotten corrupted. It only contains ~50K out of 250K locations. Unfortunately, we no longer have the original data downloaded. It should be possible to rerun https://github.com/microsoft/torchgeo/blob/main/experiments/ssl4eo/landsat/download_oli_sr.sh and redownload the data if you really need the metadata. Just compute the centroid of each file in the OLI-SR dataset and give that list to download_ssl4eo.py. This is a bit of work, but let me know if you really need this metadata and I can try to reproduce it. I want to make sure the results of our paper are as easy to reproduce as possible. Sorry for the trouble you're experiencing.
Do I need to redownload the whole dataset? I already have the data - I just need the metadata for this one. "Just compute the centroid of each file in the OLI-SR dataset" - do you have the code for this? I can run that. I am not completely sure I understand what you mean. Can the centroid be calculated from the bounds, so we get a centroid x and y? Do you mean to get the metadata for each of those x, y using the code file above?
You could modify download_ssl4eo.py so that it only downloads the metadata and not the raster files. This would save you a lot of space. For computing the centroid of each file, you would use a for-loop over all files and load them with rasterio (or GDAL). By centroid, I just mean the x, y (lon, lat) coordinates of the central pixel of the image. I don't have code for this, but could write it and possibly run it if you really need the metadata for some reason.
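Something along these lines should work (a rough, untested sketch; the root path and output file name are just placeholders):

```python
# Sketch: compute the (lon, lat) centroid of every all_bands.tif in the
# OLI-SR dataset and write the list to a CSV that can be fed to the
# download script. Assumes the <root>/<location>/<scene>/all_bands.tif layout.
import csv
import glob
import os

import rasterio
from rasterio.warp import transform

root = "ssl4eo_l_oli_sr"  # assumed dataset root
rows = []
for path in sorted(glob.glob(os.path.join(root, "*", "*", "all_bands.tif"))):
    with rasterio.open(path) as src:
        # Center of the image in the file's native CRS
        x = (src.bounds.left + src.bounds.right) / 2
        y = (src.bounds.bottom + src.bounds.top) / 2
        # Reproject the center point to lon/lat (EPSG:4326)
        lon, lat = transform(src.crs, "EPSG:4326", [x], [y])
        rows.append((path, lon[0], lat[0]))

with open("centroids.csv", "w", newline="") as f:
    csv.writer(f).writerows([("path", "lon", "lat"), *rows])
```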
Oh cool -- I will do that. Should I push the updated data here via a PR?
Yes, that would be great!
Okay - I have opened a PR with an uncorrupted version. I had an uncorrupted version of the file on a cluster, so I didn't have to do much.
Hi - Also, I was checking: TorchGeo also has an SSL4EO dataset with S1/S2 data. I am guessing that is a different dataset entirely, because not all locations match with sampled_locations? I wanted to extend this data to use S1/S2 for the same sampled locations. I am planning to just create a separate dataset, and was wondering if that is okay, or if I should instead just add another folder called sentinel to ssl4eo?
Yes, different dataset entirely. You will find that it is extremely difficult to make a parallel corpus between Landsat and Sentinel due to differences in resolution, temporal frequency, and cloud cover. We couldn't even create a completely parallel corpus across TM, ETM+, and OLI/TIRS.
Ah, fair enough -- I am going to try to use your code to get as much data as possible for the same locations, to make comparison a bit easier. I think I am fine with a not-completely-parallel dataset (e.g., allowing differences in resolution and some flexibility in the time snapshots, while keeping your cloud cover threshold). Thank you for the heads up.
"You will find that it is extremely difficult to make a parallel corpus between Landsat and Sentinel due to differences in resolution, temporal frequency, and cloud cover." - You are right, it's hard to get the same locations, even with the relaxations :|
From our paper:
All TOA and SR datasets represent a parallel corpus (the TOA and SR images are taken at the same locations and dates). Due to differences in collection years and cloud coverage/nodata pixels, it was not possible to create a parallel corpus between sensors. However, approximately 50% of TM and ETM+, 40% of TM and OLI/TIRS, and 40% of ETM+ and OLI/TIRS images are sampled from the same location.
For SSL4EO-S12, it was actually easier, because you only have to worry about clouds for Sentinel-2, not Sentinel-1. So that is a completely parallel corpus if I remember correctly.
Yeah - I am trying to download Sentinel-1 first. It is still a bit hard because of orbit times etc. I think the main thing is that the date ranges are important, because otherwise you get an error that there is no data whatsoever for this lat/lon in that date range. I ran it for 58k locations and nothing got downloaded, so I stopped and thought I should double-check. S1/S2 can of course be completely parallel; I meant that it is hard to get it parallel with Landsat without some exploring.
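This is roughly the sanity check I'm running first (a sketch, assuming the download goes through Google Earth Engine; the point and date range are just examples):

```python
# Sketch: check whether any Sentinel-1 GRD scenes exist for a given point
# and date range before attempting a download. Assumes you have already run
# `earthengine authenticate`.
import ee

ee.Initialize()

point = ee.Geometry.Point(13.4, 52.5)  # example lon, lat
n = (
    ee.ImageCollection("COPERNICUS/S1_GRD")
    .filterBounds(point)
    .filterDate("2021-01-01", "2021-12-31")
    .size()
    .getInfo()
)
print(f"{n} matching Sentinel-1 scenes")
```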
Hi -- additionally, I think one of the oli_sr files might be corrupt:

2024-04-21 10:20:42,303 - WARNING - CPLE_AppDefined in LZWDecode:LZWDecode: Strip 188 not terminated with EOI code
>>> import rasterio
>>> f = rasterio.open("ssl4eo-L/ssl4eo_l_oli_sr/data/ssl4eo_l_oli_sr/0922385/LC08_154039_20210916/all_bands.tif", "r")
>>> data = f.read()
Warning 1: LZWDecode:LZWDecode: Strip 188 not terminated with EOI code
ERROR 1: LZWDecode:Not enough data at scanline 188 (short 1188 bytes)
ERROR 1: TIFFReadEncodedStrip() failed.
ERROR 1: ssl4eo-L/ssl4eo_l_oli_sr/data/ssl4eo_l_oli_sr/0922385/LC08_154039_20210916/all_bands.tif, band 1: IReadBlock failed at X offset 0, Y offset 188: TIFFReadEncodedStrip() failed.
Traceback (most recent call last):
File "rasterio/_io.pyx", line 975, in rasterio._io.DatasetReaderBase._read
File "rasterio/_io.pyx", line 213, in rasterio._io.io_multi_band
File "rasterio/_err.pyx", line 195, in rasterio._err.exc_wrap_int
rasterio._err.CPLE_AppDefinedError: ssl4eo-L/ssl4eo_l_oli_sr/data/ssl4eo_l_oli_sr/0922385/LC08_154039_20210916/all_bands.tif, band 1: IReadBlock failed at X offset 0, Y offset 188: TIFFReadEncodedStrip() failed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "rasterio/_io.pyx", line 651, in rasterio._io.DatasetReaderBase.read
File "rasterio/_io.pyx", line 978, in rasterio._io.DatasetReaderBase._read
rasterio.errors.RasterioIOError: Read or write failed. ssl4eo-L/ssl4eo_l_oli_sr/data/ssl4eo_l_oli_sr/0922385/LC08_154039_20210916/all_bands.tif, band 1: IReadBlock failed at X offset 0, Y offset 188: TIFFReadEncodedStrip() failed.
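In case it helps, here is the kind of scan I can run to look for other corrupted files (a rough sketch; it just forces a full read of every all_bands.tif under the same layout):

```python
# Sketch: find corrupted GeoTIFFs by forcing a full read of each file.
# Truncated/corrupt LZW strips raise RasterioIOError on read.
import glob
import os

import rasterio
from rasterio.errors import RasterioIOError

root = "ssl4eo-L/ssl4eo_l_oli_sr/data/ssl4eo_l_oli_sr"  # assumed root
bad = []
for tif in sorted(glob.glob(os.path.join(root, "*", "*", "all_bands.tif"))):
    try:
        with rasterio.open(tif) as src:
            src.read()
    except RasterioIOError:
        bad.append(tif)

print(f"{len(bad)} corrupted file(s)")
print("\n".join(bad))
```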
Would it be possible for you to re-push the files, if you still have them?
I have no files; I deleted all files locally and only saved the ones you see on HF.
Oh okay -- let me try downloading it from scratch on my end and then re-test.
I have downloaded this data. However, I set the end index as END_INDEX=999999, because I thought it would just sample from 0 to 999999 and automatically stop once it reached 250000. It hasn't stopped, and now I have more locations than that. Can I just stop it, or would that break anything? I haven't changed anything in the code except that index. Could you please let me know?
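(For reference, this is how I'm counting how many locations I have so far - a trivial sketch, assuming one subdirectory per sampled location under the output root:)

```python
# Sketch: count downloaded location folders under the output root.
import os

root = "ssl4eo_l_oli_sr"  # assumed output root
n = sum(os.path.isdir(os.path.join(root, d)) for d in os.listdir(root))
print(f"{n} location folders downloaded so far")
```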
You can just stop everything, nothing will break.