Integrating with the Hugging Face API
Hi - I was wondering if there is any work being done on integrating this with the Hugging Face API. I think the dataset is very useful; however, downloading it in full would be quite hard. Maybe allowing streaming via the Hugging Face datasets API would be very helpful here?
Additionally, I was also wondering what other datasets you think could be added - for example, will we have Sentinel-3 at some point in the future, or would that not be a priority?
Oh, never mind the streaming question - I found a notebook showing how to do it. Thank you!
No worries! Glad you're interested in the dataset!
The HF API works by default and is demonstrated in the notebook, as you say. One important thing to note is that shuffling is not well supported: the HF datasets library is not optimised for datasets with large individual samples like this one (it works well for text datasets, though).
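For reference, streaming looks roughly like the sketch below. The repository name is an assumption, so point it at whichever MajorTOM subset you need, and note the buffer-based shuffle caveat mentioned above.

```python
from datasets import load_dataset

# Stream samples from the Hub without downloading the whole dataset.
# The repository name below is an assumption - substitute the MajorTOM subset you need.
ds = load_dataset("Major-TOM/Core-S2L2A", split="train", streaming=True)

# Shuffling a streamed dataset only uses a fixed-size buffer, which is slow and
# memory-hungry when individual samples are large, as noted above.
# ds = ds.shuffle(buffer_size=100)

# Iterating yields one sample (a dict of bands/metadata) at a time over the network.
for i, sample in enumerate(ds):
    print(sample.keys())
    if i >= 2:  # just peek at a few samples
        break
```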
The dataset will indeed be too large for many people's storage capacity. I would say the fastest and most effective way of using it is to download a subset by specifying the times and locations of interest so that it fits into local storage. An example of that is also shown in the example notebooks on our GitHub.
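As a rough illustration of that kind of subsetting (the metadata file name and column names here are hypothetical - check the example notebooks for the actual schema):

```python
import pandas as pd

# Load the per-sample metadata table (file name is a placeholder).
meta = pd.read_parquet("metadata.parquet")

# Keep only samples inside a bounding box and time window of interest;
# "centre_lat", "centre_lon" and "timestamp" are assumed column names.
subset = meta[
    meta["centre_lat"].between(45.0, 50.0)
    & meta["centre_lon"].between(5.0, 10.0)
    & (pd.to_datetime(meta["timestamp"]) >= "2021-06-01")
]
print(f"{len(subset)} samples match the filter")
```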
Regarding additional datasets: we actually have an upcoming meeting about potential expansions of MajorTOM: https://docs.google.com/forms/d/e/1FAIpQLSeDNOg_8gfaAXx8FSuPmHiuMe_YymfdLPDuBferpS-IJc6U8Q/viewform
While we are working on several expansions ourselves (we haven't planned Sentinel-3 yet, though), we see MajorTOM as a community effort and really hope that other people can join in building compatible EO datasets, and in that way influence which sensors and data sources are included in the project. We plan to discuss all of this during the community meet-up, but if you can't make it, you can contact me or @aliFrancis to discuss your needs or potential contributions!
Signed up and will be there... I can also help with contributions. It makes sense as a community effort - there are way too many satellites.
Hi - I had an additional question about S1. For S1, you have used composites, which clearly help with visualisation. However, when modelling, do you still recommend using composites? I was just wondering whether forming S1 composites has any impact - I thought it might be redundant information, but I'm not 100% sure.
It depends. My instinct is generally to avoid adding more channels than the original ones (here, VV and VH) when training deep networks - especially when the composites are linear mappings of the original channels. The logarithm operation should have a considerable impact, though; whether it is really beneficial depends on the exact circumstances of your system.
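To make that concrete, here is a minimal sketch of keeping only the original polarisations with an optional logarithmic (dB) scaling; the variable names, array shapes and epsilon are illustrative assumptions, not the dataset's actual preprocessing.

```python
import numpy as np

def to_db(backscatter: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Convert linear SAR backscatter to decibels (the logarithm operation above)."""
    return 10.0 * np.log10(np.clip(backscatter, eps, None))

# Placeholder VV/VH bands - in practice these come from the dataset samples.
vv = np.random.rand(256, 256).astype(np.float32)
vh = np.random.rand(256, 256).astype(np.float32)

# Stack just the two original polarisations as model input channels (C, H, W),
# instead of a derived composite whose extra channels are linearly dependent.
x = np.stack([to_db(vv), to_db(vh)], axis=0)
print(x.shape)  # (2, 256, 256)
```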
Yeah - thank you!