blumenstiel committed
Commit 5089ae8 · 1 Parent(s): 5323997

Add V2 link

Files changed (2)
  1. README.md +1 -1
  2. app.py +2 -0
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 title: Prithvi 100M Demo
-emoji: 🏆
+emoji: 🌎
 colorFrom: gray
 colorTo: blue
 sdk: docker
app.py CHANGED
@@ -377,6 +377,8 @@ with gr.Blocks() as demo:
     gr.Markdown(value='''Prithvi is a first-of-its-kind temporal Vision Transformer pretrained by the IBM and NASA team on continental US Harmonized Landsat Sentinel-2 (HLS) data. In particular, the model adopts a self-supervised encoder with a ViT architecture and a Masked Autoencoder (MAE) learning strategy, using MSE as the loss function. The model includes spatial attention across multiple patches as well as temporal attention for each patch. More info about the model and its weights is available [here](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M).\n
     This demo showcases image reconstruction over three timestamps: the user provides a set of three HLS images, the model randomly masks out some proportion of the images, and then reconstructs them based on the unmasked portions.\n
     The user needs to provide three HLS geotiff images, including the following channels in reflectance units: Blue, Green, Red, Narrow NIR, SWIR, SWIR 2.
+
+    Check out our newest model: [Prithvi-EO-2.0-Demo](https://huggingface.co/spaces/ibm-nasa-geospatial/Prithvi-EO-2.0-Demo).
     ''')
     with gr.Row():
         with gr.Column():
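
For readers curious about the masking step the demo description mentions, below is a minimal, self-contained sketch of MAE-style random patch masking over a stack of three HLS frames. It is an illustration only, not the Space's actual code: the `random_patch_mask` helper, the 16-pixel patch size, and the 0.75 mask ratio are all assumptions for the sake of the example.

```python
# Sketch of MAE-style random patch masking (assumed values, not the
# demo's real implementation): hide a random subset of 16x16 tiles in
# each of three HLS timestamps, as described in the gr.Markdown text.
import numpy as np

def random_patch_mask(images: np.ndarray, patch: int = 16,
                      mask_ratio: float = 0.75,
                      seed: int | None = None) -> tuple[np.ndarray, np.ndarray]:
    """Zero out a random subset of (patch x patch) tiles per timestamp.

    images: array of shape (T, C, H, W), e.g. T=3 HLS timestamps with
            C=6 bands (Blue, Green, Red, Narrow NIR, SWIR, SWIR 2).
    Returns the masked images and a boolean mask of shape
    (T, H // patch, W // patch), True where a patch was hidden.
    """
    t, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    rng = np.random.default_rng(seed)
    # Each patch is hidden independently with probability mask_ratio.
    mask = rng.random((t, gh, gw)) < mask_ratio
    masked = images.copy()
    for ti in range(t):
        for i in range(gh):
            for j in range(gw):
                if mask[ti, i, j]:
                    masked[ti, :, i * patch:(i + 1) * patch,
                           j * patch:(j + 1) * patch] = 0.0
    return masked, mask

# Example: three fake 224x224 HLS frames with six bands.
frames = np.random.rand(3, 6, 224, 224).astype(np.float32)
masked_frames, mask = random_patch_mask(frames, seed=0)
print(mask.mean())  # fraction of hidden patches, ~0.75
```

In the actual MAE setup, the encoder sees only the unmasked patches and the decoder reconstructs the hidden ones; this sketch only shows which patches would be hidden before the model is invoked.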