---
license: apache-2.0
language:
- en
---

# Satvision Pretraining Dataset - Small

- **Developed by:** NASA GSFC CISTO Data Science Group
- **Model type:** Pre-trained vision transformer model
- **License:** Apache License 2.0

This dataset repository houses the pretraining data for the SatVision pretrained transformers. The dataset was constructed using [webdataset](https://github.com/webdataset/webdataset) to limit the number of inodes used on HPC systems with limited shared storage. Each file contains 100,000 tiles, with pairs of image inputs and annotations. The data has been further compressed to ease the download from HuggingFace.
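Each shard is a plain tar archive in which members sharing a key prefix form one sample, so an image and its annotation travel together. The layout can be sketched with the standard library alone; the `.input.npy`/`.output.npy` suffixes and the sample keys below are illustrative, not the dataset's actual naming:

```python
import io
import tarfile

def write_shard(path, samples):
    """Write (key, input_bytes, annotation_bytes) tuples as a
    webdataset-style tar: members sharing a basename form one sample."""
    with tarfile.open(path, "w") as tar:
        for key, image, annotation in samples:
            for suffix, payload in ((".input.npy", image), (".output.npy", annotation)):
                info = tarfile.TarInfo(name=key + suffix)
                info.size = len(payload)
                tar.addfile(info, io.BytesIO(payload))

def read_shard(path):
    """Regroup tar members into {key: {suffix: bytes}} samples."""
    samples = {}
    with tarfile.open(path, "r") as tar:
        for member in tar.getmembers():
            key, _, suffix = member.name.partition(".")
            samples.setdefault(key, {})[suffix] = tar.extractfile(member).read()
    return samples
```

In practice the `webdataset` library streams these tars directly into a PyTorch `DataLoader`, which is what keeps the inode count low: one tar file replaces 100,000 small files.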

SatelliteVision-Base (SatVis-B) is a pre-trained vision transformer based on the SwinV2 model architecture. The model is pre-trained on global MODIS surface reflectance data, from which 1.99 million image chips were used. SatVis-B is pre-trained using the masked-image-modeling (MIM) pre-training strategy. MIM randomly masks patches of the input geospatial image chip and uses a linear layer to regress the raw pixel values of the masked area, with an L1 loss serving as the loss function.
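The mask-and-regress step can be sketched as follows. The `16x16` patch size matches the window size stated below, but the mask ratio, tensor shapes, and function names are illustrative assumptions, not the training pipeline's actual code:

```python
import torch

def random_patch_mask(chips, grid=8, mask_ratio=0.6):
    """One boolean mask per chip over a (grid x grid) patch grid; True = masked.
    The 0.6 ratio is an illustrative assumption."""
    n = grid * grid
    k = int(n * mask_ratio)
    masks = torch.zeros(chips, n, dtype=torch.bool)
    for i in range(chips):
        masks[i, torch.randperm(n)[:k]] = True
    return masks.reshape(chips, grid, grid)

def masked_l1_loss(pred, target, mask, patch=16):
    """L1 loss restricted to masked patches.
    pred/target: (B, C, H, W); mask: (B, H//patch, W//patch)."""
    # Expand the patch-level mask to pixel resolution
    m = mask.float().repeat_interleave(patch, 1).repeat_interleave(patch, 2).unsqueeze(1)
    loss = (pred - target).abs() * m
    # Average only over masked pixels across all channels
    return loss.sum() / (m.sum() * pred.size(1) + 1e-8)
```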

The pre-training MODIS chips have a resolution of `128x128` with a window size of `16x16`. SatVis-B was pre-trained for `800` epochs on 8x A100 GPUs and 12x V100 GPUs.

### SatVision Transformer

**Models pre-trained on the MODIS-Small dataset**

| name | pre-train epochs | pre-train resolution | #params | pre-trained model |
| :---: | :---: | :---: | :---: | :---: |
| SatVision-Base | 800 | 128x128 | 84.5M | [checkpoint](https://huggingface.co/nasa-cisto-data-science-group/satvision-base/blob/main/ckpt_epoch_800.pth)/[config](https://github.com/nasa-nccs-hpda/pytorch-caney/blob/develop/examples/satvision/mim_pretrain_swinv2_satvision_base_192_window12_800ep.yaml) |
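Once downloaded, the checkpoint can be inspected before use. Note the nesting of the weights under a `"model"` key is an assumption about how the training pipeline saved it, so this sketch also tolerates a flat state dict:

```python
import torch

def load_pretrained_weights(path):
    """Load a checkpoint and return its state dict, accepting either a flat
    state dict or one nested under a "model" key (an assumed layout)."""
    ckpt = torch.load(path, map_location="cpu")
    if isinstance(ckpt, dict) and "model" in ckpt:
        return ckpt["model"]
    return ckpt
```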

## Getting Started with SatVision-Base

- **Training repository:** https://github.com/nasa-nccs-hpda/pytorch-caney
- **Pre-training dataset repository:** Coming soon!

### Installation

If you have Singularity installed:

```bash
$ git clone git@github.com:nasa-nccs-hpda/pytorch-caney.git
$ singularity build --sandbox pytorch-caney.sif docker://nasanccs/pytorch-caney:latest
# To shell into the container
$ singularity shell --nv -B <mounts> pytorch-caney.sif
```

Anaconda installation:
```bash
$ git clone git@github.com:nasa-nccs-hpda/pytorch-caney.git
$ conda create -n satvision-env python=3.9
```

### Fine-tuning SatVision-Base

- Create a config file ([example config](https://github.com/nasa-nccs-hpda/pytorch-caney/blob/finetuning/examples/satvision/finetune_satvision_base_landcover5class_192_window12_100ep.yaml))
- Download the checkpoint from this HF model repo
- `$ git clone git@github.com:nasa-nccs-hpda/pytorch-caney.git`
- Add a new PyTorch dataset in `pytorch-caney/pytorch_caney/data/datasets/`
- Add the new dataset to the dict in `pytorch-caney/pytorch_caney/data/datamodules/finetune_datamodule.py`
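A new dataset can follow the standard `torch.utils.data.Dataset` contract. The class name, `.npy` file format, and `(image, label)` pairing below are illustrative assumptions; pytorch-caney's own dataset classes may impose a different interface:

```python
import numpy as np
from torch.utils.data import Dataset

class LandCoverDataset(Dataset):
    """Hypothetical fine-tuning dataset yielding (image, label) chip pairs
    stored as .npy files; adapt to the actual on-disk format."""

    def __init__(self, image_paths, label_paths, transform=None):
        assert len(image_paths) == len(label_paths)
        self.image_paths = image_paths
        self.label_paths = label_paths
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = np.load(self.image_paths[idx]).astype(np.float32)
        label = np.load(self.label_paths[idx]).astype(np.int64)
        if self.transform is not None:
            image = self.transform(image)
        return image, label
```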
```bash
torchrun --nproc_per_node <NGPUS> pytorch-caney/pytorch_caney/pipelines/finetuning/finetune.py \
    --cfg <config-file> \
    --pretrained <path-to-pretrained> \
    --dataset <dataset-name (key for new dataset)> \
    --data-paths <path-to-data-dir> \
    --batch-size <batch-size> \
    --output <output-dir> \
    --enable-amp
```

### Pre-training with pytorch-caney

To pre-train the SwinV2 base model with masked-image-modeling (MIM) pre-training, run:
```bash
torchrun --nproc_per_node <NGPUS> pytorch-caney/pytorch_caney/pipelines/pretraining/mim.py \
    --cfg <config-file> \
    --dataset <dataset-name> \
    --data-paths <path-to-data-subfolder-1> \
    --batch-size <batch-size> \
    --output <output-dir> \
    --enable-amp
```

For example, to run on a compute node with 4 GPUs and a batch size of 128 on the MODIS SatVision pre-training dataset with a base SwinV2 model:

```bash
singularity shell --nv -B <mounts> /path/to/container/pytorch-caney-container
Singularity> export PYTHONPATH=$PWD:$PWD/pytorch-caney
Singularity> torchrun --nproc_per_node 4 pytorch-caney/pytorch_caney/pipelines/pretraining/mim.py \
    --cfg pytorch-caney/examples/satvision/mim_pretrain_swinv2_satvision_base_192_window12_800ep.yaml \
    --dataset MODIS \
    --data-paths /explore/nobackup/projects/ilab/data/satvision/pretraining/training_* \
    --batch-size 128 \
    --output . \
    --enable-amp
```

## SatVision-Base Pre-Training Datasets

| name | bands | resolution | #chips | meters-per-pixel |
| :---: | :---: | :---: | :---: | :---: |
| MODIS-Small | 7 | 128x128 | 1,994,131 | 500m |

## Citing SatVision-Base

If this dataset or model helped your research, please cite `satvision-base` in your publications.

```bibtex
@misc{satvision-base,
  author          = {Carroll, Mark and Li, Jian and Spradlin, Caleb and Caraballo-Vega, Jordan},
  doi             = {10.57967/hf/1017},
  month           = aug,
  title           = {{satvision-base}},
  url             = {https://huggingface.co/nasa-cisto-data-science-group/satvision-base},
  repository-code = {https://github.com/nasa-nccs-hpda/pytorch-caney},
  year            = {2023}
}
```