sgiordano committed
Commit ac66bf5
Parent(s): 835a4b6

Update README.md

Files changed (1)
  1. README.md +12 -19

README.md CHANGED
@@ -85,11 +85,11 @@ pipeline_tag: image-segmentation
  <br>
 
  <div style="border:1px solid black; padding:25px; background-color:#FDFFF4 ; padding-top:10px; padding-bottom:1px;">
- <h1>FLAIR-INC_rgbie_12cl_resnet34-unet</h1>
- <p>The general characteristics of this specific model <strong>FLAIR-INC_rgbie_12cl_resnet34-unet</strong> are:</p>
  <ul style="list-style-type:disc;">
  <li>Trained with the FLAIR-INC dataset</li>
- <li>RGBIE images (true colours + infrared + elevation)</li>
  <li>U-Net with a ResNet-34 encoder</li>
  <li>12-class nomenclature: [building, pervious surface, impervious surface, bare soil, water, coniferous, deciduous, brushwood, vineyard, herbaceous, agricultural land, plowed land]</li>
  </ul>
@@ -120,11 +120,6 @@ _**Multi-domain model**_ :
  The FLAIR-INC dataset used for training is composed of 75 radiometric domains. For aerial images, domain shifts are frequent and are mainly due to: the date of acquisition of the aerial survey (from April to November), the spatial domain (equivalent to a French department administrative division), and downstream radiometric processing.
  By construction (sampling 75 domains) the model is robust to these shifts, and can be applied to any images of the [BD ORTHO® product](https://geoservices.ign.fr/bdortho).
 
- _**Specification for the Elevation channel**_ :
- The fifth dimension of the RGBIE images is the Elevation (height of buildings and vegetation). This information is encoded in an 8-bit format.
- When decoded to [0,255] integers, a difference of 1 corresponds to a 0.2 meter step in elevation.
-
-
  _**Land Cover classes of prediction**_ :
  The original class nomenclature of the FLAIR Dataset encompasses 19 classes (see the [FLAIR dataset](https://huggingface.co/datasets/IGNF/FLAIR) page for details).
  This model was trained to be coherent with the FLAIR#1 scientific challenge, in which contestants were evaluated on the first 12 classes of the nomenclature. Classes with labels greater than 12 were deactivated during training.
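For illustration, the elevation encoding described above (8-bit values, where one step corresponds to 0.2 m) can be decoded with a short sketch; the array contents below are hypothetical:

```python
import numpy as np

# Hypothetical 8-bit elevation channel values from an RGBIE patch.
elevation_8bit = np.array([[0, 5, 255]], dtype=np.uint8)

# One 8-bit step corresponds to a 0.2 m elevation difference,
# so the decodable range spans 0 m to 51 m (255 * 0.2).
elevation_m = elevation_8bit.astype(np.float32) * 0.2
```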
@@ -135,11 +130,11 @@ As a result, the logits produced by the model are of size 19x1, but classes n°
  ## Bias, Risks, Limitations and Recommendations
 
  _**Using the model on input images with other spatial resolution**_ :
- The FLAIR-INC_rgbie_12cl_resnet34-unet model was trained under fixed scale conditions. All patches used for training are derived from aerial images with a 0.2 meter spatial resolution. Only flip and rotate augmentations were performed during the training process.
  No data augmentation method concerning scale change was used during training. Users should be aware that generalization issues can occur when applying this model to images with different spatial resolutions.
 
  _**Using the model for other remote sensing sensors**_ :
- The FLAIR-INC_rgbie_12cl_resnet34-unet was trained with aerial images of the [BD ORTHO® product](https://geoservices.ign.fr/bdortho), which undergo very specific radiometric image processing.
  Using the model on other types of aerial images or on satellite images may require transfer learning or domain adaptation techniques.
 
  _**Using the model on other spatial areas**_ :
@@ -160,7 +155,7 @@ Fine-tuning and prediction tasks are detailed in the README file.
 
  ### Training Data
 
- 218 400 patches of 512 x 512 pixels were used to train the **FLAIR-INC_rgbie_12cl_resnet34-unet** model.
  The train/validation split was performed patchwise to obtain an 80% / 20% distribution between train and validation.
  Annotation was performed at the _zone_ level (~100 patches per _zone_). Spatial independence between patches is guaranteed, as patches from the same _zone_ were assigned to the same set (TRAIN or VALIDATION).
  The following number of patches were used for train and validation:
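The zone-level assignment described above can be sketched as a grouped split; the patch and zone identifiers below are hypothetical:

```python
import random

# Hypothetical mapping: patch id -> zone id (~100 patches per zone).
patches = {f"patch_{i:04d}": f"zone_{i // 100:03d}" for i in range(1000)}

# Shuffle zones, not patches, so all patches of a zone land in one set.
zones = sorted(set(patches.values()))
random.Random(2022).shuffle(zones)
train_zones = set(zones[: int(0.8 * len(zones))])

train = [p for p, z in patches.items() if z in train_zones]
val = [p for p, z in patches.items() if z not in train_zones]
```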
@@ -185,8 +180,6 @@ Statistics of the TRAIN+VALIDATION set :
  | Red Channel (R) | 105.08 | 52.17 |
  | Green Channel (G) | 110.87 | 45.38 |
  | Blue Channel (B) | 101.82 | 44.00 |
- | Infrared Channel (I) | 106.38 | 39.69 |
- | Elevation Channel (E) | 53.26 | 79.30 |
 
 
  #### Training Hyperparameters
@@ -198,8 +191,8 @@ Statistics of the TRAIN+VALIDATION set :
  * HorizontalFlip(p=0.5)
  * RandomRotate90(p=0.5)
  * Input normalization (mean=0 | std=1):
- * norm_means: [105.08, 110.87, 101.82, 106.38, 53.26]
- * norm_stds: [52.17, 45.38, 44, 39.69, 79.3]
  * Seed: 2022
  * Batch size: 10
  * Number of epochs: 200
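A minimal normalization sketch using the statistics above (the channel-last layout and 8-bit input are assumptions):

```python
import numpy as np

# Per-channel statistics of the TRAIN+VALIDATION set (R, G, B, I, E).
norm_means = np.array([105.08, 110.87, 101.82, 106.38, 53.26])
norm_stds = np.array([52.17, 45.38, 44.0, 39.69, 79.3])

# Hypothetical 512 x 512 RGBIE patch, channels last, 8-bit values.
patch = np.full((512, 512, 5), 128, dtype=np.uint8)

# Standardize each channel to mean 0 / std 1 before inference.
normalized = (patch.astype(np.float32) - norm_means) / norm_stds
```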
@@ -212,7 +205,7 @@ Statistics of the TRAIN+VALIDATION set :
 
  #### Speeds, Sizes, Times
 
- The FLAIR-INC_rgbie_12cl_resnet34-unet model was trained on HPC/AI resources provided by GENCI-IDRIS (Grant 2022-A0131013803).
  16 V100 GPUs were used (4 nodes, 4 GPUs per node). With this configuration, the approximate training time is 6 minutes per epoch.
 
  FLAIR-INC_rgbie_12cl_resnet34-unet was obtained at num_epoch=84 with a corresponding val_loss=0.57.
@@ -220,9 +213,9 @@ FLAIR-INC_rgbie_12cl_resnet34-unet was obtained for num_epoch=84 with correspond
 
  <div style="position: relative; text-align: center;">
  <p style="margin: 0;">TRAIN loss</p>
- <img src="FLAIR-INC_rgbie_12cl_resnet34-unet_train-loss.png" alt="TRAIN loss" style="width: 60%; display: block; margin: 0 auto;"/>
  <p style="margin: 0;">VALIDATION loss</p>
- <img src="FLAIR-INC_rgbie_12cl_resnet34-unet_val-loss.png" alt="VALIDATION loss" style="width: 60%; display: block; margin: 0 auto;"/>
  </div>
 
 
@@ -241,7 +234,7 @@ The choice of a separate TEST set instead of cross validation was made to be coh
 
  #### Metrics
 
- With the evaluation protocol, the **FLAIR-INC_rgbie_12cl_resnet34-unet** model has been evaluated at **OA=76.509%** and **mIoU=62.716%**.
 
  The following table gives the class-wise metrics:
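OA and mIoU as reported here can be derived from a confusion matrix; a minimal sketch on a toy 3-class matrix (the values are made up for illustration):

```python
import numpy as np

# Toy confusion matrix (rows: ground truth, columns: prediction).
cm = np.array([[50, 2, 3],
               [4, 40, 1],
               [2, 2, 46]])

# Overall Accuracy: fraction of correctly classified samples.
oa = np.trace(cm) / cm.sum()

# Per-class IoU = TP / (TP + FP + FN), then averaged over classes.
tp = np.diag(cm)
iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)
miou = iou.mean()
```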
 
 
  <br>
 
  <div style="border:1px solid black; padding:25px; background-color:#FDFFF4 ; padding-top:10px; padding-bottom:1px;">
+ <h1>FLAIR-INC_rgb_12cl_resnet34-unet</h1>
+ <p>The general characteristics of this specific model <strong>FLAIR-INC_rgb_12cl_resnet34-unet</strong> are:</p>
  <ul style="list-style-type:disc;">
  <li>Trained with the FLAIR-INC dataset</li>
+ <li>RGB images (true colours)</li>
  <li>U-Net with a ResNet-34 encoder</li>
  <li>12-class nomenclature: [building, pervious surface, impervious surface, bare soil, water, coniferous, deciduous, brushwood, vineyard, herbaceous, agricultural land, plowed land]</li>
  </ul>
 
  The FLAIR-INC dataset used for training is composed of 75 radiometric domains. For aerial images, domain shifts are frequent and are mainly due to: the date of acquisition of the aerial survey (from April to November), the spatial domain (equivalent to a French department administrative division), and downstream radiometric processing.
  By construction (sampling 75 domains) the model is robust to these shifts, and can be applied to any images of the [BD ORTHO® product](https://geoservices.ign.fr/bdortho).
 
  _**Land Cover classes of prediction**_ :
  The original class nomenclature of the FLAIR Dataset encompasses 19 classes (see the [FLAIR dataset](https://huggingface.co/datasets/IGNF/FLAIR) page for details).
  This model was trained to be coherent with the FLAIR#1 scientific challenge, in which contestants were evaluated on the first 12 classes of the nomenclature. Classes with labels greater than 12 were deactivated during training.
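Because only the first 12 of the 19 output classes were active during training, predictions can be restricted to those classes at inference time; a minimal sketch with hypothetical logits:

```python
import numpy as np

N_ACTIVE = 12  # classes with labels greater than 12 were deactivated

# Hypothetical model output: (19 classes, height, width) logits.
rng = np.random.default_rng(0)
logits = rng.normal(size=(19, 4, 4))

# Argmax over the active classes only, so a deactivated
# class can never appear in the prediction map.
pred = logits[:N_ACTIVE].argmax(axis=0)
```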
 
  ## Bias, Risks, Limitations and Recommendations
 
  _**Using the model on input images with other spatial resolution**_ :
+ The FLAIR-INC_rgb_12cl_resnet34-unet model was trained under fixed scale conditions. All patches used for training are derived from aerial images with a 0.2 meter spatial resolution. Only flip and rotate augmentations were performed during the training process.
  No data augmentation method concerning scale change was used during training. Users should be aware that generalization issues can occur when applying this model to images with different spatial resolutions.
 
  _**Using the model for other remote sensing sensors**_ :
+ The FLAIR-INC_rgb_12cl_resnet34-unet was trained with aerial images of the [BD ORTHO® product](https://geoservices.ign.fr/bdortho), which undergo very specific radiometric image processing.
  Using the model on other types of aerial images or on satellite images may require transfer learning or domain adaptation techniques.
 
  _**Using the model on other spatial areas**_ :
 
  ### Training Data
 
+ 218 400 patches of 512 x 512 pixels were used to train the **FLAIR-INC_rgb_12cl_resnet34-unet** model.
  The train/validation split was performed patchwise to obtain an 80% / 20% distribution between train and validation.
  Annotation was performed at the _zone_ level (~100 patches per _zone_). Spatial independence between patches is guaranteed, as patches from the same _zone_ were assigned to the same set (TRAIN or VALIDATION).
  The following number of patches were used for train and validation:
 
  | Red Channel (R) | 105.08 | 52.17 |
  | Green Channel (G) | 110.87 | 45.38 |
  | Blue Channel (B) | 101.82 | 44.00 |
 
 
  #### Training Hyperparameters
 
  * HorizontalFlip(p=0.5)
  * RandomRotate90(p=0.5)
  * Input normalization (mean=0 | std=1):
+ * norm_means: [105.08, 110.87, 101.82]
+ * norm_stds: [52.17, 45.38, 44]
  * Seed: 2022
  * Batch size: 10
  * Number of epochs: 200
 
 
  #### Speeds, Sizes, Times
 
+ The FLAIR-INC_rgb_12cl_resnet34-unet model was trained on HPC/AI resources provided by GENCI-IDRIS (Grant 2022-A0131013803).
  16 V100 GPUs were used (4 nodes, 4 GPUs per node). With this configuration, the approximate training time is 6 minutes per epoch.
 
  FLAIR-INC_rgbie_12cl_resnet34-unet was obtained at num_epoch=84 with a corresponding val_loss=0.57.
 
 
  <div style="position: relative; text-align: center;">
  <p style="margin: 0;">TRAIN loss</p>
+ <img src="FLAIR-INC_rgb_12cl_resnet34-unet_train-loss.png" alt="TRAIN loss" style="width: 60%; display: block; margin: 0 auto;"/>
  <p style="margin: 0;">VALIDATION loss</p>
+ <img src="FLAIR-INC_rgb_12cl_resnet34-unet_val-loss.png" alt="VALIDATION loss" style="width: 60%; display: block; margin: 0 auto;"/>
  </div>
 
 
 
 
  #### Metrics
 
+ With the evaluation protocol, the **FLAIR-INC_rgb_12cl_resnet34-unet** model has been evaluated at **OA=76.509%** and **mIoU=62.716%**.
 
  The following table gives the class-wise metrics: