# Dataset Card for FLAIR land-cover semantic segmentation

## Context & Data
<hr style='margin-top:-1em' />

The FLAIR #2 dataset presented here is sampled countrywide and is composed of over 20 billion annotated pixels of very high resolution aerial imagery at 0.2 m spatial resolution, acquired over three years and in different months (spatio-temporal domains).
Aerial imagery patches consist of 5 channels (RGB, Near Infrared and Elevation) and have corresponding annotations (19 semantic classes, of which 13 are used for the baselines).

The dataset covers 50 spatial domains, encompassing 916 areas spanning 817 km².

<br><br>

## Dataset Structure
<hr style='margin-top:-1em' />

The FLAIR dataset consists of 77 762 patches. Each patch includes a high-resolution aerial image (512x512) at 0.2 m, a yearly satellite image time series (40x40 pixels by default, but wider areas are provided) with a spatial resolution of 10 m,
and associated cloud and snow masks, and pixel-precise elevation and land cover annotations at 0.2 m resolution (512x512).
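To make the per-patch layout concrete, the following sketch builds dummy arrays with the shapes implied by the description above; the dtypes and the exact cloud/snow mask layout are illustrative assumptions rather than the official file format.

```python
import numpy as np

# Dummy arrays with the per-patch shapes described above. T is the number of
# Sentinel-2 acquisitions in the yearly series and varies from patch to patch.
# dtypes and the cloud/snow mask layout are assumptions, not the official spec.
T = 50
aerial     = np.zeros((5, 512, 512), dtype=np.uint8)    # R, G, B, NIR, nDSM at 0.2 m
sentinel   = np.zeros((T, 10, 40, 40), dtype=np.int16)  # 10-band Sentinel-2 series at 10 m
cloud_snow = np.zeros((T, 2, 40, 40), dtype=np.uint8)   # per-date cloud and snow masks (assumed layout)
labels     = np.zeros((512, 512), dtype=np.uint8)       # pixel-precise land cover annotation at 0.2 m
```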

### Band order

<div style="display: flex;">
<div style="width: 15%; margin-right: 1em;">
Aerial
<ul>
<li>1. Red</li>
<li>2. Green</li>
<li>3. Blue</li>
<li>4. NIR</li>
<li>5. nDSM</li>
</ul>
</div>

<div style="width: 25%;">
Satellite
<ul>
<li>1. Blue (B2 490nm)</li>
<li>2. Green (B3 560nm)</li>
<li>3. Red (B4 665nm)</li>
<li>4. Red-Edge (B5 705nm)</li>
<li>5. Red-Edge2 (B6 740nm)</li>
<li>6. Red-Edge3 (B7 783nm)</li>
<li>7. NIR (B8 842nm)</li>
<li>8. NIR-Red-Edge (B8a 865nm)</li>
<li>9. SWIR (B11 1610nm)</li>
<li>10. SWIR2 (B12 2190nm)</li>
</ul>
</div>

</div>
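As a quick illustration of the aerial band order, the sketch below reads one patch and pulls out the channels by index; it assumes the 5-band aerial patches can be opened with rasterio, and the file path is purely illustrative.

```python
import numpy as np
import rasterio  # assumes the 5-band aerial patches are regular (Geo)TIFFs

# Illustrative path only; point this at any aerial patch of the unzipped dataset.
with rasterio.open("path/to/aerial_patch.tif") as src:
    patch = src.read()  # (5, 512, 512), bands ordered Red, Green, Blue, NIR, nDSM

red  = patch[0].astype(np.float32)
nir  = patch[3].astype(np.float32)
ndsm = patch[4]

# Example use of the band order: NDVI from the Red and NIR channels.
ndvi = (nir - red) / (nir + red + 1e-6)
```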
### Annotations
Each pixel has been manually annotated by photo-interpretation of the 20 cm resolution aerial imagery, carried out by a team supervised by geography experts from the IGN.

Consequently, every split accurately reflects the landscape diversity inherent to metropolitan France.
It is important to mention that the patches come with metadata permitting alternative splitting schemes, for example schemes focused on domain shifts (a toy example is sketched below).
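Purely as a toy example of such an alternative scheme (and not an official recommendation), the sketch below holds out every domain acquired in 2021, assuming a hypothetical layout with one sub-folder per spatio-temporal domain named like `D006_2020`:

```python
from pathlib import Path

# Hypothetical layout: one sub-folder per spatio-temporal domain (e.g. D006_2020).
root = Path("/path/to/unzipped_HF_dataset")
domains = sorted(d.name for d in root.iterdir() if d.is_dir() and d.name.startswith("D"))

# Domain-shift oriented split: every 2021 domain is held out for testing.
test_domains  = [d for d in domains if d.endswith("_2021")]
train_domains = [d for d in domains if not d.endswith("_2021")]
```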

Official domain split: <br/>

<br><br>

## Baseline code
<hr style='margin-top:-1em' />

We propose the U-T&T model, a two-branch architecture that combines spatial and temporal information from very high-resolution aerial images and high-resolution satellite images into a single output. The U-Net architecture is employed for the spatial/texture branch, using a ResNet34 backbone model pre-trained on ImageNet. For the spatio-temporal branch,
the U-TAE architecture incorporates a Temporal self-Attention Encoder (TAE) to explore the spatial and temporal characteristics of the Sentinel-2 time series data,
applying attention masks at different resolutions during decoding. This model allows for the fusion of learned information from both sources.

U-T&T code repository 📁 : https://github.com/IGNF/FLAIR-2-AI-Challenge <br/>
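The official implementation lives in the repository linked above; the following is only a deliberately minimal PyTorch sketch of the two-branch idea (a texture branch on the 5-band aerial patch, a temporal branch on the Sentinel-2 series, fused at the aerial resolution), with made-up layer sizes and a plain temporal average standing in for the U-TAE attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSketch(nn.Module):
    """Toy two-branch fusion, NOT the official U-T&T model."""

    def __init__(self, n_classes: int = 13):
        super().__init__()
        # Texture branch on the aerial patch (U-T&T uses a U-Net with a ResNet34 backbone).
        self.aerial_enc = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # Temporal branch on the Sentinel-2 series (U-T&T uses U-TAE attention, not a mean).
        self.sat_enc = nn.Sequential(nn.Conv2d(10, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, aerial, sat):
        # aerial: (B, 5, 512, 512); sat: (B, T, 10, 40, 40)
        a = self.aerial_enc(aerial)
        s = self.sat_enc(sat.mean(dim=1))  # collapse the time dimension
        s = F.interpolate(s, size=a.shape[-2:], mode="bilinear", align_corners=False)
        return self.head(torch.cat([a, s], dim=1))  # (B, n_classes, 512, 512)

logits = TwoBranchSketch()(torch.randn(1, 5, 512, 512), torch.randn(1, 6, 10, 40, 40))
```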

<font color="#c7254e"><b>IMPORTANT!</b></font> <b>The structure of the current dataset differs from the one that comes with the GitHub repository.</b>
To work with the current dataset, you need to replace the <font color='#D7881C'><em>src/load_data.py</em></font> file with the one provided here.
You also need to add the following content to the <font color='#D7881C'><em>flair-2-config.yml</em></font> file under the <em><b>data</b></em> tag: <br>

```yaml
HF_data_path : " "   # Path to unzipped HF dataset
domains_train : ["D006_2020","D007_2020","D008_2019","D009_2019","D013_2020","D016_2020","D017_2018","D021_2020","D023_2020","D030_2021","D032_2019","D033_2021","D034_2021","D035_2020","D038_2021","D041_2021","D044_2020","D046_2019","D049_2020","D051_2019","D052_2019","D055_2018","D060_2021","D063_2019","D070_2020","D072_2019","D074_2020","D078_2021","D080_2021","D081_2020","D086_2020","D091_2021"]
domains_val : ["D004_2021","D014_2020","D029_2021","D031_2019","D058_2020","D066_2021","D067_2021","D077_2021"]
domains_test : ["D015_2020","D022_2021","D026_2020","D036_2020","D061_2020","D064_2021","D068_2021","D069_2020","D071_2020","D084_2021"]
```
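After editing the config, a quick check like the one below can confirm the new keys are picked up; it assumes the keys sit under the `data` tag as described above and uses PyYAML, which is not part of the official instructions.

```python
import yaml

# Load the edited configuration and inspect the keys added under the `data` tag.
with open("flair-2-config.yml") as f:
    cfg = yaml.safe_load(f)

data_cfg = cfg["data"]
print("HF data path:", data_cfg["HF_data_path"])
print(len(data_cfg["domains_train"]), "train /",
      len(data_cfg["domains_val"]), "val /",
      len(data_cfg["domains_test"]), "test domains")
```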

<br><br>

## Reference
<hr style='margin-top:-1em' />

Please include a citation to the following article if you use the FLAIR dataset:

```bibtex
@inproceedings{garioud2023flair,
  title={FLAIR: a Country-Scale Land Cover Semantic Segmentation Dataset From Multi-Source Optical Imagery},
  author={Anatol Garioud and Nicolas Gonthier and Loic Landrieu and Apolline De Wit and Marion Valette and Marc Poupée and Sébastien Giordano and Boris Wattrelos},
  year={2023},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS) 2023},
  doi={https://doi.org/10.48550/arXiv.2310.13336},
}
```

## Acknowledgment
<hr style='margin-top:-1em' />

This work was performed using HPC/AI resources from GENCI-IDRIS (Grant 2022-A0131013803). This work was supported by the project "Copernicus / FPCUP" of the European Union, by the French Space Agency (CNES) and by Connect by CNES.<br>

## Contact
<hr style='margin-top:-1em' />

If you have any questions, issues or feedback, you can contact us at: ai-challenge@ign.fr

## Dataset license
<hr style='margin-top:-1em' />

The "OPEN LICENCE 2.0/LICENCE OUVERTE" is a licence created by the French government specifically to facilitate the dissemination of open data by the public administration.<br/>
This licence is governed by French law.<br/>