Tasks: Image Segmentation
Modalities: Image
Languages: English
Tags: Cloud Detection, Cloud Segmentation, Remote Sensing Images, Satellite Images, HRC-WHU, CloudSEN12-High
License:
Commit 2ca7972 by XavierJiezou (parent: ca58e21)
Update README.md

This dataset card describes the datasets used in Cloud-Adapter: a collection of high-resolution satellite images and semantic segmentation masks for cloud detection and related tasks.

## Install

```bash
pip install huggingface-hub
```
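
To confirm the install, you can print the library version (a minimal check; any reasonably recent version should work):

```bash
python -c "import huggingface_hub; print(huggingface_hub.__version__)"
```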

## Usage

```bash
# Step 1: Download the datasets
huggingface-cli download --repo-type dataset XavierJiezou/Cloud-Adapter --local-dir data --include hrc_whu.zip
huggingface-cli download --repo-type dataset XavierJiezou/Cloud-Adapter --local-dir data --include gf12ms_whu_gf1.zip
huggingface-cli download --repo-type dataset XavierJiezou/Cloud-Adapter --local-dir data --include gf12ms_whu_gf2.zip
huggingface-cli download --repo-type dataset XavierJiezou/Cloud-Adapter --local-dir data --include cloudsen12_high_l1c.zip
huggingface-cli download --repo-type dataset XavierJiezou/Cloud-Adapter --local-dir data --include cloudsen12_high_l2a.zip
huggingface-cli download --repo-type dataset XavierJiezou/Cloud-Adapter --local-dir data --include l8_biome.zip

# Step 2: Extract the datasets (the archives are downloaded into data/)
cd data
unzip hrc_whu.zip -d hrc_whu
unzip gf12ms_whu_gf1.zip -d gf12ms_whu_gf1
unzip gf12ms_whu_gf2.zip -d gf12ms_whu_gf2
unzip cloudsen12_high_l1c.zip -d cloudsen12_high_l1c
unzip cloudsen12_high_l2a.zip -d cloudsen12_high_l2a
unzip l8_biome.zip -d l8_biome
```
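
The same download-and-extract sequence can also be scripted as a single loop. This is an equivalent sketch of the commands above, assuming a bash shell and that it is run from the directory you started in (not from inside `data/`):

```bash
# Download and extract all six archives in one pass
for f in hrc_whu gf12ms_whu_gf1 gf12ms_whu_gf2 cloudsen12_high_l1c cloudsen12_high_l2a l8_biome; do
  huggingface-cli download --repo-type dataset XavierJiezou/Cloud-Adapter --local-dir data --include "$f.zip"
  unzip "data/$f.zip" -d "data/$f"
done
```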

## Example

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Define the dataset repository
repo_id = "XavierJiezou/Cloud-Adapter"

# Select the ZIP files to download (uncomment the others as needed)
zip_files = [
    "hrc_whu.zip",
    # "gf12ms_whu_gf1.zip",
    # "gf12ms_whu_gf2.zip",
    # "cloudsen12_high_l1c.zip",
    # "cloudsen12_high_l2a.zip",
    # "l8_biome.zip",
]

# Define a directory to extract the datasets into
output_dir = "cloud_adapter_paper_data"

# Ensure the output directory exists
os.makedirs(output_dir, exist_ok=True)

# Step 1: Download and extract each ZIP file
for zip_file in zip_files:
    print(f"Downloading {zip_file}...")
    # Download the ZIP file from the Hugging Face Hub (returns a local cache path)
    zip_path = hf_hub_download(repo_id=repo_id, filename=zip_file, repo_type="dataset")

    # Extract the ZIP file
    extract_path = os.path.join(output_dir, zip_file.replace(".zip", ""))
    with zipfile.ZipFile(zip_path, "r") as zip_ref:
        print(f"Extracting {zip_file} to {extract_path}...")
        zip_ref.extractall(extract_path)

# Step 2: Explore the extracted datasets
# Example: list the contents of the "hrc_whu" dataset
dataset_path = os.path.join(output_dir, "hrc_whu")
train_images_path = os.path.join(dataset_path, "img_dir", "train")
train_annotations_path = os.path.join(dataset_path, "ann_dir", "train")

# Display some files in the training set
print("Training Images:", os.listdir(train_images_path)[:5])
print("Training Annotations:", os.listdir(train_annotations_path)[:5])

# Example: load and display an image and its annotation
from PIL import Image

# Pick an example image and annotation
image_path = os.path.join(train_images_path, os.listdir(train_images_path)[0])
annotation_path = os.path.join(train_annotations_path, os.listdir(train_annotations_path)[0])

# Open the image and the annotation mask
image = Image.open(image_path)
annotation = Image.open(annotation_path)

print("Displaying the image...")
image.show()

print("Displaying the annotation...")
annotation.show()
```
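
For model training, the extracted `img_dir`/`ann_dir` layout maps naturally onto a map-style dataset. Below is a minimal sketch of a PyTorch `Dataset` for `hrc_whu`; it is not part of the release: the class name `HrcWhuDataset` is ours, it assumes PyTorch is installed, and it assumes each annotation is a PNG mask sharing its image's base name (adjust the pairing logic if your extract differs).

```python
import os

from PIL import Image
from torch.utils.data import Dataset


class HrcWhuDataset(Dataset):
    """Pairs each image in img_dir/<split> with its mask in ann_dir/<split>."""

    def __init__(self, root, split="train", transform=None):
        self.img_dir = os.path.join(root, "img_dir", split)
        self.ann_dir = os.path.join(root, "ann_dir", split)
        self.files = sorted(os.listdir(self.img_dir))
        self.transform = transform

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        name = self.files[idx]
        image = Image.open(os.path.join(self.img_dir, name)).convert("RGB")
        # Assumption: masks are PNGs named after the image's base name
        base, _ = os.path.splitext(name)
        mask = Image.open(os.path.join(self.ann_dir, base + ".png"))
        if self.transform is not None:
            image, mask = self.transform(image, mask)
        return image, mask


# Example usage (path matches the extraction script above):
# dataset = HrcWhuDataset("cloud_adapter_paper_data/hrc_whu")
```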

## Source Data

- hrc_whu: https://github.com/dr-lizhiwei/HRC_WHU
- gf12ms_whu: https://github.com/whu-ZSC/GF1-GF2MS-WHU
- cloudsen12_high: https://huggingface.co/datasets/csaybar/CloudSEN12-high
- l8_biome: https://landsat.usgs.gov/landsat-8-cloud-cover-assessment-validation-data

## Citation

```bibtex
@article{hrc_whu,
  title   = {Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors},
  author  = {Zhiwei Li and Huanfeng Shen and Qing Cheng and Yuhao Liu and Shucheng You and Zongyi He},
  journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
  volume  = {150},
  pages   = {197--212},
  year    = {2019}
}

@article{gf12ms_whu,
  title   = {Transferring Deep Models for Cloud Detection in Multisensor Images via Weakly Supervised Learning},
  author  = {Zhu, Shaocong and Li, Zhiwei and Shen, Huanfeng},
  journal = {IEEE Transactions on Geoscience and Remote Sensing},
  volume  = {62},
  pages   = {1--18},
  year    = {2024}
}

@article{cloudsen12_high,
  title   = {CloudSEN12, a global dataset for semantic understanding of cloud and cloud shadow in Sentinel-2},
  author  = {Aybar, Cesar and Ysuhuaylas, Luis and Loja, Jhomira and Gonzales, Karen and Herrera, Fernando and Bautista, Lesly and Yali, Roy and Flores, Angie and Diaz, Lissette and Cuenca, Nicole and others},
  journal = {Scientific Data},
  volume  = {9},
  number  = {1},
  pages   = {782},
  year    = {2022}
}

@article{l8_biome,
  title   = {Cloud detection algorithm comparison and validation for operational Landsat data products},
  author  = {Steve Foga and Pat L. Scaramuzza and Song Guo and Zhe Zhu and Ronald D. Dilley and Tim Beckmann and Gail L. Schmidt and John L. Dwyer and M. {Joseph Hughes} and Brady Laue},
  journal = {Remote Sensing of Environment},
  volume  = {194},
  pages   = {379--390},
  year    = {2017}
}
```

## Contact

For questions, please contact Xavier Jiezou at xuechaozou (at) foxmail (dot) com.