---
license: cc-by-nc-4.0
dataset_info:
  features:
  - name: image
    dtype: image
  - name: annotation
    dtype: image
  splits:
  - name: train
    num_bytes: 8683872818.848
    num_examples: 45728
  - name: val
    num_bytes: 1396718238.836
    num_examples: 15358
  - name: test
    num_bytes: 1516829621.65
    num_examples: 4623
  download_size: 12492798567
  dataset_size: 11597420679.334
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
---

# Dataset Card for Cloud-Adapter

This dataset card describes the data used by Cloud-Adapter: a collection of high-resolution satellite images paired with semantic segmentation masks for cloud detection and related remote sensing tasks.

## Uses

```python
# Step 1: Install the `datasets` library if it is not already installed:
# pip install datasets

from datasets import load_dataset

# Step 2: Load the Cloud-Adapter dataset from the Hugging Face Hub
dataset = load_dataset("XavierJiezou/Cloud-Adapter")

# Step 3: Explore the dataset splits
# The dataset contains three splits: "train", "val", and "test"
print("Available splits:", dataset.keys())

# Step 4: Access individual examples
# Each example contains an image and a corresponding annotation (segmentation mask)
train_data = dataset["train"]

# View the number of samples in the training set
print("Number of training samples:", len(train_data))

# Step 5: Access a single data sample
# Each data sample has two keys: "image" and "annotation"
sample = train_data[0]

# Step 6: Display the image and annotation
# Both fields are decoded as PIL images, so they can be shown directly
image = sample["image"]
annotation = sample["annotation"]

# Display the image
print("Displaying the image...")
image.show()

# Display the annotation
print("Displaying the segmentation mask...")
annotation.show()

# Step 7: Use in a machine learning pipeline
# You can integrate this dataset into your ML pipeline by iterating over the splits
for sample in train_data:
    image = sample["image"]
    annotation = sample["annotation"]
    # Process or feed `image` and `annotation` into your ML model here

# Additional Info: Dataset splits
# - dataset["train"]: Training split
# - dataset["val"]: Validation split
# - dataset["test"]: Testing split
```
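
If you plan to train a segmentation model on these splits, one common pattern is to wrap a split in a PyTorch `DataLoader` with a small collate function that converts the PIL images and masks into tensors. The sketch below is illustrative rather than part of the dataset: it assumes PyTorch, NumPy, and Pillow are installed, and the 512×512 resize and 1/255 scaling are placeholder choices, not values prescribed by Cloud-Adapter.

```python
# A minimal PyTorch integration sketch (assumes torch, numpy, and Pillow are installed).
# The 512x512 resize and the simple 1/255 scaling are illustrative placeholders.
import numpy as np
import torch
from PIL import Image
from torch.utils.data import DataLoader
from datasets import load_dataset

dataset = load_dataset("XavierJiezou/Cloud-Adapter")

def collate_fn(batch):
    images, masks = [], []
    for sample in batch:
        img = sample["image"].convert("RGB").resize((512, 512), Image.BILINEAR)
        ann = sample["annotation"].resize((512, 512), Image.NEAREST)
        images.append(torch.from_numpy(np.array(img)).permute(2, 0, 1).float() / 255.0)
        masks.append(torch.from_numpy(np.array(ann)).long())
    return torch.stack(images), torch.stack(masks)

train_loader = DataLoader(dataset["train"], batch_size=8, shuffle=True, collate_fn=collate_fn)

for images, masks in train_loader:
    # images: (B, 3, 512, 512) float tensor, masks: (B, 512, 512) long tensor
    break
```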

## Dataset Structure

The dataset contains the following splits:
- `train`: Training images and corresponding segmentation masks.
- `val`: Validation images and corresponding segmentation masks.
- `test`: Testing images and corresponding segmentation masks.

Each data point includes:
- `image`: The input satellite image (PNG or JPG format).
- `annotation`: The segmentation mask (PNG format).
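
To see what a single record looks like, you can load just one split and inspect the image size and the class indices that appear in its mask. NumPy is used here purely for convenience; the specific class index values vary across the sub-datasets and are not enumerated on this card.

```python
import numpy as np
from datasets import load_dataset

# Load only the validation split to keep the example lightweight.
val_data = load_dataset("XavierJiezou/Cloud-Adapter", split="val")

sample = val_data[0]
image, annotation = sample["image"], sample["annotation"]

print("Image size (W, H):", image.size, "mode:", image.mode)
print("Mask size  (W, H):", annotation.size, "mode:", annotation.mode)

# The unique values in the mask are the class indices used for segmentation.
print("Class indices in this mask:", np.unique(np.array(annotation)))
```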

## Dataset Creation

### Curation Rationale

This dataset was created to facilitate reproduction of the Cloud-Adapter experiments.

### Source Data

#### Data Collection and Processing

The dataset combines multiple sub-datasets, each processed to ensure consistency in format and organization:
- Images and annotations were organized into `train`, `val`, and `test` splits.
- Annotations were verified for accuracy and class consistency (a sketch of one such check is shown below).
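
As an illustration only, and not the exact procedure used during curation, a class-consistency check can be as simple as scanning a sample of masks per split and reporting every class index encountered:

```python
import numpy as np
from datasets import load_dataset

def collect_class_indices(split_name, max_samples=100):
    """Scan up to `max_samples` masks in a split and return the sorted class indices seen."""
    split = load_dataset("XavierJiezou/Cloud-Adapter", split=split_name)
    seen = set()
    for sample in split.select(range(min(max_samples, len(split)))):
        seen.update(np.unique(np.array(sample["annotation"])).tolist())
    return sorted(seen)

for name in ["train", "val", "test"]:
    print(name, collect_class_indices(name))
```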

#### Who are the source data producers?

The dataset combines data from various remote sensing sources. Specific producers are as follows:
- WHU (gf12ms, hrc)
- CloudSEN12 dataset
- L8 Biome dataset

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

Xavier Jiezou. (2024). *Cloud-Adapter: A Semantic Segmentation Dataset for Remote Sensing Cloud Detection*. Retrieved from https://huggingface.co/datasets/XavierJiezou/Cloud-Adapter.

## Glossary

[More Information Needed]

## More Information

[More Information Needed]

## Dataset Card Authors

This dataset card was authored by Xavier Jiezou.

## Dataset Card Contact

For questions, please contact Xavier Jiezou at xuechaozou (at) foxmail (dot) com.