---
language: en
license: unknown
task_categories:
- image-classification
paperswithcode_id: patternnet
pretty_name: PatternNet
tags:
- remote-sensing
- earth-observation
- geospatial
- satellite-imagery
- land-cover-classification
- google-earth
---
# PatternNet
PatternNet is a remote sensing dataset for scene classification and image retrieval.
## Description
PatternNet is a large-scale, high-resolution remote sensing dataset collected for remote sensing image retrieval. It contains 38 classes, each with 800 images of 256×256 pixels. The images were collected from Google Earth imagery or via the Google Maps API for a number of US cities. The classes are listed below; the short code sketch after the list shows how the class names can also be recovered programmatically.
- Total Number of Images: 30400
- Bands: 3 (RGB)
- Image Size: 256×256 pixels
- Land Cover Classes: 38
- Classes: airplane, baseball_field, basketball_court, beach, bridge, cemetery, chaparral, christmas_tree_farm, closed_road, coastal_mansion, crosswalk, dense_residential, ferry_terminal, football_field, forest, freeway, golf_course, harbor, intersection, mobile_home_park, nursing_home, oil_gas_field, oil_well, overpass, parking_lot, parking_space, railway, river, runway, runway_marking, shipping_yard, solar_panel, sparse_residential, storage_tank, swimming_pool, tennis_court, transformer_station, wastewater_treatment_plant
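
The figures above can be checked programmatically once the dataset is loaded with the `datasets` library. This is a minimal sketch, not part of the official documentation: it assumes a `train` split and a `datasets.ClassLabel` column named `label`, which are not stated above.

```python
from datasets import load_dataset

# Minimal sketch: the split name "train" and the ClassLabel column name
# "label" are assumptions about how the Parquet files are exposed.
ds = load_dataset("blanchon/PatternNet", split="train")

labels = ds.features["label"]
print(labels.num_classes)   # expected: 38
print(len(ds))              # expected: 30400 (38 classes x 800 images)
print(labels.names[:3])     # e.g. ['airplane', 'baseball_field', 'basketball_court']
```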
## Usage
To use this dataset, simply call `datasets.load_dataset("blanchon/PatternNet")`:

```python
from datasets import load_dataset

PatternNet = load_dataset("blanchon/PatternNet")
```
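
Individual examples can then be indexed like any other Hugging Face dataset. The sketch below is illustrative only; the split name `train` and the column names `image` and `label` are assumptions, so adjust them to the actual schema if it differs.

```python
from datasets import load_dataset

# Minimal sketch: "train", "image", and "label" are assumed names.
ds = load_dataset("blanchon/PatternNet", split="train")

sample = ds[0]
image = sample["image"]   # PIL image, 256x256 RGB
class_name = ds.features["label"].int2str(sample["label"])
print(image.size, class_name)
```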
## Citation
If you use the PatternNet dataset in your research, please consider citing the following publication:
```bibtex
@inproceedings{li2017patternnet,
  title     = {PatternNet: Visual Pattern Mining with Deep Neural Network},
  author    = {Hongzhi Li and Joseph G. Ellis and Lei Zhang and Shih-Fu Chang},
  booktitle = {International Conference on Multimedia Retrieval},
  year      = {2017},
  doi       = {10.1145/3206025.3206039},
  bibSource = {Semantic Scholar https://www.semanticscholar.org/paper/e7c75e485651bf3ccf37dd8dd39f6665419d73bd}
}
```