---
task_categories:
- image-segmentation
- image-classification
language:
- en
tags:
- agritech
- hyperspectral
- spectroscopy
- fruit
- sub-class classification
- detection
size_categories:
- 10K<n<100K
license: mit
---
# Living Optics Orchard Dataset

## Overview
This dataset contains 435 images captured in one of the UK's largest orchards using the Living Optics Camera.
The data consists of RGB images, sparse spectral samples and instance segmentation masks.
The dataset is derived from 44 unique raw files corresponding to 435 frames, so multiple frames can originate from the same raw file. This structure calls for a split strategy that avoids data leakage: to ensure robust evaluation, the dataset was divided with an 8:2 split performed at the raw-file level rather than the frame level. This guarantees that all frames associated with a given raw file are confined to either the training set or the test set, eliminating the risk of overlapping information between the two sets.

The dataset contains 3,785 instances of Royal Gala Apples, 2,523 instances of Pears, and 73 instances of Cox Apples, for a total of 6,381 labelled instances.
The spectra which do not lie within a labelled segmentation mask can be used for negative sampling when training classifiers.
Additional unlabelled data is available upon request.
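For reference, the raw-file-level split described above can be reproduced with a few lines of standard Python. This is only a minimal sketch: the raw file names below are placeholders, not the dataset's actual naming scheme, and the published train/test split should be used for benchmarking.

```python
import random

# Placeholder names for the 44 raw capture files; each raw file yields several frames.
raw_files = [f"capture_{i:03d}" for i in range(44)]

random.seed(0)
random.shuffle(raw_files)

# 8:2 split performed on raw files rather than frames, so every frame from a
# given raw file lands in exactly one subset and no information leaks between
# the training and test sets.
n_train = int(0.8 * len(raw_files))
train_files = set(raw_files[:n_train])
test_files = set(raw_files[n_train:])

def subset_for(parent_raw_file: str) -> str:
    """Assign a frame to the subset of its parent raw file."""
    return "train" if parent_raw_file in train_files else "test"
```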
## Classes
The training dataset contains 3 classes:
- 🍎 cox apple - 3,605 total spectral samples
- 🍎 royal gala apple - 13,282 total spectral samples
- 🍐 pear - 34,398 total spectral samples
The remaining 1,855,755 spectra are unlabelled and can be considered a single "background" class.
## Requirements

The usage example below relies on the Living Optics SDK and data tools (the `lo.sdk` and `lo.data` Python packages).
## Download instructions

### Command line

```sh
mkdir -p hyperspectral-orchard
huggingface-cli download LivingOptics/hyperspectral-orchard --repo-type dataset --local-dir hyperspectral-orchard
```
### Python

```python
from huggingface_hub import snapshot_download

dataset_path = snapshot_download(repo_id="LivingOptics/hyperspectral-orchard", repo_type="dataset")
print(dataset_path)
```
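If you would rather place the files where the usage example below expects them (instead of the default Hugging Face cache), `snapshot_download` also accepts a `local_dir` argument; the path here simply mirrors the one assumed in that example.

```python
import os.path as op
from huggingface_hub import snapshot_download

# Download straight into the folder used by the usage example below.
dataset_path = snapshot_download(
    repo_id="LivingOptics/hyperspectral-orchard",
    repo_type="dataset",
    local_dir=op.expanduser("~/Downloads/hyperspectral-orchard"),
)
print(dataset_path)
```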
## Usage

```python
import os.path as op
import numpy.typing as npt
import matplotlib.pyplot as plt
from typing import List, Dict, Generator
from lo.data.tools import Annotation, LODataItem, LOJSONDataset, draw_annotations
from lo.data.dataset_visualisation import get_object_spectra, plot_labelled_spectra
from lo.sdk.api.acquisition.io.open import open as lo_open

# Load the dataset
path_to_download = op.expanduser("~/Downloads/hyperspectral-orchard")
dataset = LOJSONDataset(path_to_download)

# Get the training data as a list of LODataItem objects
training_data: List[LODataItem] = dataset.load("train")

# Inspect the first few items
lo_data_item: LODataItem
for lo_data_item in training_data[:3]:
    draw_annotations(lo_data_item)

    ann: Annotation
    for ann in lo_data_item.annotations:
        print(ann.class_name, ann.category, ann.subcategories)

# Plot the spectra for each class
fig, ax = plt.subplots(1)
object_spectra_dict = {}
class_numbers_to_labels = {0: "background_class"}
for lo_data_item in training_data:
    object_spectra_dict, class_numbers_to_labels = get_object_spectra(
        lo_data_item, object_spectra_dict, class_numbers_to_labels
    )

plot_labelled_spectra(object_spectra_dict, class_numbers_to_labels, ax)
plt.show()
```
See our Spatial Spectral ML project for an example of how to train and run a segmentation and spectral classification algorithm using this dataset.
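As a lightweight starting point before the full Spatial Spectral ML pipeline, a per-spectrum classifier can be fitted directly on the spectra gathered above, using the unlabelled background spectra as the negative class. This is only a sketch under assumptions: it assumes `object_spectra_dict` (built by `get_object_spectra` in the usage example) maps each class number to an array-like stack of spectra, and it uses scikit-learn, which is not a dependency of the dataset. The random split below is for illustration; a leakage-free evaluation should follow the raw-file split described in the overview.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stack spectra and class numbers into flat arrays.
# Class 0 is the unlabelled "background" class and serves as the negative class;
# with ~1.9M background spectra you may want to subsample it for speed.
X, y = [], []
for class_number, spectra in object_spectra_dict.items():
    spectra = np.asarray(spectra)
    if len(spectra) == 0:
        continue
    X.append(spectra)
    y.append(np.full(len(spectra), class_number))
X = np.concatenate(X)
y = np.concatenate(y)

# Illustrative random split; prefer the raw-file-level split for real evaluation.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("Validation accuracy:", clf.score(X_val, y_val))
```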