Accepted to AAAI 2025
OmniCount-191 is a first-of-its-kind dataset for multi-label object counting, with point, bounding-box, and VQA annotations.
Dataset Details
Dataset Description
OmniCount-191 caters to a broad spectrum of visual categories, with multiple instances and multiple classes per image. Existing counting datasets focus on single object categories such as humans or vehicles and therefore fall short for multi-label object counting. Multi-class datasets such as MS COCO exist, but their utility for counting is limited because objects appear only sparsely. To address this gap, we created a new dataset with 30,230 images spanning 191 diverse categories, including kitchen utensils, office supplies, vehicles, and animals. With per-image instance counts ranging from 1 to 160 and an average count of 10, the dataset bridges the existing void and establishes a benchmark for assessing counting models in varied scenarios.
- Curated by: Anindya Mondal, Sauradip Nag, Xiatian Zhu, Anjan Dutta
- License: OpenRAIL
Dataset Sources
- Paper: OmniCount: Multi-label Object Counting with Semantic-Geometric Priors
- Demo: https://mondalanindya.github.io/OmniCount/
Uses
Direct Use
Object Counting
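A minimal loading sketch using the Hugging Face `datasets` library, assuming the dataset is published on the Hub; the repository id and field names below are placeholders for illustration and may differ from the released schema.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub id of OmniCount-191.
ds = load_dataset("mondalanindya/OmniCount-191", split="train")

# Field names are assumptions for illustration; consult the released schema
# for the actual keys (image, per-category points, boxes, VQA annotations).
example = ds[0]
print(example.keys())
```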
Out-of-Scope Use
Visual Question Answering (VQA), Object Detection (OD)
Data Collection and Processing
The data collection process for OmniCount-191 involved a team of 13 members who manually curated images from the web, released under Creative Commons (CC) licenses. The images were sourced using relevant keywords such as “Aerial Images”, “Supermarket Shelf”, “Household Fruits”, and “Many Birds and Animals”. Initially, 40,000 images were considered, from which 30,230 images were selected based on the following criteria:
- Object instances: Each image must contain at least five object instances, aiming to challenge object enumeration in complex scenarios;
- Image quality: High-resolution images were selected to ensure clear object identification and counting;
- Severe occlusion: We excluded images with significant occlusion to maintain accuracy in object counting;
- Object dimensions: Images with objects too small or too distant for accurate counting or annotation were removed, ensuring all objects are adequately sized for analysis.

The selected images were annotated using the Labelbox annotation platform. A sketch of how the selection criteria above could be expressed in code is shown below.
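The following is a purely illustrative sketch of the selection criteria; the helper predicate and its thresholds are hypothetical and are not the actual curation code.

```python
# Hypothetical illustration of the image-selection criteria; the thresholds
# below are placeholders, not the values used during curation.
def keep_image(num_instances: int, width: int, height: int,
               occlusion_ratio: float, min_object_area: float) -> bool:
    if num_instances < 5:            # at least five object instances per image
        return False
    if width < 640 or height < 480:  # assumed floor for "high-resolution"
        return False
    if occlusion_ratio > 0.5:        # assumed cutoff for "severe occlusion"
        return False
    if min_object_area < 16 * 16:    # assumed minimum object size in pixels
        return False
    return True
```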
Statistics
The OmniCount-191 benchmark presents images with small, densely packed objects from multiple classes, reflecting real-world object counting scenarios. The dataset comprises 30,230 images with dimensions averaging 700 × 580 pixels. Each image contains an average of 10 objects, for a total of 302,300 objects, with individual images ranging from 1 to 160 objects. To ensure diversity, the dataset is split into training and testing sets with no overlap in object categories: 118 categories for training and 73 for testing, corresponding to an approximately 60%-40% split. This results in 26,978 images for training and 3,252 for testing.
Splits
We have prepared dedicated splits within the OmniCount-191 dataset to facilitate the assessment of object counting models under zero-shot and few-shot learning conditions. Please refer to the technical report (Sec. 9.1, 9.2) for more details.
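A hedged sketch of checking the disjoint train/test category split described above; the repository id and the `categories` field are assumptions for illustration, not the released schema.

```python
from datasets import load_dataset

# Placeholder repo id and field name; adjust to the released dataset files.
train = load_dataset("mondalanindya/OmniCount-191", split="train")
test = load_dataset("mondalanindya/OmniCount-191", split="test")

train_cats = {c for ex in train for c in ex["categories"]}
test_cats = {c for ex in test for c in ex["categories"]}

# The 118 training and 73 testing categories do not overlap.
assert train_cats.isdisjoint(test_cats)
```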
Citation
BibTeX:
```bibtex
@article{mondal2024omnicount,
  title={OmniCount: Multi-label Object Counting with Semantic-Geometric Priors},
  author={Mondal, Anindya and Nag, Sauradip and Zhu, Xiatian and Dutta, Anjan},
  journal={arXiv preprint arXiv:2403.05435},
  year={2024}
}
```
Dataset Card Authors
Anindya Mondal, Sauradip Nag, Xiatian Zhu, Anjan Dutta
Dataset Card Contact
{a[dot]mondal, s[dot]nag, xiatian[dot]zhu, anjan[dot]dutta}[at]surrey[dot]ac[dot]uk
License
Object counting has legitimate commercial applications in urban planning, event logistics, and consumer behavior analysis. However, the same technology also enables human surveillance, and actors may misappropriate it, intentionally or not, for harmful purposes. Downstream deployments of our research that monitor individuals without proper legal safeguards and ethical constraints should therefore be viewed with skepticism. To mitigate foreseeable misuse and uphold privacy and civil liberties, we release all source code under the Open RAIL-S License, which expressly prohibits exploitative applications through binding use restrictions and liabilities.