Dataset metadata: Tasks: Image Classification · Modalities: Image · Languages: English · Size: 10K<n<100K · Libraries: FiftyOne

Commit: Update README.md

README.md CHANGED
@@ -12,7 +12,8 @@ tags:
 - fiftyone
 - image
 - image-classification
-dataset_summary: '
+- domain-adaptation
+dataset_summary: >



@@ -20,13 +21,14 @@ dataset_summary: '



-  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 15588
+  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 15588
+  samples.


   ## Installation


-  If you haven'
+  If you haven't already, install FiftyOne:


   ```bash
@@ -48,9 +50,9 @@ dataset_summary: '

   # Load the dataset

-  # Note: other available arguments include '
+  # Note: other available arguments include 'max_samples', etc

-  dataset = fouh.load_from_hub("
+  dataset = fouh.load_from_hub("Voxel51/Office-Home")


   # Launch the App
@@ -58,8 +60,6 @@ dataset_summary: '
   session = fo.launch_app(dataset)

   ```
-
-  '
 ---

 # Dataset Card for Office-Home
@@ -90,7 +90,7 @@ import fiftyone.utils.huggingface as fouh

 # Load the dataset
 # Note: other available arguments include 'max_samples', etc
-dataset = fouh.load_from_hub("
+dataset = fouh.load_from_hub("Voxel51/Office-Home")

 # Launch the App
 session = fo.launch_app(dataset)
@@ -101,54 +101,22 @@ session = fo.launch_app(dataset)

 ### Dataset Description

-<!-- Provide a longer summary of what this dataset is. -->
+The Office-Home dataset has been created to evaluate domain adaptation algorithms for object recognition using deep learning. It consists of images from 4 different domains: Artistic images, Clip Art, Product images and Real-World images. For each domain, the dataset contains images of 65 object categories found typically in Office and Home settings.



-- **Curated by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
+- **Curated by:** [Jose Eusebio](https://www.linkedin.com/in/jmeusebio)
 - **Language(s) (NLP):** en
 - **License:** other

-### Dataset Sources
-
-<!-- Provide the basic links for the dataset. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
-
-## Uses
-
-<!-- Address questions around how the dataset is intended to be used. -->
-
-### Direct Use
-
-<!-- This section describes suitable use cases for the dataset. -->
-
-[More Information Needed]
-
-### Out-of-Scope Use
-
-<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
+### Dataset Sources

-[More Information Needed]
+- **Homepage:** https://www.hemanthdv.org/officeHomeDataset.html
+- **Paper:** [Deep Hashing Network for Unsupervised Domain Adaptation](https://openaccess.thecvf.com/content_cvpr_2017/papers/Venkateswara_Deep_Hashing_Network_CVPR_2017_paper.pdf)

-## Dataset Structure
-
-<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
-[More Information Needed]

 ## Dataset Creation

-### Curation Rationale
-
-<!-- Motivation for the creation of this dataset. -->
-
-[More Information Needed]
-
 ### Source Data

 <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
@@ -156,75 +124,56 @@ session = fo.launch_app(dataset)
 #### Data Collection and Processing

 <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
+The images in the dataset were collected using a Python web-crawler that crawled through several search engines and online image directories. This initial run searched for around 120 different objects and produced over 100,000 images across the different categories and domains. These images were then filtered to ensure that the desired object was in the picture. Categories were also filtered to make sure that each category had at least a certain number of images. The latest version of the dataset contains around 15,500 images from 65 different categories.

-[More Information Needed]
-
-#### Who are the source data producers?
-
-<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
-[More Information Needed]
-
-### Annotations [optional]

-<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
+| Domain     | Min: # | Min: Size               | Max: Size                  | Acc.               |
+|------------|--------|-------------------------|----------------------------|--------------------|
+| Art        | 15     | 117 \(\times\) 85 pix.  | 4384 \(\times\) 2686 pix.  | 44.99 \(\pm\) 1.85 |
+| Clipart    | 39     | 18 \(\times\) 18 pix.   | 2400 \(\times\) 2400 pix.  | 53.95 \(\pm\) 1.45 |
+| Product    | 38     | 75 \(\times\) 63 pix.   | 2560 \(\times\) 2560 pix.  | 66.41 \(\pm\) 1.18 |
+| Real-World | 23     | 88 \(\times\) 80 pix.   | 6500 \(\times\) 4900 pix.  | 59.70 \(\pm\) 1.04 |

-#### Annotation process
+Caption: Statistics for the Office-Home dataset. Min: # is the minimum number of images of each object for the specified domain. Min: Size and Max: Size are the minimum and maximum image sizes in the domain. Acc. is the classification accuracy using a linear SVM (LIBLINEAR) classifier with 5-fold cross-validation on deep features extracted from the VGG-F deep network.

-<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
+The 65 object categories in the dataset are:

-[More Information Needed]
-
-#### Who are the annotators?
-
-<!-- This section describes the people or systems who created the annotations. -->
-
-[More Information Needed]
-
-#### Personal and Sensitive Information
-
-<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
-[More Information Needed]
-
-## Bias, Risks, and Limitations
-
-<!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
-[More Information Needed]
-
-### Recommendations
-
-<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+```plaintext
+Alarm Clock, Backpack, Batteries, Bed, Bike, Bottle, Bucket, Calculator, Calendar, Candles,
+Chair, Clipboards, Computer, Couch, Curtains, Desk Lamp, Drill, Eraser, Exit Sign, Fan,
+File Cabinet, Flipflops, Flowers, Folder, Fork, Glasses, Hammer, Helmet, Kettle, Keyboard,
+Knives, Lamp Shade, Laptop, Marker, Monitor, Mop, Mouse, Mug, Notebook, Oven, Pan,
+Paper Clip, Pen, Pencil, Postit Notes, Printer, Push Pin, Radio, Refrigerator, Ruler,
+Scissors, Screwdriver, Shelf, Sink, Sneakers, Soda, Speaker, Spoon, Table, Telephone,
+Toothbrush, Toys, Trash Can, TV, Webcam
+```

-Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

-## Citation
+## Citation

 <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

 **BibTeX:**

-[More Information Needed]
-
-**APA:**
-
-[More Information Needed]
-
-## Glossary [optional]
-
-<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
-[More Information Needed]
-
-## More Information [optional]
+```bibtex
+@inproceedings{venkateswara2017deep,
+  title={Deep hashing network for unsupervised domain adaptation},
+  author={Venkateswara, Hemanth and Eusebio, Jose and Chakraborty, Shayok and Panchanathan, Sethuraman},
+  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
+  pages={5018--5027},
+  year={2017}
+}
+```

-[More Information Needed]
+## Fair Use Notice
+This dataset contains some copyrighted material whose use has not been specifically authorized by the copyright owners.
+In an effort to advance scientific research, we make this material available for academic research. We believe this constitutes a fair use of any such copyrighted material as provided for in section 107 of the US Copyright Law.
+In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit for non-commercial research and educational purposes.
+For more information on fair use please click here. If you wish to use copyrighted material on this site or in our dataset for purposes of your own that
+go beyond non-commercial research and academic purposes, you must obtain permission directly from the copyright owner. (adapted from [Christopher Thomas](http://people.cs.pitt.edu/~chris/photographer/))

-## Dataset Card Authors [optional]

-[More Information Needed]

-## Dataset Card Contact
+## Dataset Card Author

-[More Information Needed]
+[Jacob Marks](https://huggingface.co/jamarks)
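A note on the frontmatter fix in this commit: swapping `dataset_summary: '` for `dataset_summary: >` replaces YAML's single-quoted scalar, where every literal apostrophe must be doubled (and a stray `'`, like the one in "haven't", terminates the string early), with a folded block scalar that needs no quoting at all. A minimal illustration of the difference; the `summary` key here is a stand-in for the card's field, not a literal excerpt:

```yaml
# Single-quoted scalar: a literal apostrophe must be written twice ('')
# or the scalar ends early and the document fails to parse.
summary: 'If you haven''t already, install FiftyOne.'

# Folded block scalar: no escaping rules; apostrophes are literal and
# single newlines inside the block are folded into spaces.
summary: >
  If you haven't already, install FiftyOne.
```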
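Since the card describes 4 domains of 65 categories each, a per-domain tally of images is a quick sanity check after download. This is a sketch only: it assumes the commonly distributed `<Domain>/<Category>/<image>.jpg` folder layout of the Office-Home archive, which you should verify against your local copy.

```python
from collections import Counter
from pathlib import Path

def count_per_domain(root: str) -> Counter:
    """Tally .jpg files under each top-level (domain) folder."""
    root_path = Path(root)
    counts = Counter()
    for img in root_path.rglob("*.jpg"):
        # The first path component below the root is the domain folder,
        # e.g. "Art", "Clipart", "Product", "Real World" (assumed layout).
        counts[img.relative_to(root_path).parts[0]] += 1
    return counts
```

On a complete copy of the dataset, the four domain counts should sum to the 15,588 samples the card reports.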
|