jamarks committed
Commit 5ba5b5a
1 Parent(s): a4efc35

Update README.md

Files changed (1)
  1. README.md +63 -97
README.md CHANGED
@@ -49,7 +49,7 @@ dataset_summary: '
  # Note: other available arguments include ''max_samples'', etc
- dataset = fouh.load_from_hub("jamarks/Stanford-Dogs-Imbalanced")

  # Launch the App
@@ -89,7 +89,7 @@ import fiftyone.utils.huggingface as fouh
  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
- dataset = fouh.load_from_hub("jamarks/Stanford-Dogs-Imbalanced")

  # Launch the App
  session = fo.launch_app(dataset)
@@ -100,8 +100,28 @@ session = fo.launch_app(dataset)
  ### Dataset Description

- <!-- Provide a longer summary of what this dataset is. -->

  - **Curated by:** [More Information Needed]
@@ -110,120 +130,66 @@ session = fo.launch_app(dataset)
  - **Language(s) (NLP):** en
  - **License:** [More Information Needed]

- ### Dataset Sources [optional]

  <!-- Provide the basic links for the dataset. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the dataset is intended to be used. -->
- ### Direct Use
- <!-- This section describes suitable use cases for the dataset. -->
- [More Information Needed]
- ### Out-of-Scope Use
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
- [More Information Needed]

  ## Dataset Structure

- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
- [More Information Needed]
- ## Dataset Creation
- ### Curation Rationale
- <!-- Motivation for the creation of this dataset. -->
- [More Information Needed]
- ### Source Data
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
- #### Data Collection and Processing
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
- [More Information Needed]
- #### Who are the source data producers?
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
- [More Information Needed]
- ### Annotations [optional]
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
- #### Annotation process
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
- [More Information Needed]
- #### Who are the annotators?
- <!-- This section describes the people or systems who created the annotations. -->
- [More Information Needed]
- #### Personal and Sensitive Information
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
- [More Information Needed]
- ## Bias, Risks, and Limitations
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
- [More Information Needed]
- ### Recommendations
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

- ## Citation [optional]

  <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

- [More Information Needed]
- **APA:**
- [More Information Needed]
- ## Glossary [optional]
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
- [More Information Needed]
- ## More Information [optional]
- [More Information Needed]
- ## Dataset Card Authors [optional]
- [More Information Needed]
- ## Dataset Card Contact
- [More Information Needed]
  # Note: other available arguments include ''max_samples'', etc
+ dataset = fouh.load_from_hub("Voxel51/Stanford-Dogs-Imbalanced")

  # Launch the App

  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
+ dataset = fouh.load_from_hub("Voxel51/Stanford-Dogs-Imbalanced")

  # Launch the App
  session = fo.launch_app(dataset)
  ### Dataset Description

+ An imbalanced version of the [Stanford Dogs dataset](http://vision.stanford.edu/aditya86/ImageNetDogs/) designed for testing class imbalance mitigation techniques, including but not limited to synthetic data generation.
+
+ This version of the dataset was constructed by randomly splitting the original dataset into train, val, and test sets with a 60/20/20 split. For 15 randomly chosen classes, we then removed all but 10 of the training examples.
+
+ ```python
+ from fiftyone import ViewField as F
+ import fiftyone.utils.random as four
+ import random
+
+ # Split the dataset into train, val, and test sets
+ # (random_split tags each sample with its assigned split)
+ four.random_split(dataset, {"train": 0.6, "val": 0.2, "test": 0.2})
+ train = dataset.match_tags("train")
+ val = dataset.match_tags("val")
+ test = dataset.match_tags("test")
+ splits_dict = {"train": train, "val": val, "test": test}
+
+ # Get the classes to limit
+ classes = list(dataset.distinct("ground_truth.label"))
+ classes_to_limit = random.sample(classes, 15)
+
+ # Limit the number of train samples for the selected classes
+ for class_name in classes_to_limit:
+     class_samples = train.match(F("ground_truth.label") == class_name)
+     samples_to_keep = class_samples.take(10)
+     samples_to_remove = class_samples.exclude(samples_to_keep)
+     dataset.delete_samples(samples_to_remove)
+ ```
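As a quick sanity check on the construction above, here is a short sketch of how the resulting imbalance could be inspected with FiftyOne. It assumes the dataset is loaded from the Hub as in the snippets above, that the splits are exposed as the sample tags `train`/`val`/`test`, and that labels live in the `ground_truth` field; adjust the names if the dataset stores them differently.

```python
import fiftyone.utils.huggingface as fouh

# Load the dataset (field and tag names assumed as described above)
dataset = fouh.load_from_hub("Voxel51/Stanford-Dogs-Imbalanced")

# Count samples per class in the train split and print the rarest classes first
train = dataset.match_tags("train")
counts = train.count_values("ground_truth.label")
for label, count in sorted(counts.items(), key=lambda kv: kv[1])[:20]:
    print(f"{label}: {count}")
```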
  - **Curated by:** [More Information Needed]
  - **Language(s) (NLP):** en
  - **License:** [More Information Needed]

+ ### Dataset Sources

  <!-- Provide the basic links for the dataset. -->

+ - **Paper:** [More Information Needed]
+ - **Homepage:** [More Information Needed]
 
  ## Uses

+ - Fine-grained visual classification
+ - Class imbalance mitigation strategies (see the sketch below)

+ <!-- Address questions around how the dataset is intended to be used. -->
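For the class-imbalance use case, one common baseline, not specific to this dataset card, is to derive inverse-frequency class weights from the train split and feed them to a weighted loss. A short sketch, again assuming the `ground_truth.label` field and a `train` sample tag:

```python
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("Voxel51/Stanford-Dogs-Imbalanced")

# Per-class counts in the train split (tag name assumed)
counts = dataset.match_tags("train").count_values("ground_truth.label")

# Inverse-frequency weights: n_samples / (n_classes * n_c),
# the same scheme as sklearn's "balanced" class weights
n_samples = sum(counts.values())
n_classes = len(counts)
class_weights = {c: n_samples / (n_classes * n) for c, n in counts.items()}

# The limited classes receive the largest weights
for c, w in sorted(class_weights.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{c}: {w:.2f}")
```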
 
  ## Dataset Structure

+ The following classes only have 10 samples in the train split (see the snippet after the list for one way to view them):
+
+ - Australian_terrier
+ - Saluki
+ - Cardigan
+ - standard_schnauzer
+ - Eskimo_dog
+ - American_Staffordshire_terrier
+ - Lakeland_terrier
+ - Lhasa
+ - cocker_spaniel
+ - Greater_Swiss_Mountain_dog
+ - basenji
+ - toy_terrier
+ - Chihuahua
+ - Walker_hound
+ - Shih-Tzu
+ - Newfoundland

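Here is a short sketch, under the same assumptions about the `ground_truth.label` field and a `train` sample tag as above, of how the under-represented classes listed above could be pulled into a view and browsed in the FiftyOne App:

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh
from fiftyone import ViewField as F

dataset = fouh.load_from_hub("Voxel51/Stanford-Dogs-Imbalanced")

# The classes listed above as having only 10 train samples
limited_classes = [
    "Australian_terrier", "Saluki", "Cardigan", "standard_schnauzer",
    "Eskimo_dog", "American_Staffordshire_terrier", "Lakeland_terrier",
    "Lhasa", "cocker_spaniel", "Greater_Swiss_Mountain_dog", "basenji",
    "toy_terrier", "Chihuahua", "Walker_hound", "Shih-Tzu", "Newfoundland",
]

# Restrict to the train split (tag name assumed) and to the rare classes
rare_view = dataset.match_tags("train").match(
    F("ground_truth.label").is_in(limited_classes)
)
session = fo.launch_app(rare_view)
```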
+ <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

+ ## Citation

  <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

+ ```bibtex
+ @inproceedings{KhoslaYaoJayadevaprakashFeiFei_FGVC2011,
+   author = "Aditya Khosla and Nityananda Jayadevaprakash and Bangpeng Yao and Li Fei-Fei",
+   title = "Novel Dataset for Fine-Grained Image Categorization",
+   booktitle = "First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition",
+   year = "2011",
+   month = "June",
+   address = "Colorado Springs, CO",
+ }
+ ```
 
 
 
+ ## Dataset Card Author

+ [Jacob Marks](https://huggingface.co/jamarks)

+ ## Dataset Contacts

+ aditya86@cs.stanford.edu and bangpeng@cs.stanford.edu