---
annotations_creators: []
language: en
size_categories:
- 10K<n<100K
task_categories:
- object-detection
task_ids: []
pretty_name: lecture_dataset_train
tags:
- fiftyone
- image
- object-detection
dataset_summary: '




  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 16638 samples.


  ## Installation


  If you haven''t already, install FiftyOne:


  ```bash

  pip install -U fiftyone

  ```


  ## Usage


  ```python

  import fiftyone as fo

  import fiftyone.utils.huggingface as fouh


  # Load the dataset

  # Note: other available arguments include ''max_samples'', etc

  dataset = fouh.load_from_hub("Voxel51/Coursera_lecture_dataset_train")


  # Launch the App

  session = fo.launch_app(dataset)

  ```

  '
---

# Dataset Card for Lecture Training Set for Coursera MOOC - Hands-on Data Centric Visual AI

This dataset is the **training dataset for the in-class lectures** of the Hands-on Data Centric Visual AI Coursera course.

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 16638 samples.

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("Voxel51/Coursera_lecture_dataset_train")

# Launch the App
session = fo.launch_app(dataset)
```


## Dataset Details

### Dataset Description

This dataset is a modified subset of the [LVIS dataset](https://www.lvisdataset.org/).

This dataset contains only detection labels, some of which have been artificially perturbed to demonstrate the data centric AI techniques and methodologies covered in the course.

This dataset has the following labels (the snippet after this list shows how to count detections per class):

 - 'jacket'
 - 'coat'
 - 'jean'
 - 'trousers'
 - 'short_pants'
 - 'trash_can'
 - 'bucket'
 - 'flowerpot'
 - 'helmet'
 - 'baseball_cap'
 - 'hat'
 - 'sunglasses'
 - 'goggles'
 - 'doughnut'
 - 'pastry'
 - 'onion'
 - 'tomato'

### Dataset Sources

- **Repository:** https://www.lvisdataset.org/
- **Paper:** https://arxiv.org/abs/1908.03195

## Uses

The labels in this dataset have been perturbed to illustrate data centric AI techniques for the Hands-on Data Centric Visual AI Coursera MOOC.
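
Because some annotations are intentionally wrong, a natural exercise is to surface the most suspicious labels. The sketch below uses the FiftyOne Brain's mistakenness method; it assumes you have already added your own model's predictions in a field named `predictions` (not shipped with this dataset) and that the annotated detections live in a field named `ground_truth`:

```python
import fiftyone as fo
import fiftyone.brain as fob
import fiftyone.utils.huggingface as fouh

dataset = fouh.load_from_hub("Voxel51/Coursera_lecture_dataset_train")

# "predictions" is assumed to hold your own model's detections;
# "ground_truth" is assumed to be this dataset's annotated label field
fob.compute_mistakenness(dataset, "predictions", label_field="ground_truth")

# Review the samples whose annotations are most likely to be wrong
view = dataset.sort_by("mistakenness", reverse=True)
session = fo.launch_app(view)
```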


## Dataset Structure

Each image in the dataset comes with detailed annotations in FiftyOne detection format. A typical annotation looks like this:

```python
<Detection: {
    'id': '66a2f24cce2f9d11d98d39f3',
    'attributes': {},
    'tags': [],
    'label': 'trousers',
    'bounding_box': [
        0.5562343750000001,
        0.4614166666666667,
        0.1974375,
        0.29300000000000004,
    ],
    'mask': None,
    'confidence': None,
    'index': None,
}>
```
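
The `bounding_box` is stored as `[top-left-x, top-left-y, width, height]` in coordinates relative to the image size, so converting to pixels only requires the image dimensions. A minimal sketch (the 640x480 dimensions are just an example):

```python
# Convert a FiftyOne relative bounding box to absolute pixel coordinates
def to_pixels(bounding_box, img_width, img_height):
    x, y, w, h = bounding_box
    return (
        round(x * img_width),
        round(y * img_height),
        round(w * img_width),
        round(h * img_height),
    )

# The 'trousers' box from the example above, on a hypothetical 640x480 image
box = [0.5562343750000001, 0.4614166666666667, 0.1974375, 0.29300000000000004]
print(to_pixels(box, 640, 480))  # -> (356, 221, 126, 141)
```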

## Dataset Creation

### Curation Rationale

These labels were selected because the corresponding objects are easily confused with one another by a model (for example, 'jacket' vs. 'coat' or 'doughnut' vs. 'pastry'), which makes them a good choice for demonstrating data centric AI techniques.


### Source Data

This is a subset of the [LVIS dataset](https://www.lvisdataset.org/).

## Citation


**BibTeX:**

```bibtex
@inproceedings{gupta2019lvis,
  title={{LVIS}: A Dataset for Large Vocabulary Instance Segmentation},
  author={Gupta, Agrim and Dollar, Piotr and Girshick, Ross},
  booktitle={Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition},
  year={2019}
}
```