---
license: mit
dataset_info:
  features:
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: class_name
    dtype: string
  splits:
  - name: train
    num_bytes: 3307884193.56
    num_examples: 5560
  download_size: 2737134316
  dataset_size: 3307884193.56
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- biology
- microscopy
- brightfield
pretty_name: Dreambooth Brightfield Microscopy
size_categories:
- 1K<n<10K
task_categories:
- text-to-image
language:
- en
---
# Dataset Card for Dreambooth Brightfield Microscopy

This dataset was created as part of my master's research and thesis, where I am trying to generate realistic-looking
brightfield microscopy images for dataset augmentation.
With the downstream goal of improving cell detection models, increasing the training dataset size of an object detection model
is a necessary step.
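
Each example carries the image together with its prompt and class label (see the metadata above), so the dataset can be loaded directly with the `datasets` library. A minimal sketch, with a placeholder repository id:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with this dataset's actual path on the Hub.
ds = load_dataset("your-username/dreambooth-brightfield-microscopy", split="train")

example = ds[0]
example["image"]       # PIL image containing the brightfield micrograph
example["prompt"]      # text prompt paired with the image
example["class_name"]  # class label of the example
```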

## Dataset Details

### Dataset Description

As part of my research, I previously generated brightfield microscopy images using unconditional diffusion models, curating a
large dataset of SCC brightfield images.
The results of those models were quite impressive, but they still required many training images.
Hence, this dataset was created to test the capabilities of DreamBooth for brightfield microscopy image generation.
I'm testing several configurations:
- Diffusion model architectures (SD-1.5, SD-2.1, SDXL 1.0) -- the last two had to be discontinued due to time and compute constraints
- Training data size (10, 20, 30, 50)
- 4 concepts trained in parallel (cell, cell rug, well edge, debris)
- With and without subject class images, to assess the impact of the class-specific prior preservation loss (see the sketch below)
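
For context, the class-specific prior preservation loss from the DreamBooth paper adds a second denoising term, computed on generated class images, to the usual instance loss. The snippet below is a simplified illustration of that objective, not the exact training code used for this dataset; `unet`, `noisy_latents`, and `prior_loss_weight` are placeholder names following the diffusers DreamBooth example.

```python
import torch
import torch.nn.functional as F

def dreambooth_loss(unet, noisy_latents, timesteps, encoder_hidden_states,
                    target_noise, prior_loss_weight=1.0, with_prior_preservation=True):
    """Combined DreamBooth objective: instance loss + weighted class-prior loss.

    Assumes the batch was built by concatenating instance examples and
    class (prior) examples along the batch dimension.
    """
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample

    if with_prior_preservation:
        # First half of the batch: instance images; second half: class images.
        pred_instance, pred_prior = torch.chunk(model_pred, 2, dim=0)
        target_instance, target_prior = torch.chunk(target_noise, 2, dim=0)

        instance_loss = F.mse_loss(pred_instance, target_instance)
        prior_loss = F.mse_loss(pred_prior, target_prior)
        return instance_loss + prior_loss_weight * prior_loss

    # Without prior preservation: plain denoising loss on the instance batch.
    return F.mse_loss(model_pred, target_noise)
```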

The dataset consists of several classes:
- Real microscopy images, one class for each concept
- Generated images from SD-1.5, one class for each concept
- Generated images from SD-2.1, one class for each concept
- Generated images from SDXL 1.0, one class for each concept
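
The `class_name` column can be used to pull out the images of a single class, for example to build an instance-image folder for a DreamBooth run. A minimal sketch using the `ds` object loaded above; the label string is a placeholder and should be checked against `ds.unique("class_name")`.

```python
from pathlib import Path

# Placeholder label -- inspect ds.unique("class_name") for the real class names.
concept_label = "real_cell"

subset = ds.filter(lambda ex: ex["class_name"] == concept_label)

out_dir = Path("instance_images") / concept_label
out_dir.mkdir(parents=True, exist_ok=True)
for i, example in enumerate(subset):
    # Each image is decoded to a PIL image by the `image` feature type.
    example["image"].save(out_dir / f"{i:04d}.png")
```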

These classes provide the concepts for the DreamBooth model training,
resulting in 8 trained models used to assess the usability of DreamBooth in this domain.
Unfortunately, due to time constraints, I was not able to test many hyperparameter configurations for each model, nor to experiment
much with prompt engineering.
This research serves as a base that others (or I) can build upon.