---
license: mit
dataset_info:
  features:
  - name: image
    dtype: image
  - name: prompt
    dtype: string
  - name: class_name
    dtype: string
  splits:
  - name: train
    num_bytes: 3307884193.56
    num_examples: 5560
  download_size: 2737134316
  dataset_size: 3307884193.56
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- biology
- microscopy
- brightfield
pretty_name: Dreambooth Brightfield Microscopy
size_categories:
- 1K<n<10K
task_categories:
- text-to-image
language:
- en
---

# Dataset Card for Dreambooth Brightfield Microscopy

This dataset was created as part of my master's research and thesis, where I am trying to generate realistic-looking
brightfield microscopy images for dataset augmentation.
With the downstream goal of improving cell detection, increasing the training dataset size of an object detection model
is a necessary step.

## Dataset Details

### Dataset Description

As part of my research, I previously generated brightfield microscopy images using unconditional diffusion models, curating a
large dataset of SCC brightfield images.
The results of these models were promising, but they still required many training images.
Hence, this dataset was created to test the capabilities of DreamBooth for brightfield microscopy image generation.
I'm testing several configurations:
- Diffusion model architecture: SD-1.5 (SD-2.1 and SDXL 1.0 had to be discontinued due to time and compute constraints)
- Training data size: 10, 20, 30, or 50 images
- Four concepts trained in parallel: cell, cell rug, well edge, debris
- With and without subject class images, to assess the impact of the class-specific prior preservation loss

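Assuming the experimental grid crosses the four training-set sizes with the two prior-preservation settings (the run tuples below are my own illustration, not the actual run names), it can be sketched as:

```python
from itertools import product

# Sketch of the SD-1.5 experimental grid described above: the factors
# come from the card, the (size, preservation) tuple layout is assumed.
train_sizes = [10, 20, 30, 50]       # training images per concept
prior_preservation = [False, True]   # without / with subject class images
runs = list(product(train_sizes, prior_preservation))
print(len(runs))  # 8 runs
```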
The dataset consists of several classes:
- Real microscopy images, one class for each concept
- Generated images from SD-1.5, one class for each concept
- Generated images from SD-2.1, one class for each concept
- Generated images from SDXL 1.0, one class for each concept

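Since every row carries a `class_name` column, splitting the data into per-concept subsets is straightforward. A minimal stdlib sketch, with hypothetical prompts standing in for the dataset's actual ones:

```python
from collections import defaultdict

# Hypothetical rows mirroring the card's schema; the image column is
# omitted and the prompt strings are illustrative only.
rows = [
    {"prompt": "brightfield microscopy image of a cell", "class_name": "cell"},
    {"prompt": "brightfield microscopy image of a cell rug", "class_name": "cell rug"},
    {"prompt": "brightfield microscopy image of a well edge", "class_name": "well edge"},
    {"prompt": "brightfield microscopy image of debris", "class_name": "debris"},
]

# Group rows into per-concept subsets, one per DreamBooth concept.
by_class = defaultdict(list)
for row in rows:
    by_class[row["class_name"]].append(row)
```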
These classes are used as the concepts for the DreamBooth model training,
resulting in 8 trained models to assess the usability of DreamBooth in this domain.
Unfortunately, due to time constraints, I was not able to test many hyperparameter configurations for each model, nor experiment
much with prompt engineering.
This research serves as a base that others (or I) can build upon.