pretty_name: DiffusionDB-Pixelart
size_categories:
- n>1T
source_datasets:
- modified
tags:
- stable diffusion
- prompt engineering
- **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb)
- **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb)
- **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896)

### Dataset Summary

The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities.

The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.

### Subset

DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs. This pixel art version is derived from DiffusionDB 2M and contains only 2,000 examples.

|Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
|:--|--:|--:|--:|--:|--:|
|DiffusionDB-pixelart|2k|~1.5k|~1.6GB|`images/`|`metadata.parquet`|

##### Key Facts

1. The two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images. DiffusionDB Large is a superset of DiffusionDB 2M.
2. Images in DiffusionDB 2M are stored in `png` format.

## Dataset Structure

We use a modularized file structure to distribute DiffusionDB. The 2k images in DiffusionDB-pixelart are split into folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters.

```bash
# DiffusionDB 2M
[...]
└── metadata.parquet
```

These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image in DiffusionDB-pixelart is a `PNG` file. The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.

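As a sketch of that filename-to-prompt mapping, the snippet below builds one entry the way a part's JSON file pairs a UUID-named image with its prompt and hyperparameters. The field names (`p`, `se`, `c`, `st`) are illustrative assumptions, not the dataset's confirmed schema:

```python
import json
import uuid

# Each image gets a unique UUID Version 4 filename.
image_name = f"{uuid.uuid4()}.png"

# Hypothetical entry: the keys below are illustrative, not the
# dataset's confirmed schema.
part_json = {
    image_name: {
        "p": "a pixel art castle at sunset",  # prompt
        "se": 42,                             # seed
        "c": 7.0,                             # CFG scale
        "st": 50,                             # sampling steps
    }
}

# Round-trip through JSON, as if reading part-000001/part-000001.json.
mapping = json.loads(json.dumps(part_json))
prompt = mapping[image_name]["p"]
```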
### Data Instances
### Dataset Metadata

To help you easily access prompts and other attributes of images without downloading all the Zip files, we include a metadata table, `metadata.parquet`, for DiffusionDB-pixelart.

The shape of `metadata.parquet` is (2000000, 13). Each row represents an image. We store the table in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.

Below are three random rows from `metadata.parquet`.
#### Metadata Schema

`metadata.parquet` schema:

|Column|Type|Description|
|:---|:---|:---|

### Data Splits

For DiffusionDB-pixelart, we split the 2k images into folders, where each folder contains 1,000 images and a JSON file.

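That split amounts to simple chunking; below is a minimal sketch, with stand-in filenames and folder names following the `part-0xxxxx` pattern:

```python
# Assign 2,000 stand-in image filenames to folders of 1,000 images each.
filenames = [f"{i:04d}.png" for i in range(2000)]

folders = {}
for i, name in enumerate(filenames):
    part = f"part-{i // 1000 + 1:06d}"  # part-000001, part-000002, ...
    folders.setdefault(part, []).append(name)

# Result: two folders, 1,000 images each.
```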
### Loading Data Subsets

DiffusionDB is large! With our modularized file structure, you can easily load a desired number of images together with their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.

#### Method 1: Using Hugging Face Datasets Loader

```python
import numpy as np
from datasets import load_dataset

# Load the dataset with the `large_random_1k` subset
dataset = load_dataset('jainr3/diffusiondb-pixelart', 'large_random_1k')
```
#### Method 2: Using the PoloClub Downloader

The Python code in this repository is available under the MIT License.

### Contributions

If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact the original author [Jay Wang](https://zijie.wang).