|
---
dataset_info:
  features:
  - name: image
    dtype: image
---
|
# Dataset Card for "pokemon-512-valid" |
|
|
|
A cleaned + upsampled-to-512px-square version of https://www.kaggle.com/datasets/djilax/pkmn-image-dataset, suitable for training high-resolution unconditional image generators. |
|
|
|
Sourced from [madebyollin/pokemon-512](https://huggingface.co/datasets/madebyollin/pokemon-512).
|
|
|
The data is split into 80% train_dataset, 10% test_dataset, and 10% valid_dataset.
|
|
|
I used the following code to create the splits:
|
```python
from datasets import load_dataset, DatasetDict

# Load the original dataset (a single "train" split)
images_dataset = load_dataset("madebyollin/pokemon-512", split="train")

# First split: 80% train, 20% held out for test + validation
train_testvalid = images_dataset.train_test_split(test_size=0.2, shuffle=True, seed=2000)

# Second split: divide the held-out 20% into 10% test and 10% validation
test_valid = train_testvalid["test"].train_test_split(test_size=0.5, shuffle=True, seed=2000)

train_dev_test_dataset = DatasetDict({
    "train": train_testvalid["train"],
    "test": test_valid["train"],
    "validation": test_valid["test"],
})
print(train_dev_test_dataset)

train_dataset = train_dev_test_dataset["train"]
test_dataset = train_dev_test_dataset["test"]
valid_dataset = train_dev_test_dataset["validation"]

# Save each split as a Parquet file
train_dataset.to_parquet("./data/train_dataset.parquet")
test_dataset.to_parquet("./data/test_dataset.parquet")
valid_dataset.to_parquet("./data/valid_dataset.parquet")
```
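
The saved Parquet files can then be loaded back as a `DatasetDict` with the generic `parquet` loader. A small sketch, assuming the `./data/` layout written above:

```python
from datasets import load_dataset

# Rebuild the three splits from the saved Parquet files
splits = load_dataset(
    "parquet",
    data_files={
        "train": "./data/train_dataset.parquet",
        "test": "./data/test_dataset.parquet",
        "validation": "./data/valid_dataset.parquet",
    },
)
print(splits)  # DatasetDict with "train", "test", and "validation" splits
```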
|
|
|
I customized the `train_unconditional.py` script from diffusers to log `validation_loss` during training, and added a module that computes the FID score using the test_dataset.
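
For reference, here is a minimal sketch of how such an FID check can be wired up with `torchmetrics`. This is an illustration, not the exact module from my script: the `pipeline` variable (a trained unconditional diffusers pipeline, e.g. `DDPMPipeline`) and the number of generated samples are assumptions, and `test_dataset` is the split created above.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchvision import transforms

# Convert PIL images to a uint8 tensor batch of shape (N, 3, 299, 299)
to_tensor = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.PILToTensor(),  # keeps uint8 values in [0, 255]
])

def to_batch(pil_images):
    return torch.stack([to_tensor(img.convert("RGB")) for img in pil_images])

fid = FrechetInceptionDistance(feature=2048)

# "Real" statistics from the held-out test split
fid.update(to_batch(test_dataset["image"]), real=True)

# "Fake" statistics from samples drawn from the trained pipeline
# (assumes `pipeline` is the unconditional pipeline produced by training)
fake_images = pipeline(batch_size=16).images
fid.update(to_batch(fake_images), real=False)

print("FID:", fid.compute().item())
```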
|
|