## Style transfer quantitative evaluation using Deception Score
### How to calculate Deception Score:
1. Run `./download_evaluation_data.py` to download the weights for the artist classification model.
2. Set `results_dir` variable in `eval_deception_score.py:92` to point to the directory with stylized images.
All images generated by one method must be in one directory.
Image filenames must be in the format `"{content_name}_stylized_{artist_name}.jpg"`, for example: `"Places366_val_00000510_stylized_vincent-van-gogh.jpg"` (see the filename-check sketch after this list).
3. Run `./run_deception_score_vgg_16_wikiart.sh`.
4. Read the results in the log file in the `./logs` directory.
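A minimal sketch for checking the filename convention from step 2. `RESULTS_DIR` and the regular expression are illustrative assumptions, not part of the evaluation scripts:

```python
# Hypothetical sanity check (not part of this repo): verify that every file in
# the results directory matches "{content_name}_stylized_{artist_name}.jpg".
import os
import re

RESULTS_DIR = "path/to/stylized_images"  # the directory set as results_dir in step 2
PATTERN = re.compile(r"^(?P<content_name>.+)_stylized_(?P<artist_name>[^_]+)\.jpg$")

bad = []
for fname in sorted(os.listdir(RESULTS_DIR)):
    match = PATTERN.match(fname)
    if match:
        print(f"{fname}: content={match['content_name']}, artist={match['artist_name']}")
    else:
        bad.append(fname)

if bad:
    print(f"{len(bad)} file(s) do not follow the expected naming scheme: {bad}")
```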
### How to evaluate your own model:
- Download the validation sets from MSCOCO ([val2017.zip](http://images.cocodataset.org/zips/val2017.zip)) and Places365 ([val_large.tar](http://data.csail.mit.edu/places/places365/val_large.tar)).
- To compare with the deception score reported in the paper, run your stylization model on the content images listed in [eval_paths_700_val.json](evaluation_data/eval_paths_700_val.json) (a minimal driver sketch follows below).
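A minimal sketch of such a driver, assuming `eval_paths_700_val.json` holds a flat list of content image paths and that your model exposes a single stylization call; check the JSON's actual layout and substitute your own inference code:

```python
# Hypothetical driver (adapt to your model): stylize every evaluation content
# image and save it under the naming scheme expected by eval_deception_score.py.
# Assumptions: the JSON is a flat list of image paths relative to CONTENT_ROOT,
# and the identity step below stands in for your model's inference call.
import json
import os
from PIL import Image

CONTENT_ROOT = "path/to/val_images"   # where val2017 / val_large were extracted
OUT_DIR = "results/my_model"          # the results_dir used in step 2 above
ARTIST = "vincent-van-gogh"           # artist/style your model imitates

with open("evaluation_data/eval_paths_700_val.json") as f:
    content_paths = json.load(f)

os.makedirs(OUT_DIR, exist_ok=True)
for rel_path in content_paths:
    content_name = os.path.splitext(os.path.basename(rel_path))[0]
    img = Image.open(os.path.join(CONTENT_ROOT, rel_path)).convert("RGB")
    stylized = img                    # replace with your model's stylization call
    stylized.save(os.path.join(OUT_DIR, f"{content_name}_stylized_{ARTIST}.jpg"))
```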