---
tags:
- text-to-image
- stable-diffusion
license: apache-2.0
language:
- en
library_name: diffusers
---

# StyleShot Model Card
[**Project Page**](https://styleshot.github.io) **|** [**Paper (arXiv)**](https://arxiv.org/abs/) **|** [**Code**](https://github.com/tencent-ailab/IP-Adapter)
---

## Introduction

We present StyleShot, a generalized plug-and-play style transfer method capable of generating high-quality stylized images that match the desired style of any reference image, without test-time style tuning. To the best of our knowledge, StyleShot is the first work to design a style-aware encoder based on Stable Diffusion, enabling the extraction of features from the reference image that are rich in style expression. StyleShot generalizes not only to other custom models fine-tuned from the same base model, but also to controllable generation with existing controllable tools.

![arch](./fig1.png)

## Models

### StyleShot for SD 1.5

- [ip-adapter_sd15.bin](https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter_sd15.bin): uses the global image embedding from OpenCLIP-ViT-H-14 as the condition
- [ip-adapter_sd15_light.bin](https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter_sd15_light.bin): same as ip-adapter_sd15, but more compatible with the text prompt
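
### Usage

A minimal sketch of how the checkpoint listed above can be attached to a Stable Diffusion 1.5 pipeline through the `diffusers` IP-Adapter loader (the card declares `library_name: diffusers`). The base model ID `runwayml/stable-diffusion-v1-5`, the adapter scale, the prompt, and the reference image path `style.png` are illustrative assumptions, not part of this card; the full StyleShot pipeline with its style-aware encoder is provided in the code repository linked above.

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

# Load an SD 1.5 base pipeline (custom models fine-tuned from SD 1.5 should also work).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the ip-adapter_sd15.bin weights listed above; the matching image encoder
# is loaded automatically from the same repository.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # lower values favor the text prompt, higher values the reference image

# Placeholder path for your style reference image.
style_image = load_image("style.png")

image = pipe(
    prompt="a cat sitting on a bench",
    ip_adapter_image=style_image,
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]
image.save("stylized_cat.png")
```

Adjusting the scale passed to `set_ip_adapter_scale` trades off fidelity to the text prompt against fidelity to the style reference.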