CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders
Abstract
This work presents CLIPDraw, an algorithm that synthesizes novel drawings from natural language input. CLIPDraw does not require any training; rather, a pre-trained CLIP language-image encoder is used as a metric for maximizing similarity between the given description and a generated drawing. Crucially, CLIPDraw operates over vector strokes rather than pixel images, a constraint that biases drawings towards simpler, human-recognizable shapes. Results compare CLIPDraw against other synthesis-through-optimization methods and highlight several interesting behaviors of CLIPDraw, such as satisfying ambiguous text in multiple ways, reliably producing drawings in diverse artistic styles, and scaling from simple to complex visual representations as stroke count is increased. Code for experimenting with the method is available at: https://colab.research.google.com/github/kvfrans/clipdraw/blob/main/clipdraw.ipynb
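The optimization loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: it uses OpenAI's CLIP (ViT-B/32) to score image-text similarity, while `render_strokes` is a hypothetical stand-in for a differentiable vector-graphics rasterizer (such as diffvg) that turns stroke parameters into an RGB image; the stroke counts, learning rate, and step count are illustrative assumptions.

```python
import torch
import clip

# Sketch of a CLIPDraw-style loop: optimize stroke parameters so that the
# rendered drawing's CLIP embedding matches the text prompt's embedding.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

prompt = "a drawing of a cat"
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize([prompt]).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# Randomly initialized stroke parameters; these are the only variables optimized.
num_strokes = 64
points = torch.rand(num_strokes, 4, 2, device=device, requires_grad=True)  # Bezier control points
widths = torch.rand(num_strokes, 1, device=device, requires_grad=True)     # stroke widths
colors = torch.rand(num_strokes, 4, device=device, requires_grad=True)     # RGBA colors

optimizer = torch.optim.Adam([points, widths, colors], lr=0.1)

for step in range(250):
    optimizer.zero_grad()

    # Hypothetical differentiable renderer: strokes -> (1, 3, 224, 224) image tensor.
    image = render_strokes(points, widths, colors, size=224)

    # Encode the drawing and maximize cosine similarity with the prompt
    # (minimize the negative similarity).
    image_features = model.encode_image(image)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    loss = -(image_features @ text_features.T).mean()

    loss.backward()
    optimizer.step()
```

The released Colab notebook additionally applies random image augmentations to the rendered drawing before CLIP encoding, which discourages adversarial, non-human-recognizable solutions.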