arXiv:2106.14843

CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders

Published on Jun 28, 2021
Authors: Kevin Frans, L.B. Soros, Olaf Witkowski

Abstract

This work presents CLIPDraw, an algorithm that synthesizes novel drawings from natural-language input. CLIPDraw requires no training; instead, a pre-trained CLIP language-image encoder serves as a metric for maximizing the similarity between the given description and a generated drawing. Crucially, CLIPDraw operates over vector strokes rather than pixel images, a constraint that biases drawings toward simple, human-recognizable shapes. Results compare CLIPDraw with other synthesis-through-optimization methods and highlight several interesting behaviors of CLIPDraw, such as satisfying ambiguous text in multiple ways, reliably producing drawings in diverse artistic styles, and scaling from simple to complex visual representations as the stroke count increases. Code for experimenting with the method is available at: https://colab.research.google.com/github/kvfrans/clipdraw/blob/main/clipdraw.ipynb
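
As a concrete illustration of the synthesis-through-optimization loop the abstract describes, here is a minimal sketch, assuming OpenAI's `clip` package. The paper optimizes Bezier-curve strokes through the diffvg differentiable rasterizer; the toy `render_strokes` below is a soft line-segment renderer standing in for it, and the prompt, stroke count, step count, and learning rate are illustrative assumptions rather than the paper's settings.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from torchvision.transforms import Normalize

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()                      # avoid fp16/fp32 mixing on GPU
for p in model.parameters():               # CLIP stays frozen; only the
    p.requires_grad_(False)                # stroke parameters are optimized

clip_norm = Normalize((0.48145466, 0.4578275, 0.40821073),
                      (0.26862954, 0.26130258, 0.27577711))

prompt = "a drawing of a cat"              # example description
with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize([prompt]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Each stroke: two endpoints in [0, 1]^2, a width, and an RGBA color.
num_strokes = 64
points = torch.rand(num_strokes, 2, 2, device=device, requires_grad=True)
widths = torch.rand(num_strokes, device=device, requires_grad=True)
colors = torch.rand(num_strokes, 4, device=device, requires_grad=True)

def render_strokes(points, widths, colors, size=224):
    """Toy differentiable renderer: splat each stroke as a soft line
    segment onto a white canvas (a crude stand-in for diffvg)."""
    axis = torch.linspace(0, 1, size, device=points.device)
    yy, xx = torch.meshgrid(axis, axis, indexing="ij")
    grid = torch.stack([xx, yy], dim=-1)               # (H, W, 2)
    canvas = torch.ones(size, size, 3, device=points.device)
    for p, w, c in zip(points, widths, colors):
        a, b = p[0], p[1]
        ab = b - a
        t = ((grid - a) * ab).sum(-1) / (ab.pow(2).sum() + 1e-6)
        proj = a + t.clamp(0, 1).unsqueeze(-1) * ab    # nearest point on segment
        d2 = (grid - proj).pow(2).sum(-1)              # squared distance field
        alpha = torch.exp(-d2 / (0.01 * w.abs() + 1e-4) ** 2)
        alpha = (alpha * torch.sigmoid(c[3])).unsqueeze(-1)
        canvas = canvas * (1 - alpha) + alpha * torch.sigmoid(c[:3])
    return canvas.permute(2, 0, 1).unsqueeze(0)        # (1, 3, H, W)

optimizer = torch.optim.Adam([points, widths, colors], lr=0.1)
for step in range(250):
    img = render_strokes(points, widths, colors)
    # CLIPDraw additionally averages the loss over random augmentations
    # (crops, perspective shifts) of the rendered image to suppress
    # adversarial artifacts; a single un-augmented pass is shown here.
    img_feat = model.encode_image(clip_norm(img))
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat * text_feat).sum()   # maximize cosine similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because the gradients flow into stroke parameters rather than pixels, the search is confined to images expressible as a small set of curves, which is the constraint the abstract credits for biasing results toward human-recognizable shapes.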
