arxiv:2410.10034

TULIP: Token-length Upgraded CLIP

Published on Oct 13, 2024

Abstract

We address the challenge of representing long captions in vision-language models such as CLIP. By design, these models are limited by fixed, absolute positional encodings, which restrict inputs to a maximum of 77 tokens and hinder performance on tasks requiring longer descriptions. Although recent work has attempted to overcome this limit, the proposed approaches struggle to model token relationships over longer distances and simply extend the cap to a new fixed token length. Instead, we propose TULIP, a generalizable method that upgrades the token length of CLIP-like models to any length. We do so by improving the architecture with relative position encodings, followed by a training procedure that (i) distills the original CLIP text encoder into an encoder with relative position encodings and (ii) enhances the model's ability to align longer captions with images. By effectively encoding captions longer than the default 77 tokens, our model outperforms baselines on cross-modal tasks such as retrieval and text-to-image generation.
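
To make the two training steps more concrete, the sketch below shows one way a relative position encoding (here a rotary variant, as one possible instantiation) can replace CLIP's absolute positional embeddings, together with placeholder losses for the distillation stage and the long-caption alignment stage. The specific encoding variant, module names, and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (not the authors' code): a CLIP-like text-attention block whose
# absolute positional embeddings are replaced by a rotary-style relative position
# encoding, plus illustrative losses for (i) distilling the frozen original CLIP
# text encoder on short (<=77 token) captions and (ii) contrastively aligning
# longer captions with images. Names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rotary_embed(x, base=10000.0):
    """Apply a rotary position encoding to a (batch, heads, seq, dim) tensor.

    Rotary encodings make attention scores depend only on relative offsets,
    so the encoder is not tied to a fixed maximum sequence length.
    """
    b, h, n, d = x.shape
    half = d // 2
    freqs = base ** (-torch.arange(0, half, device=x.device) / half)
    angles = torch.arange(n, device=x.device)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


class RelativeSelfAttention(nn.Module):
    """Self-attention with rotary (relative) position encoding."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads, self.scale = heads, (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(b, n, self.heads, -1).transpose(1, 2) for t in (q, k, v))
        q, k = rotary_embed(q), rotary_embed(k)  # positions enter only relatively
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)


def distillation_loss(student_emb, teacher_emb):
    """Step (i): pull the relative-position student toward the frozen original
    CLIP text encoder on captions within the 77-token limit."""
    return 1.0 - F.cosine_similarity(student_emb, teacher_emb, dim=-1).mean()


def long_caption_contrastive_loss(text_emb, image_emb, temperature=0.07):
    """Step (ii): CLIP-style contrastive loss, now applied to captions longer
    than 77 tokens encoded by the upgraded text tower."""
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = text_emb @ image_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
```

Because the rotary encoding is parameter-free and length-agnostic, the same weights can, in principle, be applied at any caption length at inference time; the distillation step is what keeps the upgraded encoder close to the original CLIP text embedding space.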
