arxiv:2310.10962

Large Language Models can Contrastively Refine their Generation for Better Sentence Representation Learning

Published on Oct 17, 2023

Abstract

Recently, large language models (LLMs) have emerged as a groundbreaking technology, and their unparalleled text generation capabilities have sparked interest in applying them to the fundamental task of sentence representation learning. Existing methods have explored using LLMs as data annotators to generate synthesized data for training contrastive-learning-based sentence embedding models such as SimCSE. However, since contrastive learning models are sensitive to the quality of sentence pairs, the effectiveness of these methods is largely determined by the content generated by LLMs, highlighting the need for more refined generation in the context of sentence representation learning. Building upon this premise, we propose MultiCSR, a multi-level contrastive sentence representation learning framework that decomposes the process of prompting LLMs to generate a corpus for training base sentence embedding models into three stages (i.e., sentence generation, sentence pair construction, in-batch training) and refines the generated content at each of these stages, ensuring that only high-quality sentence pairs are used to train a base contrastive learning model. Our extensive experiments reveal that MultiCSR enables a less advanced LLM to surpass the performance of ChatGPT, while applying it to ChatGPT achieves new state-of-the-art results. Comprehensive analyses further underscore the potential of our framework in various application scenarios and its ability to achieve better sentence representation learning with LLMs.
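
Although the abstract does not spell out the concrete refinement criteria used at each stage, the overall shape of the three-stage pipeline can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the function names, the length heuristic in stage one, the placeholder `score_fn` in stage two, and the `keep_mask` mechanism in stage three are hypothetical stand-ins, not the paper's method; only the SimCSE-style in-batch InfoNCE objective is a standard, known component.

```python
import torch
import torch.nn.functional as F

# --- Stage 1: sentence generation ------------------------------------------
def filter_generated_sentences(sentences, min_tokens=4):
    """Keep only well-formed generated sentences (placeholder length heuristic)."""
    return [s for s in sentences if len(s.split()) >= min_tokens]

# --- Stage 2: sentence pair construction ------------------------------------
def filter_sentence_pairs(pairs, score_fn, threshold=0.5):
    """Keep (anchor, positive) pairs whose quality score clears a threshold.

    `score_fn` stands in for whatever pair-quality scorer is available
    (e.g. a small similarity model); the abstract does not specify one.
    """
    return [(a, p) for a, p in pairs if score_fn(a, p) >= threshold]

# --- Stage 3: in-batch training ---------------------------------------------
def in_batch_contrastive_loss(anchor_emb, positive_emb, keep_mask=None,
                              temperature=0.05):
    """SimCSE-style InfoNCE loss over in-batch negatives.

    anchor_emb, positive_emb: (batch, dim) sentence embeddings of the pairs.
    keep_mask: optional boolean (batch,) tensor marking pairs judged
        high-quality at training time; masked-out pairs are dropped before
        the loss is computed.
    """
    if keep_mask is not None:
        anchor_emb, positive_emb = anchor_emb[keep_mask], positive_emb[keep_mask]
    # (batch, batch) cosine-similarity matrix: diagonal entries are the
    # positives, off-diagonal entries act as in-batch negatives.
    sim = F.cosine_similarity(anchor_emb.unsqueeze(1),
                              positive_emb.unsqueeze(0), dim=-1)
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim / temperature, labels)
```

The intuition this sketch tries to capture is that filtering becomes progressively finer-grained: cheap sanity checks on individual generated sentences, a pair-level quality score before pairs enter the corpus, and a final per-batch mask so that pairs judged low-quality never contribute to the contrastive loss.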

Models citing this paper 4

Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 0
