RepresentLM-v1

This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks such as clustering or semantic search.

The model was trained on the HEADLINES semantic similarity dataset, using StoriesLM-v1-1963 as the base model.

Usage

First install the sentence-transformers package:

pip install -U sentence-transformers

The model can then be used to encode language sequences:

from sentence_transformers import SentenceTransformer

# Load the model from the Hugging Face Hub
model = SentenceTransformer('RepresentLM/RepresentLM-v1')

# Encode a batch of sequences into 768-dimensional embeddings
sequences = ["This is an example sequence", "Each sequence is embedded"]
embeddings = model.encode(sequences)
print(embeddings)
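
Because the embeddings share a single dense vector space, they can be compared directly for semantic search. The sketch below ranks a few candidate passages against a query by cosine similarity using the sentence-transformers util helpers; the query and passage strings are purely illustrative, and the model name is taken from the snippet above.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('RepresentLM/RepresentLM-v1')

# Illustrative query and candidate passages (not from the training data)
query = "Stock markets rally after rate cut"
passages = [
    "Shares climb as the central bank lowers interest rates",
    "Local team wins the championship after extra time",
    "New museum exhibit opens downtown",
]

# Encode to tensors so similarities can be computed directly
query_embedding = model.encode(query, convert_to_tensor=True)
passage_embeddings = model.encode(passages, convert_to_tensor=True)

# Cosine similarity between the query and each passage, highest first
scores = util.cos_sim(query_embedding, passage_embeddings)[0]
ranked = sorted(zip(passages, scores), key=lambda pair: float(pair[1]), reverse=True)
for passage, score in ranked:
    print(f"{score.item():.4f}  {passage}")

The passage most related to the query should receive the highest cosine score, which is the basic building block of a semantic search pipeline over this model's embeddings.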