---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
  - transformers
license: artistic-2.0
datasets:
  - pszemraj/synthetic-text-similarity
language:
  - en
---

# BEE-spoke-data/mega-small-embed-syntheticSTS-16384

This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage

Regardless of which method you use, you must install this specific fork of `transformers`; otherwise you will hit padding-related errors:

```bash
pip install -U git+https://github.com/pszemraj/transformers.git@mega-upgrades --force-reinstall --no-deps
```

### Usage (Sentence-Transformers)

Using this model is straightforward once you have `sentence-transformers` installed:

```bash
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('BEE-spoke-data/mega-small-embed-syntheticSTS-16384')
embeddings = model.encode(sentences)
print(embeddings)
```
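
Since the model targets sentence similarity, you will often want pair scores rather than raw vectors. A minimal sketch using the built-in cosine-similarity utility from `sentence-transformers` (the sentences are just the examples from above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('BEE-spoke-data/mega-small-embed-syntheticSTS-16384')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two embeddings; higher means more similar
print(util.cos_sim(embeddings[0], embeddings[1]))
```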

### Usage (HuggingFace Transformers)

Without `sentence-transformers`, you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
import torch
from transformers import AutoTokenizer, AutoModel


def mean_pooling(model_output, attention_mask):
    """Mean pooling that takes the attention mask into account for correct averaging."""
    token_embeddings = model_output[0]  # first element contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BEE-spoke-data/mega-small-embed-syntheticSTS-16384')
model = AutoModel.from_pretrained('BEE-spoke-data/mega-small-embed-syntheticSTS-16384')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
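
To turn these pooled embeddings into similarity scores, one common follow-up (assuming the `sentence_embeddings` tensor from the snippet above) is to L2-normalize and take dot products:

```python
import torch.nn.functional as F

# After L2 normalization, the dot product equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
print(normalized @ normalized.T)  # pairwise similarity matrix
```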

## Training

The model was trained with the following parameters:

**Loss**: `sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss` with parameters:

```
{'loss': 'CosineSimilarityLoss', 'matryoshka_dims': [768, 512, 256, 128, 64], 'matryoshka_weights': [1, 1, 1, 1, 1], 'n_dims_per_step': -1}
```
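
Because MatryoshkaLoss supervises nested prefixes of the embedding, the first 512, 256, 128, or 64 dimensions should remain usable on their own. A minimal sketch of manual truncation and re-normalization (the valid sizes come from `matryoshka_dims` above):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BEE-spoke-data/mega-small-embed-syntheticSTS-16384')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Keep only the first 256 dimensions (one of the trained matryoshka_dims),
# then re-normalize so cosine similarity still behaves as expected
truncated = embeddings[:, :256]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)
print(truncated.shape)  # (2, 256)
```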

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 16384, 'do_lower_case': False}) with Transformer model: MegaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
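
Note the `max_seq_length` of 16384: inputs longer than that are truncated, so a whole article can typically be embedded in one pass. A small sketch (`article.txt` is a hypothetical file standing in for your own document):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BEE-spoke-data/mega-small-embed-syntheticSTS-16384')
print(model.max_seq_length)  # 16384

# article.txt is a hypothetical stand-in for any long document
with open("article.txt") as f:
    long_document = f.read()

embedding = model.encode(long_document)
print(embedding.shape)  # (768,)
```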