
LightEmbed/sentence-t5-base-onnx

This is the ONNX version of the Sentence Transformers model sentence-transformers/sentence-t5-base (https://huggingface.co/sentence-transformers/sentence-t5-base) for sentence embeddings, optimized for speed and a lightweight footprint. By relying on onnxruntime and tokenizers instead of heavier libraries such as sentence-transformers and transformers, this version keeps the dependency footprint small and execution fast. Model details:

  • Base model: sentence-transformers/sentence-t5-base
  • Embedding dimension: 768
  • Max sequence length: 256
  • File size on disk: 0.41 GB
  • Pooling incorporated: Yes

This ONNX model contains all components of the original Sentence Transformers model: Transformer, Pooling, Dense, Normalize.
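Because the Normalize component is baked into the exported model, the returned embeddings are unit-length, so cosine similarity between two sentences reduces to a plain dot product. A minimal sketch with NumPy, using synthetic stand-in vectors rather than real model output:

```python
import numpy as np

# Stand-in for model.encode(...) output: two unit-length 768-dim vectors
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(2, 768))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# With normalized embeddings, cosine similarity is just a dot product
cosine_sim = float(embeddings[0] @ embeddings[1])
print(cosine_sim)
```

With real model output you would replace the synthetic array with `model.encode(sentences)`; no extra normalization step is needed.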

Usage (LightEmbed)

Using this model is easy once you have LightEmbed installed:

pip install -U light-embed

Then you can load the model by its original model name:

from light_embed import TextEmbedding
sentences = [
    "This is an example sentence",
    "Each sentence is converted"
]

model = TextEmbedding('sentence-transformers/sentence-t5-base')
embeddings = model.encode(sentences)
print(embeddings)

Alternatively, you can load the model by its ONNX model name:

from light_embed import TextEmbedding
sentences = [
    "This is an example sentence",
    "Each sentence is converted"
]

model = TextEmbedding('LightEmbed/sentence-t5-base-onnx')
embeddings = model.encode(sentences)
print(embeddings)
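A common next step after encoding is semantic similarity search. Since the model normalizes its output, the full pairwise cosine-similarity matrix is a single matrix product. A sketch with NumPy, again on synthetic unit vectors standing in for `model.encode(sentences)`:

```python
import numpy as np

# Synthetic stand-in for embeddings = model.encode(sentences)
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(3, 768))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Pairwise cosine similarities in one matrix product;
# diagonal entries are 1.0 (each sentence compared with itself)
sim_matrix = embeddings @ embeddings.T
print(sim_matrix)
```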

Citing & Authors

Binh Nguyen / binhcode25@gmail.com
