
Quantized ONNX port of BAAI/bge-base-en-v1.5 for text classification and similarity searches.

## Usage

Here's an example of performing inference using the model with FastEmbed.

```python
from fastembed import TextEmbedding

documents = [
    "You should stay, study and sprint.",
    "History can only prepare us to be surprised yet again.",
]

# FastEmbed resolves this model name to the quantized ONNX weights
# and runs inference with ONNX Runtime, no PyTorch required.
model = TextEmbedding(model_name="BAAI/bge-base-en-v1.5")
embeddings = list(model.embed(documents))

# [
#     array([
#         0.00611658, 0.00068912, -0.0203846, ..., -0.01751488, -0.01174267,
#         0.01463472
#     ],
#           dtype=float32),
#     array([
#         0.00173448, -0.00329958, 0.01557874, ..., -0.01473586, 0.0281806,
#         -0.00448205
#     ],
#           dtype=float32)
# ]
```
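Since the model is intended for similarity search, the resulting vectors are typically compared with cosine similarity. A minimal NumPy sketch (using short illustrative vectors in place of the real 768-dimensional embeddings):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product of the vectors divided by the
    # product of their L2 norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative stand-ins for real embeddings.
a = np.array([0.1, 0.2, 0.3])
b = np.array([0.1, 0.2, 0.25])
print(round(cosine_similarity(a, b), 3))  # ≈ 0.996
```

In practice you would pass the arrays returned by `model.embed(...)` directly to such a function, or let a vector database such as Qdrant handle the comparison.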
