# Neuronx model for BAAI/bge-base-en-v1.5

This repository contains an AWS Inferentia2 and neuronx compatible checkpoint for BAAI/bge-base-en-v1.5. You can find detailed information about the base model on its Model Card.
## Usage on Amazon SageMaker

*coming soon*
## Usage with optimum-neuron
```python
from optimum.neuron import NeuronModelForFeatureExtraction
from transformers import AutoTokenizer
import torch
import torch_neuronx

# Load the compiled model and tokenizer from the Hugging Face Hub
model = NeuronModelForFeatureExtraction.from_pretrained("aws-neuron/bge-base-en-v1-5-seqlen-384-bs-1")
tokenizer = AutoTokenizer.from_pretrained("aws-neuron/bge-base-en-v1-5-seqlen-384-bs-1")

# Sentence input
inputs = "Hello, my dog is cute"

# Tokenize the sentence, truncating to the static sequence length the model was compiled with
encoded_input = tokenizer(
    inputs,
    return_tensors="pt",
    truncation=True,
    max_length=model.config.neuron["static_sequence_length"],
)

# Compute token embeddings
with torch.no_grad():
    model_output = model(*tuple(encoded_input.values()))

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = model_output[0][:, 0]

# Normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
```
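Because the embeddings are L2-normalized at the end, the cosine similarity between two sentences reduces to a plain dot product. A minimal sketch of that math in pure Python, using hypothetical embedding vectors for illustration:

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit length (mirrors torch.nn.functional.normalize with p=2)."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def cosine_similarity(a, b):
    """For unit-length vectors, the dot product equals the cosine similarity."""
    return sum(x * y for x, y in zip(a, b))

# Hypothetical embeddings, stand-ins for two sentence_embeddings rows
emb_a = l2_normalize([0.2, 0.7, 0.1])
emb_b = l2_normalize([0.25, 0.6, 0.2])
print(round(cosine_similarity(emb_a, emb_b), 3))
```

In practice you would call `.tolist()` on two `sentence_embeddings` rows (or just use `emb_a @ emb_b` on the tensors directly) rather than hand-rolling the math; the sketch only shows why no extra division by norms is needed after normalization.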
## input_shapes

```json
{
  "sequence_length": 384,
  "batch_size": 1
}
```
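These shapes are fixed at compile time: a Neuron-compiled model only accepts inputs of exactly this batch size and sequence length, which is why the tokenizer output must be padded or truncated to match. A minimal sketch of that padding logic, with hypothetical token ids and a `pad_or_truncate` helper introduced only for illustration:

```python
def pad_or_truncate(token_ids, length, pad_id=0):
    """Force a token-id list to a fixed length, as static Neuron shapes require."""
    if len(token_ids) >= length:
        return token_ids[:length]
    return token_ids + [pad_id] * (length - len(token_ids))

# Short input is padded up to the static length
print(pad_or_truncate([101, 7592, 102], 6))  # → [101, 7592, 102, 0, 0, 0]

# Long input is truncated down to it
print(pad_or_truncate(list(range(10)), 6))   # → [0, 1, 2, 3, 4, 5]
```

In the usage snippet above, `truncation=True` with `max_length` handles the truncation side, and optimum-neuron takes care of padding inputs to the compiled shape.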
## Evaluation results

All results are self-reported MTEB scores on the test set.

| Dataset | Metric | Value |
|---|---|---|
| MTEB AmazonCounterfactualClassification (en) | accuracy | 76.149 |
| MTEB AmazonCounterfactualClassification (en) | ap | 39.323 |
| MTEB AmazonCounterfactualClassification (en) | f1 | 70.169 |
| MTEB AmazonPolarityClassification | accuracy | 93.387 |
| MTEB AmazonPolarityClassification | ap | 90.213 |
| MTEB AmazonPolarityClassification | f1 | 93.377 |
| MTEB AmazonReviewsClassification (en) | accuracy | 48.846 |
| MTEB AmazonReviewsClassification (en) | f1 | 48.146 |
| MTEB ArguAna | map_at_1 | 40.754 |
| MTEB ArguAna | map_at_10 | 55.761 |