ms-marco-MiniLM-L-6-v2
Model description
This model is a fine-tuned version of cross-encoder/ms-marco-MiniLM-L-6-v2 for relevancy evaluation in RAG (retrieval-augmented generation) scenarios.
Training Data
The model was trained on a specialized dataset for evaluating RAG responses, consisting of (context, response) pairs with relevancy labels. Dataset size: 4505 training examples and 5006 validation examples.
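Each training pair joins the retrieval context and user query on one side and the candidate response on the other. A minimal sketch of assembling such a pair; the `format_pair` helper and its layout are assumptions based on the usage example below, not part of the released model:

```python
def format_pair(context: str, query: str, response: str) -> list[str]:
    # Hypothetical helper: build one (context+query, response) text pair
    # in the layout the cross-encoder expects as input.
    return [f"Context: {context}\nQuery: {query}", f"Response: {response}"]

pair = format_pair(
    "Paris is the capital of France.",
    "What is the capital of France?",
    "The capital of France is Paris.",
)
```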
Performance Metrics
Validation Metrics:
- NDCG: 0.9996 ± 0.0001
- MAP: 0.9970 ± 0.0009
- Accuracy: 0.9766 ± 0.0033
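Accuracy here presumably compares thresholded scores against binary relevancy labels. A minimal sketch, assuming a 0.5 decision threshold (the threshold is not documented in this card):

```python
def accuracy(scores, labels, threshold=0.5):
    # Fraction of examples where the thresholded score matches the label.
    # The 0.5 threshold is an assumption, not stated by the model card.
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

acc = accuracy([0.92, 0.08, 0.61, 0.33], [1, 0, 1, 1])
```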
Usage Example
from sentence_transformers import CrossEncoder

# Load model
model = CrossEncoder('xtenzr/ms-marco-MiniLM-L-6-v2_finetuned_20241120_2220')

# Prepare inputs: each item is a [context+query, response] text pair
texts = [
    ["Context: {...}\nQuery: {...}", "Response: {...}"],
]

# Get predictions
scores = model.predict(texts)  # Returns relevancy scores in [0, 1]
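Beyond thresholding a single score, the scores can be used to rank several candidate responses for one query. A small sketch under that assumption; `rank_responses` is a hypothetical helper, not part of the model's API:

```python
def rank_responses(scores, responses):
    # Sort candidate responses by descending relevancy score.
    return [r for _, r in sorted(zip(scores, responses), key=lambda t: -t[0])]

ranked = rank_responses(
    [0.12, 0.97, 0.55],
    ["Off-topic answer", "Directly relevant answer", "Partially relevant answer"],
)
```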
Training procedure
- Fine-tuned with the sentence-transformers CrossEncoder API
- Trained on a relevancy evaluation dataset
- Optimized for RAG response evaluation