SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

This is a SetFit model for Text Classification. It uses sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves two steps, sketched in code below:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
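
As a rough sketch of these two steps with the SetFit 1.x API (the toy dataset below is a placeholder, not the actual training data; the hyperparameters actually used are listed under Training Hyperparameters):

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; the actual model was trained on 400 examples per label.
train_dataset = Dataset.from_dict({
    "text": ["tweet sobre una comisión bancaria", "tweet promocional de un banco"],
    "label": ["relevant", "discard"],
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2",
    labels=["discard", "relevant"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=16, num_epochs=1),
    train_dataset=train_dataset,
)
trainer.train()  # step 1: contrastive fine-tuning of the body; step 2: fitting the classification head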

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
  • Classification head: a LogisticRegression instance
  • Maximum Sequence Length: 128 tokens
  • Number of Classes: 2 (discard, relevant)
  • Model Size: 118M parameters (F32)

Model Sources

  • Repository: SetFit on GitHub (https://github.com/huggingface/setfit)
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)
  • Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts (https://huggingface.co/blog/setfit)

Model Labels

discard examples:
  • 'Marcos informa que se puede realizar el pago de productos de BBVA a través de la Línea BBVA, cajeros automáticos, practicajas, ventanilla de sucursal o diversos comercios.'
  • 'Se ha celebrado una reunión de alto nivel en 2024 para concretar proyectos de inversión, incluyendo la cooperación con BBVA para la construcción de un portadrones y en el ámbito turístico.'
  • 'Diversificar es clave para alcanzar nuestros objetivos en inversiones y en la vida, descubre cómo tus decisiones financieras pueden impactar tu vida personal en este artículo.'
relevant examples:
  • 'La persona recibió un correo idéntico al que le explicaron que es una técnica de estafa que simula enviarlo desde su propia cuenta.'
  • 'La cancelación de la cuenta se ha demorado un mes y al solicitar 200 euros para un viaje, me han cobrado 9 euros de comisión.'
  • 'El Santander logró récords en beneficios y comisiones a los desfavorecidos bajo el ministerio del consagrado en Consumo, mientras se obsesionan con la apariencia y carecen de dignidad y principios.'

Evaluation

Metrics

Label  Accuracy
all    0.8029
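
A sketch of how an accuracy figure like this could be reproduced on a labeled held-out split (the texts and labels below are placeholders; the card does not publish the evaluation set):

from setfit import SetFitModel

model = SetFitModel.from_pretrained("saraestevez/setfit-minilm-bank-tweets-processed-400")

# Placeholder evaluation split; substitute the real held-out tweets and labels.
eval_texts = ["primer tweet de prueba", "segundo tweet de prueba"]
eval_labels = ["relevant", "discard"]

preds = model.predict(eval_texts)
accuracy = sum(str(p) == y for p, y in zip(preds, eval_labels)) / len(eval_labels)
print(f"accuracy: {accuracy:.4f}")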

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("saraestevez/setfit-minilm-bank-tweets-processed-400")
# Run inference
preds = model("La app de BBVA está caída, pero se pide paciencia para los depósitos de mañana.")
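
If you need scores rather than hard labels, the LogisticRegression head also exposes probability estimates via predict_proba; continuing from the snippet above (column order follows model.labels):

# Class probabilities instead of hard labels
probs = model.predict_proba(
    ["La app de BBVA está caída, pero se pide paciencia para los depósitos de mañana."]
)
print(model.labels, probs)  # one row per input, one column per label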

Training Details

Training Set Metrics

Training set  Min  Median   Max
Word count    1    21.6612  44

Label     Training Sample Count
discard   400
relevant  400
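
These statistics are straightforward to recompute from any training split; a minimal sketch (train_texts is a placeholder for the 800 training sentences):

import statistics

train_texts = ["..."]  # placeholder: the 800 training sentences
word_counts = [len(t.split()) for t in train_texts]
# the table above reports 1 / 21.6612 / 44 for this model's split
print(min(word_counts), statistics.median(word_counts), max(word_counts))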

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (1, 1)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
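
For reference, a sketch of how these settings map onto SetFit's TrainingArguments (not the verbatim training script; the listed distance_metric, cosine_distance, is the library default):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),              # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)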

Training Results

Epoch Step Training Loss Validation Loss
0.0005 1 0.3197 -
0.025 50 0.2199 -
0.05 100 0.2876 -
0.075 150 0.2568 -
0.1 200 0.196 -
0.125 250 0.15 -
0.15 300 0.1475 -
0.175 350 0.081 -
0.2 400 0.0441 -
0.225 450 0.0228 -
0.25 500 0.0017 -
0.275 550 0.0083 -
0.3 600 0.002 -
0.325 650 0.0013 -
0.35 700 0.0011 -
0.375 750 0.0014 -
0.4 800 0.0004 -
0.425 850 0.0001 -
0.45 900 0.0118 -
0.475 950 0.0002 -
0.5 1000 0.0012 -
0.525 1050 0.0003 -
0.55 1100 0.0001 -
0.575 1150 0.0003 -
0.6 1200 0.0001 -
0.625 1250 0.0001 -
0.65 1300 0.0001 -
0.675 1350 0.0002 -
0.7 1400 0.0197 -
0.725 1450 0.0002 -
0.75 1500 0.0002 -
0.775 1550 0.0001 -
0.8 1600 0.0004 -
0.825 1650 0.0001 -
0.85 1700 0.0001 -
0.875 1750 0.0001 -
0.9 1800 0.0001 -
0.925 1850 0.0001 -
0.95 1900 0.0158 -
0.975 1950 0.0001 -
1.0 2000 0.0001 -

Framework Versions

  • Python: 3.11.0rc1
  • SetFit: 1.0.3
  • Sentence Transformers: 2.7.0
  • Transformers: 4.39.0
  • PyTorch: 2.3.1+cu121
  • Datasets: 2.19.1
  • Tokenizers: 0.15.2
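
To approximate this environment, the Python packages can be pinned at install time (a sketch; the matching PyTorch 2.3.1 CUDA 12.1 wheel depends on your platform):

pip install setfit==1.0.3 sentence-transformers==2.7.0 transformers==4.39.0 datasets==2.19.1 tokenizers==0.15.2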

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}