SetFit with sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2

This is a SetFit model that can be used for text classification. It uses sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 as the Sentence Transformer embedding model, with a LogisticRegression instance as the classification head.

The model was trained with an efficient few-shot learning technique involving two steps (a brief code sketch follows this list):

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
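
As an illustrative sketch (not the exact training script for this model), the same two-step procedure can be run with SetFit's Trainer API; the four-example dataset below is a hypothetical placeholder drawn from the label examples further down:

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical toy dataset; the real training data is not published with this card.
train_dataset = Dataset.from_dict({
    "text": [
        "Nuevo caso de phishing relacionado con Abanca.",
        "Una alumna que trabajó en Bancomer reveló un esquema de robo.",
        "Banco Santander ofrece una cuenta en línea sin comisiones.",
        "CaixaBank se destaca como líder del sector bancario.",
    ],
    "label": ["relevant", "relevant", "discard", "discard"],
})

# Step 1 starts from the pretrained Sentence Transformer body; the default
# SetFit head is a scikit-learn LogisticRegression, matching this model card.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # contrastive fine-tuning (step 1), then head fitting (step 2)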

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
  • Classification head: LogisticRegression
  • Model size: 118M params (F32)
  • Number of classes: 2 (relevant, discard)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label Examples
relevant
  • 'Nuevo caso de phishing relacionado con Abanca, registrado el 23 de julio de 2024, con la URL: /www.inicio-abanca.com/es/WELE200M_Logon_Ini.aspx.'
  • 'Una alumna que trabajó en Bancomer reveló un esquema de robo en el que dos cajeros afirmaban que un cliente había depositado mil pesos en un pago de dos mil y se quedaban con la mitad cada uno.'
  • 'Las previsiones de crecimiento de España para 2024 han mejorado según diversas organizaciones, con estimaciones que oscilan entre el 1,8% y el 2,4%, impulsadas por turismo, exportaciones y trabajadores extranjeros.'
discard
  • 'Banco Santander ofrece una cuenta en línea sin comisiones y un bono de 400€ por domiciliar tu nómina.'
  • 'El BBVA fue el banco que peor me trató al tener que contratar productos innecesarios para conseguir mi primera hipoteca de funcionario.'
  • 'CaixaBank se destaca como líder del sector bancario gracias a su sólido crecimiento y eficiencia operativa, convirtiéndose en una opción atractiva para inversores.'

Evaluation

Metrics

Label  Accuracy
all    0.7739
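
As a hedged sketch of how such an accuracy figure could be computed (the held-out texts and labels below are placeholders, not the actual evaluation split):

from setfit import SetFitModel
from sklearn.metrics import accuracy_score

model = SetFitModel.from_pretrained("saraestevez/setfit-minilm-bank-tweets-processed-200")

# Placeholder evaluation data; substitute a real labeled held-out split.
test_texts = [
    "Nuevo caso de phishing relacionado con Abanca.",
    "Banco Santander ofrece una cuenta en línea sin comisiones.",
]
test_labels = ["relevant", "discard"]

preds = model.predict(test_texts)
print(accuracy_score(test_labels, list(preds)))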

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("saraestevez/setfit-minilm-bank-tweets-processed-200")
# Run inference
preds = model("Los resultados del Banco Sabadell impulsan al IBEX 35.")
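
The model can also be called on a list of sentences, and, since the head is a LogisticRegression, predict_proba exposes per-class probabilities:

# Batch inference: one predicted label per sentence
preds = model([
    "Los resultados del Banco Sabadell impulsan al IBEX 35.",
    "Banco Santander ofrece una cuenta en línea sin comisiones.",
])
# Per-class probabilities from the LogisticRegression head
probs = model.predict_proba(["Los resultados del Banco Sabadell impulsan al IBEX 35."])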

Training Details

Training Set Metrics

Training set  Min  Median   Max
Word count    1    21.3275  41

Label     Training Sample Count
discard   200
relevant  200

Training Hyperparameters

  • batch_size: (16, 2)
  • num_epochs: (1, 16)
  • max_steps: -1
  • sampling_strategy: oversampling
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
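
These entries mirror the fields of setfit.TrainingArguments. A sketch of an equivalent configuration (tuple values pair the embedding-tuning phase with the classifier-training phase; distance_metric and eval_max_steps are left at their defaults here):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 2),                 # (embedding phase, classifier phase)
    num_epochs=(1, 16),
    max_steps=-1,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,          # loss class; instantiated internally
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=False,
)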

Training Results

Epoch Step Training Loss Validation Loss
0.0002 1 0.4199 -
0.0100 50 0.3357 -
0.0199 100 0.3198 -
0.0299 150 0.2394 -
0.0398 200 0.2411 -
0.0498 250 0.2277 -
0.0597 300 0.1876 -
0.0697 350 0.1481 -
0.0796 400 0.1533 -
0.0896 450 0.0145 -
0.0995 500 0.0113 -
0.1095 550 0.0045 -
0.1194 600 0.0201 -
0.1294 650 0.0008 -
0.1393 700 0.0003 -
0.1493 750 0.0003 -
0.1592 800 0.0003 -
0.1692 850 0.0001 -
0.1791 900 0.0001 -
0.1891 950 0.0001 -
0.1990 1000 0.0001 -
0.2090 1050 0.0001 -
0.2189 1100 0.0002 -
0.2289 1150 0.0001 -
0.2388 1200 0.0001 -
0.2488 1250 0.0001 -
0.2587 1300 0.0 -
0.2687 1350 0.0001 -
0.2786 1400 0.0001 -
0.2886 1450 0.0001 -
0.2985 1500 0.0 -
0.3085 1550 0.0001 -
0.3184 1600 0.0 -
0.3284 1650 0.0 -
0.3383 1700 0.0 -
0.3483 1750 0.0001 -
0.3582 1800 0.0 -
0.3682 1850 0.0 -
0.3781 1900 0.0 -
0.3881 1950 0.0 -
0.3980 2000 0.0 -
0.4080 2050 0.0 -
0.4179 2100 0.0 -
0.4279 2150 0.0 -
0.4378 2200 0.0 -
0.4478 2250 0.0 -
0.4577 2300 0.0 -
0.4677 2350 0.0 -
0.4776 2400 0.0 -
0.4876 2450 0.0 -
0.4975 2500 0.0 -
0.5075 2550 0.0 -
0.5174 2600 0.0 -
0.5274 2650 0.0 -
0.5373 2700 0.0 -
0.5473 2750 0.0 -
0.5572 2800 0.0 -
0.5672 2850 0.0 -
0.5771 2900 0.0 -
0.5871 2950 0.0 -
0.5970 3000 0.0 -
0.6070 3050 0.0 -
0.6169 3100 0.0 -
0.6269 3150 0.0 -
0.6368 3200 0.0 -
0.6468 3250 0.0 -
0.6567 3300 0.0 -
0.6667 3350 0.0 -
0.6766 3400 0.0 -
0.6866 3450 0.0 -
0.6965 3500 0.0 -
0.7065 3550 0.0 -
0.7164 3600 0.0 -
0.7264 3650 0.0 -
0.7363 3700 0.0 -
0.7463 3750 0.0 -
0.7562 3800 0.0 -
0.7662 3850 0.0 -
0.7761 3900 0.0 -
0.7861 3950 0.0 -
0.7960 4000 0.0 -
0.8060 4050 0.0 -
0.8159 4100 0.0 -
0.8259 4150 0.0 -
0.8358 4200 0.0 -
0.8458 4250 0.0 -
0.8557 4300 0.0 -
0.8657 4350 0.0 -
0.8756 4400 0.0 -
0.8856 4450 0.0 -
0.8955 4500 0.0 -
0.9055 4550 0.0 -
0.9154 4600 0.0 -
0.9254 4650 0.0 -
0.9353 4700 0.0 -
0.9453 4750 0.0 -
0.9552 4800 0.0 -
0.9652 4850 0.0 -
0.9751 4900 0.0 -
0.9851 4950 0.0 -
0.9950 5000 0.0 -

Framework Versions

  • Python: 3.11.0rc1
  • SetFit: 1.0.3
  • Sentence Transformers: 2.7.0
  • Transformers: 4.39.0
  • PyTorch: 2.3.1+cu121
  • Datasets: 2.19.1
  • Tokenizers: 0.15.2
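
To approximate this environment, the core packages could be pinned to the versions above (a sketch; the matching PyTorch CUDA build depends on your platform):

pip install setfit==1.0.3 sentence-transformers==2.7.0 transformers==4.39.0 datasets==2.19.1 tokenizers==0.15.2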

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}