
SetFit with BAAI/bge-small-en-v1.5

This is a SetFit model for text classification. It uses BAAI/bge-small-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
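
As a rough sketch of these two steps with the SetFit 1.0 API (the toy dataset below is a hypothetical illustration, not the actual training data):

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the base embedding model; a LogisticRegression head is
# created by default.
model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5")

# Hypothetical few-shot examples for illustration only
train_dataset = Dataset.from_dict({
    "text": [
        "I'm done, we can close the session now.",
        "I'm ready to explore a different question.",
    ],
    "label": ["end_question", "next_question"],
})

# Trainer.train() performs step 1 (contrastive fine-tuning of the body)
# and then step 2 (fitting the classification head on its embeddings).
trainer = Trainer(
    model=model,
    args=TrainingArguments(batch_size=(32, 16), num_epochs=(3, 10)),
    train_dataset=train_dataset,
)
trainer.train()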

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-small-en-v1.5
  • Classification head: LogisticRegression
  • Number of Classes: 4

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: Efficient Few-Shot Learning Without Prompts (https://arxiv.org/abs/2209.11055)

Model Labels

Label Examples
end_question
  • 'I’m done, we can close the session now.'
  • 'I’ve addressed everything I wanted to, let’s wrap it up.'
  • 'I’ve covered everything I wanted to say and I’m done.'
nothing
  • 'I think Aaron Rodgers is better than Tom Brady'
  • 'I’ve just thought of something else I want to add.'
  • 'There’s another detail I want to bring up.'
wrap_question
  • 'I’ve gone through everything relevant, let me know if there’s more to cover.'
  • 'That’s my take on this, feel free to ask if anything’s unclear.'
  • 'I think I’ve addressed the question in full, but let me know if you need more.'
next_question
  • 'I’ve finished answering this, time for a new topic.'
  • 'I’m ready to explore a different question.'
  • 'I think we’ve exhausted this, what’s next on the agenda?'

Evaluation

Metrics

  • Accuracy (all labels): 0.9189
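
An accuracy like this can be reproduced with the library's evaluation helper; a minimal sketch, assuming a labeled test_dataset (a datasets.Dataset with "text" and "label" columns) and the loaded model from the inference example below:

from setfit import Trainer

# Evaluate the loaded SetFitModel on a held-out split
trainer = Trainer(model=model, eval_dataset=test_dataset, metric="accuracy")
metrics = trainer.evaluate()
print(metrics)  # e.g. {'accuracy': 0.9189}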

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("nksk/Intent_bge-small-en-v1.5_v1.0")
# Run inference
preds = model("Repeat the question for me please")
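
Inference also accepts batches, and predict_proba exposes per-class probabilities; a small sketch (the second input is a made-up example):

# Batch inference: one predicted label per input
preds = model([
    "Repeat the question for me please",
    "I'm ready to explore a different question.",
])

# Per-class probabilities; column order follows model.labels
# (which may be None if label names were not stored with the model)
probs = model.predict_proba(["Repeat the question for me please"])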

Training Details

Training Set Metrics

  • Word count per training example: min 2, median 8.6319, max 16

Training samples per label:

  • end_question: 31
  • next_question: 33
  • nothing: 47
  • wrap_question: 33

Training Hyperparameters

  • batch_size: (32, 16)
  • num_epochs: (3, 10)
  • max_steps: -1
  • sampling_strategy: oversampling
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.0005
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: True
  • use_amp: True
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
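
These settings map directly onto setfit.TrainingArguments; a sketch of the equivalent configuration (unlisted parameters keep their defaults, and distance_metric already defaults to cosine distance):

from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

args = TrainingArguments(
    batch_size=(32, 16),          # (embedding phase, classifier phase)
    num_epochs=(3, 10),           # (embedding phase, classifier phase)
    max_steps=-1,
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.0005,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=True,
    use_amp=True,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)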

Training Results

Epoch Step Training Loss Validation Loss
0.0021 1 0.2202 -
0.1040 50 0.2429 -
0.2079 100 0.1651 -
0.3119 150 0.0268 -
0.4158 200 0.0079 -
0.5198 250 0.0033 -
0.6237 300 0.0031 -
0.7277 350 0.002 -
0.8316 400 0.0022 -
0.9356 450 0.0022 -
1.0395 500 0.002 -
1.1435 550 0.0017 -
1.2474 600 0.0014 -
1.3514 650 0.001 -
1.4553 700 0.0013 -
1.5593 750 0.0013 -
1.6632 800 0.0011 -
1.7672 850 0.0011 -
1.8711 900 0.0014 -
1.9751 950 0.001 -
2.0790 1000 0.0009 -
2.1830 1050 0.001 -
2.2869 1100 0.0009 -
2.3909 1150 0.0008 -
2.4948 1200 0.0009 -
2.5988 1250 0.0011 -
2.7027 1300 0.0009 -
2.8067 1350 0.0009 -
2.9106 1400 0.0009 -

Framework Versions

  • Python: 3.10.12
  • SetFit: 1.0.3
  • Sentence Transformers: 3.1.1
  • Transformers: 4.39.0
  • PyTorch: 2.4.1+cu121
  • Datasets: 3.0.0
  • Tokenizers: 0.15.2
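
To recreate this environment, the listed versions can be pinned at install time (a convenience sketch, not an officially published requirements file):

pip install setfit==1.0.3 sentence-transformers==3.1.1 transformers==4.39.0 datasets==3.0.0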

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}