
SetFit with BAAI/bge-base-en-v1.5

This is a SetFit model that can be used for text classification. It uses BAAI/bge-base-en-v1.5 as the Sentence Transformer embedding model and a LogisticRegression instance as the classification head.

The model has been trained using an efficient few-shot learning technique that involves:

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
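
As a minimal sketch of these two steps with the SetFit training API (the two-example dataset below is purely illustrative, not this model's actual training data):

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the same Sentence Transformer body used by this model
model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")

# Illustrative few-shot data with the same 0/1 labeling scheme
train_dataset = Dataset.from_dict({
    "text": [
        "The answer is not grounded in the document. Final evaluation: Bad",
        "The answer is grounded in the document. Final evaluation: Good",
    ],
    "label": [0, 1],
})

args = TrainingArguments(batch_size=16, num_epochs=5)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# Step 1 (contrastive fine-tuning of the body) and step 2 (fitting the
# classification head) both happen inside train()
trainer.train()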

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: BAAI/bge-base-en-v1.5
  • Classification head: a LogisticRegression instance
  • Number of Classes: 2 (labels 0 and 1)
  • Model Size: 109M parameters (F32 tensors)

Model Sources

  • Repository: https://github.com/huggingface/setfit
  • Paper: https://arxiv.org/abs/2209.11055

Model Labels

Label 0 examples:
  • 'The answer provided is a general approach to saving money, offering advice on spending wisely, making good investments, and considering personal goals. It also suggests discussing with a lead or ORGANIZATION_2 for guidance and highlights the importance of health and setting priorities.\n\nHowever, the answer lacks direct grounding in the provided document, which primarily discusses budgeting for studies and spending on learning and development. The document emphasizes evaluating expenditures based on their benefits to personal and organizational goals but does not offer explicit general financial saving tips.\n\nDue to this lack of specific correlation and grounding in the document, the evaluation is:\n\nBad'
  • 'The answer provided reads incoherently due to the insertion of "Cassandra Rivera Heather Nelson" and other names in inappropriate places throughout the text, making it difficult to assess its alignment with the original document.\n\nFurthermore, the inserted names disrupt the meaning of the text, making it unclear if all relevant points from the document are covered accurately. The structure of the sentences becomes disjointed and presents an overall lack of clarity. \n\nGiven the nature of these errors, it's impossible to fairly evaluate whether the answer strictly adheres to the details provided in the source documents. Therefore, based on the clarity, coherence, and alignment to the original text, the final result is:\n\nBad'
  • "The provided answer does not satisfactorily respond to the question about accessing the company's training resources. Instead, it contains unrelated information about document management, security protocols, feedback processes, and learning budget requests. The relevant information about accessing training resources is clearly missing or obscure.\n\n**Reasoning:**\n\n1. Irrelevant Details: The answer includes details about the usage of a password manager, secure sharing of information, and expense reimbursement, none of which pertain to accessing training resources.\n\n2. Lack of Specificity: No explicit method or platform for accessing training resources is mentioned, which is the core inquiry.\n\n3. Missed Key Point: The document points towards systems used for personal documents and reimbursement requests but fails to highlight training resource access points.\n\nFinal evaluation: Bad"
Label 1 examples:
  • 'The answer demonstrates a clear connection to the provided document, outlining key tips and principles for giving feedback as requested by the question. The response includes the importance of timing, focusing on the situation rather than the person, being clear and direct, and the goal of helping rather than shaming. It also mentions the importance of appreciation and receiving feedback with an open mind. \n\nHowever, there are some typographical errors and misplaced words (e.g., "emichelle James Johnson MDamples") that detract slightly from the clarity. Despite these minor issues, the content provided accurately reflects the information in the source document and comprehensively addresses the question. Therefore, the final evaluation is:\n\nGood'
  • "The given answer provides a general explanation of why it is important to proactively share information from high-level meetings, but it lacks grounding in the provided document. \n\nWhile the answer discusses the benefits of transparency, alignment with the organization's vision and mission, and fostering an open work environment, it does not directly quote or refer to specific points in the document. This weakens the argument, as it seems more like an independent explanation rather than an answer strictly based on the provided material.\n\nThe document mentions the importance of employees knowing why they are doing what they do and the necessity of closing the information gap by proactively sharing what was discussed in high-level meetings. This specific point from Document 4 could have been directly referenced to make the answer more aligned with the source material.\n\nThus, the answer, although conceptually correct, does not appropriately leverage the provided document.\n\nFinal result: Bad"
  • "The answer is partially correct since it provides the most important guidance on how to report car travel expenses for reimbursement. However, it contains inaccuracies and omissions that could cause confusion. \n\n1. The email addresses cited in the answer don’t match those in the document.\n2. There’s mention of requesting a parking card for a specific date (2004-04-14), which implies an inaccurate, irrelevant detail that might mislead the reader.\n3. The answer doesn't explicitly suggest that travel cost reimbursement is handled monthly, which is a crucial piece of information.\n\nConsidering these elements, the evaluation is as follows:\n\n**Evaluation:**\nThe essential details related to car travel reimbursement are present, but incorrect email addresses and irrelevant details might mislead or cause inconvenience for employees.\n\nThe final evaluation: Bad"

Evaluation

Metrics

Label   Accuracy
all     0.6269
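
The reported accuracy can be recomputed on a labeled held-out set with model.predict; a rough sketch (the texts, labels, and variable names below are placeholders, not the actual evaluation data):

from setfit import SetFitModel

model = SetFitModel.from_pretrained("Netta1994/setfit_baai_newrelic_gpt-4o_cot-few_shot_only_reasoning_1726750062.370463")

# Placeholder held-out examples with the same 0/1 labels
eval_texts = [
    "The answer is not grounded in the document. Final evaluation: Bad",
    "The answer is grounded in the document. Final evaluation: Good",
]
eval_labels = [0, 1]

preds = model.predict(eval_texts)
accuracy = sum(int(p) == y for p, y in zip(preds, eval_labels)) / len(eval_labels)
print(f"accuracy: {accuracy:.4f}")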

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_newrelic_gpt-4o_cot-few_shot_only_reasoning_1726750062.370463")
# Run inference
preds = model("The answer succinctly addresses the question by stating that finance@ORGANIZATION_2.<89312988> should be contacted for questions about travel reimbursement. This is correctly derived from the provided document, which specifies that questions about travel costs and reimbursements should be directed to the finance email.

Final evaluation: Good")
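
The call returns predicted label ids (0 or 1 for this model). If class probabilities are needed instead, SetFit models with a scikit-learn head such as this one also expose predict_proba; a brief, illustrative call with a placeholder input:

probs = model.predict_proba(["The answer is grounded in the provided document.\n\nFinal evaluation: Good"])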

Training Details

Training Set Metrics

Training set    Min    Median     Max
Word count      30     85.7538    210

Label    Training Sample Count
0        32
1        33

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (5, 5)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 20
  • body_learning_rate: (2e-05, 2e-05)
  • head_learning_rate: 2e-05
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
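
These entries correspond to fields of setfit.TrainingArguments. As a hedged sketch, the same configuration could be reconstructed roughly as follows (not taken from the original training script; distance_metric and margin are omitted because the listed values, cosine_distance and 0.25, are the defaults):

from setfit import TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(5, 5),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)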

Training Results

Epoch Step Training Loss Validation Loss
0.0061 1 0.2304 -
0.3067 50 0.2556 -
0.6135 100 0.244 -
0.9202 150 0.1218 -
1.2270 200 0.0041 -
1.5337 250 0.0022 -
1.8405 300 0.0017 -
2.1472 350 0.0017 -
2.4540 400 0.0015 -
2.7607 450 0.0014 -
3.0675 500 0.0013 -
3.3742 550 0.0013 -
3.6810 600 0.0012 -
3.9877 650 0.0012 -
4.2945 700 0.0012 -
4.6012 750 0.0012 -
4.9080 800 0.0012 -

Framework Versions

  • Python: 3.10.14
  • SetFit: 1.1.0
  • Sentence Transformers: 3.1.0
  • Transformers: 4.44.0
  • PyTorch: 2.4.1+cu121
  • Datasets: 2.19.2
  • Tokenizers: 0.19.1

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}