---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Reasoning: **Context Grounding:** - The answer accurately pulls information directly from the provided document, including specific changes Haribabu Kommi is making to the storage AM. It lists the changes in a manner that seems consistent with the details given in the document. **Relevance:** - The answer is directly relevant to the question, which asked specifically about the changes Haribabu Kommi is making to the storage AM. It enumerates the exact modifications and additions that are being incorporated based on the email content. **Conciseness:** - The answer is concise and to the point, listing only the changes mentioned in the supplied email without deviating into unrelated topics or providing extraneous information. Final Result: Good'
- text: '**Good** **Reasoning:** 1. **Context Grounding:** The answer "China''s Ning Zhongyan won the gold medal in the men''s 1,500m final at the speed skating World Cup" is well-supported by the provided document, which explicitly states that Ning Zhongyan won the gold medal in the men''s 1,500m final. 2. **Relevance:** The answer directly addresses the specific question asked, identifying the athlete who won the gold medal in the men''s 1,500m final. 3. **Conciseness:** The answer is clear and to the point, providing only the necessary information without any additional, unrelated details.'
- text: 'Reasoning why the answer may be good: 1. **Context Grounding:** The details in the answer about the sizes of the individual and combined portraits are directly pulled from the provided document. 2. **Relevance:** The answer strictly addresses the question about the available sizes for the individual and combined portraits without deviating into unrelated topics. 3. **Conciseness:** The answer is concise, directly providing the requested size information without including extraneous details. Reasoning why the answer may be bad: 1. There is no discernible reason why this answer may be bad based on the provided criteria. It is well-supported by the document, directly answers the question, and is concise. Final Result: **Good**'
- text: 'Reasoning why the answer may be good: 1. **Context Grounding:** The answer accurately lists the components found in the provided document, such as comprehension questions, writing exercises, discussion questions, an additional reading list, semester and full-year schedules, and a bibliography. It also includes details about the organization of the guide into units and lessons, which is mentioned in the document. 2. **Relevance:** The answer specifically addresses the question by identifying the components of the British Medieval Student Guide. 3. **Conciseness:** The answer is relatively to the point, mentioning the main components without unnecessary elaboration. Reasoning why the answer may be bad: 1. **Context Grounding:** Although the details are generally correct, some parts of the provided description are omitted, such as the note that comprehension question answers are in the Teacher''s Guide. 2. **Relevance:** The initial part about the introductory question "Why read great literature?" and some other additional comments are not directly related to the components of the Student Guide. 3. **Conciseness:** The answer could be more concise by excluding repeated and unrelated information, focusing only on listing the components directly. Final Result: **Bad** The answer introduces unnecessary elements that are not related to enumerating the components of the guide, and it overlooks some specific details provided in the document. Overall, the response is correct but not optimal in addressing the specific question concisely.'
- text: '**Reasoning:** **Why the answer may be good:** 1. It lists three names of Members of Congress, which directly responds to the question. **Why the answer may be bad:** 1. **Context Grounding:** The provided document specifically names Rep. Danny Davis as the third Member of Congress, and the first two were Reps. Keith Ellison and Barbara Lee, not Andy Harris, Kyle Evans, or Jessica Smith. This indicates that the actual names provided in the answer are incorrect and not grounded in the given context. 2. **Relevance:** The answer is irrelevant because it provides incorrect names, which does not address the question accurately. 3. **Conciseness:** The answer is concise, but since it’s incorrect, its brevity doesn''t contribute to its correctness. **Final Result:** Bad'
inference: true
model-index:
- name: SetFit with BAAI/bge-base-en-v1.5
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: Unknown
      type: unknown
      split: test
    metrics:
    - type: accuracy
      value: 0.92
      name: Accuracy
---

# SetFit with BAAI/bge-base-en-v1.5

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.

The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
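These two steps map directly onto the `setfit` `Trainer` API. The snippet below is a minimal sketch of how a comparable model could be fine-tuned; the inline two-example dataset, the label meanings, and the output path are illustrative placeholders, not the data or script actually used for this checkpoint (the hyperparameters used here are listed under Training Details).

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Start from the same Sentence Transformer body used by this checkpoint;
# a LogisticRegression head is attached by default.
model = SetFitModel.from_pretrained("BAAI/bge-base-en-v1.5")

# Placeholder few-shot data: "text" is a reasoning trace, "label" is 0 or 1.
train_dataset = Dataset.from_dict({
    "text": [
        "Reasoning: ... Final Result: Good",
        "Reasoning: ... Final Result: Bad",
    ],
    "label": [1, 0],
})

# Trainer.train() performs both steps: contrastive fine-tuning of the
# embedding body, then fitting the classification head on its features.
args = TrainingArguments(
    batch_size=16,
    num_epochs=5,
    sampling_strategy="oversampling",
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()

model.save_pretrained("setfit-reasoning-classifier")  # placeholder path
```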
## Model Details

### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes

### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels
| Label | Examples |
|:------|:---------|
| 0     |          |
| 1     |          |
## Evaluation

### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.92     |

## Uses

### Direct Use for Inference

First install the SetFit library:

```bash
pip install setfit
```

Then you can load this model and run inference.

```python
from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_rag_ds_gpt-4o_improved-cot-instructions_two_reasoning_only_reasoning_1726")
# Run inference
preds = model("**Good** **Reasoning:** 1. **Context Grounding:** The answer \"China's Ning Zhongyan won the gold medal in the men's 1,500m final at the speed skating World Cup\" is well-supported by the provided document, which explicitly states that Ning Zhongyan won the gold medal in the men's 1,500m final. 2. **Relevance:** The answer directly addresses the specific question asked, identifying the athlete who won the gold medal in the men's 1,500m final. 3. **Conciseness:** The answer is clear and to the point, providing only the necessary information without any additional, unrelated details.")
```
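The call returns the predicted class label (`0` or `1`, as listed under Model Labels). For several texts at once, `SetFitModel.predict` accepts a list; the short sketch below continues from the snippet above, and the example strings are placeholders.

```python
# Batch inference: predict() takes a list of strings and returns one
# integer label (0 or 1) per input.
texts = [
    "Reasoning: ... Final Result: Good",
    "Reasoning: ... Final Result: Bad",
]
preds = model.predict(texts)
print(preds)  # e.g. a tensor or list such as [1, 0]
```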
## Training Details

### Training Set Metrics
| Training set | Min | Median   | Max |
|:-------------|:----|:---------|:----|
| Word count   | 52  | 125.5070 | 199 |

| Label | Training Sample Count |
|:------|:----------------------|
| 0     | 34                    |
| 1     | 37                    |

### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False

### Training Results
| Epoch  | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0056 | 1    | 0.2031        | -               |
| 0.2809 | 50   | 0.2589        | -               |
| 0.5618 | 100  | 0.2125        | -               |
| 0.8427 | 150  | 0.0079        | -               |
| 1.1236 | 200  | 0.0022        | -               |
| 1.4045 | 250  | 0.0017        | -               |
| 1.6854 | 300  | 0.0017        | -               |
| 1.9663 | 350  | 0.0014        | -               |
| 2.2472 | 400  | 0.0014        | -               |
| 2.5281 | 450  | 0.0012        | -               |
| 2.8090 | 500  | 0.0012        | -               |
| 3.0899 | 550  | 0.0012        | -               |
| 3.3708 | 600  | 0.0012        | -               |
| 3.6517 | 650  | 0.0011        | -               |
| 3.9326 | 700  | 0.0011        | -               |
| 4.2135 | 750  | 0.0011        | -               |
| 4.4944 | 800  | 0.0011        | -               |
| 4.7753 | 850  | 0.001         | -               |

### Framework Versions
- Python: 3.10.14
- SetFit: 1.1.0
- Sentence Transformers: 3.1.0
- Transformers: 4.44.0
- PyTorch: 2.4.1+cu121
- Datasets: 2.19.2
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
  doi = {10.48550/ARXIV.2209.11055},
  url = {https://arxiv.org/abs/2209.11055},
  author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Efficient Few-Shot Learning Without Prompts},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```