---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: 'part-of-speech ( pos ) tagging is a fundamental language analysis task---part-of-speech ( pos ) tagging is a fundamental nlp task , used by a wide variety of applications'
- text: 'the two baseline methods were implemented using scikit-learn in python---the models were implemented using scikit-learn module'
- text: 'semantic parsing is the task of converting a sentence into a representation of its meaning , usually in a logical form grounded in the symbols of some fixed ontology or relational database ( cite-p-21-3-3 , cite-p-21-3-4 , cite-p-21-1-11 )---for this language model , we built a trigram language model with kneser-ney smoothing using srilm from the same automatically segmented corpus'
- text: 'the results show that our model can clearly outperform the baselines in terms of three evaluation metrics---for the extractive or abstractive summaries , we use rouge scores , a metric used to evaluate automatic summarization performance , to measure the pairwise agreement of summaries from different annotators'
- text: 'language models were built with srilm , modified kneser-ney smoothing , default pruning , and order 5---the language model used was a 5-gram with modified kneserney smoothing , built with srilm toolkit'
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-TinyBERT-L6-v2
---

# SetFit with sentence-transformers/paraphrase-TinyBERT-L6-v2

This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-TinyBERT-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-TinyBERT-L6-v2) as the Sentence Transformer embedding model.
A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves:

1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.

## Model Details

### Model Description

- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-TinyBERT-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-TinyBERT-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 2 classes

### Model Sources

- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)

### Model Labels

| Label | Examples |
|:------|:---------|
| 0     |          |
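The second step of the recipe above can be sketched with plain scikit-learn. This is a minimal, hypothetical illustration of the classification head only: the random vectors stand in for the sentence embeddings that the fine-tuned paraphrase-TinyBERT-L6-v2 body would produce (assumed here to be 768-dimensional), and the shot count and data are made up. For actual inference with the published model, the `setfit` library's `SetFitModel.from_pretrained(...)` and `model.predict(...)` are the usual entry points.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 768  # assumed embedding size of the Sentence Transformer body

# Hypothetical few-shot setup: 8 labeled embeddings per class, with the
# two classes drawn from clearly separated distributions.
class0 = rng.normal(loc=-1.0, size=(8, dim))
class1 = rng.normal(loc=1.0, size=(8, dim))
X = np.vstack([class0, class1])
y = np.array([0] * 8 + [1] * 8)

# Train the LogisticRegression head on the (stand-in) embeddings.
head = LogisticRegression()
head.fit(X, y)

# At inference time, new sentences would be embedded by the same
# fine-tuned body and then scored by this head.
test = np.vstack([
    rng.normal(loc=-1.0, size=(1, dim)),
    rng.normal(loc=1.0, size=(1, dim)),
])
print(head.predict(test))  # should recover the two clusters
```

In the full SetFit pipeline, the contrastive fine-tuning in step one is what makes such a simple head effective from only a handful of examples: it pulls embeddings of same-class sentence pairs together and pushes different-class pairs apart before the head is fit.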