language:
  - en
license: mit
tags:
  - text-classification
  - zero-shot-classification
datasets:
  - facebook/anli
  - fever/fever
  - nyu-mll/multi_nli
pipeline_tag: zero-shot-classification
library_name: transformers

DeBERTa-v3-base Zero-Shot Classification Model

This repository hosts a fine-tuned version of DeBERTa-v3-base for zero-shot text classification. The model classifies text into any set of labels you supply at inference time, with no task-specific retraining, which makes it well suited to settings where labeled data is scarce or to rapid prototyping of text classification solutions.


Model Overview

  • Base Model: DeBERTa-v3-base
  • Architecture: DebertaV2ForSequenceClassification
  • Language: English (en)
  • Data Type: float16 (weights stored in the SafeTensors format for efficient loading)

This model builds on DeBERTa-v3-base and was fine-tuned on MultiNLI, ANLI (facebook/anli), and FEVER to strengthen its zero-shot classification performance.
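
If you prefer to work with the model directly rather than through a pipeline, the sketch below shows one way to load the float16 weights with Transformers. It assumes PyTorch and, for half-precision inference, a CUDA-capable GPU; on CPU, omit torch_dtype.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "syedkhalid076/DeBERTa-Zero-Shot-Classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the fine-tuned weights in half precision (float16), as stored in the repo.
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
).to("cuda").eval()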


Features

  • Zero-Shot Classification: Directly classify text into any set of user-defined labels without additional training.
  • Multi-Label Support: Handle tasks with overlapping categories or multiple applicable labels by setting multi_label=True (see the multi-label example below).
  • Pretrained Efficiency: Optimized for inference using mixed-precision (float16) with SafeTensors.

Example Usage

This model is designed to integrate seamlessly with Hugging Face's transformers library. Here's a simple example:

from transformers import pipeline

# Load the Zero-Shot Classification pipeline
classifier = pipeline("zero-shot-classification", model="syedkhalid076/DeBERTa-Zero-Shot-Classification")

# Input sequence and candidate labels
sequence_to_classify = "Last week I upgraded my iOS version and ever since then your app is crashing."
candidate_labels = ["mobile", "website", "billing", "account access", "app crash"]

# Perform classification
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
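
To score overlapping categories independently, a minimal variation of the example above sets multi_label=True (reusing the classifier, sequence, and labels already defined):

# Multi-label classification: each label is scored independently,
# so several labels can receive high scores at the same time.
output = classifier(sequence_to_classify, candidate_labels, multi_label=True)
print(output["labels"])  # labels sorted by descending score
print(output["scores"])  # independent scores in [0, 1]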

Applications

I trained this model for UX research purposes, but it can be used for any of the following tasks:

  • Customer Feedback Analysis: Categorize user reviews or feedback.
  • Intent Detection: Identify user intents in conversational AI systems.
  • Content Classification: Classify articles, social media posts, or documents.
  • Error Detection: Detect error reports in logs or feedback.
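
For feedback analysis or content classification at scale, the pipeline also accepts a list of texts and returns one result per input. A minimal sketch, reusing the classifier loaded in the Example Usage section (the feedback snippets and labels below are illustrative):

# Hypothetical feedback snippets; replace with your own data.
feedback = [
    "The app keeps freezing on the checkout screen.",
    "I was charged twice for my subscription this month.",
]
labels = ["app crash", "billing", "feature request"]

results = classifier(feedback, labels, multi_label=False)  # one result per text
for item in results:
    print(item["sequence"][:40], "->", item["labels"][0])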

Training Data

The model was fine-tuned on the following datasets:

  • MultiNLI: Multi-genre natural language inference corpus.
  • ANLI: Adversarial NLI dataset for robust entailment modeling.
  • FEVER: Fact Extraction and Verification dataset.

These datasets help the model generalize across a wide range of zero-shot classification tasks.
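
Because these are natural language inference (NLI) corpora, the zero-shot pipeline frames classification as entailment: each candidate label is slotted into a hypothesis sentence and scored against the input. The default template is roughly "This example is {}.", and it can be overridden via hypothesis_template, which is worth tuning for domain-specific labels. A minimal sketch, reusing the classifier and inputs from the Example Usage section:

# Customize how candidate labels are turned into NLI hypotheses.
output = classifier(
    sequence_to_classify,
    candidate_labels,
    hypothesis_template="This customer feedback is about {}.",
)
print(output["labels"][0], output["scores"][0])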


Performance

This model demonstrates strong performance across various zero-shot classification benchmarks, effectively distinguishing between user-defined categories in diverse text inputs.


Limitations

  • Language Support: Currently supports English (en) only.
  • Context Length: Performance may degrade with extremely long text inputs. Consider truncating inputs to the model's max token length.
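
One way to keep long inputs within the limit is to pre-truncate them with the tokenizer before classification. A minimal sketch, reusing the classifier and candidate_labels from the Example Usage section; the 480-token cap is an illustrative choice that leaves headroom below a typical 512-token limit for the hypothesis sentence the pipeline appends:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("syedkhalid076/DeBERTa-Zero-Shot-Classification")

def truncate_for_zero_shot(text, max_tokens=480):
    # Truncate to max_tokens, then decode back to plain text for the pipeline.
    ids = tokenizer(text, truncation=True, max_length=max_tokens)["input_ids"]
    return tokenizer.decode(ids, skip_special_tokens=True)

long_document = " ".join(["Example sentence."] * 500)  # placeholder long input
output = classifier(truncate_for_zero_shot(long_document), candidate_labels)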

License

This model is licensed under the MIT License. You are free to use, modify, and distribute it with appropriate attribution.


Citation

If you use this model in your work, please cite this repository:

@misc{syedkhalid076_deberta_zeroshot,
  author = {Syed Khalid Hussain},
  title = {DeBERTa Zero-Shot Classification},
  year = {2024},
  url = {https://huggingface.co/syedkhalid076/DeBERTa-Zero-Shot-Classification}
}

Acknowledgements

This model was fine-tuned using Hugging Face Transformers and is hosted on the Hugging Face Model Hub. Special thanks to the creators of DeBERTa-v3 and of the datasets used for fine-tuning.