Fast Emotion-X: Fine-tuned DeBERTa V3 Small for Emotion Detection

This model is a fine-tuned version of microsoft/deberta-v3-small for emotion detection using the dair-ai/emotion dataset.

Overview

Fast Emotion-X is an emotion detection model fine-tuned from Microsoft's DeBERTa V3 Small. It classifies text into one of six emotional categories. Building on DeBERTa's pretrained representations, the model is fine-tuned on the dair-ai/emotion dataset and reaches 94.6% evaluation accuracy (see Training Details below).

Model Details

  • Model Name: AnkitAI/deberta-v3-small-base-emotions-classifier
  • Base Model: microsoft/deberta-v3-small
  • Dataset: dair-ai/emotion
  • Fine-tuning: The model is fine-tuned for emotion detection with a classification head for six emotional categories: anger, disgust, fear, joy, sadness, and surprise.

Emotion Labels

  • Anger
  • Disgust
  • Fear
  • Joy
  • Sadness
  • Surprise
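
The index-to-label mapping used by the classification head is stored in the checkpoint's configuration and can be checked directly. This is a quick sanity check using the transformers library; the mapping reported at runtime is authoritative.

from transformers import AutoConfig

# Load only the configuration of the published checkpoint
config = AutoConfig.from_pretrained("AnkitAI/deberta-v3-small-base-emotions-classifier")

# id2label maps each class index to its label string
print(config.id2label)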

Usage

You can use this model either through the emotionclassifier Python package or directly with the Hugging Face transformers library.

Installation

Install the package using pip:

pip install emotionclassifier

Basic Usage

Here's an example of how to use the emotionclassifier to classify a single text:

from emotionclassifier import EmotionClassifier

# Initialize the classifier with the default model
classifier = EmotionClassifier()

# Classify a single text
text = "I am very happy today!"
result = classifier.predict(text)
print("Emotion:", result['label'])
print("Confidence:", result['confidence'])

Batch Processing

You can classify multiple texts at once using the predict_batch method:

texts = ["I am very happy today!", "I am so sad."]
results = classifier.predict_batch(texts)
print("Batch processing results:", results)

Visualization

To visualize the emotion distribution of a text:

from emotionclassifier import plot_emotion_distribution

result = classifier.predict("I am very happy today!")
plot_emotion_distribution(result['probabilities'], classifier.labels.values())

Command-Line Interface (CLI) Usage

You can also use the package from the command line:

emotionclassifier --model deberta-v3-small --text "I am very happy today!"

DataFrame Integration

Integrate with pandas DataFrames to classify text columns:

import pandas as pd
from emotionclassifier import DataFrameEmotionClassifier

df = pd.DataFrame({
    'text': ["I am very happy today!", "I am so sad."]
})

classifier = DataFrameEmotionClassifier()
df = classifier.classify_dataframe(df, 'text')
print(df)

Emotion Trends Over Time

Analyze and plot emotion trends over time:

from emotionclassifier import EmotionTrends

texts = ["I am very happy today!", "I am feeling okay.", "I am very sad."]
trends = EmotionTrends()
emotions = trends.analyze_trends(texts)
trends.plot_trends(emotions)

Fine-tuning

Fine-tune a pre-trained model on your own dataset:

from emotionclassifier.fine_tune import fine_tune_model

# Define your training and validation datasets
train_dataset = ...
val_dataset = ...

# Fine-tune the model
fine_tune_model(classifier.model, classifier.tokenizer, train_dataset, val_dataset, output_dir='fine_tuned_model')

Using transformers Library

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "AnkitAI/deberta-v3-small-base-emotions-classifier"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example usage: return the predicted label string rather than a raw index tensor
def predict_emotion(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_id = logits.argmax(dim=-1).item()
    # Map the class index to its label name stored in the model config
    return model.config.id2label[predicted_id]

text = "I'm so happy with the results!"
emotion = predict_emotion(text)
print("Detected Emotion:", emotion)

Training

The model was trained using the following parameters (a sketch of an equivalent Trainer setup follows the list):

  • Learning Rate: 2e-5
  • Batch Size: 4
  • Weight Decay: 0.01
  • Evaluation Strategy: Epoch
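
Below is a minimal sketch of how these hyperparameters map onto a transformers Trainer run. It is an illustrative reconstruction, not the author's published training script: the tokenization settings (max_length=128, fixed padding) are assumptions, and the epoch count is taken from the Model Card Data table further down.

from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "microsoft/deberta-v3-small"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=6)

# Tokenize the dair-ai/emotion splits
dataset = load_dataset("dair-ai/emotion")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="deberta-v3-small-emotions",
    learning_rate=2e-5,             # as listed above
    per_device_train_batch_size=4,  # as listed above
    weight_decay=0.01,              # as listed above
    eval_strategy="epoch",          # "evaluation_strategy" on older transformers releases
    num_train_epochs=20,            # from the Model Card Data table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)

trainer.train()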

Training Details

  • Evaluation Loss: 0.0858
  • Evaluation Runtime: 110070.6349 seconds
  • Evaluation Samples/Second: 78.495
  • Evaluation Steps/Second: 2.453
  • Training Loss: 0.1049
  • Evaluation Accuracy: 94.6%
  • Evaluation Precision: 94.8%
  • Evaluation Recall: 94.5%
  • Evaluation F1 Score: 94.7%
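
These figures are as reported by the author's run. For an independent check, accuracy on the dair-ai/emotion test split could be re-computed along the following lines. This is an assumption-laden sketch: it presumes the checkpoint's class indices follow the dataset's integer labels, which should be verified against config.id2label first.

import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "AnkitAI/deberta-v3-small-base-emotions-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

test_split = load_dataset("dair-ai/emotion", split="test")

correct = 0
for example in test_split:
    inputs = tokenizer(example["text"], return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        predicted_id = model(**inputs).logits.argmax(dim=-1).item()
    # Assumes the model's class indices match the dataset's integer labels
    correct += int(predicted_id == example["label"])

print("Test accuracy:", correct / len(test_split))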

Model Card Data

  • Base Model: microsoft/deberta-v3-small
  • Training Dataset: dair-ai/emotion
  • Number of Training Epochs: 20
  • Learning Rate: 2e-5
  • Per Device Train Batch Size: 4
  • Evaluation Strategy: Epoch
  • Best Model Accuracy: 94.6%

License

This model is licensed under the MIT License.
