Hellenic Sentiment AI - Version 2.0

Model Description

This is the second version of Hellenic Sentiment AI.

Like the first version, this model is released as open weights only and is designed for both emotion and sentiment classification of Greek-language text.

The new emotion classifier is a custom multi-label classification head that extends the previous version of the model (version 1.1); the full architecture is shown in the Usage section below.

18 diverse emotion labels are available for classification:

    emotion_labels = [
        'joy', 'trust', 'excitement', 'gratitude', 'hope', 'love', 'pride',
        'anger', 'disgust', 'fear', 'sadness', 'anxiety', 'frustration', 'guilt',
        'disappointment', 'surprise', 'anticipation', 'neutral'
    ]

The sentiment polarity labels remain the same as in version 1.1 of the model. For reference, these are:

sentiment_labels = ['negative', 'neutral', 'positive']
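Note the difference in readout between the two heads (visible in the Usage code below): sentiment polarity is a single-label decision, read with a softmax over the three classes, while emotions are multi-label, read with an independent sigmoid per label so that several emotions can be active at once. A toy illustration:

    import torch

    sentiment_logits = torch.tensor([[-0.5, 0.1, 1.2]])  # one row per text, 3 classes
    emotion_logits = torch.tensor([[2.0, -1.0, 0.5]])    # toy values for 3 of the 18 emotions

    # Softmax: sentiment probabilities compete and sum to 1
    print(torch.softmax(sentiment_logits, dim=1).sum(dim=1))  # tensor([1.])

    # Sigmoid: each emotion gets an independent probability in (0, 1)
    print(torch.sigmoid(emotion_logits))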

Model Details

  • Model Name: Hellenic Sentiment AI
  • Model Version: 2.0
  • Language: Greek only for emotion classification (version 2.0); multilingual (el, en, fr, it, es, de, ar) for sentiment polarity (version 1.1)
  • Framework: Hugging Face Transformers
  • Max Sequence Length: 512
  • Base Architecture: XLM-RoBERTa (XLMRobertaForSequenceClassification)
  • Model Size: 278M parameters (F32, safetensors)
  • Training Data: The model (version 2.0) was trained on a custom, curated Greek-language dataset of reviews hand-picked from products, places, restaurants, etc. The emotion labels were assigned manually by a human annotator (multi-hot encoding sketched below).
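Since the emotion head is trained as a multi-label classifier with BCEWithLogitsLoss (see the CustomModel class below), each review's annotated emotions would typically be encoded as an 18-dimensional multi-hot target vector. A minimal sketch, assuming a hypothetical list-of-labels annotation format:

    import torch

    emotion_labels = [
        'joy', 'trust', 'excitement', 'gratitude', 'hope', 'love', 'pride',
        'anger', 'disgust', 'fear', 'sadness', 'anxiety', 'frustration', 'guilt',
        'disappointment', 'surprise', 'anticipation', 'neutral'
    ]
    label_to_idx = {label: i for i, label in enumerate(emotion_labels)}

    def encode_emotions(annotated):
        # Map a list of annotated emotion names to an 18-dim multi-hot
        # float tensor, the target format expected by BCEWithLogitsLoss
        target = torch.zeros(len(emotion_labels))
        for name in annotated:
            target[label_to_idx[name]] = 1.0
        return target

    # e.g. a review annotated with both "joy" and "gratitude"
    print(encode_emotions(["joy", "gratitude"]))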

Production readiness

This model is designed and trained as a production-grade sentiment analysis solution for downstream applications. Its architecture and testing make it suitable for deployment in real-world scenarios, providing accurate and reliable sentiment analysis for a wide range of use cases.

Ongoing Improvement

To ensure the model remains at the forefront of sentiment analysis capabilities, it is regularly updated and fine-tuned using new data and techniques.

This commitment to ongoing improvement enables the model to adapt to emerging trends, nuances, and complexities in language, ensuring that it continues to provide exceptional performance and accuracy in production environments.

Usage

For simplicity, you can run this in Google Colab.

Alternatively, embed the following code in your application:

import torch
from torch import nn
from transformers import AutoTokenizer, AutoConfig, XLMRobertaForSequenceClassification


# Define the CustomModel class, which predicts both sentiment polarity and emotions
class CustomModel(XLMRobertaForSequenceClassification):
    def __init__(self, config, num_emotion_labels):
        super().__init__(config)
        self.num_emotion_labels = num_emotion_labels
        self.dropout_emotion = nn.Dropout(config.hidden_dropout_prob)
        # Multi-label emotion head on top of the [CLS] representation
        self.emotion_classifier = nn.Sequential(
            nn.Linear(config.hidden_size, 512),
            nn.Mish(),
            nn.Dropout(0.3),
            nn.Linear(512, num_emotion_labels)
        )
        self._init_weights(self.emotion_classifier[0])
        self._init_weights(self.emotion_classifier[3])

    def _init_weights(self, module):
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.bias is not None:
                module.bias.data.zero_()

    def forward(self, input_ids=None, attention_mask=None, sentiment=None, labels=None):
        outputs = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = outputs[0]
        if len(sequence_output.shape) != 3:
            raise ValueError(f"Expected sequence_output to have 3 dimensions, got {sequence_output.shape}")
        # The [CLS] token hidden state feeds the emotion head
        cls_hidden_states = sequence_output[:, 0, :]
        cls_hidden_states = self.dropout_emotion(cls_hidden_states)
        emotion_logits = self.emotion_classifier(cls_hidden_states)
        # The pretrained sentiment head from version 1.1 is reused as-is (no gradients)
        with torch.no_grad():
            cls_token_state = sequence_output[:, 0, :].unsqueeze(1)
            sentiment_logits = self.classifier(cls_token_state).squeeze(1)
        if labels is not None:
            # Uniform positive-class weights; adjust if the emotion classes are imbalanced
            class_weights = torch.tensor([1.0] * self.num_emotion_labels).to(labels.device)
            loss_fct = nn.BCEWithLogitsLoss(pos_weight=class_weights)
            loss = loss_fct(emotion_logits, labels)
            return {"loss": loss, "emotion_logits": emotion_logits, "sentiment_logits": sentiment_logits}
        return {"emotion_logits": emotion_logits, "sentiment_logits": sentiment_logits}


# Load the tokenizer and model from the Hugging Face Hub (or a local copy of the repository)
model_dir = "gsar78/HellenicSentimentAI_v2"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
config = AutoConfig.from_pretrained(model_dir)
model = CustomModel.from_pretrained(model_dir, config=config, num_emotion_labels=18)



# Function to predict sentiment and emotion
def predict(texts):
    # Tokenize the input texts
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt", max_length=512)

    # Move inputs to the same device as the model
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    inputs = {k: v.to(device) for k, v in inputs.items()}

    # Ensure the model is on the correct device
    model.to(device)
    model.eval()  # Set the model to evaluation mode

    # Clear any gradients
    model.zero_grad()

    # Get model predictions
    with torch.no_grad():
        outputs = model(**inputs)

    # Extract logits
    emotion_logits = outputs["emotion_logits"]
    sentiment_logits = outputs["sentiment_logits"]

    # Convert logits to probabilities
    emotion_probs = torch.sigmoid(emotion_logits)
    sentiment_probs = torch.softmax(sentiment_logits, dim=1)

    # Convert tensors to lists for easier handling
    emotion_probs_list = (emotion_probs * 100).tolist()[0]  # Get the first (and only) sample and convert to %
    sentiment_probs_list = (sentiment_probs * 100).tolist()[0]  # Get the first (and only) sample and convert to %

    # Define the sentiment and emotion labels
    sentiment_labels = ['negative', 'neutral', 'positive']
    emotion_labels = [
        'joy', 'trust', 'excitement', 'gratitude', 'hope', 'love', 'pride',
        'anger', 'disgust', 'fear', 'sadness', 'anxiety', 'frustration', 'guilt',
        'disappointment', 'surprise', 'anticipation', 'neutral'
    ]

    # Thresholds for displaying probabilities (values are in %)
    emotion_threshold = 0.30
    sentiment_threshold = 0.0

    # Map emotion probabilities to their corresponding labels
    emotion_results = {label: prob for label, prob in zip(emotion_labels, emotion_probs_list) if prob > emotion_threshold}

    # Map sentiment probabilities to their corresponding labels
    sentiment_results = {label: prob for label, prob in zip(sentiment_labels, sentiment_probs_list) if prob > sentiment_threshold}

    return emotion_results, sentiment_results

# Example usage
sample_texts = ["Απολαύσαμε μια υπέροχη βραδιά σε αυτό το εστιατόριο. "
"Το μενού ήταν πολύ καλά σχεδιασμένο και κάθε πιάτο ήταν μια γευστική έκπληξη. "
"Η εξυπηρέτηση ήταν άψογη και η ατμόσφαιρα ευχάριστη. Σίγουρα θα επιστρέψουμε για άλλη μια φορά."]


print("Text: ", sample_texts[0])
emotion_results, sentiment_results = predict(sample_texts)

print("\nSentiment probabilities (%):")
for label, prob in sentiment_results.items():
    print(f"    {label}: {prob:.2f}%")
# Print the results
print("\nEmotion probabilities (%):")
for label, prob in emotion_results.items():
    print(f"    {label}: {prob:.2f}%")



# Change the text and predict again
print("\n======")


print("\nNew prediction:")
sample_texts = ["Η τελευταία μας εμπειρία στο εστιατόριο αυτό δεν ήταν ιδιαίτερα θετική. "
"Αν και ο χώρος είχε μια ενδιαφέρουσα ατμόσφαιρα, το φαγητό ήταν μέτριο και η εξυπηρέτηση ήταν αργή. "
"Οι τιμές ήταν επίσης απογοητευτικές για την ποιότητα που προσφέρθηκε."]




print("Text: ", sample_texts[0])
emotion_results, sentiment_results = predict(sample_texts)

print("\nSentiment probabilities (%):")
for label, prob in sentiment_results.items():
    print(f"    {label}: {prob:.2f}%")
print("\nEmotion probabilities (%):")
for label, prob in emotion_results.items():
    print(f"    {label}: {prob:.2f}%")

Expected output:

Text:  Απολαύσαμε μια υπέροχη βραδιά σε αυτό το εστιατόριο. Το μενού ήταν πολύ καλά σχεδιασμένο και κάθε πιάτο ήταν μια γευστική έκπληξη. Η εξυπηρέτηση ήταν άψογη και η ατμόσφαιρα ευχάριστη. Σίγουρα θα επιστρέψουμε για άλλη μια φορά.

Sentiment probabilities (%):
    negative: 17.36%
    neutral: 11.31%
    positive: 71.33%

Emotion probabilities (%):
    joy: 99.92%
    trust: 93.40%
    excitement: 73.43%
    gratitude: 97.52%
    hope: 0.33%
    love: 12.20%
    pride: 1.09%
    anticipation: 0.31%

======

New prediction:
Text:  Η τελευταία μας εμπειρία στο εστιατόριο αυτό δεν ήταν ιδιαίτερα θετική. Αν και ο χώρος είχε μια ενδιαφέρουσα ατμόσφαιρα, το φαγητό ήταν μέτριο και η εξυπηρέτηση ήταν αργή. Οι τιμές ήταν επίσης απογοητευτικές για την ποιότητα που προσφέρθηκε.

Sentiment probabilities (%):
    negative: 58.39%
    neutral: 16.34%
    positive: 25.27%

Emotion probabilities (%):
    frustration: 68.61%
    disappointment: 99.84%
    neutral: 0.75%

Evaluation

Due to time constraints, no official benchmarking has been done yet.

However, evaluation on a test dataset yields the following results for emotion classification:

    'eval_f1': 0.9448,
    'eval_loss': 0.0322,
    'eval_accuracy': 0.7857,
    'eval_hamming_loss': 0.0141,
    'eval_precision': 0.9785,
    'eval_recall': 0.9133
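For reference, metrics of this kind can be computed with scikit-learn. A minimal sketch, assuming micro-averaged scores, subset accuracy, and a 0.5 decision threshold (the exact settings behind the reported numbers are not specified):

    import numpy as np
    from sklearn.metrics import (accuracy_score, f1_score, hamming_loss,
                                 precision_score, recall_score)

    def multilabel_metrics(logits, targets, threshold=0.5):
        # logits, targets: numpy arrays of shape (num_samples, 18)
        probs = 1.0 / (1.0 + np.exp(-logits))     # sigmoid
        preds = (probs >= threshold).astype(int)  # binarize per label
        return {
            'eval_f1': f1_score(targets, preds, average='micro'),
            'eval_accuracy': accuracy_score(targets, preds),  # subset accuracy: all 18 labels must match
            'eval_hamming_loss': hamming_loss(targets, preds),
            'eval_precision': precision_score(targets, preds, average='micro'),
            'eval_recall': recall_score(targets, preds, average='micro'),
        }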

Enjoy!
