EmotionCLIP-V2

Project Overview

EmotionCLIP is an open-domain multimodal emotion perception model built on CLIP. It aims to perform broad emotion recognition from visual inputs such as faces, scenes, and photographs, supporting the analysis of emotional attributes in images, scene layouts, and even artworks.

Datasets

The model is trained using the following datasets:

  1. EmoSet:

    • Citation:
      @inproceedings{yang2023emoset,
        title={EmoSet: A Large-Scale Visual Emotion Dataset with Rich Attributes},
        author={Yang, Jingyuan and Huang, Qirui and Ding, Tingting and Lischinski, Dani and Cohen-Or, Danny and Huang, Hui},
        booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
        pages={20383--20394},
        year={2023}
      }
      
    • This dataset contains rich emotion labels and visual attributes, providing a foundation for emotion perception. For this model, we use the EmoSet-118K subset.
  2. Open Human Facial Emotion Recognition Dataset:

    • Contains nearly 10,000 emotion-labeled images collected in the wild to enhance the model's facial emotion recognition capability.

  3. SFEW:

    • The Static Facial Expressions in the Wild (SFEW) dataset is a facial expression recognition dataset created by selecting static frames from the AFEW database, with keyframes computed via facial point clustering.
  4. Neutral add:

    • Contains 50K images without obvious emotional fluctuation, used as a supplementary neutral category (see the label-merging sketch after this list).
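
As a concrete illustration of how these sources can be combined, the sketch below merges them into a single nine-class label space (the eight EmoSet categories plus neutral). The folder layout and paths are assumptions for illustration only; they do not describe the actual dataset organization.

# Minimal sketch: merging the datasets above into one nine-class label space.
# Folder names and paths are assumptions for illustration, not the actual layout.
import os

# Eight EmoSet categories plus the supplementary "neutral" class
LABELS = ['amusement', 'anger', 'awe', 'contentment',
          'disgust', 'excitement', 'fear', 'sadness', 'neutral']
LABEL2ID = {name: idx for idx, name in enumerate(LABELS)}

def collect_samples(root):
    """Walk root/<label>/*.jpg and return (path, label_id) pairs."""
    samples = []
    for label in LABELS:
        folder = os.path.join(root, label)
        if not os.path.isdir(folder):
            continue
        for fname in os.listdir(folder):
            if fname.lower().endswith('.jpg'):
                samples.append((os.path.join(folder, fname), LABEL2ID[label]))
    return samples

# Hypothetical dataset roots; replace with your own locations.
all_samples = []
for root in ['./emoset118k', './facial_emotion', './sfew', './neutral_add']:
    all_samples.extend(collect_samples(root))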

Training Method

We combine three fine-tuning methods: LayerNorm tuning, prefix tuning, and prompt tuning. In practice, this mixture matches or even exceeds full fine-tuning on generalized visual emotion recognition while introducing only a small number of trainable parameters. In addition, thanks to the LayerNorm adjustment, training converges faster than with prefix tuning or prompt tuning alone, and the resulting model outperforms EmotionCLIP-V1. A minimal sketch of this setup follows.
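
The training script itself is not part of this repository, so the following is only a minimal sketch of the parameter-selection idea behind the hybrid setup: freeze the CLIP backbone, re-enable the LayerNorm affine parameters, and add learnable prompt/prefix token embeddings. The names (mark_trainable, SoftTokens, clip_model) are illustrative assumptions, not the code actually used to train EmotionCLIP-V2.

# Minimal sketch of the hybrid parameter-efficient setup described above.
import torch
import torch.nn as nn

def mark_trainable(model: nn.Module):
    # Freeze the entire backbone first.
    for p in model.parameters():
        p.requires_grad = False
    # LayerNorm tuning: re-enable only LayerNorm weights and biases.
    for m in model.modules():
        if isinstance(m, nn.LayerNorm):
            for p in m.parameters():
                p.requires_grad = True

class SoftTokens(nn.Module):
    """Learnable prompt (text side) or prefix (per-layer) tokens."""
    def __init__(self, num_tokens: int, dim: int):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Prepend the learned tokens to a batch of token embeddings [B, L, D].
        batch = x.shape[0]
        return torch.cat([self.tokens.expand(batch, -1, -1), x], dim=1)

# Usage sketch (clip_model is a placeholder for the CLIP backbone):
# mark_trainable(clip_model)
# prompt = SoftTokens(num_tokens=8, dim=512)
# optimizer = torch.optim.AdamW(
#     [p for p in clip_model.parameters() if p.requires_grad] + list(prompt.parameters()),
#     lr=1e-4,
# )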

Fine-tuning Weights

This repository provides one set of fine-tuned weights:

  1. EmotionCLIP-V2 Weights
    • Fine-tuned on the EmoSet 118K dataset, without additional training specifically for facial emotion recognition.
    • Final evaluation results (a sketch of how these metrics can be computed follows this list):
      • Loss: 1.5465
      • Accuracy: 0.8256
      • Macro_Recall: 0.7803
      • F1: 0.8235
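
For reference, metrics of this kind can be computed with scikit-learn from the model's predictions and the ground-truth labels. The sketch below assumes y_true and y_pred arrays of integer class IDs; the averaging used for the reported F1 (macro vs. weighted) is not stated, so macro averaging is shown here as an assumption.

# Minimal sketch of computing accuracy, macro recall, and F1 with scikit-learn.
from sklearn.metrics import accuracy_score, recall_score, f1_score

y_true = [0, 3, 8, 4, 1]   # placeholder ground-truth class IDs
y_pred = [0, 3, 8, 7, 1]   # placeholder predicted class IDs

accuracy = accuracy_score(y_true, y_pred)
macro_recall = recall_score(y_true, y_pred, average='macro')  # Macro_Recall
macro_f1 = f1_score(y_true, y_pred, average='macro')          # F1 (averaging assumed)
print(f"Accuracy: {accuracy:.4f}  Macro_Recall: {macro_recall:.4f}  F1: {macro_f1:.4f}")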

Usage Instructions

git clone https://huggingface.co/jiangchengchengNLP/EmotionCLIP-V2

cd EmotionCLIP-V2
# Create your own test folder to store images ending in .jpg, or organize images from the repository for testing
# By default, the MixCLIP weights are used. Run the following Python script in the current folder.
from EmotionCLIP import model, preprocess, tokenizer
from PIL import Image
import torch
import matplotlib.pyplot as plt
import os
from torch.nn import functional as F

# Image folder path
image_folder = r'./test'  # test images are available in the EmotionCLIP repo: jiangchengchengNLP/EmotionCLIP
image_files = [os.path.join(image_folder, f) for f in os.listdir(image_folder) if f.lower().endswith('.jpg')]

# Emotion label mapping
consist_json = {
    'amusement': 0,
    'anger': 1,
    'awe': 2,
    'contentment': 3,
    'disgust': 4,
    'excitement': 5,
    'fear': 6,
    'sadness': 7,
    'neutral': 8
}
reversal_json = {v: k for k, v in consist_json.items()}
text_list = [f"This picture conveys a sense of {key}" for key in consist_json.keys()]
text_input = tokenizer(text_list)

# Create subplots
num_images = len(image_files)
rows = 3  # 3 rows
cols = 3  # 3 columns
fig, axes = plt.subplots(rows, cols, figsize=(15, 10))  # Adjust the canvas size
axes = axes.flatten()  # Flatten the subplots to a 1D array
title_fontsize = 20

# Iterate through each image
for idx, img_path in enumerate(image_files):
    # Load image
    img = Image.open(img_path)
    img_input = preprocess(img)

    # Predict emotion
    with torch.no_grad():
        logits_per_image, _ = model(img_input.unsqueeze(0).to(device=model.device, dtype=model.dtype), text_input.to(device=model.device))
    softmax_logits_per_image = F.softmax(logits_per_image, dim=-1)
    top_k_values, top_k_indexes = torch.topk(softmax_logits_per_image, k=1, dim=-1)
    predicted_emotion = reversal_json[top_k_indexes.item()]

    # Display image and prediction result
    ax = axes[idx]
    ax.imshow(img)
    ax.set_title(f"Predicted: {predicted_emotion}", fontsize=title_fontsize)
    ax.axis('off')

# Hide any extra subplots
for idx in range(num_images, rows * cols):
    axes[idx].axis('off')

plt.tight_layout()
plt.show()

Existing Issues

The hybrid fine-tuning method improved the model by about 2% on the prediction task after the neutral category was introduced, but this category still adds noise that can interfere with emotion recognition in other scenes. Prompt tuning is the key to surpassing full fine-tuning, while LayerNorm tuning makes training converge faster. However, there are drawbacks: after mixing these fine-tuning methods, the model's generalization ability declines noticeably, and recognition of the difficult categories disgust and anger has not improved. Although I deliberately added some disgust-inducing images of humans, the results are still below expectations. A high-quality, large-scale visual emotion dataset therefore remains necessary; the model's performance is clearly limited by training data that is far smaller than the pre-training corpus. At the same time, seeking breakthroughs in the model architecture could also help with this problem.

Summary

I proposed a hybrid layer_norm + prefix_tuning + prompt_tuning method for efficiently fine-tuning CLIP, which converges faster and achieves performance comparable to full fine-tuning. However, the loss of generalization performance remains a serious problem. I release EmotionCLIP-V2 trained with this method; it adds a neutral category compared to EmotionCLIP-V1 and achieves slightly better performance. Future work aims to expand the training data for difficult categories and optimize the model architecture.

