---
license: gpl-3.0
tags:
  - emotion
  - emotion-recognition
  - sentiment-analysis
  - roberta
language:
  - en
pipeline_tag: text-classification
---

# FacialMMT

This repo contains the data and pretrained models for FacialMMT, a framework that uses facial sequences of real speakers to aid multimodal emotion recognition.

The model's performance on the MELD test set:

| Release  | W-F1 (%) |
| -------- | -------- |
| 23-07-10 | 66.73    |
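For reference, W-F1 is the weighted macro F1 score: the per-class F1 scores averaged with weights proportional to each class's support in the gold labels. A minimal pure-Python sketch of the computation (equivalent to `sklearn.metrics.f1_score(..., average="weighted")`; the example labels are illustrative, not MELD data):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1 averaged, weighted by each class's support in y_true."""
    support = Counter(y_true)
    n = len(y_true)
    total = 0.0
    for c in set(y_true) | set(y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += support[c] / n * f1  # classes absent from y_true contribute 0
    return total

# Illustrative labels only (hypothetical, not from MELD):
score = weighted_f1(["joy", "joy", "anger"], ["joy", "anger", "anger"])
```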

It is currently ranked third on Papers with Code.

If you're interested, please check out this repo for a more detailed explanation of how to use our model.

Paper: A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations. In Proceedings of ACL 2023 (Main Conference), pp. 15445–15459.

Authors: Wenjie Zheng, Jianfei Yu, Rui Xia, and Shijin Wang