---
license: gpl-3.0
tags:
- emotion
- emotion-recognition
- sentiment-analysis
- roberta
language:
- en
pipeline_tag: text-classification
---

## FacialMMT

This repo contains the data and pretrained models for FacialMMT, a framework that uses the facial sequences of the real speaker to improve multimodal emotion recognition.

The model's performance on the MELD test set:

| Release  | W-F1 (%) |
|:--------:|:--------:|
| 23-07-10 | 66.73    |

It is currently ranked third on [paperswithcode](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=a-facial-expression-aware-multimodal-multi).

If you're interested, please check out this [repo](https://github.com/NUSTM/FacialMMT) for a more detailed explanation of how to use our model.

Paper: [A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations](https://aclanthology.org/2023.acl-long.861.pdf). In Proceedings of ACL 2023 (Main Conference), pp. 15445–15459.

Authors: Wenjie Zheng, Jianfei Yu, Rui Xia, and Shijin Wang
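
For reference, below is a minimal, hypothetical sketch of text-only inference with the `transformers` library. It assumes this repo hosts a checkpoint in standard `transformers` format (the repo id `NUSTM/FacialMMT` is an assumption) with MELD's emotion labels in `id2label`; the full multimodal pipeline, which also consumes the speaker's facial sequences, is documented in the GitHub repo linked above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: this model card's repo id; adjust to the actual checkpoint location.
MODEL_ID = "NUSTM/FacialMMT"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

# Classify a single utterance (text-only; no facial sequence input here).
utterance = "I can't believe you did that!"
inputs = tokenizer(utterance, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
# Assumption: id2label maps to MELD's seven emotion classes.
print(model.config.id2label[pred_id])
```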