This is a BERTweet-base model further pre-trained for 100k steps on about 6.3M Vent posts, with preferential masking of emotion words.
This model is meant to be fine-tuned on labeled data or used as a feature extractor for downstream tasks.
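A minimal sketch of the feature-extraction use case with the Hugging Face `transformers` library. The model id `"LEIA/LEIA-base"` is an assumption for illustration; substitute the actual repository name of this model.

```python
# Hypothetical sketch: mean-pooled sentence embeddings from the model's
# last hidden state, for use as features in a downstream classifier.
# The model id below is an assumption -- replace it with this repo's id.

def extract_features(texts, model_name="LEIA/LEIA-base"):
    """Return mean-pooled embeddings (one vector per input text)."""
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    model.eval()

    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)

    # Mean-pool token embeddings, ignoring padding positions.
    mask = enc["attention_mask"].unsqueeze(-1).float()
    summed = (out.last_hidden_state * mask).sum(dim=1)
    return summed / mask.sum(dim=1)
```

For fine-tuning instead of feature extraction, the same checkpoint can be loaded with `AutoModelForSequenceClassification.from_pretrained`, which adds a fresh classification head on top of the pre-trained encoder.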
## Citation

Please cite the following paper if you find the model useful for your work:
```bibtex
@article{aroyehun2023leia,
  title={LEIA: Linguistic Embeddings for the Identification of Affect},
  author={Aroyehun, Segun Taofeek and Malik, Lukas and Metzler, Hannah and Haimerl, Nikolas and Di Natale, Anna and Garcia, David},
  journal={EPJ Data Science},
  volume={12},
  year={2023},
  publisher={Springer}
}
```