arXiv:2401.10845

Emotion Classification In Software Engineering Texts: A Comparative Analysis of Pre-trained Transformers Language Models

Published on Jan 19, 2024

Abstract

Emotion recognition in software engineering texts is critical for understanding developer expressions and improving collaboration. This paper presents a comparative analysis of state-of-the-art Pre-trained Language Models (PTMs) for fine-grained emotion classification on two benchmark datasets from GitHub and Stack Overflow. We evaluate six transformer models - BERT, RoBERTa, ALBERT, DeBERTa, CodeBERT, and GraphCodeBERT - against the current best-performing tool, SEntiMoji. Our analysis reveals consistent improvements ranging from 1.17% to 16.79% in macro-averaged and micro-averaged F1 scores, with general-domain models outperforming specialized ones. To further enhance the PTMs, we incorporate polarity features in the attention layer during training, demonstrating additional average gains of 1.0% to 10.23% over the baseline PTM approaches. Our work provides strong evidence for the advancements afforded by PTMs in recognizing nuanced emotions such as Anger, Love, Fear, Joy, Sadness, and Surprise in software engineering contexts. Through comprehensive benchmarking and error analysis, we also outline scope for improvements to address contextual gaps.
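
The abstract describes fine-tuning general-purpose and code-specific PTMs for six-way emotion classification on developer text. The sketch below illustrates that setup only; it is not the authors' code. It assumes the Hugging Face transformers API, a roberta-base checkpoint, and two invented example comments standing in for GitHub/Stack Overflow data; the paper's polarity-feature fusion inside the attention layer is not reproduced here.

```python
# Minimal sketch: fine-tuning a pre-trained transformer for six-class
# emotion classification. Checkpoint choice and training examples are
# illustrative assumptions, not taken from the paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

EMOTIONS = ["Anger", "Love", "Fear", "Joy", "Sadness", "Surprise"]

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(EMOTIONS)
)

# Hypothetical developer comments with single emotion labels.
texts = [
    "This build has been broken for two days, unbelievable.",
    "Finally fixed the race condition, what a relief!",
]
labels = torch.tensor([EMOTIONS.index("Anger"), EMOTIONS.index("Joy")])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One optimization step; the model computes cross-entropy loss internally
# when labels are passed to the forward call.
model.train()
optimizer.zero_grad()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()

print(f"training loss: {outputs.loss.item():.4f}")
```

In a full run this loop would iterate over the benchmark datasets and report macro- and micro-averaged F1, the metrics the paper uses to compare the six PTMs against SEntiMoji.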
