Update README.md
In this work, we propose ME²-BERT, the first holistic framework for fine-tuning a pre-trained language model such as BERT for the task of moral foundation prediction. ME²-BERT integrates events and emotions to learn domain-invariant, morality-relevant text representations.
Our extensive experiments show that ME²-BERT outperforms existing state-of-the-art methods for moral foundation prediction, with an average percentage increase of up to 35% in the out-of-domain scenario.
[Paper](https://aclanthology.org/2025.coling-main.638.pdf) | [Source code](https://github.com/lorenzozangari/ME2-BERT) | [WebApp](https://huggingface.co/spaces/lorenzozan/ME2-BERT)
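
As a quick orientation, below is a minimal, illustrative sketch of how such a model might be loaded and queried with the `transformers` library. The model ID (inferred from the WebApp URL), the need for `trust_remote_code`, and the example input are assumptions, not details confirmed by this README.

```python
# Illustrative sketch only: model ID and loading options are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical Hub ID, inferred from the WebApp URL; check the source repository for the actual one.
model_id = "lorenzozan/ME2-BERT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code may be required if the repository ships a custom architecture.
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

# Encode an example paragraph and run a forward pass without gradients.
text = "The new policy protects vulnerable families from eviction."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Inspect the returned object for the moral-foundation scores or text representations.
print(outputs)
```

For the exact inference interface and expected output format, the linked source repository and WebApp are the authoritative references.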
## Training Data
ME²-BERT was fine-tuned on the [**E2MoCase dataset**](https://arxiv.org/pdf/2409.09001) (available upon request), which consists of 97,251 paragraphs from news articles encompassing both event-based and event-free samples. It includes annotations for: