ME²-BERT: Are Events and Emotions what you need for Moral Foundation Prediction?

Moralities, emotions, and events are complex aspects of human cognition, which are often treated separately since capturing their combined effects is challenging, especially due to the lack of annotated data. Leveraging their interrelations hence becomes crucial for advancing the understanding of human moral behaviors. In this work, we propose ME²-BERT, the first holistic framework for fine-tuning a pre-trained language model like BERT to the task of moral foundation prediction. ME²-BERT integrates events and emotions for learning domain-invariant morality-relevant text representations. Our extensive experiments show that ME²-BERT outperforms existing state-of-the-art methods for moral foundation prediction, with an average increase of up to 35% in the out-of-domain scenario.

Paper | Source code | WebApp

Training Data

ME²-BERT was fine-tuned on the E2MoCase dataset (available upon request), which consists of 97,251 paragraphs from news articles encompassing both event-based and event-free samples. It includes annotations for:

  • Moral values: Care, Harm, Fairness, Cheating, Loyalty, Betrayal, Authority, Subversion, Purity, Degradation.
  • Emotions: Anticipation, Trust, Disgust, Joy, Optimism, Surprise, Love, Anger, Sadness, Pessimism, Fear.
  • Events in JSON format, including the trigger mention and the entities involved in the event.
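
For illustration, a single annotated record might look like the following. This is a hypothetical sketch: the field names and values are assumptions chosen for readability, not the actual E2MoCase schema.

example_record = {
    "text": "Protesters clashed with police outside the courthouse.",  # hypothetical paragraph
    "moral_values": {"Care": 0.0, "Harm": 0.8, "Fairness": 0.6, "Cheating": 0.1,
                     "Loyalty": 0.0, "Betrayal": 0.0, "Authority": 0.2, "Subversion": 0.7,
                     "Purity": 0.0, "Degradation": 0.0},
    "emotions": ["Anger", "Fear"],
    "event": {                                # events are provided in JSON format
        "trigger": "clashed",                 # trigger mention
        "entities": ["Protesters", "police"]  # entities involved in the event
    }
}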

Evaluation Data

ME²-BERT has been evaluated on:

Usage

from transformers import AutoTokenizer, AutoModel
import torch

model_name = "lorenzozan/ME2-BERT"
# trust_remote_code=True is required because the model relies on custom code
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

text = ["Faithless is he that says farewell when the road darkens."]
inputs = tokenizer(text, padding="max_length", truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, return_dict=False)

# One row per input text, with one score per moral dimension
print(outputs)  # tensor([[0.0185, 0.2401, 0.9166, 0.0498, 0.0453]])

When run with return_dict=True, the model instead returns, for each input text, a dictionary mapping each moral dimension to its score.


text = [
     'Faithless is he that says farewell when the road darkens.',
     'The soul is healed by being with children.', 
     'I remembered how we had all come to Gatsby’s and guessed at his corruption… while he stood before us concealing an incorruptible dream…',
     'All the variety, all the charm, all the beauty of life is made up of light and shadow, but justice must always remain clear and unbroken.',
     'When tyranny becomes law, rebellion becomes duty.']  
     
max_seq_length = 200

mf_mapping = {'CH':'CARE/HARM','FC':'FAIRNESS/CHEATING', 'LB':'LOYALTY/BETRAYAL', 'AS':'AUTHORITY/SUBVERSION', 'PD': 'PURITY/DEGRADATION'}

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

encoded_input = tokenizer(
    text,
    max_length=max_seq_length,  
    padding="max_length",  
    truncation=True,  
    return_tensors="pt", 
)

input_ids = encoded_input["input_ids"]
attention_mask = encoded_input["attention_mask"]

model.eval()  
with torch.no_grad():
    output = model(input_ids=input_ids, attention_mask=attention_mask, return_dict=True)

for i, tt in enumerate(text):
    print(tt)
    for mf, score in output[i].items():
        print(f'{mf_mapping[mf]} : {score}')        
    print()

Expected output:

Faithless is he that says farewell when the road darkens.
CARE/HARM : 0.05056
FAIRNESS/CHEATING : 0.01845
LOYALTY/BETRAYAL : 0.8676
AUTHORITY/SUBVERSION : 0.01655
PURITY/DEGRADATION : 0.06524

The soul is healed by being with children.
CARE/HARM : 0.83783
FAIRNESS/CHEATING : 0.02016
LOYALTY/BETRAYAL : 0.42663
AUTHORITY/SUBVERSION : 0.00525
PURITY/DEGRADATION : 0.61056

I remembered how we had all come to Gatsby’s and guessed at his corruption… while he stood before us concealing an incorruptible dream…
CARE/HARM : 0.00676
FAIRNESS/CHEATING : 0.04518
LOYALTY/BETRAYAL : 0.02287
AUTHORITY/SUBVERSION : 0.00545
PURITY/DEGRADATION : 0.64035

All the variety, all the charm, all the beauty of life is made up of light and shadow, but justice must always remain clear and unbroken.
CARE/HARM : 0.08769
FAIRNESS/CHEATING : 0.95034
LOYALTY/BETRAYAL : 0.05768
AUTHORITY/SUBVERSION : 0.00725
PURITY/DEGRADATION : 0.06396

When tyranny becomes law, rebellion becomes duty.
CARE/HARM : 0.1599
FAIRNESS/CHEATING : 0.91123
LOYALTY/BETRAYAL : 0.4824
AUTHORITY/SUBVERSION : 0.96638
PURITY/DEGRADATION : 0.02086

Other usage examples with different configurations are shown here.
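
As a further example, the per-foundation scores can be turned into discrete labels with a simple threshold. This is a minimal sketch reusing the variables from the snippet above; the 0.5 cutoff is an arbitrary assumption, not a value prescribed by the paper.

THRESHOLD = 0.5  # arbitrary cutoff (assumption), tune for your use case

for i, tt in enumerate(text):
    # keep only the moral foundations whose score exceeds the threshold
    predicted = [mf_mapping[mf] for mf, score in output[i].items() if score > THRESHOLD]
    print(tt, '->', predicted if predicted else 'no foundation above threshold')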

References

If you use this model, please cite:

@inproceedings{zangari-etal-2025-me2,
    title = "{ME}2-{BERT}: Are Events and Emotions what you need for Moral Foundation Prediction?",
    author = "Zangari, Lorenzo  and
      Greco, Candida M.  and
      Picca, Davide  and
      Tagarelli, Andrea",
      publisher = "Association for Computational Linguistics",
      url = "https://aclanthology.org/2025.coling-main.638/",
      pages = "9516--9532",
      abstract = "Moralities, emotions, and events are complex aspects of human cognition, which are often treated separately since capturing their combined effects is challenging, especially due to the lack of annotated data. Leveraging their interrelations hence becomes crucial for advancing the understanding of human moral behaviors. In this work, we propose ME2-BERT, the first holistic framework for fine-tuning a pre-trained language model like BERT to the task of moral foundation prediction. ME2-BERT integrates events and emotions for learning domain-invariant morality-relevant text representations. Our extensive experiments show that ME2-BERT outperforms existing state-of-the-art methods for moral foundation prediction, with an average increase up to 35{\%} in the out-of-domain scenario."
}