DIALogue-level Commonsense Transformer (DIALeCT)

The pretrained checkpoint for the paper *Multiview Contextual Commonsense Inference: A New Dataset and Task*.

The model is trained starting from the T5-large checkpoint.
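The checkpoint can be loaded with Hugging Face `transformers`. A minimal loading sketch; the repository id `declare-lab/dialect` below is an assumption, so substitute the actual id of this model card:

```python
# Minimal loading sketch. "declare-lab/dialect" is an assumed repository id;
# replace it with the actual id shown at the top of this model card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "declare-lab/dialect"  # assumption
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
```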


Datasets

The dataset used to pretrain the model can be obtained from the CICERO repo by following the instructions there. Contextualized Commonsense Inference in Dialogues v2 (CICEROv2) consists of dialogues annotated with several types of commonsense inferences, including cause and emotional reaction. The dialogues are drawn from multiple datasets:

| Dataset     | #Dialogues | #Instances |
|-------------|------------|------------|
| DailyDialog | 1118       | 3973       |
| MuTual      | 1011       | 3384       |
| DREAM       | 250        | 994        |

Examples

Some examples of results generated by the pretrained model in the zero-shot setting.

Subsequent Event

What is or could be the subsequent event of the target? <sep> 
target: Oh . I just can't forget it . <sep> 
context: A: David , why didn't you clean the room ?, <utt> 
B: I'm not in the mood ., <utt> 
A: Why are you feeling depressed ?, <utt> 
B: I was told my girlfriend was speaking ill of me. That's a real let-down ., <utt> 
A: I don't think she will do such a thing ., <utt> 
B: But she did and made me disappointed ., <utt> 
A: Oh , cheer up . A girlfriend is not everything ., <utt> 
B: But she means a lot to me ., <utt> 
A: Then forgive her mistake ., <utt> 
B: Oh . I just can't forget it 

Predicted subsequent event:

David's girlfriend apologized to david for her mistake.

Cause

What is or could be the cause of target? <sep> 
target: Thanks. Will I be able to take a retest ? <sep> 
context: A: Did I do well on my test ?, <utt> 
B: Do you want to know the honest answer ?, <utt> 
A: Why wouldn't I want to know ?, <utt> 
B: You had pretty bad scores ., <utt> 
A: Exactly what do you mean by bad ?, <utt> 
B: You failed ., <utt> 
A: How'd I fail it ?, <utt> 
B: There are a couple of reasons why you didn't pass ., <utt> 
A: What did I do wrong ?, <utt> 
B: To sum it all up , you really just don't know how to drive ., <utt> 
A: Thanks. Will I be able to take a retest ?, <utt> 
B: Sure you can , in about two and a half weeks . 

Predicted cause:

The speaker has failed the driving test.

Emotional Reaction

What is the possible emotional reaction of the listener in response to target? <sep> 
target: Oh . I just can't forget it . <sep> 
context: A: David , why didn't you clean the room ?, <utt> 
B: I'm not in the mood ., <utt> 
A: Why are you feeling depressed ?, <utt>
B: I was told my girlfriend was speaking ill of me. That's a real let-down ., <utt> 
A: I don't think she will do such a thing ., <utt> 
B: But she did and made me disappointed ., <utt> 
A: Oh , cheer up . A girlfriend is not everything ., <utt> 
B: But she means a lot to me ., <utt> 
A: Then forgive her mistake ., <utt> 
B: Oh . I just can't forget it 

Predicted emotional reaction:

The listener is hopeful that david will forgive his girlfriend for her mistake.

Inference

The input text should be formatted as follows:

Question <sep> target: target_utt <sep> context: A: utterance 1 <utt> B: utterance 2 <utt> A: utterance 3 <utt> B: utterance 4

Question: the question for which we want to make the inference.

A and B are speaker identifiers.

The target_utt should be one of utterance 1, utterance 2, utterance 3, or utterance 4. Do not include the speaker identifier in target_utt.
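Putting the pieces together, here is a minimal generation sketch with `transformers`, using the cause example from above. The repository id `declare-lab/dialect` is again an assumption, and the `<sep>` and `<utt>` markers are plain text inside the input string:

```python
# Inference sketch following the input format described above.
# "declare-lab/dialect" is an assumed repository id.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "declare-lab/dialect"  # assumption
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Build the input: Question <sep> target: target_utt <sep> context: ...
question = "What is or could be the cause of target?"
target = "Thanks. Will I be able to take a retest ?"
context = (
    "A: Did I do well on my test ? <utt> "
    "B: You failed . <utt> "
    "A: Thanks. Will I be able to take a retest ? <utt> "
    "B: Sure you can , in about two and a half weeks ."
)
input_text = f"{question} <sep> target: {target} <sep> context: {context}"

inputs = tokenizer(input_text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```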

Some samples are provided as examples in the Hosted Inference API box.

BibTeX entry and citation info

If you use this model, please cite:

@article{Shen2022MultiviewCC,
  title={Multiview Contextual Commonsense Inference: A New Dataset and Task},
  author={Siqi Shen and Deepanway Ghosal and Navonil Majumder and Henry Lim and Rada Mihalcea and Soujanya Poria},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.02890}
}