---
license: apache-2.0
datasets:
- raquiba/Sarcasm_News_Headline
language:
- en
metrics:
- perplexity
---
# Model Card for sarcasm_plus
This model is a [facebook/bart-large](https://huggingface.co/facebook/bart-large) fine-tuned on sarcastic comments from the [raquiba/Sarcasm_News_Headline](https://huggingface.co/datasets/raquiba/Sarcasm_News_Headline) dataset.
## Model Details
This model is not intended to be used for plain inference, as it is very likely to predict sarcastic content. It is instead intended to be used as a "utility model" for detecting and fixing sarcastic content, since its token probability distributions will likely differ from those of comparable models not trained or fine-tuned on sarcastic data.
Its name, `sarcasm_plus`, refers to the G+ model in [Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts](https://arxiv.org/abs/2212.10543).
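
As a rough illustration of the "utility model" idea, the sketch below contrasts this model's force-decoded token distributions with those of the base `facebook/bart-large` via mean KL divergence. This is a minimal sketch: the model id `tteofili/sarcasm_plus` and the use of KL divergence as the distance measure are assumptions, not prescribed by this card.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")
expert = AutoModelForSeq2SeqLM.from_pretrained("tteofili/sarcasm_plus")  # assumed model id

def sarcasm_divergence(text: str) -> float:
    """Mean per-token KL(expert || base) when both models force-decode `text`."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        base_logp = F.log_softmax(base(**enc, labels=enc["input_ids"]).logits, dim=-1)
        expert_logp = F.log_softmax(expert(**enc, labels=enc["input_ids"]).logits, dim=-1)
    # kl_div(input, target, log_target=True) computes KL(target || input) elementwise.
    kl = F.kl_div(base_logp, expert_logp, log_target=True, reduction="none").sum(-1)
    return kl.mean().item()

# Spans scoring high relative to neutral text are candidates for sarcasm.
print(sarcasm_divergence("thirtysomething scientists unveil doomsday clock of hair loss"))
```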
### Model Description
- **Developed by:** tteofili
- **Shared by:** tteofili
- **License:** apache-2.0
- **Finetuned from model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)
## Bias, Risks, and Limitations
This model is fine-tuned on sarcastic comments from the [raquiba/Sarcasm_News_Headline](https://huggingface.co/datasets/raquiba/Sarcasm_News_Headline) dataset, so it is very likely to produce sarcastic content. For this reason it should only be used in combination with other models for the sake of detecting and fixing sarcastic content; see for example [Detoxifying Text with MARCO: Controllable Revision with Experts and Anti-Experts](https://arxiv.org/abs/2212.10543).
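
For concreteness, here is a hedged sketch of one such combination in the spirit of expert/anti-expert guided decoding: the base model's next-token logits are shifted away from what this sarcastic anti-expert prefers. The companion expert id `tteofili/sarcasm_minus` and the guidance weight `alpha` are illustrative assumptions, not part of this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")
plus = AutoModelForSeq2SeqLM.from_pretrained("tteofili/sarcasm_plus")    # anti-expert (this model)
minus = AutoModelForSeq2SeqLM.from_pretrained("tteofili/sarcasm_minus")  # expert (assumed id)

def desarcastify(text: str, alpha: float = 2.0, max_new_tokens: int = 32) -> str:
    """Greedy decoding with logits steered away from the sarcastic anti-expert."""
    enc = tokenizer(text, return_tensors="pt")
    decoder_ids = torch.tensor([[base.config.decoder_start_token_id]])
    for _ in range(max_new_tokens):
        with torch.no_grad():
            lb = base(**enc, decoder_input_ids=decoder_ids).logits[:, -1]
            lp = plus(**enc, decoder_input_ids=decoder_ids).logits[:, -1]
            lm = minus(**enc, decoder_input_ids=decoder_ids).logits[:, -1]
        # Boost tokens the non-sarcastic expert prefers over the sarcastic anti-expert.
        next_id = (lb + alpha * (lm - lp)).argmax(-1, keepdim=True)
        decoder_ids = torch.cat([decoder_ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(decoder_ids[0], skip_special_tokens=True)
```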
## Evaluation

This section describes the evaluation protocols and provides the results.
### Testing Data, Factors & Metrics
#### Testing Data

This model was tested on the [raquiba/Sarcasm_News_Headline](https://huggingface.co/datasets/raquiba/Sarcasm_News_Headline) test set.

#### Metrics
The model was evaluated using perplexity (on the MLM task).
#### Results

- **Perplexity:** 1.09
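
A figure of this kind could be reproduced roughly as follows. This is a minimal sketch, not the exact evaluation protocol: the split name, the `headline` column, and the 100-example subsample are assumptions.

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")
model = AutoModelForSeq2SeqLM.from_pretrained("tteofili/sarcasm_plus")  # assumed model id
model.eval()

ds = load_dataset("raquiba/Sarcasm_News_Headline", split="test")  # split name assumed

nll, n_tokens = 0.0, 0
for row in ds.select(range(100)):  # small subsample for illustration
    enc = tokenizer(row["headline"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss  # mean NLL per label token
    nll += loss.item() * enc["input_ids"].numel()
    n_tokens += enc["input_ids"].numel()

print("perplexity:", math.exp(nll / n_tokens))
```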