This is an off-the-shelf roberta-large model finetuned on WANLI, the Worker-AI Collaborative NLI dataset (Liu et al., 2022). It outperforms the roberta-large-mnli
model on seven out-of-domain test sets, including an 11% improvement on HANS and a 9% improvement on Adversarial NLI.
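Below is a minimal usage sketch for three-way NLI classification with this model via the Hugging Face transformers library. The Hub model ID "alisawuffles/roberta-large-wanli" is an assumption (it is not stated above), and the label names are read from the model config rather than hardcoded.

```python
# Minimal sketch: classify a (premise, hypothesis) pair with the WANLI-finetuned model.
# The model ID below is assumed, not taken from the text above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "alisawuffles/roberta-large-wanli"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# Encode the premise/hypothesis pair as a single sequence, as is standard for NLI with RoBERTa.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the predicted index to its label name using the model's own config.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```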
@misc{liu-etal-2022-wanli,
    title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
    author = "Liu, Alisa and
      Swayamdipta, Swabha and
      Smith, Noah A. and
      Choi, Yejin",
    month = jan,
    year = "2022",
    url = "https://arxiv.org/pdf/2201.05955",
}