---
language:
- en
tags:
- text-classification
widget:
- text: "I almost forgot to eat lunch.</s></s>I didn't forget to eat lunch."
- text: "I almost forgot to eat lunch.</s></s>I forgot to eat lunch."
- text: "I ate lunch.</s></s>I almost forgot to eat lunch."
datasets:
- alisawuffles/WANLI
---
This is an off-the-shelf `roberta-large` model fine-tuned on WANLI, the Worker-AI Collaborative NLI dataset ([Liu et al., 2022](https://arxiv.org/abs/2201.05955)). It outperforms the `roberta-large-mnli` model on seven out-of-domain test sets, including by 11% on HANS and 9% on Adversarial NLI.
### How to use
```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

model = RobertaForSequenceClassification.from_pretrained('alisawuffles/roberta-large-wanli')
tokenizer = RobertaTokenizer.from_pretrained('alisawuffles/roberta-large-wanli')

# Encode the premise-hypothesis pair as a single sequence-pair input.
x = tokenizer("I almost forgot to eat lunch.", "I didn't forget to eat lunch.", return_tensors='pt', max_length=128, truncation=True)
logits = model(**x).logits
probs = logits.softmax(dim=1).squeeze(0)
# Map the highest-probability class index to its label name.
label_id = torch.argmax(probs).item()
prediction = model.config.id2label[label_id]
```
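The tokenizer also accepts parallel lists of premises and hypotheses, so several pairs can be scored in one padded batch. The sketch below is one way to do this with the same model; the example sentences simply reuse the widget examples above.

```python
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

model = RobertaForSequenceClassification.from_pretrained('alisawuffles/roberta-large-wanli')
tokenizer = RobertaTokenizer.from_pretrained('alisawuffles/roberta-large-wanli')

premises = ["I almost forgot to eat lunch.", "I ate lunch."]
hypotheses = ["I forgot to eat lunch.", "I almost forgot to eat lunch."]

# Tokenize all premise-hypothesis pairs as one padded batch.
batch = tokenizer(premises, hypotheses, return_tensors='pt', padding=True, truncation=True, max_length=128)

with torch.no_grad():
    logits = model(**batch).logits

# Take the argmax per row and map it to the label name.
predictions = [model.config.id2label[i] for i in logits.argmax(dim=1).tolist()]
print(predictions)
```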
### Citation
```
@misc{liu-etal-2022-wanli,
    title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
    author = "Liu, Alisa and
      Swayamdipta, Swabha and
      Smith, Noah A. and
      Choi, Yejin",
    month = jan,
    year = "2022",
    url = "https://arxiv.org/pdf/2201.05955",
}
```