---
widget:
 - text: "I believe I will get into UW. </s></s> I will get into UW."
---

This is an off-the-shelf roberta-large model finetuned on WANLI, the Worker-AI Collaborative NLI dataset ([Liu et al., 2022](https://arxiv.org/abs/2201.05955)). It outperforms the `roberta-large-mnli` model on seven out-of-domain test sets, including by 11% on HANS and 9% on Adversarial NLI.

### How to use
```
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

model = RobertaForSequenceClassification.from_pretrained('alisawuffles/roberta-large-wanli')
tokenizer = RobertaTokenizer.from_pretrained('alisawuffles/roberta-large-wanli')

# Encode the (premise, hypothesis) pair as a single input sequence.
x = tokenizer("I believe I will get into UW.", "I will get into UW.", return_tensors='pt', max_length=128, truncation=True)
logits = model(**x).logits
probs = logits.softmax(dim=1).squeeze(0)
label_id = torch.argmax(probs).item()
prediction = model.config.id2label[label_id]
```
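To inspect the full distribution over classes rather than just the argmax, you can pair each probability with its label name. A minimal sketch continuing from the snippet above (the printed values are illustrative, not actual model outputs):

```
# Map every class probability to its label name via model.config.id2label.
label_probs = {model.config.id2label[i]: p.item() for i, p in enumerate(probs)}
print(label_probs)  # e.g. {'contradiction': ..., 'entailment': ..., 'neutral': ...}
print(prediction)   # the highest-probability label
```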

### Citation
```
@misc{liu-etal-2022-wanli,
    title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
    author = "Liu, Alisa  and
      Swayamdipta, Swabha  and
      Smith, Noah A.  and
      Choi, Yejin",
    month = jan,
    year = "2022",
    url = "https://arxiv.org/abs/2201.05955",
}
```