alisawuffles committed
Commit 13599d1
1 Parent(s): cc1d12d

Update README.md

Files changed (1): README.md +27 -1
README.md CHANGED
@@ -1,10 +1,36 @@
 ---
 widget:
-- text: "It is quite likely that we will build a space station that allows people to live and work in space. </s></s> We will build a space station that allows people to live and work in space."
+- text: "I believe I will get into UW. </s></s> I will get into UW."
 ---
 
 This is an off-the-shelf roberta-large model finetuned on WANLI, the Worker-AI Collaborative NLI dataset ([Liu et al., 2022](https://arxiv.org/abs/2201.05955)). It outperforms the `roberta-large-mnli` model on seven out-of-domain test sets, including by 11% on HANS and 9% on Adversarial NLI.
 
+### How to use
+```python
+import torch
+from transformers import RobertaForSequenceClassification, RobertaTokenizer
+
+model = RobertaForSequenceClassification.from_pretrained('alisawuffles/roberta-large-wanli')
+tokenizer = RobertaTokenizer.from_pretrained('alisawuffles/roberta-large-wanli')
+
+# encode a premise/hypothesis pair and classify it
+premise = "I believe I will get into UW."
+hypothesis = "I will get into UW."
+x = tokenizer(premise, hypothesis, return_tensors='pt', max_length=128, truncation=True)
+logits = model(**x).logits
+probs = logits.softmax(dim=1).squeeze(0)
+label_id = torch.argmax(probs).item()
+# map the predicted class id to its label string
+prediction = model.config.id2label[label_id]
+```
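+
+To inspect the full label distribution rather than just the top prediction, you can print each class probability; a minimal sketch, using only the `probs` and `model.config.id2label` objects defined above:
+```python
+for i, p in enumerate(probs.tolist()):
+    print(f"{model.config.id2label[i]}: {p:.3f}")
+```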
+
+### Citation
 ```
 @misc{liu-etal-2022-wanli,
 title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",