Commit 14a5ed8
Parent(s): 1222138
Update README.md
README.md CHANGED
@@ -18,6 +18,32 @@ The input to the model is defined as:
 [CLS] cand. question [q] gold answer [r] pred answer [c] context
 ```
 
+
+# Generation
+
+You can use the following script to get the semantic similarity of the predicted answer given the gold answer, context, and question.
+
+```
+from transformers import AutoModelForSequenceClassification, AutoTokenizer
+sp_scorer = AutoModelForSequenceClassification.from_pretrained('alirezamsh/quip-512-mocha')
+tokenizer_sp = AutoTokenizer.from_pretrained('alirezamsh/quip-512-mocha')
+sp_scorer.eval()
+
+pred_answer = ""
+gold_answer = ""
+question = ""
+context = ""
+
+input_sp = f"{question} <q> {gold_answer} <r>" \
+           f" {pred_answer} <c> {context}"
+
+inputs = tokenizer_sp(input_sp, max_length=512, truncation=True, \
+                      padding="max_length", return_tensors="pt")
+
+outputs = sp_scorer(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
+print(outputs)
+```
+
 # Citations
 
 ```
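
As a usage note on the script added above: `outputs` is the raw sequence-classification output, and the printed object includes the head's logits. Assuming the checkpoint uses a single regression logit as the similarity score (an assumption, not stated in this diff; check the model config), the score can be read off as a scalar. A minimal sketch with hypothetical placeholder inputs:

```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

sp_scorer = AutoModelForSequenceClassification.from_pretrained('alirezamsh/quip-512-mocha')
tokenizer_sp = AutoTokenizer.from_pretrained('alirezamsh/quip-512-mocha')
sp_scorer.eval()

# Hypothetical placeholder inputs, purely for illustration.
question = "What color is the sky?"
gold_answer = "blue"
pred_answer = "light blue"
context = "On a clear day the sky appears blue."

input_sp = f"{question} <q> {gold_answer} <r> {pred_answer} <c> {context}"
inputs = tokenizer_sp(input_sp, max_length=512, truncation=True,
                      padding="max_length", return_tensors="pt")

with torch.no_grad():
    outputs = sp_scorer(input_ids=inputs["input_ids"],
                        attention_mask=inputs["attention_mask"])

# If the head is a single regression logit (assumption), the score is the lone
# entry of outputs.logits; .item() converts the 1x1 tensor to a Python float.
score = outputs.logits.squeeze().item()
print(score)
```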