To sum up, my model performs nearly as well as the SOTA rule-based model.

(It's worth mentioning that, even though I tried my best to conduct objective tests, the test sets I could find were more or less different from those introduced in the paper.)
## How to use
```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("MarkS/QA2D")
model = BartForConditionalGeneration.from_pretrained("MarkS/QA2D")

# The model expects the QA pair in a "question: ... answer: ..." layout.
input_text = "question: what day is it today? answer: Tuesday"
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(inputs.input_ids)
result = tokenizer.batch_decode(output, skip_special_tokens=True)
```
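Since the model expects its input in that fixed `question: ... answer: ...` layout, it can help to build the prompt with a small formatting helper instead of hand-writing strings. The `qa_to_prompt` function below is a hypothetical convenience sketch, not part of the released model:

```python
def qa_to_prompt(question: str, answer: str) -> str:
    """Format a raw QA pair into the prompt layout shown above."""
    return f"question: {question.strip()} answer: {answer.strip()}"


prompt = qa_to_prompt("what day is it today?", "Tuesday")
# prompt == "question: what day is it today? answer: Tuesday"
```

The resulting string can be passed straight to the tokenizer in place of `input_text`.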