Update README.md
README.md CHANGED
````diff
@@ -28,7 +28,7 @@ model = transformers.AutoModelForSeq2SeqLM.from_pretrained("kleinay/qanom-seq2se
 tokenizer = transformers.AutoTokenizer.from_pretrained("kleinay/qanom-seq2seq-model-baseline")
 ```

-However, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's "task prefix", incorporating the predicate type and/or the verbal
+However, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's "task prefix", incorporating the predicate type and/or the verbal form of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs).
 In order to use the model for QANom parsing easily, we suggest downloading the [`pipeline.py`](https://huggingface.co/kleinay/qanom-seq2seq-model-joint/blob/main/pipeline.py) file from this repository, and then use the `QASRL_Pipeline` class:

 ```python
````
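The hunk ends where the README opens its own Python usage example for `QASRL_Pipeline`. As a rough orientation only, the sketch below shows how such a pipeline is typically invoked after downloading `pipeline.py`; the import path, the `<predicate>` marker in the input sentence, and the `verb_form`/`predicate_type` keyword arguments are assumptions rather than details confirmed by this diff, so consult `pipeline.py` in the repository for the actual interface.

```python
# Sketch only: assumes pipeline.py (downloaded from the model repository) sits
# next to this script and exposes a QASRL_Pipeline class that bundles the
# seq2seq model with the input preprocessing and output postprocessing
# described in the README.
from pipeline import QASRL_Pipeline  # hypothetical import path; adjust to where pipeline.py was saved

pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline")

# The README mentions marking the predicate in the sentence and supplying the
# verbal form of the nominalization; the exact marker and keyword names below
# (<predicate>, verb_form, predicate_type) are assumptions.
qas = pipe(
    "The student was interested in Luke 's <predicate> research about sea animals .",
    verb_form="research",
    predicate_type="nominal",
)
print(qas)  # expected: a list of QASRL-formatted question-answer pairs
```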