Update README.md
README.md
CHANGED
@@ -24,4 +24,9 @@ To generate text using the model:
tokenizer = MBart50TokenizerFast.from_pretrained("MRNH/mbart-english-grammar-corrector", src_lang="en_XX", tgt_lang="en_XX")
input = tokenizer("I was here yesterday to studying", text_target="I was here yesterday to study", return_tensors='pt')
output = model.generate(input["input_ids"], attention_mask=input["attention_mask"], forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
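
The generation call assumes that `model` was already loaded earlier in the README (that part of the file is outside this hunk). A minimal sketch of loading the checkpoint and decoding the generated token IDs back to text; the `MBartForConditionalGeneration` class is an assumption based on the mBART-50 architecture, not something shown in this diff:

from transformers import MBartForConditionalGeneration

# Assumed loading step (not shown in this hunk): same checkpoint as the tokenizer.
model = MBartForConditionalGeneration.from_pretrained("MRNH/mbart-english-grammar-corrector")

# Decode the generated token IDs into the corrected sentence.
corrected = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
print(corrected)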

Training of the model is performed using the following loss computation:

output = model(input_ids=input["input_ids"], attention_mask=input["attention_mask"], labels=input["labels"])
loss, logits = output.loss, output.logits
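
This forward pass can drive a standard fine-tuning step. A minimal sketch, assuming a single batch and plain PyTorch; the optimizer and learning rate are illustrative choices, not taken from the model card:

import torch

model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # illustrative hyperparameters

# The forward pass returns the sequence-to-sequence cross-entropy loss when labels are provided.
output = model(input_ids=input["input_ids"], attention_mask=input["attention_mask"], labels=input["labels"])
output.loss.backward()
optimizer.step()
optimizer.zero_grad()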