How to remove the prompt from the generated output

#7
by gyf666 - opened

<|im_start|> user
Translate the following text from English into Chinese.English: PLM-based methods represent entities and relations using their corresponding text. These methods introduce PLM to encode the text and use the PLM output to evaluate the plausibility of the given fact. On SKGC, Yao et al. (2019) encode the combined texts of a fact, then a binary classifier is employed to determine the plausibility. To reduce the inference cost in Yao et al. (2019), Wang et al. (2021a) exploit Siamese network to encode (h, r) and t separately. Unlike previous encode-only model, Xie et al. (2022); Saxena et al. (2022) explore the Seq2Seq PLM models to directly generate target entity text on KGC task.
Chinese:<|im_end|>
<|im_start|> assistant
基于 PLM 的方法使用对应文本来表示实体和关系。这些方法将 PLM 用于编码文本,并使用 PLM 的输出来评估给定事实的可信度。在 SKGC 上,姚等人 (2019) 编码事实的组合文本,然后使用二值分类器来判断可信度。为了降低 Yao et al. (2019) 中的推理成本,王等人 (2021a) 利用 siamese 网络编码 (h, r) 和 t 。与之前的仅编码模型不同,Xie et al. (2022);Saxena et al. (2022) 探索了 Seq2Seq PLM 模型,直接在 KGC 任务中生成目标实体文本。<|im_end|>
This is one of my translation results, but how do I remove the following from the generated results:
"<|im_start|> user
Translate the following text from English into Chinese.English: PLM-based methods represent entities and relations using their corresponding text. These methods introduce PLM to encode the text and use the PLM output to evaluate the plausibility of the given fact. On SKGC, Yao et al. (2019) encode the combined texts of a fact, then a binary classifier is employed to determine the plausibility. To reduce the inference cost in Yao et al. (2019), Wang et al. (2021a) exploit Siamese network to encode (h, r) and t separately. Unlike previous encode-only model, Xie et al. (2022); Saxena et al. (2022) explore the Seq2Seq PLM models to directly generate target entity text on KGC task.
Chinese:<|im_end|>
<|im_start|> assistant"

Unbabel org

Hello!

Following the example in the model card, you can slice the output from the input's length onward, i.e.,

[...]
# Keep only the newly generated text by dropping the echoed prompt.
print(outputs[0]["generated_text"][len(prompt):])
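
For context, here is a minimal self-contained sketch of that pattern. The prompt text and the generation arguments (max_new_tokens, do_sample, torch_dtype, device_map) are illustrative and may differ from the current model card:

import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Unbabel/TowerInstruct-Mistral-7B-v0.2",
    torch_dtype=torch.bfloat16,  # illustrative; pick a dtype your hardware supports
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Translate the following text from English into Chinese.\nEnglish: Hello, world.\nChinese:"},
]
# Render the chat template to a string so the prompt's exact length is known.
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
# The pipeline echoes the prompt by default; slice it off to keep only the translation.
print(outputs[0]["generated_text"][len(prompt):])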

Alternatively, here's the equivalent example using vLLM:

from vllm import LLM, SamplingParams

# Greedy decoding (temperature 0.0), up to 1024 new tokens.
s = SamplingParams(max_tokens=1024, temperature=0.0)
model = LLM(model="Unbabel/TowerInstruct-Mistral-7B-v0.2")
messages = [
    {"role": "user", "content": "Translate the following text from Portuguese into English.\nPortuguese: Um grupo de investigadores lançou um novo modelo para tarefas relacionadas com tradução.\nEnglish:"},
]

# chat() applies the model's chat template automatically.
model_output = model.chat(messages, s, use_tqdm=True)
generations = [output.outputs[0].text for output in model_output]
print(generations[0])
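
Note that vLLM returns only the newly generated text in output.outputs[0].text (the prompt is kept separately in output.prompt), so no slicing is needed here.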

There are many other ways to achieve that. Thanks for using the model!
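
One such alternative, assuming you are using the transformers pipeline as in the sketch above: recent transformers versions accept return_full_text=False, which strips the prompt for you.

# Hypothetical continuation of the pipeline sketch above.
outputs = pipe(prompt, max_new_tokens=256, do_sample=False, return_full_text=False)
print(outputs[0]["generated_text"])  # completion only; the prompt is already removed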

Thank you, it's really helpful!

gyf666 changed discussion status to closed
