---
language:
- zh
library_name: transformers
pipeline_tag: text2text-generation
---
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("svjack/T5-dialogue-collect-v5")
model = AutoModelForSeq2SeqLM.from_pretrained("svjack/T5-dialogue-collect-v5")

# The prompt asks the model to split a run-together context into dialogue turns.
# 根据下面的上下文进行分段: -> "Segment the context below:"
# 上下文 ...            -> "Context: He likes eating hamburgers. Yes, but I certainly don't."
# 答案:                 -> "Answer:"
text = '''
根据下面的上下文进行分段:
上下文 他 喜欢 吃 汉堡 是 但 我 可 不 喜欢。
答案:
'''

input_ids = tokenizer.encode(text, return_tensors="pt", add_special_tokens=True)
output_ids = model.generate(input_ids)[0]
print(tokenizer.decode(output_ids, skip_special_tokens=True))

# Expected output:
# '分段:他喜欢吃汉堡 分段:是的,但我可不喜欢。'
# ("Segment: He likes eating hamburgers. Segment: Yes, but I certainly don't.")
```