from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("svjack/bloom-daliy-dialogue")
model = AutoModelForCausalLM.from_pretrained("svjack/bloom-daliy-dialogue")

tokenizer.decode(
    model.generate(
        tokenizer.encode(
            "你饿吗?", return_tensors="pt", add_special_tokens=True
        ),
        max_length=128,
    )[0],
    skip_special_tokens=True,
).split("\n-----\n")

'''
['你饿吗?',
 '我饿了',
 '你想吃点什么吗?',
 '我可能也饿了',
 '你想吃块蛋糕吗?',
 '这听起来不错',
 '你要什么蛋糕?',
 '我也不知道啊 第一次来这里',
 '让我看看... 这是什么蛋糕?',
 '是软软糖圣菲利斯蛋糕。',
 '你吃了多少?',
 '我大概应该吃一小百二十块。',
 '你通常吃哪种蛋糕?',
 '我通常只吃一点。',
 '你知道,我不喜欢吃软糖圣菲利斯蛋糕。',
 '你有什么特别喜欢']
'''
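The final `split("\n-----\n")` call above recovers the individual dialogue turns, since the model emits turns joined by a `\n-----\n` separator. A minimal sketch of that post-processing as a reusable helper (the name `to_turns` is hypothetical, and it runs without downloading the model):

```python
def to_turns(decoded: str) -> list[str]:
    """Split a decoded generation into a list of dialogue turns.

    The model joins turns with a "\\n-----\\n" separator, so splitting
    on it and stripping whitespace recovers the dialogue; empty
    fragments (e.g. from a trailing separator) are dropped.
    """
    return [t.strip() for t in decoded.split("\n-----\n") if t.strip()]


# Toy decoded string for illustration (not real model output):
sample = "你饿吗?\n-----\n我饿了\n-----\n你想吃点什么吗?"
print(to_turns(sample))  # ['你饿吗?', '我饿了', '你想吃点什么吗?']
```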