Weird model outputs in the inference API

#2
by natolambert - opened

Do the authors know if everything is set up correctly for the inference API? Playing with some questions, it seems not to remember any past discussion and outputs totally random stuff.


It works fine when I run it on my local PC using transformers.

Do you know how to set up the chat history properly?
I write:

dialogue = [
    "Jakob said. I have a black dog.",
    "Philip said. My name is Philip",
    "Jakob said. Cool!",
    "Philip said. My dog's name is Andi"
]

I'm not sure how the Inference API works, but Cosmo uses a special token to separate the utterances in the dialogue history.
The dialogue history should be given as a single string in the following format:
utterance1 <TURN> utterance2 <TURN> ... <TURN> last utterance
Please follow the "How to use" section in the model card :)
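For example, flattening a Python list of utterances into that single-string format is just a join on the `<TURN>` separator (a minimal sketch of the format described above, not the model card's exact helper):

```python
# Join a list of utterances into the single-string dialogue-history
# format, with the <TURN> separator token between utterances.
dialogue = [
    "I have a black dog.",
    "My name is Philip.",
    "Cool!",
]
history = " <TURN> ".join(dialogue)
print(history)
# I have a black dog. <TURN> My name is Philip. <TURN> Cool!
```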

The utterances don't have to be prefixed with "Jakob said." or "Philip said."
You can simply prompt Cosmo with a role instruction.

Below is an example according to the "How to use" section:

situation = ""  # You can put some situation description
instruction = "You are Jakob and you are talking with Philip."
dialogue = [
    "I have a black dog.",
    "My name is Philip.",
    "Cool!",
    "My dog's name is Andi"
]

response = generate(situation, instruction, dialogue)
print(response)
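Internally, a `generate` helper like the one above has to assemble the situation, the role instruction, and the `<TURN>`-joined dialogue into one input string before calling the model. Here is a hypothetical sketch of that input construction; the `<SEP>` separator between the instruction/situation and the history is an assumption for illustration, so check the model card's "How to use" section for the authoritative format:

```python
def set_input(situation: str, instruction: str, dialogue: list[str]) -> str:
    """Hypothetical sketch: build the model input string for Cosmo.

    Utterances are joined with the <TURN> separator described in this
    thread; the <SEP> token used to prepend the instruction and the
    situation is an assumption, not a confirmed part of the format.
    """
    text = " <TURN> ".join(dialogue)
    if instruction:
        text = f"{instruction} <SEP> {text}"
    if situation:
        text = f"{situation} <SEP> {text}"
    return text

print(set_input(
    "",
    "You are Jakob and you are talking with Philip.",
    ["I have a black dog.", "My name is Philip."],
))
```

The resulting string would then be tokenized and passed to the model's `generate` call.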

How does Cosmo know what Jakob and Philip said? Philip and Jakob can send many messages in a row.

instruction = "You are Jakob and you are talking with Philip."
dialogue = [
    "Philip: I have a black rat.",
    "Jakob: I have a black bird.",
    "Jakob: I have a black cat.",
    "Jakob: I have a black dog.",
    "Philip: My name is Philip.",
    "Jakob: Cool!",
    "Philip: What animals do you have?"
]

VS

instruction = "You are Jakob and you are talking with Philip."
dialogue = [
    "I have a black rat.",
    "I have a black bird.",
    "I have a black cat.",
    "I have a black dog.",
    "My name is Philip.",
    "Cool!",
    "What animals do you have?"
]

VS
(Or do you suggest combining the consecutive messages?)

instruction = "You are Jakob and you are talking with Philip."
dialogue = [
    "I have a black rat.",
    "I have a black bird. I have a black cat. I have a black dog.",
    "My name is Philip.",
    "Cool!",
    "What animals do you have?"
]
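If the dialogue format assumes alternating speakers, the third option can be produced automatically by merging back-to-back messages from the same speaker before building the dialogue list. The `merge_consecutive` helper below is hypothetical, not part of Cosmo:

```python
def merge_consecutive(messages):
    """Merge back-to-back messages from the same speaker into one utterance.

    `messages` is a list of (speaker, text) pairs; returns the plain
    utterance list used as the dialogue history.
    """
    merged = []
    for speaker, text in messages:
        if merged and merged[-1][0] == speaker:
            # Same speaker as the previous message: extend that utterance.
            merged[-1] = (speaker, merged[-1][1] + " " + text)
        else:
            merged.append((speaker, text))
    return [text for _, text in merged]

messages = [
    ("Philip", "I have a black rat."),
    ("Jakob", "I have a black bird."),
    ("Jakob", "I have a black cat."),
    ("Jakob", "I have a black dog."),
    ("Philip", "My name is Philip."),
    ("Jakob", "Cool!"),
    ("Philip", "What animals do you have?"),
]
print(merge_consecutive(messages))
# ['I have a black rat.', 'I have a black bird. I have a black cat. I have a black dog.',
#  'My name is Philip.', 'Cool!', 'What animals do you have?']
```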

Same here. If the inference requires special tokens etc., perhaps it's better to set the pipeline_tag to text2text-generation?


Yes, the inference API for Cosmo is not working correctly :(
We added simple demo code for having a chat with Cosmo to our official repository, please try it out!

heanu changed discussion status to closed
