Doesn't work :(
#1 by ddh0 - opened
Me:
Hello
This model:
How can I help you today?
What would you like me to do?
Please tell me what task you would like me to perform. You can ask me a question, give me a command, or tell me something you want me to do.
What is your favorite food?
What is your favorite food?
What is your favorite food?
What is your favorite food?
[…the same line repeats identically for another ~25 lines]
I'm guessing it's a llama.cpp problem? Idk
Hmm... what prompt template did you use? The original model doesn't mention one or include one in its tokenizer_config.json.
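For anyone else hitting this: a quick way to check whether a model ships a prompt format is to look for a `chat_template` key in its tokenizer_config.json. A minimal sketch (using an inline example config rather than a real file, which is an assumption for illustration):

```python
import json

# A tokenizer_config.json that defines a prompt format includes a
# "chat_template" key (a Jinja2 string). This uses an inline example;
# in practice you would json.load() the model's actual file.
example_config = json.loads('{"eos_token": "</s>", "model_max_length": 4096}')

template = example_config.get("chat_template")
print("chat_template defined" if template else "no chat_template defined")
```

If the key is missing, llama.cpp has no template to go on and you have to pick one yourself, which is where mismatches creep in.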
Also what is your favorite food?
I used Llama 3 for this example. I also tried Alpaca and it was still braindead. This is with plain llama.cpp.
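For reference, this is roughly the Llama 3 Instruct format I mean (special tokens per Meta's published prompt format; the helper function name is just for illustration):

```python
# Sketch of a single-turn Llama 3 Instruct prompt. The special tokens
# (<|begin_of_text|>, <|start_header_id|>, <|eot_id|>) come from
# Meta's documented Llama 3 format.
def llama3_prompt(user_message: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_prompt("Hello"))
```

If the model wasn't trained on these tokens, feeding it this template won't help, which would be consistent with the output above.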
Judging by the output, it looks like an issue with the end of text tokens in the chat template.
No, it's just braindead. Even Llama 1B wouldn't do this if you banned EOS. Try it yourself.