Performance oddities of the 3M model.

#3
by MartialTerran - opened

I noticed something odd about the behavior of your 3M model: once the first sentence is fixed, it does not care what comes after it. The rest of the response already appears to be fully determined at that point. If you type spaces after the first sentence, it produces the same completion it would have produced without them, but simply skips over the tokens whose positions the spaces now occupy.
I started with: "There was a dog"
This produced the response "There was a dog named Max." plus two additional sentences, always the same: "Max was a little girl named Lily. Max was a little girl who loved to play with her toys".

Adding more spaces after the first sentence ("There was a dog named Max. ") simply erased the next tokens that would otherwise have appeared at those positions in the response. Also, the model seems to be emitting words such as "was" one character at a time, suggesting it uses only single-letter tokens there. Together, this suggests the entire response is predetermined by the content of the first complete sentence alone.

There was a dog named Max.
s a little girl named Lily. Max was a little girl who loved to play with her toys
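For what it's worth, this is exactly what you would expect from deterministic (greedy) decoding with a character-level or near-character-level tokenizer: the continuation is a fixed function of position, so padding the prompt with spaces just "consumes" the leading characters of the predetermined continuation. Here is a minimal toy sketch of that effect (a hypothetical stand-in, not the actual 3M model or its tokenizer):

```python
# Toy sketch (NOT the real model): a deterministic character-level
# "model" whose greedy prediction at each step is simply the next
# character of a fixed memorized string. This illustrates why padding
# the prompt with spaces skips characters instead of changing the text.

CORPUS = "There was a dog named Max. Max was a little girl named Lily."

def greedy_continue(prompt: str, n: int) -> str:
    """Continue `prompt` by up to n characters: the 'model' always
    emits the CORPUS character at the position equal to the prompt
    length, so extra prompt characters (even spaces) shift the output."""
    out = []
    pos = len(prompt)
    for _ in range(n):
        if pos >= len(CORPUS):
            break
        out.append(CORPUS[pos])
        pos += 1
    return "".join(out)

base = "There was a dog named Max."
padded = base + "  "  # two extra spaces typed after the sentence

a = greedy_continue(base, 10)
b = greedy_continue(padded, 10)
# The padded prompt yields the same predetermined text, minus the two
# characters that the extra spaces "consumed" at the front.
print(repr(a))  # continuation from the bare sentence
print(repr(b))  # same continuation, first two characters skipped
```

Under this toy assumption, the padded prompt's output is just the bare prompt's output with its first characters dropped, which matches the "s a little girl..." truncation shown above.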
