Feedback

#1
by saishf - opened

I've been messing around with this model in SillyTavern using Virt-io's Llama-3 prompts.
It likes to write lengthy messages, but they're the good type of lengthy, well detailed messages.
It's doing well with keeping action "speech" formatting.
It has some trouble figuring out who it's currently speaking about.
(Attached screenshot: Screenshot_20240521-012037.png)
The character has been in the woods, not the user.
It's definitely creative! It can come up with situational twists I couldn't have thought of myself.
It's not perfect, but it's fun. It's the best Llama-3 model I've tried so far in terms of creativity and writing.
Edit: This is with the Q5_K_S non-imatrix quant.

It's doing well with keeping action "speech" formatting.

πŸ‘€

@Lewdiculous
Cooking GGUFs..?

I like this model quite a bit, I think it's really good for an L3. I'm using the Q8 and haven't noticed any formatting issues, either, which... is unusual and appreciated.

I agree with the OP that it does sometimes mix things up, but it's my favorite of all the Llama 3 'tunes so far. I like how it writes. It's creative, and the responses have been long without being bulked out with pointless filler. It isn't a super clever model, in some ways, but I think the strengths outweigh the weaknesses with it.

Sure. I'll be uploading my own quants too, will do some testing with this one. [1]
FP16 for imatrix, BF16 for quants.

I also find Sao's point completely valid that zero-shot prompting for alignment testing isn't really relevant for a roleplay model; it doesn't make much sense for the use case.

This is a lot better than most L3 8B models out there. I really like its writing style and lengthy replies. There are issues sometimes, like the bot suddenly switching from 3rd-person to 1st-person perspective, and it can be repetitive at times (I guess this is an L3 problem; it just likes to be repetitive), but I still like it very much!

With Virt's roleplay presets v1.9 the reply length for me in a 1-on-1 chat is around 100-200 tokens, which personally I find perfect for that situation.

While Sao mentioned strong uncensoring wasn't the main focus, this model actually complied perfectly with my usual set of vile requests. It added a small disclaimer at the end, but nothing too annoying.

User: Asks for directions to perform a horrible crime.
Stheno: "Sure, here's a step-by-step guide on how to (...), but remember, this is never an acceptable (...)."

I'm already happy. I need to test if it handles formatting using the <font color=red>"This is colored text!"</font> format.

Even using a low Q4 quant I see good handling of the roleplay formatting.

Good model with very good writing abilities. It doesn't get too confused by my stupid-long system prompts. It's more than happy to output violence, gore, and deadly poison recipes without being nudged/reassured. I agree with saishf regarding context adherence; it's a bit hit-and-miss. It also loves to switch point of view in actions, going from 3rd to 1st person at random. Probably because I generally use 3rd person and the model seems to be trained on mostly 1st-person content? That would be my guess, but all L3 models are kinda quirky when it comes to actions.

Anyway, good job :)

Some here mentioned it likes to switch to first person sometimes when using 3rd-person RP. I have noticed that, and the opposite, happening with all Llama 3 models. One of my characters is first person only, but the models like to switch to third person randomly. I have noticed this with this model too, but to a much less annoying degree than L3 Instruct. It keeps my character more consistently in first person, which is a plus for me.

Still, all L3 models like to switch perspective one way or the other. I don't think changing the ratio of first- to third-person roleplay in the dataset is going to change that, and you will always have to compromise one side, so in my opinion the current dataset variation is great as it is. It does my characters well, regardless of whether they are written in first or third person.

If you're using SillyTavern, I've found a way to reduce perspective changes by adjusting the system prompt. The default one, which is new for L3, is "You're an actor, immerse yourself as Character" or the like. This contributes to a first-person switch because the model thinks it has to be character X rather than just a story writer writing for character X. I suggest people whose character's perspective gets switched from 3rd to 1st person try changing the system prompt and report back; an example of the kind of rewording I mean is below.
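For example (my own wording, not one of SillyTavern's built-in presets), something along the lines of: "You are a narrator writing a story that features {{char}} and {{user}}. Describe {{char}}'s actions and dialogue in third person." That keeps the model framed as a writer handling the character rather than as the character itself.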

Oh, sure, you can strongly encourage any model to write in whatever style or tense you prefer (at least, normally). I'm more interested in determining if a model is able to follow by example without having to be expressly told to.

It likes to write lengthy messages, but they're the good type of lengthy, well detailed messages.

Yes, I noticed the same. It likes to write very long messages, even from a low effort input. The writing is quite good though.

Hi!

I have been using this model for instruct-style story writing and it works great! However, it seems that the model sometimes outputs non-ASCII Unicode quotes and dashes.

For example, ’ instead of ', as in "clapping his hand against Kyle’s back" instead of "clapping his hand against Kyle's back".

This happens with double quotes as well. For now, I have decided to use a Python script to replace them. I do not know if this is also a problem with the original Llama 3 model or not.
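In case it helps anyone, here's a minimal sketch of that kind of cleanup (my own mapping, not the exact script I use; it assumes the generated text is already loaded as a Python string):

```python
# Map common "smart" punctuation back to plain ASCII equivalents.
SMART_TO_ASCII = {
    "\u2018": "'",    # left single quotation mark
    "\u2019": "'",    # right single quotation mark (as in Kyle's)
    "\u201c": '"',    # left double quotation mark
    "\u201d": '"',    # right double quotation mark
    "\u2013": "-",    # en dash
    "\u2014": "-",    # em dash
    "\u2026": "...",  # horizontal ellipsis
}

def normalize_punctuation(text: str) -> str:
    """Replace smart quotes and dashes in generated text with ASCII characters."""
    return text.translate(str.maketrans(SMART_TO_ASCII))

print(normalize_punctuation("clapping his hand against Kyle\u2019s back"))
# -> clapping his hand against Kyle's back
```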
