The new `Lumimaid-v0.2` models might be worth a look?

#6
by jukofyork - opened

Tested the 123b one; Undi's tune killed the model's ability to write poems. Likely the same issue as with NeverSleep/MiquMaid-v2-70B-DPO, which got 0.5 for poems. It didn't hurt the stylized writing (S column) ability - it's the third model to get a perfect score on it. Got better at being evil. Also really wants to stick to the Llama-2 chat format, unlike the original, which worked better with Alpaca. Are the 70b and 12b ones fixed in llama.cpp, or should I wait another week before testing?
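For anyone wanting to reproduce the format comparison, here's a minimal sketch of the two prompt templates I mean (the system text and instruction are just illustrative, not the model card's official recommendation):

```python
# Minimal sketch of the two prompt formats being compared above.
# The system prompt and instruction text are illustrative placeholders.

def llama2_chat_prompt(system: str, user: str) -> str:
    """Llama-2 chat style, which the v0.2 tune seems to prefer."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

def alpaca_prompt(instruction: str) -> str:
    """Alpaca style, which worked better with the original model."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(llama2_chat_prompt("You are a helpful writer.", "Write a short poem about rain."))
print(alpaca_prompt("Write a short poem about rain."))
```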

UPD: The 123b is a bit brain-damaged? It also wants to force the use of * everywhere, even if the previous messages didn't have it. Too horny for my taste.

> Are the 70b and 12b ones fixed in llama.cpp, or should I wait another week before testing?

I think they pushed the new RoPE scaling PR yesterday, so it's likely good (unless some other problem arises, which seems quite likely based on other recent models lol!).
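If you don't want to wait, a rough sanity check (not a definitive test) is to look at which RoPE-related metadata a given GGUF actually carries - this sketch assumes the `gguf` Python package (`pip install gguf`) and uses a placeholder filename:

```python
# Rough sanity check: list the RoPE-related metadata keys stored in a GGUF.
# Requires `pip install gguf`; the filename below is a placeholder.
from gguf import GGUFReader

reader = GGUFReader("Lumimaid-v0.2-70B-Q5_K_M.gguf")  # placeholder path
rope_fields = [name for name in reader.fields if "rope" in name.lower()]
print(rope_fields)
```

If the RoPE settings look off, you can also override them at load time with llama.cpp's `--rope-freq-base` / `--rope-scaling` flags rather than re-downloading a quant.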

[image: benchmark results]
They were absolutely worth a look, but for different reasons than you and I expected. Every single one of those tunes ruined the models' poem-writing abilities. Stylized writing for the 70b and 123b barely changed. Normal Nemo performed really well, but got hurt really badly by the tune; it felt like I was using a completely different model during my tests. If the 70b and 12b are just as horny and brain-damaged as the 123b, I don't think they're worth using.

[image: benchmark results]
Another worrisome development is the lack of improvement in llama3.1 compared to llama3. In the 70b, stylized writing got hurt a bit; in the 8b, poem writing has degraded. Yes, they got smarter, but at what cost?

Yeah, I'm not impressed with the new llama-3.1 models. They are really terrible at writing compared to the alternatives, and even for coding they seem to be quite a way behind:

https://prollm.toqan.ai/leaderboard/coding-assistant

I've finally settled on command-r-plus and mistral-large being the two best current models for writing. For my use case and tests, they are so far ahead of everything else that it's not really worth using other models...

I'm still unsure about the gemma-2 models though: they do seem to write "differently" from anything else, but in a broken-frankenmerge-y way...

@jukofyork Fully agree with everything you've said.

ChuckMcSneed changed discussion status to closed
