Why fine-tune the instruct version?

by deleted - opened
deleted

The instruct version of Gemma is astonishingly bad. It basically contains nothing but thousands of "As an AI model I can't answer that" refusals baked in by alignment. As a result it scores about 10 points lower than the foundational model. And even the Gemma foundational models are stripped of basic information that isn't the least bit illegal or immoral.

If you're going to get anything remotely useful out of Gemma, you need to start with the foundational model and fine-tune it with a lot of the basic information about reality that Google stripped from the training data.
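For what it's worth, here is a minimal sketch of what that kind of fine-tune looks like, assuming the base google/gemma-7b checkpoint, a single large GPU, and a hypothetical my_corpus.jsonl of your own text; LoRA via peft keeps it affordable. Hyperparameters are placeholders, not recommendations:

```python
# Hedged sketch: LoRA fine-tune of the *base* gemma-7b (not gemma-7b-it)
# on your own data. The dataset path is hypothetical; point it at whatever
# corpus you want to reintroduce.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "google/gemma-7b"          # foundational model, not the -it chat model
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Low-rank adapters keep the fine-tune cheap enough for a single big GPU.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]))

# Hypothetical local JSONL file with a "text" field per example.
data = load_dataset("json", data_files="my_corpus.jsonl", split="train")
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="gemma-7b-lora", num_train_epochs=1,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           learning_rate=2e-4),
    train_dataset=data,
    # mlm=False gives plain causal-LM labels copied from input_ids.
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```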

I'm seeing a lot more censorship/alignment/moralizing in non-Meta/Mistral LLMs, and it's starting to concern me. Stripping basic facts out of AI models is the behavior of tyrannical governments such as Russia, China and Iran. It has no place in free countries like the US and European nations. Gemma is dead on arrival. No AI model should strip basic facts from its foundational model, then strip even more from its chat models. Gemma is an embarrassment.

@Phil337 You're wrong. Most people were using the GGUF files that Google uploaded, but those files were broken. The GGUF files found here are not broken and work surprisingly well, even better than Mistral-7B:

https://huggingface.co/dranger003/gemma-7b-it-iMat.GGUF/tree/main
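For anyone who wants to try them, here is a minimal sketch using llama-cpp-python to pull and run one of those quants. The filename below is a placeholder, so check the repo's file list for an actual quant name before running:

```python
# Hedged sketch: running one of the linked GGUF quants with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="dranger003/gemma-7b-it-iMat.GGUF",
    filename="gemma-7b-it-q4_k_m.gguf",   # placeholder, pick a real file
)

llm = Llama(model_path=path, n_ctx=4096, n_gpu_layers=-1)

# Gemma's instruct format uses <start_of_turn>/<end_of_turn> markers.
prompt = ("<start_of_turn>user\nExplain what an importance-matrix quant is."
          "<end_of_turn>\n<start_of_turn>model\n")
out = llm(prompt, max_tokens=256, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])
```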

deleted

@rombodawg Do you still think gemma-7b-it is "even better than Mistral-7B"?

I'm holding off judging the foundational model until it's properly fine-tuned by third parties, but I tried over half a dozen implementations of gemma-7b-it, not just the problematic GGUF, and it's astonishingly bad at times. The majority of testers are saying the same, plus it scored a full 10 points lower than its foundational model on the leaderboard.

If I were an engineer who worked on Gemma I'd be furious. The alignment team lobotomized it, making it perform far worse and less reliably than it otherwise would. It's almost comical what it won't respond to and how bad the responses often are.
