Hallucination on basic publicly available information
Q: In the TV show The Expanse, there are mentions of 'Remember the Cat?' Do you know anything about it?
Gemma-2 Response:
"The reference to "Remember the Cat?" remains an unsolved mystery in The Expanse TV show. It could be a simple memory or a complex coded message, leaving viewers to speculate and interpret its true meaning."
Mistral model response:
In the "Expanse" TV series and novels, "Remember the Canterbury" is a phrase that has a significant meaning. It refers to the Canterbury-class ice miner called the Canterbury Station (or Canterbury Abrasive Manufacturing Cooperative), which was destroyed by protomolecule in Caliban's War (Book 3 of The Expanse series).
Hi @rzgar ,
Provide additional context in the prompt about The Expanse TV show to reduce the likelihood of hallucination. Experiment with different prompts to clarify the question further. If the model continues to hallucinate, consider testing with a higher-capacity version, such as Gemma-7B-it, to improve the accuracy of the response.
Thank you.
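To illustrate the suggestion above, here is a minimal sketch of grounding the question with explicit context before sending it to the model. The `build_prompt` helper and the background text are illustrative assumptions, not part of any Gemma API; the point is simply that supplying the relevant facts in the prompt means the model does not have to rely on recall alone. (Note the show's actual phrase is "Remember the Cant," referring to the ice hauler Canterbury.)

```python
def build_prompt(question: str, context: str = "") -> str:
    """Prepend background context so the model need not rely on recall alone.

    With no context, the bare question is returned unchanged; with context,
    the model is also instructed to admit uncertainty rather than guess.
    """
    if context:
        return (
            "Use the background below to answer. If the answer is not "
            "supported by it, say you are unsure.\n\n"
            f"Background: {context}\n\n"
            f"Question: {question}"
        )
    return question

# Hypothetical example values for demonstration.
question = "In The Expanse, what does 'Remember the Cant' refer to?"
context = (
    "In The Expanse, the Canterbury (the 'Cant') is an ice hauler whose "
    "destruction sets the story in motion."
)

bare = build_prompt(question)          # original, hallucination-prone prompt
grounded = build_prompt(question, context)  # context-augmented prompt
print(grounded)
```

The `grounded` string would then be passed to the model in place of the raw question; the same pattern works with any chat or completion API.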
Dolphin-Mistral response to your reply:
Thank you for your response. While I understand the value of prompt engineering, I believe the issue here goes beyond user input. A comparison of Gemma-2's output with models of similar capacity, such as Mistral, shows that these competitors consistently provide more accurate and contextually relevant responses. This suggests the problem lies with the model's knowledge base or reasoning capabilities rather than with the prompt itself.
Google has been at the forefront of information collection and AI development for decades, and users naturally expect your models to outperform others at the same scale. Instead of redirecting users to refine their inputs, I urge your team to focus on improving model training, contextual understanding, and reasoning capabilities to meet user expectations.
Thank you for considering this feedback. I hope to see improvements in future iterations of the Gemini models.
My response: instead of selling users' information to the CIA, NSA, etc., focus on not becoming the next Nokia, a company that completely lost its dominance in the smartphone market by failing to adapt, and release better models.