So frustrated with "Reasoning" models. Sure, introducing RAG into the mix, or giving it an interpreter to do the math, helps, but never as much as a model with good instructions.
Even if it's just told to repeat the information before answering, a normal model will usually out-"think" its reasoning counterpart.
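For anyone curious, the "repeat the information before answering" trick is just a prompt wrapper. A minimal sketch (the function name and wording are my own, not from any library):

```python
def build_restate_prompt(context: str, question: str) -> str:
    # Hypothetical helper: instruct the model to restate the
    # provided information before attempting an answer.
    return (
        "First, restate the following information in your own words, "
        "then answer the question.\n\n"
        f"Information:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_restate_prompt(
    "The keys were last seen on the kitchen counter.",
    "Where are my keys?",
)
print(prompt)
```

Feed the resulting string to whatever chat model you're using; no reasoning mode required.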
Not sure if it's just my frustration, but the best answers I've received from a reasoner so far came from the simple instruction: "Do better!"
Figured I would share the special sauce.
Using 10-100x the compute just to heat the office can't be environmentally friendly, and it still has no idea where my keys are.