Note: benchmarks should be out around Tuesday. Colox will be delayed due to bugs and issues.
It would be a huge deal if you actually have a model that is smarter than o1 in general, not just "better" on one specific benchmark. Your adapter_config.json says the adapter is trained on the LLaMa2-7B base, so it would be an even bigger deal if a model that small outperformed o1. It seems pretty unlikely that you actually achieved what you claim, especially when a claim that big is posted without anything to back it up. I'm really looking forward to seeing the data, and to experiencing the performance that led you to call this model better than OpenAI's o1.
I really did run tests. It is not smarter than o1, due to the way I trained it. I am working on it a lot, as it did not meet my expectations.