🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research! 👉 open-llm-leaderboard/comparator Now you can compare models not only by performance, but also by their environmental footprint!
📊 The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... 🛠️ Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
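For intuition on where an emissions number like this comes from: a common approach (used by tools such as codecarbon) is to multiply the energy consumed during the run by the carbon intensity of the local power grid. Below is a minimal, purely illustrative Python sketch of that formula; the power draw, runtime, and carbon-intensity values are made-up assumptions, not the Comparator's actual measurements or methodology.

```python
# Illustrative sketch only: rough CO2 estimate for an evaluation run.
# The wattage, hours, and grid carbon intensity below are example values,
# not the leaderboard's real numbers or code.

def estimate_co2_kg(gpu_power_watts: float, hours: float,
                    carbon_intensity_kg_per_kwh: float = 0.4) -> float:
    """CO2 (kg) ~= energy used (kWh) x grid carbon intensity (kg CO2 / kWh)."""
    energy_kwh = gpu_power_watts * hours / 1000.0
    return energy_kwh * carbon_intensity_kg_per_kwh

# Example: one 300 W GPU running for 5 hours on a ~0.4 kg CO2/kWh grid
print(f"{estimate_co2_kg(300, 5):.2f} kg CO2")  # ~0.60 kg CO2
```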
🆕 New feature in the 🤗 Open LLM Leaderboard Comparator: you can now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!
🛠️ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!
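If you'd rather script the comparison than click through the UI, here is a hypothetical Python sketch of a similar side-by-side view. It assumes the leaderboard publishes its scores as a Hugging Face dataset named open-llm-leaderboard/contents with a "fullname" column identifying each model; the dataset name, split, and column labels are assumptions to verify against the leaderboard docs, not a documented API.

```python
# Hypothetical sketch: side-by-side scores without the UI.
# Assumption: the leaderboard exposes its scores as a Hugging Face dataset
# called "open-llm-leaderboard/contents" with a "fullname" column per model.
# Check the leaderboard docs for the real dataset name and column labels.
from datasets import load_dataset

scores = load_dataset("open-llm-leaderboard/contents", split="train").to_pandas()

models = [
    "meta-llama/Llama-3.1-70B",           # hypothetical base model
    "meta-llama/Llama-3.1-70B-Instruct",  # its instruct finetune
]
side_by_side = scores[scores["fullname"].isin(models)].set_index("fullname")

# Transpose so each model is a column and each benchmark is a row
print(side_by_side.T)
```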
Ready to dive in? 🚀 Try the 🤗 Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here 👉 open-llm-leaderboard/comparator
Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs all in one place. Ready to level up your model comparison game?
🚨 Instruction tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids the MATH performance loss! Why? Perhaps it comes down to whether the models can follow the answer format shown in the few-shot examples. 👉 Compare models: open-llm-leaderboard/comparator
Need an LLM assistant but unsure which #smolLM to run locally? With so many models available, how can you decide which one suits your needs best? 🤔
If the models you're interested in are evaluated on the Hugging Face Open LLM Leaderboard, there's an easy way to compare them: use the model Comparator tool: open-llm-leaderboard/comparator Let's walk through an example 👇
Let's compare two solid options:
- Qwen2.5-1.5B-Instruct from Alibaba Cloud Qwen (1.5B params)
- gemma-2-2b-it from Google (2.5B params)
For an assistant, you want a model that's great at instruction following. So, how do these two models stack up on the IFEval task?
What about other evaluations? Both models are close in performance on many other tasks, showing minimal differences. Surprisingly, the 1.5B Qwen model performs just as well as the 2.5B Gemma in many areas, even though it's smaller!
This is a great example of how parameter size isn't everything. With efficient design and training, a smaller model like Qwen2.5-1.5B can match or even surpass larger models in certain tasks.
Looking for other comparisons? Drop your model suggestions below! 👇