We built a new small language model, SmolLM2-MedIT-Upscale-2B, based on SmolLM2-1.7B-Instruct from Hugging Face. The premise was simple: increasing the vector dimensions in the attention layers would positively impact the model's capabilities.
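The post doesn't spell out how the attention vectors were widened. One common way to upscale a projection without changing the model's initial behavior is to zero-initialize the new dimensions, so the original subspace is preserved exactly. A minimal sketch of that idea (the function name, initialization scheme, and dimensions are illustrative assumptions, not the actual MedIT method):

```python
import numpy as np

def upscale_attention_proj(w, old_dim, new_dim):
    """Widen a (d_model, old_dim) attention projection to (d_model, new_dim).

    New columns are zero-initialized, so the enlarged projection produces
    identical outputs on the original dimensions. This is a hypothetical
    scheme; the post does not specify the actual initialization used.
    """
    assert new_dim >= old_dim, "upscaling only adds dimensions"
    d_model = w.shape[0]
    out = np.zeros((d_model, new_dim), dtype=w.dtype)
    out[:, :old_dim] = w  # copy the trained weights into the first columns
    return out

# Example with illustrative sizes: widen a Q projection from 64 to 96 dims.
rng = np.random.default_rng(0)
w_q = rng.standard_normal((576, 64)).astype(np.float32)
w_q_big = upscale_attention_proj(w_q, 64, 96)

x = rng.standard_normal((1, 576)).astype(np.float32)
# The original subspace is unchanged, so the widened model can start
# from the same function and only needs brief fine-tuning.
assert np.allclose(x @ w_q_big[:, :64], x @ w_q)
```

Zero-padding is only one choice; small random initialization of the new columns is another, trading exact function preservation for faster adaptation of the added capacity.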
What did we prove?
Honestly, not much, since we don't have an original model trained under the same conditions as our upscale. However...
1. We scaled up the model without losing its quality
2. We confirmed that the method we devised works
3. After extremely short fine-tuning, the model achieved much better results on IFEval than the original (53.68 → 64.29) and a higher overall average score on the Open LLM Leaderboard (14.75 → 15.17)
I consider this a big success, since surpassing the original in metrics is often very time-consuming, generates high costs, and doesn't always work out.
Meanwhile, we're moving forward, training SmolLM2 400M Instruct as an upscale of the 135M version.
We're curious about how increasing the base and intermediate vectors will affect the model's quality. We'll compare it to the original and the 360M Instruct version released by Hugging Face.
License: Apache 2.0
meditsolutions/SmolLM2-MedIT-Upscale-2B