Been reading about the "bigger models = better AI" narrative getting pushed back today.
@thomwolf tackled this head-on at Web Summit and highlighted how important small models are (and why closed-source companies haven't pushed for them 😬). They're crushing it: today's 1B-parameter models outperform last year's 10B models.
Fascinating to hear him talk about the secret sauce behind this approach.