Lewis Tunstall (lewtun)

AI & ML interests: LLMs, LLMs, LLMs


Posts 3

We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥

How? By combining step-wise reward models with tree search algorithms :)

We show that smol models can match or exceed the performance of their much larger siblings when given enough "time to think"

We're open-sourcing the full recipe and sharing a detailed blog post.

In our blog post we cover:

📈 Compute-optimal scaling: How we implemented DeepMind's recipe to boost the mathematical capabilities of open models at test time.

🎄 Diverse Verifier Tree Search (DVTS): An unpublished extension we developed to the verifier-guided tree search technique. This simple yet effective method improves diversity and delivers better performance, particularly at large test-time compute budgets.

🧭 Search and Learn: A lightweight toolkit for implementing search strategies with LLMs, built for speed with vLLM.
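To make the verifier-guided idea concrete, here is a minimal best-of-N sketch: sample N candidate solutions, score each with a process reward model, and keep the winner. This is an illustration only, not the search-and-learn API; `generate` and `prm_score` are hypothetical stand-ins.

```python
def generate(prompt: str, n: int) -> list[str]:
    # Hypothetical stand-in for sampling n completions from an LLM.
    return [f"{prompt} -> candidate {i}" for i in range(n)]

def prm_score(candidate: str) -> float:
    # Hypothetical stand-in for a process reward model; here a toy
    # heuristic that prefers shorter candidates.
    return -float(len(candidate))

def best_of_n(prompt: str, n: int = 8) -> str:
    # Core idea of scaling test-time compute: spend inference budget on
    # n samples, then let the verifier pick the best one.
    candidates = generate(prompt, n)
    return max(candidates, key=prm_score)
```

Tree-search methods like DVTS refine this by scoring intermediate steps rather than only complete solutions, so the budget concentrates on promising branches.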

Here are the links:

- Blog post: HuggingFaceH4/blogpost-scaling-test-time-compute

- Code: https://github.com/huggingface/search-and-learn

Enjoy!

Introducing Zephyr 141B-A35B 🪁:

HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1

Yesterday, Mistral released their latest base model (via magnet link of course 😅) and the community quickly converted it to transformers format and pushed it to the Hub: mistral-community/Mixtral-8x22B-v0.1

Early evals of this model looked extremely strong, so we teamed up with Argilla and KAIST AI to cook up a Zephyr recipe with a few new alignment techniques that came out recently:

🧑‍🍳 Align the base model with Odds Ratio Preference Optimisation (ORPO). This novel algorithm, developed by @JW17, @nlee-208, and @j6mes, does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO.
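For intuition, the odds-ratio term at the heart of ORPO can be sketched in a few lines. This is a simplified scalar version assuming `logp_chosen` / `logp_rejected` are average per-token log-probabilities of the chosen and rejected responses; the full ORPO objective also adds the standard SFT cross-entropy on the chosen response.

```python
import math

def log_odds(logp: float) -> float:
    # log( p / (1 - p) ) computed stably from log p (requires logp < 0).
    return logp - math.log1p(-math.exp(logp))

def orpo_odds_ratio_loss(logp_chosen: float, logp_rejected: float) -> float:
    # -log sigmoid( log_odds(chosen) - log_odds(rejected) ):
    # small when the model assigns much higher odds to the chosen response.
    ratio = log_odds(logp_chosen) - log_odds(logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-ratio)))
```

Because this term is optimised jointly with the SFT loss on a single model, there is no separate reference model or SFT stage, which is where the compute savings over DPO/PPO come from.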

🦫 Use a brand new dataset of 7k high-quality, multi-turn preferences that has been developed by our friends at Argilla. To create this dataset, they took the excellent Capybara SFT dataset from @LDJnr LDJnr/Capybara and converted it into a preference dataset by augmenting the final turn with responses from new LLMs that were then ranked by GPT-4.
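The conversion step described above can be sketched roughly as follows. The helper names are hypothetical (the real pipeline sampled responses from several new LLMs and used GPT-4 as the ranker), but the shape of the transformation is the same: keep the conversation history, generate an alternative final response, and label chosen/rejected by the judge's ranking.

```python
def to_preference_pair(history: list[str], original_answer: str,
                       sample_alternative, judge_prefers_original) -> dict:
    # `sample_alternative`: stand-in for sampling a new final response
    # from an LLM given the conversation history.
    # `judge_prefers_original`: stand-in for a GPT-4-style ranking of
    # the original vs. the newly sampled response.
    alternative = sample_alternative(history)
    if judge_prefers_original(original_answer, alternative):
        chosen, rejected = original_answer, alternative
    else:
        chosen, rejected = alternative, original_answer
    return {"prompt": history, "chosen": chosen, "rejected": rejected}
```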

What we find especially neat about this approach is that training on 7k samples only takes ~1.3h on 4 H100 nodes, yet produces a model that is very strong on chat benchmarks like IFEval and BBH.

Kudos to @alvarobartt, @JW17, and @nlee-208 for this very nice and fast-paced collab!

For more details on the paper and dataset, check out our collection: HuggingFaceH4/zephyr-orpo-6617eba2c5c0e2cc3c151524