Joseph

Joseph717171

AI & ML interests

None yet

Recent Activity

Organizations

Hugging Face Discord Community

Joseph717171's activity

reacted to lewtun's post with 🔥❤️ 3 days ago
We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥

How? By combining step-wise reward models with tree search algorithms :)

We show that smol models can match or exceed the performance of their much larger siblings when given enough "time to think"
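
As a rough illustration of the idea (not the exact recipe from the blog post), here is a minimal Python sketch of step-wise, verifier-guided beam search; `generate_step` and `prm_score` are hypothetical placeholders for a small policy model and a process reward model (PRM):

```python
# Toy sketch of step-wise, verifier-guided beam search (illustration only).
# `generate_step` and `prm_score` are placeholders for a small policy model
# and a process reward model (PRM); they are not part of the released recipe.

def generate_step(prompt: str, partial: str, n: int) -> list[str]:
    """Sample `n` candidate next reasoning steps for a partial solution."""
    raise NotImplementedError

def prm_score(prompt: str, partial: str) -> float:
    """Score a partial solution with a process reward model."""
    raise NotImplementedError

def verifier_guided_search(prompt: str, beam_width: int = 4,
                           expand: int = 4, max_steps: int = 8) -> str:
    beams = [""]  # each beam holds the text of a partial solution
    for _ in range(max_steps):
        candidates = []
        for partial in beams:
            for step in generate_step(prompt, partial, expand):
                new_partial = partial + step
                candidates.append((prm_score(prompt, new_partial), new_partial))
        # keep only the highest-scoring partial solutions and expand them further
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = [partial for _, partial in candidates[:beam_width]]
    return beams[0]  # highest-scoring solution after the step budget
```

A larger test-time compute budget (more samples per step) gives the verifier more chances to find a correct reasoning path.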

We're open sourcing the full recipe and sharing a detailed blog post.

In our blog post we cover:

📈 Compute-optimal scaling: How we implemented DeepMind's recipe to boost the mathematical capabilities of open models at test-time.

🎄 Diverse Verifier Tree Search (DVTS): An unpublished extension we developed to the verifier-guided tree search technique. This simple yet effective method improves diversity and delivers better performance, particularly at large test-time compute budgets.

🧭 Search and Learn: A lightweight toolkit for implementing search strategies with LLMs, built for speed with vLLM (see the sketch below).
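
Since the toolkit is built on vLLM, a plain vLLM call is enough to get the many parallel candidate generations these methods need. A minimal best-of-N sketch (this is not the search-and-learn API; the model id is only an example and the verifier is a placeholder):

```python
# Minimal best-of-N sampling sketch with vLLM (not the search-and-learn API).
# The model id is only an example and `verifier_score` is a placeholder for
# a reward-model call.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.2-3B-Instruct")
params = SamplingParams(n=16, temperature=0.8, max_tokens=1024)

prompt = "Solve step by step: what is the sum of the first 100 positive integers?"
candidates = [c.text for c in llm.generate([prompt], params)[0].outputs]

def verifier_score(prompt: str, completion: str) -> float:
    """Placeholder: score a full completion with a reward model."""
    raise NotImplementedError

best = max(candidates, key=lambda c: verifier_score(prompt, c))
```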

Here are the links:

- Blog post: HuggingFaceH4/blogpost-scaling-test-time-compute

- Code: https://github.com/huggingface/search-and-learn

Enjoy!
reacted to m-ric's post with 👍🚀❤️😎🔥🤗 6 days ago
๐—ฃ๐—ผ๐˜๐—ฒ๐—ป๐˜๐—ถ๐—ฎ๐—น ๐—ฝ๐—ฎ๐—ฟ๐—ฎ๐—ฑ๐—ถ๐—ด๐—บ ๐˜€๐—ต๐—ถ๐—ณ๐˜ ๐—ถ๐—ป ๐—Ÿ๐—Ÿ๐— ๐˜€: ๐—ป๐—ฒ๐˜„ ๐—ฝ๐—ฎ๐—ฝ๐—ฒ๐—ฟ ๐—ฏ๐˜† ๐— ๐—ฒ๐˜๐—ฎ ๐—ฐ๐—น๐—ฎ๐—ถ๐—บ๐˜€ ๐˜๐—ต๐—ฎ๐˜ ๐˜„๐—ฒ ๐—ฐ๐—ฎ๐—ป ๐—ด๐—ฒ๐˜ ๐—ฟ๐—ถ๐—ฑ ๐—ผ๐—ณ ๐˜๐—ผ๐—ธ๐—ฒ๐—ป๐—ถ๐˜‡๐—ฒ๐—ฟ๐˜€! ๐Ÿฅณ

Current LLMs process text by first splitting it into tokens. They use a module called a "tokenizer" that -spl-it-s- th-e- te-xt- in-to- arbitrary tokens according to a fixed dictionary.
On the Hub you can find this dictionary in a model's files under tokenizer.json.

โžก๏ธ This process is called BPE tokenization. It is suboptimal, everyone says it. It breaks text into predefined chunks that often fail to capture the nuance of language. But it has been a necessary evil in language models since their inception.

💥 In Byte Latent Transformer (BLT), Meta researchers propose an elegant solution by eliminating tokenization entirely, working directly with raw bytes while maintaining efficiency through dynamic "patches."

This had been tried before with other byte-level approaches, but it's the first time that an architecture of this type scales as well as BPE tokenization. And it could mean a real paradigm shift! 👏👏

๐Ÿ—๏ธ ๐—”๐—ฟ๐—ฐ๐—ต๐—ถ๐˜๐—ฒ๐—ฐ๐˜๐˜‚๐—ฟ๐—ฒ:
Instead of a lightweight tokenizer, BLT has a lightweight encoder that processes raw bytes into patches. The patches are then processed by the main, heavy-duty transformer as usual (but over patches of bytes instead of tokens), before being converted back to bytes.
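
A very rough structural sketch of that pipeline (illustration only, not Meta's implementation; the mean-pooling and the way patch states are fed back to byte positions are simplified stand-ins for the paper's cross-attention, and all sizes are made up):

```python
# Structural sketch of a BLT-style pipeline (illustration only):
# bytes -> lightweight local encoder -> patch representations ->
# heavy-duty latent transformer -> lightweight local decoder -> next-byte logits.
import torch
import torch.nn as nn

class ToyByteLatentModel(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, n_layers: int = 12):
        super().__init__()
        self.byte_embed = nn.Embedding(256, d_model)  # one embedding per byte value
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.local_encoder = nn.TransformerEncoder(layer, num_layers=1)   # lightweight
        self.latent = nn.TransformerEncoder(layer, num_layers=n_layers)   # heavy-duty
        self.local_decoder = nn.TransformerEncoder(layer, num_layers=1)   # lightweight
        self.to_logits = nn.Linear(d_model, 256)      # predict the next byte

    def forward(self, byte_ids: torch.Tensor, patch_ids: torch.Tensor) -> torch.Tensor:
        # byte_ids:  (batch, seq) integer byte values in [0, 255]
        # patch_ids: (batch, seq) index of the patch each byte belongs to
        h = self.local_encoder(self.byte_embed(byte_ids))          # byte-level states
        n_patches = int(patch_ids.max()) + 1
        # mean-pool bytes into patch representations (stand-in for cross-attention)
        mask = patch_ids.unsqueeze(1) == torch.arange(n_patches, device=h.device).view(1, -1, 1)
        patches = (mask.float() @ h) / mask.sum(-1, keepdim=True).clamp(min=1)
        patches = self.latent(patches)                             # main compute runs on patches
        # broadcast patch states back to byte positions, then decode to bytes
        h = h + patches.gather(1, patch_ids.unsqueeze(-1).expand(-1, -1, h.size(-1)))
        return self.to_logits(self.local_decoder(h))
```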

🧩 Dynamic Patching:
Instead of fixed tokens, BLT groups bytes into patches based on their predictability (measured by entropy): more compute goes to complex, hard-to-predict sequences, while simple ones are handled cheaply. This keeps processing efficient while maintaining byte-level understanding.
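
A toy sketch of that patching rule: a new patch starts whenever the predicted next-byte entropy crosses a threshold. The entropy function is a placeholder for the small byte-level language model used in the paper, and the threshold value is made up:

```python
# Toy entropy-driven patching (illustration only): close the current patch
# whenever the next byte is hard to predict (high entropy).
def next_byte_entropy(prefix: bytes) -> float:
    """Placeholder: entropy (in bits) of the next-byte distribution given
    `prefix`, as predicted by a small byte-level language model."""
    raise NotImplementedError

def make_patches(text: bytes, threshold: float = 3.0) -> list[bytes]:
    patches, current = [], bytearray()
    for i, b in enumerate(text):
        if current and next_byte_entropy(text[:i]) > threshold:
            patches.append(bytes(current))  # hard-to-predict region: start a new patch
            current = bytearray()
        current.append(b)
    if current:
        patches.append(bytes(current))
    return patches
```

Easy, low-entropy stretches end up in long patches (little compute), while surprising regions get split into many small patches (more compute).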

I hope this breakthrough is confirmed so we can get rid of all the tokenizer machinery; it would make model handling easier!

Read their paper here ๐Ÿ‘‰ https://dl.fbaipublicfiles.com/blt/BLT__Patches_Scale_Better_Than_Tokens.pdf