AI & ML interests

None defined yet.

Recent Activity

albertvillanova updated a dataset 4 months ago
openslr/openslr
albertvillanova updated a dataset 4 months ago
openslr/librispeech_asr

openslr's activity

anton-l
posted an update 3 days ago
Introducing 📐 FineMath: the best public math pre-training dataset with 50B+ tokens!
HuggingFaceTB/finemath

Math remains challenging for LLMs, and by training on FineMath we see considerable gains over other math datasets, especially on GSM8K and MATH.

We build the dataset by:
🛠️ carefully extracting math data from Common Crawl;
🔎 iteratively filtering and recalling high-quality math pages using a classifier trained on synthetic annotations to identify math reasoning and deduction.
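As a rough illustration of that second step, one filter round might look like this (the scorer and threshold below are toy stand-ins, not the actual classifier trained on synthetic annotations):

```python
# Toy sketch of the filtering step: keep pages whose predicted
# math-quality score clears a threshold. The scorer below is a crude
# stand-in for the real trained classifier.
def filter_pages(pages, score_fn, threshold=0.5):
    """Keep pages whose score passes the threshold."""
    return [p for p in pages if score_fn(p) >= threshold]

def toy_score(page):
    # Count crude math markers; a real scorer would be a trained model.
    markers = ("equation", "theorem", "proof", "=")
    return sum(m in page for m in markers) / len(markers)

pages = [
    "Solve the equation 2x + 3 = 7; the proof uses a known theorem.",
    "Celebrity gossip and lifestyle tips for the summer.",
]
kept = filter_pages(pages, toy_score, threshold=0.25)
print(kept)  # only the math page survives
```

In the real pipeline this loop runs iteratively, with kept pages also used to recall more candidates from Common Crawl.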

We conducted a series of ablations comparing the performance of Llama-3.2-3B-Base after continued pre-training on FineMath, and observed notable gains compared to the baseline model and other public math datasets.

We hope this helps advance the performance of LLMs on math and reasoning! 🚀
We're also releasing all the ablation models as well as the evaluation code.

HuggingFaceTB/finemath-6763fb8f71b6439b653482c2
lhoestq
posted an update 10 days ago
Made an HF dataset editor à la Google Sheets here: lhoestq/dataset-spreadsheets

With Dataset Spreadsheets:
✏️ Edit datasets in the UI
🔗 Share link with collaborators
🐍 Use locally in DuckDB or Python

Available for the 100,000+ parquet datasets on HF :)
albertvillanova
posted an update about 1 month ago
🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research!
👉 open-llm-leaderboard/comparator
Now, you can not only compare models by performance, but also by their environmental footprint!

🌍 The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... 🛠️
Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
albertvillanova
posted an update about 2 months ago
🚀 New feature of the Comparator of the 🤗 Open LLM Leaderboard: now compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!

🛠️ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!

Ready to dive in? 🏆 Try the 🤗 Open LLM Leaderboard Comparator now! See how models stack up against their base versions and derivatives to understand fine-tuning and other adjustments. Easier model analysis for better insights! Check it out here: open-llm-leaderboard/comparator 🌐
albertvillanova
posted an update about 2 months ago
🚀 Exciting update! You can now compare multiple models side-by-side with the Hugging Face Open LLM Comparator! 📊

open-llm-leaderboard/comparator

Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs all in one place. Ready to level up your model comparison game?
albertvillanova
posted an update about 2 months ago
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids MATH performance loss! Why? Can they follow the format in examples? 📊 Compare models: open-llm-leaderboard/comparator
albertvillanova
posted an update 2 months ago
Finding the Best SmolLM for Your Project

Need an LLM assistant but unsure which #smolLM to run locally? With so many models available, how can you decide which one suits your needs best? 🤔

If the model you're interested in is evaluated on the Hugging Face Open LLM Leaderboard, there's an easy way to compare them: use the model Comparator tool: open-llm-leaderboard/comparator
Let's walk through an example 👇

Let's compare two solid options:
- Qwen2.5-1.5B-Instruct from Alibaba Cloud Qwen (1.5B params)
- gemma-2-2b-it from Google (2.5B params)

For an assistant, you want a model that's great at instruction following. So, how do these two models stack up on the IFEval task?

What about other evaluations?
Both models are close in performance on many other tasks, showing minimal differences. Surprisingly, the 1.5B Qwen model performs just as well as the 2.5B Gemma in many areas, even though it's smaller! 📊

This is a great example of how parameter size isn't everything. With efficient design and training, a smaller model like Qwen2.5-1.5B can match or even surpass larger models on certain tasks.

Looking for other comparisons? Drop your model suggestions below! 👇
albertvillanova
posted an update 2 months ago
🚨 We've just released a new tool to compare the performance of models in the 🤗 Open LLM Leaderboard: the Comparator 🎉
open-llm-leaderboard/comparator

Want to see how two different versions of LLaMA stack up? Let's walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. 🦙🧵👇

1/ Load the Models' Results
- Go to the 🤗 Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!

2/ Compare Metric Results in the Results Tab 📊
- Head over to the Results tab.
- Here, you'll see the performance metrics for each model, color-coded with a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to home in on comparisons for tasks like BBH or MMLU-Pro.

3/ Check Config Alignment in the Configs Tab ⚙️
- To ensure you're comparing apples to apples, head to the Configs tab.
- Review both models' evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it's good to know before drawing conclusions! ✅

4/ Compare Predictions by Sample in the Details Tab 🔍
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR), then a Subtask (e.g., Murder Mystery), and press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model's outputs.

5/ With this tool, it's never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you're a researcher or an enthusiast, you can instantly visualize improvements and dive into detailed comparisons.

🚀 Try the 🤗 Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
albertvillanova
posted an update 3 months ago
lhoestq
posted an update 5 months ago
Hey! I'm working on a 100% synthetic Dataset Hub here (you can search for any kind of dataset and the app invents it). The link is here: infinite-dataset-hub/infinite-dataset-hub

Question for the Community:

Which models should I use to generate images and audio samples for those datasets? 🤗
albertvillanova
posted an update 7 months ago
Easily convert your script-based datasets to Parquet and explore them in the dataset viewer. 🌟

🛠️ Use @huggingface Datasets CLI:
$ datasets-cli convert_to_parquet USERNAME/DATASET_NAME

Learn more: https://huggingface.co/docs/datasets/main/en/cli#convert-to-parquet
#Data #AI
albertvillanova
posted an update 8 months ago
Recently, the Hugging Face 🤗 datasets team met with the Language Technologies team led by Marta Villegas ( @mvillegas ) at the Barcelona Supercomputing Center @BSC-LT . We are eager to collaborate to promote AI across the Catalan, Spanish, Basque, and Galician languages and to share open-source datasets and models. 🤝 #AI #LanguageTech #OpenSource
albertvillanova
posted an update 8 months ago
🚀 We recently released datasets 2.19.0! 📦

🔥 What's New:
- Polars integration 🐻‍❄️
- fsspec support for conversion to JSON, CSV, and Parquet
- Mode parameter for Image feature
- CLI function to convert script-datasets to Parquet
- Dataset.take and Dataset.skip

Plus, a bunch of general improvements & bug fixes!

Check out the release notes: https://github.com/huggingface/datasets/releases/tag/2.19.0

Upgrade now and power up your data workflows! 💥
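If you're curious about the new take/skip semantics, they behave like head/offset slicing; a stdlib sketch, with plain lists standing in for a real Dataset:

```python
from itertools import islice

# Stdlib sketch of the Dataset.take / Dataset.skip semantics added in
# 2.19: take(n) yields the first n rows, skip(n) drops them. Plain
# lists of dicts stand in for a real datasets.Dataset here.
def take(rows, n):
    return list(islice(rows, n))

def skip(rows, n):
    return list(islice(rows, n, None))

rows = [{"id": i} for i in range(5)]
print(take(rows, 2))  # [{'id': 0}, {'id': 1}]
print(skip(rows, 3))  # [{'id': 3}, {'id': 4}]
```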
lhoestq
posted an update 9 months ago
✨ Easy Synthetic Dataset File Generation using LLM DataGen! Link: https://huggingface.co/spaces/lhoestq/LLM_DataGen

Features and how it works:

✍️ Generate the dataset content you want just by entering a file name
💡 Optionally specify the column names you need
💨 The dataset is streamed and generated on the fly in JSON Lines format
✅ Generation is constrained to always output valid JSON

How does this work?
1/ Enter a file name.
2/ The model generates column names for such a file. Using structured generation, it produces 2 to 5 column names made of lowercase characters and underscores. I use a prompt that asks for column names for a realistic dataset, with a low temperature.
3/ The columns are used to update the Finite State Machine for the structured generation of the dataset content, so that the generated JSON objects use those columns.
4/ The model generates JSON objects using structured generation again, with the updated Finite State Machine. I use a prompt that asks for realistic data and a temperature of 1.
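Steps 2 and 3 amount to schema plumbing; a rough stdlib sketch, with the column names hard-coded here where the app would ask the model:

```python
import json
import re

# Sketch of steps 2-3: validate model-proposed column names and turn
# them into a JSON schema that constrained generation would follow.
COLUMN_RE = re.compile(r"^[a-z_]+$")  # lowercase characters and underscores

def build_schema(columns):
    if not 2 <= len(columns) <= 5:
        raise ValueError("expected 2 to 5 column names")
    if not all(COLUMN_RE.match(c) for c in columns):
        raise ValueError("column names must be lowercase with underscores")
    return {
        "type": "object",
        "properties": {c: {"type": "string"} for c in columns},
        "required": columns,
    }

# Hard-coded stand-ins for model-generated column names.
schema = build_schema(["city_name", "population"])
print(json.dumps(schema, indent=2))
```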

> Why update a Finite State Machine instead of re-creating one?

Creating one can take up to 30 seconds, while updating one takes 0.1 s (though it requires manipulating a graph, which is not easy to implement).

> Batched generation is faster, so why not use it?

Generating in batches is faster but tends to produce duplicates in this demo.
Future work could provide a different prompt per sequence in the batch, to end up with a different distribution of sequences in each batch, or implement a custom sampler that forbids generating the same data across sequences of the same batch.

> How does structured generation work?

I used the outlines library with transformers to define a JSON schema that the generation has to follow. It uses a Finite State Machine with token ids as transitions.
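A toy version of that idea, with single characters as transitions instead of token ids and a hard-coded shape instead of a schema:

```python
# Toy FSM that only accepts strings of the form {"x":<digits>}, to
# illustrate constrained generation: at each state, only some symbols
# are legal transitions. Libraries like outlines do this over token
# ids derived from a JSON schema.
def make_fsm():
    prefix = '{"x":'
    states = {}
    for i, ch in enumerate(prefix):
        states[i] = {ch: i + 1}  # fixed prefix: one legal char per state
    digit_state = len(prefix)
    states[digit_state] = {d: digit_state + 1 for d in "0123456789"}
    states[digit_state + 1] = {d: digit_state + 1 for d in "0123456789"}
    states[digit_state + 1]["}"] = "ACCEPT"
    return states

def accepts(fsm, text):
    state = 0
    for ch in text:
        if state == "ACCEPT" or ch not in fsm[state]:
            return False  # illegal transition (or trailing characters)
        state = fsm[state][ch]
    return state == "ACCEPT"

fsm = make_fsm()
print(accepts(fsm, '{"x":42}'))  # True
print(accepts(fsm, '{"x":}'))    # False
```

During generation, the legal transitions out of the current state are exactly the symbols the sampler is allowed to emit next, which is why the output is always valid.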

Let me know what you think! And feel free to duplicate/modify it to try other models, prompts, or sampling methods :)