---
title: README
emoji: π
colorFrom: yellow
colorTo: yellow
sdk: static
pinned: false
---
# Hugging Face Research

The science team at Hugging Face is dedicated to advancing machine learning research in ways that maximize value for the whole community. Our work focuses on three core areas: tooling, datasets, and open models.
## Tooling & Infrastructure

Tooling and infrastructure are the foundation of ML research. We build and maintain a range of tools, including datatrove, nanotron, TRL, LeRobot, and lighteval.
## Datasets

High-quality datasets are the powerhouse of LLMs, and building them takes special care and skills. We focus on creating datasets such as no-robots, FineWeb, The Stack, and FineVideo.
## Open Models

The datasets and training recipes behind most state-of-the-art models are not released. We build cutting-edge models and release the full training pipeline, fostering innovation and reproducibility, with examples such as Zephyr, StarCoder2, and SmolLM2.
## Infrastructure

The research team is organized in small teams of typically fewer than four people. The science cluster consists of 96 nodes with 8 H100 GPUs each, plus an auto-scalable CPU cluster for dataset processing. With this setup, even a small research team can build and release impactful artifacts.
## Releases

This is the release timeline of 2024 so far (you can click on each element!):
## Join us!

We are actively hiring for both full-time roles and internships. Check out hf.co/jobs