---
title: README
emoji: 🐠
colorFrom: yellow
colorTo: yellow
sdk: static
pinned: false
---

# Hugging Face Research

The science team at Hugging Face is dedicated to advancing machine learning research in ways that maximize value for the whole community. Our work focuses on three core areas: tooling, datasets, and open models.

### 🛠️ Tooling & Infrastructure

Tooling and infrastructure are the foundation of ML research, and we work on a range of tools such as [datatrove](https://github.com/huggingface/datatrove), [nanotron](https://github.com/huggingface/nanotron), [TRL](https://github.com/huggingface/trl), [LeRobot](https://github.com/huggingface/lerobot), and [lighteval](https://github.com/huggingface/lighteval).

### 📑 Datasets

High-quality datasets are the powerhouse of LLMs and require special care and skills to build. We focus on building datasets such as [no-robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots), [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), [The Stack](https://huggingface.co/datasets/bigcode/the-stack-v2), and [FineVideo](https://huggingface.co/datasets/HuggingFaceFV/finevideo).

### 🤖 Open Models

The datasets and training recipes behind most state-of-the-art models are not released. We build cutting-edge models such as [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), [StarCoder2](https://huggingface.co/bigcode/starcoder2-15b), and [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct), and release their full training pipelines, fostering innovation and reproducibility (see the short usage sketch after these sections).

### 🌸 Collaborations

Research and collaboration go hand in hand. That's why we like to organize and participate in large open collaborations such as [BigScience](https://bigscience.huggingface.co) and [BigCode](https://www.bigcode-project.org).

### ⚙️ Infrastructure

The research team is organized into small teams of typically fewer than four people. The science cluster consists of 96 nodes with 8 H100 GPUs each, plus an auto-scalable CPU cluster for dataset processing. In this setup, even a small research team can build and release impactful artifacts.

### 📖 Educational material

Besides writing tech reports for our research projects, we also like to write educational content that helps newcomers get started in the field and helps practitioners level up. For example, we built the [alignment handbook](https://github.com/huggingface/alignment-handbook), the [pretraining tutorial](https://www.youtube.com/watch?v=2-SPH9hIKT8), and the [FineWeb blog](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1).
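All of the datasets and models above can be used directly with the `datasets` and `transformers` libraries. Here is a minimal, unofficial sketch of streaming a few FineWeb documents and chatting with SmolLM2; the FineWeb subset name `sample-10BT` is one of its sample configs, and the prompt and generation settings are illustrative choices, not recommendations:

```python
# Minimal sketch: stream a few FineWeb documents and generate with SmolLM2.
# "sample-10BT" is one of FineWeb's small sample configs; swap in a full
# CC dump config for pretraining-scale work.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stream FineWeb so nothing has to be downloaded in full.
fineweb = load_dataset(
    "HuggingFaceFW/fineweb", name="sample-10BT", split="train", streaming=True
)
for doc in fineweb.take(3):
    print(doc["text"][:100].replace("\n", " "), "...")

# Chat with SmolLM2 using its built-in chat template.
model_id = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Explain what a dataset card is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

See the individual dataset and model cards linked above for the exact configs and recommended settings.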
### 🚀 Releases

Here is our 2024 release timeline so far:

- **Jan:** 🔥 Warming up
- **Feb:** ⚙️ Nanotron Release · ⭐️ The Stack v2 · ⭐️ StarCoder2
- **Mar:** 🪁 Zephyr Gemma · 🪐 Cosmopedia
- **Apr:** 🍷 FineWeb · 🕵️ JAT Agent · 🐶 Idefics 2
- **May:** 📈 WSD Analysis
- **Jun:** 🍷 FineWeb-Edu · 👩‍🏫 Stanford CS25
- **Jul:** 🥇 Win AIMO · 🐶 Docmatix · 🤏 SmolLM
- **Aug:** 🐶 Idefics 3
- **Oct:** 🗺️ FineTasks
- **Nov:** 🤏 SmolLM2
### 🤗 Join us!

We are actively hiring for both full-time positions and internships. Check out [hf.co/jobs](https://hf.co/jobs).