---
title: README
emoji: 🐦
colorFrom: pink
colorTo: indigo
sdk: static
pinned: false
---
Hi, I am a Magpie 🐦!
🕸️ **Project Website**: [https://magpie-align.github.io/](https://magpie-align.github.io/)
📄 **Technical Report**: [https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464)
🤗 **HF Paper Page**: [https://huggingface.co/papers/2406.08464](https://huggingface.co/papers/2406.08464)
💬 **Codes**: [https://github.com/magpie-align/magpie](https://github.com/magpie-align/magpie)
🤗 **Magpie Demo**: [https://huggingface.co/spaces/davanstrien/magpie](https://huggingface.co/spaces/davanstrien/magpie) (Many thanks to @davanstrien for the implementation!)
🐦 **MagpieLM**: [MagpieLM-4B](https://huggingface.co/spaces/yuchenlin/MagpieLM-4B), [MagpieLM-8B](https://huggingface.co/spaces/yuchenlin/MagpieLM-8B)
**Questions?** Please contact [Zhangchen](mailto:zxu9@uw.edu) and/or [Yuchen](mailto:yuchenl@allenai.org) by email, or raise an issue on [GitHub](https://github.com/magpie-align/magpie/issues/new/choose).
## [🧭 Click here for full dataset navigation (SFT and DPO)](https://github.com/magpie-align/magpie/blob/main/navigation.md)
## Raw Datasets
|Model Name | Dataset | Type | Description |
|-------------|:-------|:-------|:-------|
| [Qwen2.5 72B Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) | [Magpie-Qwen2.5-Pro-1M](https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2.5-Pro-1M-v0.1) | SFT | 1M raw conversations built with Qwen2.5 72B Instruct. |
| [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) | [Magpie-Llama-3.1-Pro-1M](https://huggingface.co/datasets/Magpie-Align/Magpie-Llama-3.1-Pro-1M-v0.1) | SFT | 1M raw conversations built with Meta Llama 3.1 70B Instruct. |
| [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) | [Magpie-Pro-1M](https://huggingface.co/datasets/Magpie-Align/Llama-3-Magpie-Pro-1M-v0.1) | SFT | 1M raw conversations built with Meta Llama 3 70B Instruct. |
| [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | [Magpie-Air-3M](https://huggingface.co/datasets/Magpie-Align/Llama-3-Magpie-Air-3M-v0.1) | SFT | 3M raw conversations built with Meta Llama 3 8B Instruct. |
| [Qwen2 72B Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | [Magpie-Qwen2-Pro-1M](https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2-Pro-1M-v0.1) | SFT | 1M raw conversations built with Qwen2 72B Instruct. |
| [Qwen2 7B Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) | [Magpie-Qwen2-Air-3M](https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2-Air-3M-v0.1) | SFT | 3M raw conversations built with Qwen2 7B Instruct. |
| [Phi-3 Medium Instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) | [Magpie-Phi3-Pro-1M](https://huggingface.co/datasets/Magpie-Align/Magpie-Phi3-Pro-1M-v0.1) | SFT | 1M raw conversations built with Phi-3 Medium Instruct. |
| [Gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it) | [Magpie-Gemma2-Pro-534K](https://huggingface.co/datasets/Magpie-Align/Magpie-Gemma2-Pro-534K-v0.1) | SFT | 534K raw conversations built with Gemma-2-27b-it. |
| [Llama 3.1 405B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) | [Magpie-Ultra-v0.1](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1) | SFT | [Argilla] 50K raw conversations built with Meta Llama 3.1 405B Instruct. |
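If you want a quick look at one of these datasets before downloading millions of rows, the Hugging Face `datasets` library can stream them. Below is a minimal sketch; the `instruction` field name follows the Magpie dataset cards, so verify it matches the dataset you pick from the table above.

```python
# Stream a few examples from a 1M-row Magpie dataset without a full download.
# ASSUMPTION: the dataset has an `instruction` column, as described on the
# dataset card; check the schema of the dataset you choose.
from datasets import load_dataset

stream = load_dataset(
    "Magpie-Align/Llama-3-Magpie-Pro-1M-v0.1",
    split="train",
    streaming=True,
)

for i, example in enumerate(stream):
    print(example["instruction"][:100])
    if i >= 4:  # stop after five examples
        break
```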
### Recommended Filtered Datasets
Here are some filtered datasets curated by the authors and used to train our [Magpie-Align models](https://huggingface.co/collections/Magpie-Align/magpie-models-668c4a8eea81ccc0db130bdf). We also encourage you to [create and apply your own filters to customize datasets](https://github.com/magpie-align/magpie?tab=readme-ov-file#4-design-and-apply-your-filter); see the sketch after this section for a starting point.
For convenience, we keep these filtered datasets in the 200K-300K range; in our experience, this range is a sweet spot that balances model performance against training time.
The full list of filtered datasets can be found [here](https://github.com/magpie-align/magpie/blob/main/navigation.md).
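As a starting point for a custom filter, here is a minimal sketch using the `datasets` library. The `input_quality` and `instruct_reward` column names and the thresholds are assumptions based on typical Magpie dataset cards, not a prescribed recipe; check the schema of your chosen dataset first.

```python
# A minimal custom-filter sketch over a raw Magpie dataset.
# ASSUMPTIONS: `input_quality` and `instruct_reward` columns exist and follow
# the conventions described on the Magpie dataset cards.
from datasets import load_dataset

ds = load_dataset("Magpie-Align/Llama-3-Magpie-Pro-1M-v0.1", split="train")

def keep(example):
    # Keep prompts rated good or better whose responses earned a positive reward.
    return (
        example.get("input_quality") in ("good", "excellent")
        and example.get("instruct_reward", 0.0) > 0.0
    )

filtered = ds.filter(keep)
print(f"kept {len(filtered):,} of {len(ds):,} conversations")
# filtered.push_to_hub("your-username/my-magpie-subset")  # optional upload
```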
|Model Name | Dataset | Size | Type | Description |
|-------------|:-------|:-------|:-------|:-------|
| [Llama 3.1 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct) | [Magpie-Llama-3.1-Pro-MT-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Llama-3.1-Pro-MT-300K-Filtered) | 300K | SFT | (🌟 Flexible License! 🌟) 300K high-quality multi-turn conversations selected from Magpie-Llama-3.1-Pro-MT-500K. |
| [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) | [Magpie-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered) | 300K | SFT | 300K high-quality conversations filtered from Magpie-Pro-1M. |
| [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) | [Magpie-Pro-MT-300K](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-MT-300K-v0.1) | 300K | SFT | 300K difficult questions selected from Magpie-Pro-1M and extended to multi-turn conversations. |
| [Llama 3 70B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) | [Magpie-Reasoning-150K](https://huggingface.co/datasets/Magpie-Align/Magpie-Reasoning-150K) | 150K | SFT | Reasoning booster with 150K math, code, and general reasoning conversations. We recommend mixing it with Magpie-Pro-MT-300K. |
| [Qwen2 72B Instruct](https://huggingface.co/Qwen/Qwen2-72B-Instruct) | [Magpie-Qwen2-Pro-200K-Chinese](https://huggingface.co/datasets/Magpie-Align/Magpie-Qwen2-Pro-200K-Chinese) | 200K | SFT | 200K high-quality Chinese conversations filtered from Magpie-Qwen2-Pro-1M. |
| [Gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it) | [Magpie-Gemma2-Pro-200K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Gemma2-Pro-200K-Filtered) | 200K | SFT | (🌟 Flexible License! 🌟) 200K conversations filtered from Magpie-Gemma2-Pro-534K. |
| [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | [Magpie-Air-DPO-100K](https://huggingface.co/datasets/Magpie-Align/Magpie-Air-DPO-100K-v0.1) | 100K | DPO | DPO dataset built via Best-of-N sampling and reward scoring. |
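To sanity-check the preference pairs in the DPO dataset above, a quick inspection sketch like the following may help. The `chosen`/`rejected` column names follow the common DPO convention and are an assumption here, not confirmed against the dataset card.

```python
# Inspect one preference pair from the DPO dataset in streaming mode.
# ASSUMPTION: columns are named `chosen` and `rejected`, per the usual DPO
# convention; check the dataset card for the actual schema.
from datasets import load_dataset

dpo = load_dataset(
    "Magpie-Align/Magpie-Air-DPO-100K-v0.1",
    split="train",
    streaming=True,
)

sample = next(iter(dpo))
print("columns:", list(sample.keys()))
print("chosen:", sample.get("chosen"))
print("rejected:", sample.get("rejected"))
```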