yuchenlin committed · fb799a3
Parent(s): 56548ad

modify links

app.py CHANGED
@@ -10,7 +10,7 @@ header = """
 
 💬 We've aligned Llama-3.1-8B and a 4B version (distilled by NVIDIA) using purely synthetic data generated by our [Magpie](https://arxiv.org/abs/2406.08464) method. Our open-source post-training recipe includes: SFT and DPO data, all training configs + logs. This allows everyone to reproduce the alignment process for their own research. Note that our data does not contain any GPT-generated data, and has a much friendly license for both commercial and academic use.
 
-- **Magpie Collection**: [Magpie on Hugging Face](https://
+- **Magpie Collection**: [Magpie on Hugging Face](https://huggingface.co/collections/Magpie-Align/magpielm-66e2221f31fa3bf05b10786a)
 - **Magpie Paper**: [Read the research paper](https://arxiv.org/abs/2406.08464)
 
 Contact: [Zhangchen Xu](https://zhangchenxu.com) and [Bill Yuchen Lin](https://yuchenlin.xyz).
app_8B.py CHANGED
@@ -10,7 +10,7 @@ header = """
 
 💬 We've aligned Llama-3.1-8B and a 4B version (distilled by NVIDIA) using purely synthetic data generated by our [Magpie](https://arxiv.org/abs/2406.08464) method. Our open-source post-training recipe includes: SFT and DPO data, all training configs + logs. This allows everyone to reproduce the alignment process for their own research. Note that our data does not contain any GPT-generated data, and has a much friendly license for both commercial and academic use.
 
-- **Magpie Collection**: [Magpie on Hugging Face](https://
+- **Magpie Collection**: [Magpie on Hugging Face](https://huggingface.co/collections/Magpie-Align/magpielm-66e2221f31fa3bf05b10786a)
 - **Magpie Paper**: [Read the research paper](https://arxiv.org/abs/2406.08464)
 
 Contact: [Zhangchen Xu](https://zhangchenxu.com) and [Bill Yuchen Lin](https://yuchenlin.xyz).
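The bug fixed in both files is a markdown link whose URL was truncated to a bare `https://`. As an illustrative, hypothetical safeguard (not part of the Space's code; the function name and regex are assumptions), a small check like this could flag truncated link URLs in header strings before deployment:

```python
import re

def find_incomplete_links(markdown: str) -> list[str]:
    """Return markdown link URLs that look truncated (bare scheme, no host)."""
    # Capture the URL inside [text](url); the closing ")" is optional so that
    # a truncated link cut off mid-URL is still matched.
    urls = re.findall(r"\[[^\]]*\]\((https?://[^)\s]*)\)?", markdown)
    return [u for u in urls if re.fullmatch(r"https?://", u)]

# The header as it stood before this commit (abridged):
header = """
- **Magpie Collection**: [Magpie on Hugging Face](https://
- **Magpie Paper**: [Read the research paper](https://arxiv.org/abs/2406.08464)
"""

print(find_incomplete_links(header))  # → ['https://'] (the truncated Collection link)
```

Running such a check in CI on the header strings of `app.py` and `app_8B.py` would have caught the broken Collection link that this commit repairs.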