Update README.md
README.md
CHANGED
@@ -18,7 +18,21 @@ language:
- **Point of Contact:** seongyun@kaist.ac.kr

# TL; DR

+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550c4f27bbfce1878f5f280/vrQl8D8FV3vqUJYbPgsiG.png)
+
+ Janus is a model trained using [Mistral-7B-v0.2](https://huggingface.co/mistral-community/Mistral-7B-v0.2) as its base model. Janus has been trained on [Multifaceted Collection](), a preference dataset with 192k combinations of values that go beyond generic helpfulness and harmlessness, spanning 65k user instructions. Janus not only excels at generating personalized responses on [Multifaceted Bench]() that cater to various human preferences, but is also adept at producing responses that are generally preferred for being helpful and harmless.
+
+ # Model Details
+
+ ## Model Description
+
+ - **Model type:** Language model
+ - **Language(s) (NLP):** English
+ - **License:** Apache 2.0
+ - **Related Models:** [Janus-66k-7B](), [Janus-DPO-7B](), [Janus-ORPO-7B](), [Janus-RM-7B]()
+ - **Resources for more information:**
+   - [Research paper]()
+   - [GitHub Repo](https://github.com/kaistAI/Janus)

### Training hyperparameters
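As a minimal sketch of how the model described in the added TL;DR might be used, the snippet below loads the checkpoint and steers it with a preference-bearing system message. The Hub id, the presence of a chat template with a system turn, and the generation settings are assumptions for illustration, not instructions taken from this card.

```python
# Minimal, assumption-laden sketch of steering Janus with a system message
# describing the target user's preferences. Repo id and chat template are
# assumed here, not confirmed by this README change.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaistai/janus-7b"  # assumed Hub id; replace with the actual repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The system message encodes the "multifaceted" preference the response should follow.
messages = [
    {
        "role": "system",
        "content": "You answer for a beginner who prefers short, step-by-step explanations with one concrete example.",
    },
    {"role": "user", "content": "What is a hash map and when would I use one?"},
]

# Assumes the tokenizer ships a chat template that accepts a system turn.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Changing only the system message (for example, to request an expert-level, terse answer) is the intended way to elicit a differently personalized response from the same prompt.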