Update README.md
README.md
CHANGED
@@ -25,6 +25,8 @@ We developed the Depth Up-Scaling technique. Built on the Llama2 architecture, S
Depth-Upscaled SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8x7B model. For detailed information, please refer to the experimental table.

SOLAR-10.7B is an ideal choice for fine-tuning. It offers robustness and adaptability for your fine-tuning needs, and our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements.

For full details of this model, please read our [paper](https://arxiv.org/submit/5313698).

# **Instruction Fine-Tuning Strategy**

We utilize state-of-the-art instruction fine-tuning methods, including supervised fine-tuning (SFT) and direct preference optimization (DPO) [1].
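The card itself does not include training code, but for readers unfamiliar with DPO, the sketch below shows roughly what such a preference-optimization step can look like using Hugging Face's TRL `DPOTrainer` (TRL ~0.7-era API). The base-model checkpoint, dataset file, and hyperparameters here are illustrative assumptions, not Upstage's actual recipe.

```python
# Hypothetical DPO fine-tuning sketch (not Upstage's actual training code).
# Assumes TRL ~0.7 and a JSONL preference dataset with "prompt", "chosen",
# and "rejected" text fields.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "upstage/SOLAR-10.7B-v1.0"  # assumed pre-trained base checkpoint
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference data; any prompt/chosen/rejected pairs work here.
train_dataset = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    beta=0.1,  # strength of the KL penalty keeping the policy near the reference
    args=TrainingArguments(
        output_dir="solar-dpo",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        remove_unused_columns=False,  # DPOTrainer consumes the raw text columns itself
    ),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

In DPO, `beta` controls how strongly the fine-tuned policy is penalized for drifting from the frozen reference model; an SFT pass (e.g. with TRL's `SFTTrainer`) would normally precede this step.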
@@ -152,6 +154,21 @@ Hello, how can I assist you today? Please feel free to ask any questions or requ
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0): cc-by-nc-4.0
- Since some non-commercial datasets such as Alpaca are used for fine-tuning, we release this model as cc-by-nc-4.0.

## How to Cite

Please cite this model using the following format.

```bibtex
@misc{upstage2023solar,
      title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
      author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
      year={2023},
      eprint={Coming Soon},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### **The Upstage AI Team** ###

Upstage is creating the best LLM and DocAI. Please find more information at https://upstage.ai