---
language:
- en
license: cc-by-nc-4.0
base_model:
- upstage/SOLAR-10.7B-v1.0
datasets:
- c-s-ale/alpaca-gpt4-data
- Open-Orca/OpenOrca
- Intel/orca_dpo_pairs
- allenai/ultrafeedback_binarized_cleaned
model-index:
- name: SOLAR-10.7B-Instruct-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.08
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=upstage/SOLAR-10.7B-Instruct-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=upstage/SOLAR-10.7B-Instruct-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.21
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=upstage/SOLAR-10.7B-Instruct-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.43
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=upstage/SOLAR-10.7B-Instruct-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=upstage/SOLAR-10.7B-Instruct-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.75
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=upstage/SOLAR-10.7B-Instruct-v1.0
name: Open LLM Leaderboard
---
<p align="left">
<a href="https://go.upstage.ai/solar-obt-hf-modelcardv1-instruct">
<img src="https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0/resolve/main/solar-api-banner.png" width="100%"/>
</a>
</p>
# **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**
**(This model is a fine-tuned version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) for single-turn conversation.)**
# **Introduction**
We introduce SOLAR-10.7B, an advanced large language model (LLM) with 10.7 billion parameters that demonstrates superior performance across various natural language processing (NLP) tasks. It is compact yet remarkably powerful, delivering state-of-the-art performance among models with fewer than 30 billion parameters.
We present depth up-scaling (DUS), a methodology for scaling LLMs that combines architectural modification with continued pretraining: we integrate Mistral 7B weights into the upscaled layers and then continue pretraining the entire model.
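As an illustration, depth up-scaling can be sketched as follows: duplicate the base model, drop a block of middle layers from each copy, and concatenate the remainders into a deeper model. The layer counts below follow the 32-layer to 48-layer configuration described in the paper; the code itself is a simplified sketch, not our training implementation.
```python
from copy import deepcopy

import torch.nn as nn
from transformers import AutoModelForCausalLM

# Simplified sketch of depth up-scaling (DUS); illustrative, not our training code.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
layers = base.model.layers        # 32 decoder layers in the base model
n, m = len(layers), 8             # drop m = 8 layers from each copy, as in the paper

# Copy A keeps the first n - m layers, copy B keeps the last n - m layers;
# concatenating them yields a 2(n - m) = 48-layer, ~10.7B-parameter model.
base.model.layers = nn.ModuleList(
    [deepcopy(layer) for layer in layers[: n - m]]
    + [deepcopy(layer) for layer in layers[m:]]
)
base.config.num_hidden_layers = len(base.model.layers)
# The up-scaled model is then continually pretrained before instruction tuning.
```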
SOLAR-10.7B delivers remarkable performance, outperforming models with up to 30B parameters and even surpassing the recent Mixtral 8x7B model. For detailed information, please refer to the evaluation table below.
SOLAR-10.7B is also an ideal base for fine-tuning, offering robustness and adaptability; our simple instruction fine-tuning of the pretrained model yields significant performance improvements.
For full details of this model, please read our [paper](https://arxiv.org/abs/2312.15166).
# **Instruction Fine-Tuning Strategy**
We utilize state-of-the-art instruction fine-tuning methods including supervised fine-tuning (SFT) and direct preference optimization (DPO) [1].
We used a mixture of the following datasets:
- c-s-ale/alpaca-gpt4-data (SFT)
- Open-Orca/OpenOrca (SFT)
- in-house generated data utilizing Metamath [2] (SFT, DPO)
- Intel/orca_dpo_pairs (DPO)
- allenai/ultrafeedback_binarized_cleaned (DPO)
To avoid data contamination, we did not use GSM8K samples when generating data, and, where applicable, we filtered out benchmark-related tasks using the task list below (an illustrative filtering snippet follows the list).
```python
filtering_task_list = [
'task228_arc_answer_generation_easy',
'ai2_arc/ARC-Challenge:1.0.0',
'ai2_arc/ARC-Easy:1.0.0',
'task229_arc_answer_generation_hard',
'hellaswag:1.1.0',
'task1389_hellaswag_completion',
'cot_gsm8k',
'cot_gsm8k_ii',
'drop:2.0.0',
'winogrande:1.1.0'
]
```
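As a purely illustrative example (the `task_name` field and the in-memory dataset below are assumptions, not our actual pipeline), such a list can be applied with the `datasets` library, reusing `filtering_task_list` from above:
```python
from datasets import Dataset

# Illustrative only: assumes each example carries a `task_name` field
# identifying its source task; the in-house pipeline may differ.
examples = Dataset.from_list([
    {"task_name": "cot_gsm8k", "instruction": "..."},
    {"task_name": "unrelated_task", "instruction": "..."},
])
filtered = examples.filter(lambda ex: ex["task_name"] not in filtering_task_list)
print(len(filtered))  # 1 -- the cot_gsm8k example is removed
```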
Using the datasets mentioned above, we applied SFT and iterative DPO training, a proprietary alignment strategy, to maximize the performance of our resulting model.
[1] Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C.D. and Finn, C., 2023. Direct preference optimization: Your language model is secretly a reward model. NeurIPS.
[2] Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J.T., Li, Z., Weller, A. and Liu, W., 2023. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284.
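For reference, the per-pair DPO objective from [1] can be expressed as a simple loss over sequence log-probabilities. The sketch below is a generic illustration (the log-probabilities are assumed to be pre-computed), not our proprietary iterative DPO implementation.
```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss [1] for a batch of (chosen, rejected) response pairs.

    Each argument is a 1-D tensor of summed per-sequence log-probabilities;
    `beta` controls how far the policy may drift from the reference model.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the reward margin of the chosen response over the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```
In practice, off-the-shelf trainers (e.g. the `trl` library) implement this objective; the sketch only shows the core loss behind the DPO stage.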
# **Data Contamination Test Results**
Recently, data contamination has been an issue for some models on the LLM leaderboard.
We note that we made every effort to exclude any benchmark-related datasets from training.
We also ensured the integrity of our model by conducting a data contamination test [3] that is also used by the HuggingFace team [4, 5].
Our results, with every `result < 0.1, %` score well below 0.9, indicate that our model is free from contamination.
*The data contamination test results for HellaSwag and Winogrande will be added once [3] supports them.*
| Model | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SOLAR-10.7B-Instruct-v1.0**| result < 0.1, %: 0.06 |result < 0.1, %: 0.15 | result < 0.1, %: 0.28 | result < 0.1, %: 0.70 |
[3] https://github.com/swj0419/detect-pretrain-code-contamination
[4] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06
[5] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230
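For context, the test in [3] builds on a Min-K% Prob-style membership statistic: it inspects how likely the least-probable tokens of a benchmark sample are under the model. The sketch below is our own simplified illustration (the choice of `k`, the model handling, and the scoring details are assumptions), not the exact procedure or thresholds used in [3].
```python
import torch

def min_k_percent_prob(text, model, tokenizer, k=0.2):
    """Average log-probability of the k% least likely tokens of `text`.

    Higher values suggest the text may have been seen during training.
    Simplified illustration of a Min-K% Prob-style check, not the script in [3].
    """
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probability of each token given its prefix.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_logprobs = logprobs.gather(1, ids[0, 1:, None]).squeeze(-1)
    n_lowest = max(1, int(k * token_logprobs.numel()))
    return token_logprobs.topk(n_lowest, largest=False).values.mean().item()
```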
# **Evaluation Results**
| Model | H6 (avg. of six leaderboard benchmarks) | Model Size |
|----------------------------------------|-------|------------|
| **SOLAR-10.7B-Instruct-v1.0** | **74.20** | **~ 11B** |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.62 | ~ 46.7B |
| 01-ai/Yi-34B-200K | 70.81 | ~ 34B |
| 01-ai/Yi-34B | 69.42 | ~ 34B |
| mistralai/Mixtral-8x7B-v0.1 | 68.42 | ~ 46.7B |
| meta-llama/Llama-2-70b-hf | 67.87 | ~ 70B |
| tiiuae/falcon-180B | 67.85 | ~ 180B |
| **SOLAR-10.7B-v1.0**                    | **66.04** | **~ 11B**  |
| Qwen/Qwen-14B                           | 65.86     | ~ 14B      |
| mistralai/Mistral-7B-Instruct-v0.2      | 65.71     | ~ 7B       |
| 01-ai/Yi-34B-Chat                       | 65.32     | ~ 34B      |
| meta-llama/Llama-2-70b-chat-hf          | 62.40     | ~ 70B      |
| mistralai/Mistral-7B-v0.1 | 60.97 | ~ 7B |
| mistralai/Mistral-7B-Instruct-v0.1 | 54.96 | ~ 7B |
# **Usage Instructions**
This model has been fine-tuned primarily for single-turn conversation, making it less suitable for multi-turn conversations such as chat.
### **Version**
Make sure you have the correct version of the transformers library installed:
```sh
pip install transformers==4.35.2
```
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the tokenizer and the model in fp16, sharding layers across available devices
tokenizer = AutoTokenizer.from_pretrained("Upstage/SOLAR-10.7B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
"Upstage/SOLAR-10.7B-Instruct-v1.0",
device_map="auto",
torch_dtype=torch.float16,
)
```
### **Conducting Single-Turn Conversation**
```python
# Build a single-turn prompt with the model's chat template, then generate a reply
conversation = [ {'role': 'user', 'content': 'Hello?'} ]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True, max_length=4096)
output_text = tokenizer.decode(outputs[0])
print(output_text)
```
Below is an example of the output.
```
<s> ### User:
Hello?
### Assistant:
Hello, how can I assist you today? Please feel free to ask any questions or request help with a specific task.</s>
```
### **License**
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0): apache-2.0
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0): cc-by-nc-4.0
- Since some non-commercial datasets such as Alpaca are used for fine-tuning, we release this model as cc-by-nc-4.0.
### **How to Cite**
Please cite this model using the following BibTeX entry.
```bibtex
@misc{kim2023solar,
title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
year={2023},
eprint={2312.15166},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### **The Upstage AI Team** ###
Upstage is creating the best LLM and DocAI. Please find more information at https://upstage.ai
### **Contact Us** ###
For any questions or suggestions, please use the discussion tab. If you would like to contact us directly, send an email to [contact@upstage.ai](mailto:contact@upstage.ai)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_upstage__SOLAR-10.7B-Instruct-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.20|
|AI2 Reasoning Challenge (25-Shot)|71.08|
|HellaSwag (10-Shot) |88.16|
|MMLU (5-Shot) |66.21|
|TruthfulQA (0-shot) |71.43|
|Winogrande (5-shot) |83.58|
|GSM8k (5-shot) |64.75|