* Steiner’s current post-training data does not include examples for multi-turn dialogues. The best-performing version of the Steiner model (based on [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B)) lacks the ability to handle multi-turn conversations. The open-source Steiner-preview model (based on [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)) is compatible with chat formats but is still not recommended for multi-turn dialogues.
* As with OpenAI o1-2024-09-12, using custom system prompts or modifying sampling parameters such as temperature is not recommended for Steiner. The model has not yet been trained on a diverse set of system prompts, and altering sampling parameters may corrupt the formatting of the reasoning tokens.
* The language composition of Steiner's post-training data is approximately 90% English and 10% Chinese, but the reasoning path data augmentation process used almost exclusively English. Therefore, while the model's final responses demonstrate a certain degree of language-following ability, the reasoning tokens may still be generated predominantly in English.
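
Taken together, the limitations above suggest a single-turn request with default sampling and no system message. The sketch below illustrates what such a request body looks like; the model identifier is a placeholder, not an official name.

```python
def build_request(question: str) -> dict:
    """Build a single-turn chat-completion request for a Steiner-style model.

    Deliberately minimal, per the limitations above: one user message only
    (no prior turns), no "system" role message, and no sampling-parameter
    overrides such as "temperature".
    """
    return {
        "model": "steiner-preview",  # placeholder model id
        "messages": [{"role": "user", "content": question}],
        "stream": True,
    }

request = build_request("How many r's are in the word 'strawberry'?")
```

This payload can be passed to any OpenAI-compatible endpoint (for example via `client.chat.completions.create(**request)`, as in the usage example earlier in this README).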

## Citation

If you find my work helpful, please consider citing it in your research or projects. Your acknowledgment would be greatly appreciated!

```
@misc{ji2024steiner,
    title = {A Small Step Towards Reproducing OpenAI o1: Progress Report on the Steiner Open Source Models},
    url = {https://medium.com/@peakji/b9a756a00855},
    author = {Yichao Ji},
    month = {October},
    year = {2024}
}
```