SijieCheng committed · Commit 622280a · Parent(s): fe405f0

Update README.md
# 📰 News

- [2023/12/10] We released the [OpenChat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210) model, with a 15-point improvement in coding.
- [2023/11/01] We released the [OpenChat-3.5-7B](https://huggingface.co/openchat/openchat_3.5) model, surpassing ChatGPT on various benchmarks 🔥.

# 💌 Contact

We are a student team at Tsinghua University working on OpenChat, a project that requires additional computing power or LLM API keys for further development. If you are interested in our project and would like to offer support, please feel free to reach out to us:

* Wang Guan [imonenext@gmail.com]
* Cheng Sijie [csj23@mails.tsinghua.edu.cn]

We look forward to hearing from you and collaborating on this exciting project!

# Acknowledgements (Continuously updating)

**Main Contributors:**

- [Xianyuan Zhan](https://scholar.google.com.hk/citations?user=pDMnGloAAAAJ&hl=zh-CN): Provided invaluable advice on paper writing.
- [Alpay Ariyak](https://github.com/alpayariyak): Responsible for data collection and the PR for `openchat-3.5-1210`, including updates to the model and organization cards.
- LDJ: Handled partial data collection for `openchat-3.5`.

**Sponsors:**

- [Sen Song](https://scholar.google.com/citations?user=cYgtRP4AAAAJ&hl=en) (Tsinghua University), [Yang Liu](https://nlp.csai.tsinghua.edu.cn/~ly/) (Tsinghua University), [01.AI Company](https://www.lingyiwanwu.com/en), [RunPod](https://www.runpod.io/), Changling Liu (GPT Desk Pte. Ltd.), Qiying Yu (Tsinghua University), AutoMeta (Alignment Lab AI).

**Special Thanks:**

- We express our profound gratitude to Alignment Lab AI, Nous Research, and Pygmalion AI for their significant contributions to data collection and model training. We extend special thanks to Xiangang Li, Baochang Ma, and Hao Wan from the 01.AI company for their generous provision of resources, and we thank Jianxiong Li and Peng Li at Tsinghua University for insightful discussions that have enriched our work.
- We acknowledge and thank the developers behind these projects: [Mistral](https://mistral.ai/), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), [Llama 2](https://ai.meta.com/llama/), [Self-Instruct](https://arxiv.org/abs/2212.10560), [FastChat (Vicuna)](https://github.com/lm-sys/FastChat), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca.git), and [StarCoder](https://github.com/bigcode-project/starcoder). Their work has been instrumental in driving our research forward.