Update README.md

README.md CHANGED

- 🔥🔥🔥 [7/25/2023] We released **WizardLM V1.2** models. The **WizardLM-13B-V1.2** is here ([Demo_13B-V1.2](https://b7a19878988c8c73.gradio.app), [Demo_13B-V1.2_bak-1](https://d0a37a76e0ac4b52.gradio.app/), [Full Model Weight](https://huggingface.co/WizardLM/WizardLM-13B-V1.2)). Please check out the [paper](https://arxiv.org/abs/2304.12244).

- 🔥🔥🔥 [7/25/2023] The **WizardLM-13B-V1.2** achieves **7.06** on the [MT-Bench Leaderboard](https://chat.lmsys.org/?leaderboard), **89.17%** on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/), and **101.4%** on the [WizardLM Eval](https://github.com/nlpxucan/WizardLM/blob/main/WizardLM/data/WizardLM_testset.jsonl). (Note: the MT-Bench and AlpacaEval scores are self-tested; we will push updates and request an official review. All tests were completed under the official settings.)
❗<b>Note on model system prompt usage:</b>

<b>WizardLM</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Hi
ASSISTANT: Hello.
USER: Who are you?
ASSISTANT: I am WizardLM.
......
```
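
For reference, here is a minimal sketch of how this prompt format could be assembled and used for generation with the Hugging Face `transformers` library. The model ID `WizardLM/WizardLM-13B-V1.2` comes from the weight link above; the `build_prompt` helper, the sampling settings, and the use of `device_map="auto"` (which requires `accelerate`) are illustrative assumptions, not part of this repository.

```python
# Sketch: render a Vicuna-style multi-turn history into a single prompt and generate a reply.
# Assumes `transformers`, `torch`, and `accelerate` are installed and enough GPU memory
# is available for a 13B model; adjust the loading options for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "WizardLM/WizardLM-13B-V1.2"  # from the Full Model Weight link above
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(turns):
    """Turn a list of (user, assistant) pairs into the Vicuna-style prompt.
    Pass None as the last assistant reply so the model continues after 'ASSISTANT:'."""
    prompt = SYSTEM
    for user_msg, assistant_msg in turns:
        prompt += f" USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}"
    return prompt

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

history = [("Hi", "Hello."), ("Who are you?", None)]
inputs = tokenizer(build_prompt(history), return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9
)
# Decode only the newly generated tokens (the assistant's latest reply).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```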

❗<b>To the common concern about the dataset:</b>

Recently, there have been clear changes in the open-source policies and regulations governing our organization's code, data, and models.

Despite this, we have still worked hard to get the model weights released first; the data requires stricter auditing and is still under review by our legal team.

Our researchers have no authority to release the data publicly without authorization.

Thank you for your understanding.