JosephusCheung
committed on
Commit • 106a5e0
1 Parent(s): ba595a9
Update README.md
README.md CHANGED
@@ -6,8 +6,11 @@ tags:
 - llama
 - llama2
 - qwen
+license: gpl-3.0
 ---
 
+Given the discontinuation of the Qwen model, I will provisionally assign the license for this model as GPL-3.0. It should be noted that the weights and tokenizer utilized in this model diverge from those of the Qwen model. The inference code employed originates from Meta LLaMA / Hugging Face Transformers. The inclusion of "qwen" in the repository name bears no significance and any similarity to other entities or concepts is purely coincidental.
+
 Advance notice regarding the deletion of Qwen:
 
 **I remain unaware as to the reasons behind Qwen's deletion. Should this repository be found in violation of any terms stipulated by Qwen that necessitate its removal, I earnestly request you to establish contact with me. I pledge to expunge all references to Qwen and maintain the tokenizer and associated weights as an autonomous model, inherently distinct from Qwen. I will then proceed to christen this model with a new identifier.**
@@ -24,4 +27,4 @@ The model has been edited to be white-labelled, meaning the model will no longer
 
 Up until now, the model has undergone numerical alignment of weights and preliminary reinforcement learning in order to align with the original model. Some errors and outdated knowledge have been addressed through model editing methods. This model remains completely equivalent to the original version, without having any dedicated supervised finetuning on downstream tasks or other extensive conversation datasets.
 
-PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
+PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
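For readers unfamiliar with the linked spec, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers, with the role name on the first line of the turn. A minimal illustrative prompt (the system and user text below is only an example, not taken from this repository) looks like:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```

Generation then continues from the final `<|im_start|>assistant` line until the model emits `<|im_end|>`.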