natolambert committed
Commit • 70788fb
Parent(s): 8af01af
Update README.md
README.md CHANGED
@@ -110,7 +110,7 @@ print(outputs[0]["generated_text"])
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-Zephyr-7B-β has not been aligned to human preferences
+Zephyr-7B-β has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
 It is also unknown what the size and composition of the corpus was used to train the base model (`mistralai/Mistral-7B-v0.1`), however it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.
 
 