Update README.md
README.md CHANGED
@@ -52,8 +52,12 @@ Greedy Decoding
**Conversation Template**: Please use the Llama 3 chat template for the best performance.

+**Limitations**: This model primarily understands and generates content in English. Its outputs may contain factual errors, logical inconsistencies, or reflect biases present in the training data. While the model aims to improve instruction following and helpfulness, it is not specifically designed for complex reasoning tasks, which can lead to suboptimal performance in these areas. Additionally, the model may produce unsafe or inappropriate content, as no specific safety training was implemented during the alignment process.
+
## 🧐 How to use it?

+[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/flydust/MagpieLM-8B)
+
Please update transformers to the latest version by running `pip install git+https://github.com/huggingface/transformers`.

You can then run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
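
A minimal sketch of the `pipeline` route, assuming the model is hosted at `flydust/MagpieLM-8B` (inferred from the Spaces badge above; substitute this card's actual repository ID) and greedy decoding as noted in the settings earlier:

```python
import torch
from transformers import pipeline

# Assumed model ID, taken from the Spaces link; replace with the real repository name.
model_id = "flydust/MagpieLM-8B"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-formatted input; the pipeline applies the model's (Llama 3) chat template.
messages = [
    {"role": "user", "content": "Who are you?"},
]

# do_sample=False gives greedy decoding.
outputs = pipe(messages, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"][-1]["content"])
```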
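Alternatively, a sketch of the Auto-class path with `generate()`, formatting the conversation through the Llama 3 chat template via `apply_chat_template()`; the same assumed model ID applies:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID, as above; replace with the real repository name.
model_id = "flydust/MagpieLM-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]

# apply_chat_template formats the messages with the Llama 3 chat template
# and appends the assistant generation prompt.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding; decode only the newly generated tokens.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```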