pdelobelle
committed on
Update README.md
README.md
CHANGED
@@ -140,6 +140,20 @@ Key improvements over Gemma-2B baseline:

**Note**: This is a base language model, not an instruction-tuned model. It is not optimized for chat or instruction following. For best results, use standard text completion rather than chat templates.

+Also make sure you have the SentencePiece tokenizer installed:
+
+```bash
+pip install sentencepiece
+```
+
+```python
+from transformers import pipeline
+pipe = pipeline("text-generation", model="flair/bueble-lm-2b")
+pipe("Ich bin")
+```
+
+Or with the full model API:
+
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

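The hunk ends mid-block here, so the rest of the "full model API" example is not shown. As a sketch only (assumed, not taken from the README's actual continuation), plain-completion usage with the standard transformers loading and `generate` calls would look roughly like this, reusing the `flair/bueble-lm-2b` model id from the pipeline example above:

```python
# Sketch only: typical full-model usage, not the README's own code.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "flair/bueble-lm-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Base model: use plain text completion, no chat template.
inputs = tokenizer("Ich bin", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```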