Update README.md
README.md
model = PeftModel.from_pretrained(model, "bertin-project/bertin-alpaca-lora-7b")
```

Until `PEFT` is fully supported in Hugging Face's pipelines, for generation we can either merge the LoRA weights into the LLaMA model weights or use the adapter's `generate()` method. Remember that the prompt still needs the English template:

```python
# Generate responses
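# A minimal sketch of the prompt-building step; the template text and helper
# name below are assumptions (the standard Alpaca instruction template), not
# taken from this README. The instruction itself can be in Spanish; only the
# surrounding template stays in English.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    # Wrap the user instruction in the English template before generation.
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = build_prompt("Escribe un poema corto sobre el mar.")
# The formatted prompt would then be tokenized and passed to the adapter's
# generate() method, e.g. (not run here):
# inputs = tokenizer(prompt, return_tensors="pt")
# output = model.generate(**inputs, max_new_tokens=256)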