Update README.md
README.md (changed)
@@ -49,7 +49,8 @@ If you wish to only download the adapter, you will find it in the model reposito
 The following code sample uses 4-bit quantization, you may load the model without it if you have enough VRAM.
 
 ```py
-from transformers import AutoTokenizer, AutoModelForCausalLM TrainingArguments, GenerationConfig
+from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments, GenerationConfig
+import torch
 model_name = "4i-ai/Llama-2-7b-alpaca-es"
 
 