bmarie4i committed
Commit 75bb0af · 1 Parent(s): 81a6a88

Update README.md
Files changed (1)
  1. README.md +2 -1

README.md CHANGED
@@ -49,7 +49,8 @@ If you wish to only download the adapter, you will find it in the model reposito
  The following code sample uses 4-bit quantization, you may load the model without it if you have enough VRAM.

  ```py
- from transformers import AutoTokenizer, AutoModelForCausalLM TrainingArguments, GenerationConfig
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments, GenerationConfig
+ import torch
  model_name = "4i-ai/Llama-2-7b-alpaca-es"

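For context, a minimal sketch of how the imports added in this commit are typically combined to load the model in 4-bit. The quantization settings (`bnb_4bit_compute_dtype`, `device_map`) are illustrative assumptions, not part of the commit, which only touches the import lines:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig


def load_4bit(model_name: str = "4i-ai/Llama-2-7b-alpaca-es"):
    """Load the tokenizer and a 4-bit quantized model (requires a CUDA GPU).

    The config values below are assumptions for illustration; the commit
    itself only adds the BitsAndBytesConfig and torch imports.
    """
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                     # quantize weights to 4 bits
        bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
    )
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=bnb_config,  # skip this kwarg to load full-precision
        device_map="auto",               # let accelerate place layers
    )
    return tokenizer, model
```

As the README text notes, with enough VRAM the `quantization_config` argument can simply be omitted to load the model without quantization.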