machineteacher committed
Commit c16ab8d
1 Parent(s): 3ff80f6

Update README.md

Add dependencies docs

Files changed (1): README.md +11 -1
README.md CHANGED
@@ -104,6 +104,16 @@ Note that the tokens and the task description need not be in the language of the
 
 ### Run the model
 
+**Make sure you have the following libraries installed:**
+```
+- peft
+- protobuf
+- sentencepiece
+- tokenizers
+- torch
+- transformers
+```
+
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
@@ -119,7 +129,7 @@ inputs = tokenizer(prompt, return_tensors='pt')
 
 outputs = model.generate(**inputs, max_new_tokens=20)
 
-print(tokenizer.decode(outputs[0], skip_special_tokens=True)
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 
 # --> I have a small cat ,
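The dependency list added in this commit can be checked programmatically before running the model. The snippet below is a minimal sketch (not part of the commit) that reports which packages from a list are not importable. Note that an import name can differ from the pip package name: for instance, the `protobuf` distribution is imported as `google.protobuf`.

```python
import importlib.util

def missing_packages(packages):
    """Return the names from `packages` that are not importable."""
    missing = []
    for name in packages:
        try:
            # find_spec returns None when the module cannot be located
            if importlib.util.find_spec(name) is None:
                missing.append(name)
        except ModuleNotFoundError:
            # find_spec raises if a parent package (e.g. `google`) is absent
            missing.append(name)
    return missing

# Import names corresponding to the README's dependency list;
# `protobuf` installs under the import name `google.protobuf`.
deps = ["peft", "google.protobuf", "sentencepiece",
        "tokenizers", "torch", "transformers"]
print(missing_packages(deps))  # an empty list means everything is installed
```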