RaushanTurganbay (HF staff) committed
Commit 1090956 · verified · 1 Parent(s): 3e9e865

update for chat template

Files changed (1)
  1. README.md +19 -0
README.md CHANGED
@@ -151,6 +151,25 @@ conversation = [
 ]
 ```
 
+-----------
+From `transformers>=v4.48`, you can also pass an image URL or a local path in the conversation history and let the chat template handle the rest.
+The chat template will load the image for you and return the inputs as `torch.Tensor`, which you can pass directly to `model.generate()`.
+
+```python
+messages = [
+    {
+        "role": "user",
+        "content": [
+            {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
+            {"type": "text", "text": "What is shown in this image?"},
+        ],
+    },
+]
+
+inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
+output = model.generate(**inputs, max_new_tokens=50)
+```
+
 ### Model optimization
 
 #### 4-bit quantization through `bitsandbytes` library
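
For reference, a minimal sketch of turning the `output` from the snippet added above back into text; it assumes `model` and `processor` are loaded as shown earlier in this README and uses the processor's standard `batch_decode`:

```python
# The generated sequence contains the prompt tokens followed by the new tokens,
# so slice off the prompt length before decoding to print only the model's answer.
generated_ids = output[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```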