RaushanTurganbay (HF staff) committed
Commit 0f07e6c · verified · 1 Parent(s): 2a45bd9

update for chat template

Files changed (1):
  1. README.md +19 -0
README.md CHANGED
@@ -150,6 +150,25 @@ conversation = [
  ]
  ```

+ -----------
+ From transformers>=v4.48, you can also pass an image URL or a local path in the conversation history and let the chat template handle the rest.
+ The chat template will load the image for you and return the inputs as `torch.Tensor`, which you can pass directly to `model.generate()`.
+
+ ```python
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
+             {"type": "text", "text": "What is shown in this image?"},
+         ],
+     },
+ ]
+
+ inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt")
+ output = model.generate(**inputs, max_new_tokens=50)
+ ```
+
  ### Model optimization

  #### 4-bit quantization through `bitsandbytes` library
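
The added snippet stops at `model.generate()`, which returns token IDs. As a minimal follow-up sketch, assuming `inputs`, `output`, `model`, and `processor` are the objects from the snippet above (loaded earlier in the README), the generated IDs can be decoded back to text roughly like this:

```python
# Hedged sketch: decode only the newly generated tokens, skipping the prompt.
# Assumes `inputs` and `output` come from the chat-template example above.
prompt_len = inputs["input_ids"].shape[-1]
answer = processor.decode(output[0][prompt_len:], skip_special_tokens=True)
print(answer)
```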