---
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-1.7B-Instruct
- google/siglip-so400m-patch14-384
---

## SmolVLM

SmolVLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images. Text and images can be interleaved arbitrarily, enabling tasks like image captioning, visual question answering, and storytelling based on visual content. The model does not support image generation.

To fine-tune SmolVLM on a specific task, you can follow the fine-tuning tutorial.
<!-- todo: add link to fine-tuning tutorial -->

### Technical Summary

SmolVLM leverages the lightweight SmolLM2 language model to provide a compact yet powerful multimodal experience. It introduces several changes compared to previous Idefics models:

- **Image compression:** We introduce more aggressive image compression than in Idefics3, enabling faster inference and lower RAM usage.
- **Visual Token Encoding:** SmolVLM uses 81 visual tokens to encode image patches of size 384×384. Larger images are divided into patches, each encoded separately, enhancing efficiency without compromising performance.
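
As a back-of-the-envelope illustration of the encoding above, the visual token budget grows with the number of 384×384 patches. This sketch assumes each patch costs 81 tokens and ignores any additional global/overview image the processor may add; the exact patching logic lives in the processor:

```python
import math

def visual_token_estimate(width: int, height: int,
                          patch_size: int = 384, tokens_per_patch: int = 81) -> int:
    """Rough estimate: each 384x384 patch is encoded into 81 visual tokens."""
    patches = math.ceil(width / patch_size) * math.ceil(height / patch_size)
    return patches * tokens_per_patch

# A single 384x384 image fits in one patch: 81 tokens.
print(visual_token_estimate(384, 384))    # 81
# A 1536x1536 input is a 4x4 grid of patches: 16 * 81 = 1296 tokens.
print(visual_token_estimate(1536, 1536))  # 1296
```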

More details about the training and architecture are available in our technical report.

### How to get started

You can use transformers to load, run inference with, and fine-tune SmolVLM.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Load images
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://huggingface.co/spaces/merve/chameleon-7b/resolve/main/bee.jpg")

# Initialize processor and model
processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceTB/SmolVLM-Instruct",
    torch_dtype=torch.bfloat16,
    _attn_implementation="flash_attention_2" if DEVICE == "cuda" else "eager",
).to(DEVICE)

# Create input messages
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "Can you describe the two images?"}
        ]
    },
]

# Prepare inputs
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
inputs = inputs.to(DEVICE)

# Generate outputs
generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(
    generated_ids,
    skip_special_tokens=True,
)

print(generated_texts[0])
"""
Assistant: The first image shows a green statue of the Statue of Liberty standing on a stone pedestal in front of a body of water.
The statue is holding a torch in its right hand and a tablet in its left hand. The water is calm and there are no boats or other objects visible.
The sky is clear and there are no clouds. The second image shows a bee on a pink flower.
The bee is black and yellow and is collecting pollen from the flower. The flower is surrounded by green leaves.
"""
```

### Model optimizations

**Precision**: For better performance, load and run the model in half-precision (`torch.float16` or `torch.bfloat16`) if your hardware supports it.

```python
from transformers import AutoModelForVision2Seq
import torch

model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceTB/SmolVLM-Instruct",
    torch_dtype=torch.bfloat16
).to("cuda")
```

You can also load SmolVLM with 4/8-bit quantization using bitsandbytes, torchao or Quanto. Refer to [this page](https://huggingface.co/docs/transformers/en/main_classes/quantization) for other options.

```python
from transformers import AutoModelForVision2Seq, BitsAndBytesConfig
import torch

quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceTB/SmolVLM-Instruct",
    quantization_config=quantization_config,
)
```

**Vision Encoder Efficiency**: Adjust the image resolution by setting `size={"longest_edge": N*384}` when initializing the processor, where N is your desired value. The default `N=4` works well and results in input images of size 1536×1536. For documents, `N=5` might be beneficial. Decreasing N can save GPU memory and is appropriate for lower-resolution images. This is also useful if you want to fine-tune on videos.
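
The effect of `N` can be sketched as an aspect-preserving longest-edge resize. This is an illustration only; the processor's exact resampling and rounding may differ:

```python
def resize_longest_edge(width: int, height: int, n: int, base: int = 384):
    """Scale (width, height) so the longest edge equals n * 384, keeping aspect ratio."""
    target = n * base
    scale = target / max(width, height)
    return round(width * scale), round(height * scale)

# Default N=4 caps the longest edge at 1536 px.
print(resize_longest_edge(3000, 2000, n=4))  # (1536, 1024)
# N=5 keeps more detail for dense documents (longest edge 1920 px).
print(resize_longest_edge(3000, 2000, n=5))  # (1920, 1280)
```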