kimihailv committed on
Commit
1011db0
Parent: 9b514f0

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -47,8 +47,8 @@ The generative model can be used to caption images, answer questions about them.
 ```python
 from transformers import AutoModel, AutoProcessor
 
-model = AutoModel.from_pretrained("unum-cloud/uform-gen2-qwen-halfB", trust_remote_code=True)
-processor = AutoProcessor.from_pretrained("unum-cloud/uform-gen2-qwen-halfB", trust_remote_code=True)
+model = AutoModel.from_pretrained("unum-cloud/uform-gen2-qwen-500m", trust_remote_code=True)
+processor = AutoProcessor.from_pretrained("unum-cloud/uform-gen2-qwen-500m", trust_remote_code=True)
 
 prompt = "Question or Instruction"
 image = Image.open("image.jpg")
@@ -76,7 +76,7 @@ For captioning evaluation we measure CLIPScore and RefCLIPScore¹.
 
 | Model                               | LLM Size |   SQA |    MME | MMBench | Average¹ |
 | :---------------------------------- | -------: | -----:| ------:| --------:| --------:|
-| UForm-Gen2-Qwen-halfB               |     0.5B |  45.5 |  880.1 |    42.0 |    29.31 |
+| UForm-Gen2-Qwen-500m                |     0.5B |  45.5 |  880.1 |    42.0 |    29.31 |
 | MobileVLM v2                        |     1.4B |  52.1 | 1302.8 |    57.7 |    36.81 |
 | LLaVA-Phi                           |     2.7B |  68.4 | 1335.1 |    59.8 |    42.95 |