Commit fb0edb1 by teowu (1 parent: f094187)

Update README.md

Files changed (1):
  1. README.md +7 −5
README.md CHANGED

````diff
@@ -1,14 +1,16 @@
 ## Performance
 
+*Updated Feb 1st.*
+
 ### Low-level Question-Answering
 
-This model has reached 75.12\%(*12\% better than previous version*)/74.98\%(*8.5\% better than previous version*) on Q-Bench A1 *dev/test* (multi-choice questions).
+This model has reached 75.90\%(*13\% better than previous version*)/76.52\%(*10\% better than previous version*) on Q-Bench A1 *dev/test* (multi-choice questions).
 
 It also outperforms the following close-source models with much larger model capacities:
 
 | Model | *dev* | *test* |
 | ---- | ---- | ---- |
-| **Co-Instruct-Preview (mPLUG-Owl2) (This Model)** | **75.12\%** | **74.98\%** |
+| **Co-Instruct-Preview (mPLUG-Owl2) (This Model)** | **75.90\%** | **76.52\%** |
 | \*GPT-4V-Turbo | 74.41\% | 74.10\% |
 | \*Qwen-VL-**Max** | 73.63\% | 73.90\% |
 | \*GPT-4V (Nov. 2023) | 71.78\% | 73.44\% |
@@ -23,8 +25,8 @@ It also outperforms the following close-source models with much larger model cap
 
 | Model | live | agi | livec | test_spaq | csiq | test_kadid | test_koniq | konvid | maxwell_test |
 |--------------------------|--------------|--------------|-------------|-------------|-------------|-------------|-------------|-------------|--------------|
-|**Co-Instruct-Preview (mPLUG-Owl2) (This Model)** | **0.771/0.751** | **0.727/0.749** | **0.861/0.865** | **0.946/0.938** | **0.735/0.748** | **0.782/0.770** | **0.908/0.941** | **0.818/0.790** | **0.735/0.714** |
-| Q-Instruct (mPLUG-Owl2, Nov. 2023) | 0.749/0.747 | 0.710/0.753 | 0.781/0.791 | 0.921/0.917 | 0.693/0.723 | 0.670/0.665 | 0.904/0.921 | 0.766/0.738 | 0.650/0.649 |
+|**Co-Instruct-Preview (mPLUG-Owl2) (This Model)** | **0.803/0.756** | **0.719**/0.732 | **0.827/0.835** | **0.946/0.937** | **0.711/0.727** | **0.782/0.766** | 0.886/**0.935** | **0.818/0.790** | **0.735/0.714** |
+| Q-Instruct (mPLUG-Owl2, Nov. 2023) | 0.749/0.747 | 0.710/**0.753** | 0.781/0.791 | 0.921/0.917 | 0.693/0.723 | 0.670/0.665 | **0.904**/0.921 | 0.766/0.738 | 0.650/0.649 |
 
 
 We are also constructing multi-image benchmark sets (image pairs, triple-quadruple images), and the results on multi-image benchmarks will be released soon!
@@ -38,7 +40,7 @@ from transformers import AutoModelForCausalLM
 model = AutoModelForCausalLM.from_pretrained("q-future/co-instruct-preview",
                                              trust_remote_code=True,
                                              torch_dtype=torch.float16,
-                                             attn_implementation="flash_attention_2",
+                                             attn_implementation="eager",
                                              device_map={"":"cuda:0"})
 ```
 
````
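The paired numbers in the updated IQA table (e.g. `0.803/0.756`) read like the SRCC/PLCC reporting convention common in image-quality-assessment work; that interpretation is mine, not stated in the diff. A minimal sketch of how such a pair would be computed from model quality scores and human mean opinion scores, assuming `scipy` is available (all data below is made up for illustration):

```python
from scipy.stats import spearmanr, pearsonr

# Hypothetical model-predicted quality scores and ground-truth MOS values
predicted = [0.72, 0.41, 0.88, 0.15, 0.63]
mos = [3.9, 2.1, 4.5, 1.2, 3.4]

srcc, _ = spearmanr(predicted, mos)  # rank correlation (monotonicity)
plcc, _ = pearsonr(predicted, mos)   # linear correlation (accuracy)
print(f"SRCC/PLCC: {srcc:.3f}/{plcc:.3f}")
```

For this toy data the rankings agree perfectly, so SRCC is 1.000 while PLCC stays slightly below; on real benchmarks like LIVE or KonIQ the two diverge more, which is why both are reported.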