---
license: mit
---


This model corresponds to [Compare2Score](https://compare2score.github.io/), an image quality assessment method that teaches a large multimodal model to compare images.

## Quick Start with AutoModel

With `transformers==4.36.1`, you can start an `AutoModel` scorer directly and rate an image, for example the one below:

![](https://raw.githubusercontent.com/Q-Future/Q-Align/main/fig/singapore_flyer.jpg)
```python
import torch
from transformers import AutoModelForCausalLM

# Load the Compare2Score model; the scoring code is fetched via trust_remote_code
model = AutoModelForCausalLM.from_pretrained("q-future/Compare2Score",
                                             trust_remote_code=True,
                                             attn_implementation="eager",
                                             torch_dtype=torch.float16,
                                             device_map="auto")

# Score an image by URL
image_path_url = "https://raw.githubusercontent.com/Q-Future/Q-Align/main/fig/singapore_flyer.jpg"
print("The quality score of this image is {}.".format(model.score(image_path_url)))
```
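If you need scores for several images in one session, the loaded model can be reused. The following is a minimal sketch that relies only on the `score` call shown above; the URL list is a placeholder for your own inputs:

```python
# Placeholder list of image URLs; substitute your own inputs.
image_urls = [
    "https://raw.githubusercontent.com/Q-Future/Q-Align/main/fig/singapore_flyer.jpg",
]

for url in image_urls:
    # Reuse the already-loaded model for each image
    print("The quality score of {} is {}".format(url, model.score(url)))
```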

## Evaluation with the GitHub Repository
```shell
git clone https://github.com/Q-Future/Compare2Score.git
cd Compare2Score
pip install -e .
```

```python
from q_align import Compare2Scorer

# Initialize the scorer
scorer = Compare2Scorer()

# Score a local image file
image_path = "figs/i04_03_4.bmp"
print("The quality score of this image is {}.".format(scorer(image_path)))
```
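
For dataset-level evaluation, the scorer can be applied to every image in a folder and the scores aggregated. This is a minimal sketch under two assumptions: the `figs/*.bmp` glob pattern is a placeholder for your own test set, and the scorer returns a numeric value, as the print statement above suggests:

```python
import glob
from q_align import Compare2Scorer

scorer = Compare2Scorer()

# Hypothetical folder of test images; adjust the glob pattern to your dataset.
scores = {}
for image_path in sorted(glob.glob("figs/*.bmp")):
    # Assumes the scorer returns a numeric quality score (see the example above).
    scores[image_path] = float(scorer(image_path))

mean_score = sum(scores.values()) / len(scores)
print("Mean quality score over {} images: {:.4f}".format(len(scores), mean_score))
```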

## Citation

```bibtex
@article{zhu2024adaptive,
  title={Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare},
  author={Zhu, Hanwei and Wu, Haoning and Li, Yixuan and Zhang, Zicheng and Chen, Baoliang and Zhu, Lingyu and Fang, Yuming and Zhai, Guangtao and Lin, Weisi and Wang, Shiqi},
  journal={arXiv preprint arXiv:2405.19298},
  year={2024},
}
```