---
license: apache-2.0
datasets:
- Set5
- Div2K
language:
- en
tags:
- RyzenAI
- PAN
- Pytorch
- Super Resolution
- Vision
pipeline_tag: image-to-image
---
## Model description
PAN is a lightweight image super-resolution method with pixel attention. It was introduced in the paper [Efficient Image Super-Resolution Using Pixel Attention](https://arxiv.org/abs/2010.01073) by Hengyuan Zhao et al. and first released in [this repository](https://github.com/zhaohengyuan1/PAN).
We changed the negative slope of the leaky ReLU in the original model and replaced the sigmoid activation with a hard sigmoid to make the model compatible with [AMD Ryzen AI](https://onnxruntime.ai/docs/execution-providers/Vitis-AI-ExecutionProvider.html). We loaded the published model parameters and fine-tuned them on the DIV2K dataset.
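For illustration, here is a minimal PyTorch sketch of a pixel attention block with the hard sigmoid gate described above (the class name, channel count, and layer layout are illustrative assumptions, not copied from the original repository):
```python
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    """Pixel attention: a 1x1 conv produces a per-pixel, per-channel gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        # Hard sigmoid in place of the original sigmoid, for Ryzen AI compatibility.
        self.gate = nn.Hardsigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(self.conv(x))

# Quick shape check
pa = PixelAttention(40)
print(pa(torch.randn(1, 40, 64, 64)).shape)  # torch.Size([1, 40, 64, 64])
```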
## Intended uses & limitations
You can use the raw model for super resolution. See the [model hub](https://huggingface.co/models?search=amd/pan) to look for all available PAN models.
## How to use
### Installation
Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI.
Run the following script to install pre-requisites for this model.
```bash
pip install -r requirements.txt
```
### Data Preparation (optional: for accuracy evaluation)
1. Download the [benchmark](https://cv.snu.ac.kr/research/EDSR/benchmark.tar) dataset.
2. Unzip the dataset and put it under the project folder. Organize the dataset directory as follows (a small download helper is sketched after the tree):
```Plain
PAN
└── dataset
    └── benchmark
        ├── Set5
        │   ├── HR
        │   │   ├── baby.png
        │   │   ├── ...
        │   └── LR_bicubic
        │       └── X2
        │           ├── babyx2.png
        │           ├── ...
        ├── Set14
        ├── ...
```
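If you prefer to script the download, a minimal helper such as the following produces the layout above (a sketch, assuming the archive extracts to a top-level `benchmark/` folder as shown in the tree):
```python
import tarfile
import urllib.request
from pathlib import Path

URL = "https://cv.snu.ac.kr/research/EDSR/benchmark.tar"
dest = Path("dataset")
dest.mkdir(parents=True, exist_ok=True)

archive = dest / "benchmark.tar"
if not archive.exists():
    urllib.request.urlretrieve(URL, archive)  # download the benchmark archive

with tarfile.open(archive) as tar:
    tar.extractall(dest)  # yields dataset/benchmark/Set5, Set14, ...
```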
### Test & Evaluation
- Code snippet from [`infer_onnx.py`](infer_onnx.py) showing how to run the model (a sketch of the `tiling_inference` helper it uses follows the snippet):
```python
import argparse

import cv2
import numpy as np
import onnxruntime

# tiling_inference (the tiled SR helper) is defined earlier in infer_onnx.py

parser = argparse.ArgumentParser(description='PAN SR')
parser.add_argument('--onnx_path',
                    type=str,
                    default='PAN_int8.onnx',
                    help='Path to the ONNX model.')
parser.add_argument('--image_path',
                    type=str,
                    default='test_data/test.png',
                    help='Path to your input image.')
parser.add_argument('--output_path',
                    type=str,
                    default='test_data/sr.png',
                    help='Path to your output image.')
parser.add_argument('--provider_config',
                    type=str,
                    default='vaip_config.json',
                    help='Path of the config file for setting provider_options.')
parser.add_argument('--ipu', action='store_true', help='Use IPU for inference.')
args = parser.parse_args()

onnx_file_name = args.onnx_path
image_path = args.image_path
output_path = args.output_path

if args.ipu:
    providers = ["VitisAIExecutionProvider"]
    provider_options = [{"config_file": args.provider_config}]
else:
    providers = ['CPUExecutionProvider']
    provider_options = None

ort_session = onnxruntime.InferenceSession(onnx_file_name, providers=providers, provider_options=provider_options)

# Read the low-resolution image as an NCHW float32 tensor
lr = cv2.imread(image_path)[np.newaxis, :, :, :].transpose((0, 3, 1, 2)).astype(np.float32)
# Super-resolve tile by tile, then clip and convert back to an HWC uint8 image
sr = tiling_inference(ort_session, lr, 8, (56, 56))
sr = np.clip(sr, 0, 255)
sr = sr.squeeze().transpose((1, 2, 0)).astype(np.uint8)
cv2.imwrite(output_path, sr)
```
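`tiling_inference` runs the model on fixed-size tiles of the input and stitches the upscaled tiles back together, which keeps the ONNX input shape constant. A simplified sketch of the idea (assuming non-overlapping tiles and dimensions divisible by the tile size; the actual helper in `infer_onnx.py` also handles overlap and borders):
```python
import numpy as np

def tiled_sr(session, lr, tile=(56, 56), scale=2):
    """Run super-resolution tile by tile and stitch the results (simplified)."""
    _, c, h, w = lr.shape
    th, tw = tile
    out = np.zeros((1, c, h * scale, w * scale), dtype=np.float32)
    input_name = session.get_inputs()[0].name
    for y in range(0, h, th):
        for x in range(0, w, tw):
            patch = lr[:, :, y:y + th, x:x + tw]
            sr_patch = session.run(None, {input_name: patch})[0]
            out[:, :, y * scale:(y + th) * scale, x * scale:(x + tw) * scale] = sr_patch
    return out
```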
- Run inference for a single image
```bash
python infer_onnx.py --onnx_path PAN_int8.onnx --image_path /Path/To/Your/Image --ipu --provider_config Path\To\vaip_config.json
```
- Test accuracy of the quantized model
```bash
python eval_onnx.py --onnx_path PAN_int8.onnx --data_test Set5 --ipu --provider_config Path\To\vaip_config.json
```
Note: **vaip_config.json** is located in the Ryzen AI setup package (refer to [Installation](https://huggingface.co/amd/yolox-s#installation)).
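The accuracy numbers below are PSNR / SSIM on the benchmark sets. As a reference, here is a minimal sketch of the PSNR half of that comparison (the exact protocol in `eval_onnx.py`, e.g. Y-channel conversion and border cropping, may differ):
```python
import numpy as np

def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a super-resolved and a ground-truth image."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```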
### Performance
| Method          | Scale | FLOPs | Set5 (PSNR / SSIM) |
|-----------------|-------|-------|--------------------|
| PAN (float)     | X2    | 141G  | 38.00 / 0.961      |
| PAN_amd (float) | X2    | 141G  | 37.859 / 0.960     |
| PAN_amd (int8)  | X2    | 141G  | 37.18 / 0.952      |
- Note: FLOPs are calculated for an output resolution of 360x640.

## Citation
```bibtex
@inproceedings{zhao2020efficient,
title={Efficient image super-resolution using pixel attention},
author={Zhao, Hengyuan and Kong, Xiangtao and He, Jingwen and Qiao, Yu and Dong, Chao},
booktitle={European Conference on Computer Vision},
pages={56--72},
year={2020},
organization={Springer}
}
```