|
--- |
|
license: apache-2.0 |
|
tags: |
|
- RyzenAI |
|
- object-detection |
|
- vision |
|
- YOLO |
|
- PyTorch
|
datasets: |
|
- COCO |
|
metrics: |
|
- mAP |
|
--- |
|
# YOLOv5s model trained on COCO |
|
|
|
YOLOv5s is the small variant of the YOLOv5 model, trained on the COCO object detection dataset (118k annotated images) at resolution 640x640. It was released in [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5).
|
|
|
We developed a modified version that is supported by [AMD Ryzen AI](https://onnxruntime.ai/docs/execution-providers/Vitis-AI-ExecutionProvider.html).
|
|
|
|
|
## Model description |
|
|
|
YOLOv5 🚀 is the world's most loved vision AI, representing Ultralytics' open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.
|
|
|
|
|
## Intended uses & limitations |
|
|
|
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=amd/yolov5) for all available YOLOv5 models.
|
|
|
|
|
## How to use |
|
|
|
### Installation |
|
|
|
Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI. |
|
Run the following command to install the prerequisites for this model.
|
```bash
pip install -r requirements.txt
```
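
To verify the environment, you can check that the Ryzen AI execution provider is visible to ONNX Runtime (a quick hedged check; the provider name assumes the Vitis AI / Ryzen AI build of onnxruntime):

```python
# Sanity check, assuming the Ryzen AI build of onnxruntime is installed.
import onnxruntime

# 'VitisAIExecutionProvider' should appear in the printed list.
print(onnxruntime.get_available_providers())
```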
|
|
|
|
|
### Data Preparation (optional: for accuracy evaluation) |
|
|
|
The MSCOCO 2017 dataset contains 118,287 training images and 5,000 validation images.
|
|
|
Download the COCO dataset and arrange the directories as follows:
|
```plain
└── datasets
    └── coco
        ├── annotations
        │   ├── instances_val2017.json
        │   └── ...
        ├── labels
        │   └── val2017
        │       ├── 000000000139.txt
        │       ├── 000000000285.txt
        │       └── ...
        ├── images
        │   └── val2017
        │       ├── 000000000139.jpg
        │       └── 000000000285.jpg
        └── val2017.txt
```
|
1. Put the val2017 image folder under the images directory, or create a symbolic link to it.

2. The labels folder and val2017.txt above are generated by **general_json2yolo.py** (a minimal sketch of the conversion follows the YAML snippet below).

3. Modify coco.yaml as follows:
|
```yaml
path: /path/to/your/datasets/coco  # dataset root dir
train: train2017.txt  # train images (relative to 'path'), 118287 images
val: val2017.txt  # val images (relative to 'path'), 5000 images
```
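
For reference, here is a minimal sketch of the COCO-JSON-to-YOLO conversion that **general_json2yolo.py** performs. The actual script ships with this repo; the paths, the contiguous category remapping, and the listing format below are assumptions:

```python
# Hedged sketch: convert instances_val2017.json to YOLO-format labels.
import json
from collections import defaultdict
from pathlib import Path

root = Path("datasets/coco")  # assumed dataset root
data = json.load(open(root / "annotations" / "instances_val2017.json"))

images = {img["id"]: img for img in data["images"]}
# COCO category ids are sparse (1..90); YOLO expects contiguous 0-based ids.
cat_map = {c["id"]: i
           for i, c in enumerate(sorted(data["categories"], key=lambda c: c["id"]))}

labels = defaultdict(list)
for ann in data["annotations"]:
    if ann.get("iscrowd", 0):
        continue
    img = images[ann["image_id"]]
    x, y, w, h = ann["bbox"]  # top-left xywh in pixels
    # YOLO labels use normalized center-xywh.
    cx, cy = (x + w / 2) / img["width"], (y + h / 2) / img["height"]
    labels[img["file_name"]].append(
        f"{cat_map[ann['category_id']]} {cx:.6f} {cy:.6f} "
        f"{w / img['width']:.6f} {h / img['height']:.6f}"
    )

out_dir = root / "labels" / "val2017"
out_dir.mkdir(parents=True, exist_ok=True)
with open(root / "val2017.txt", "w") as listing:
    for img in images.values():
        lines = labels.get(img["file_name"], [])
        txt = Path(img["file_name"]).with_suffix(".txt").name
        (out_dir / txt).write_text("\n".join(lines) + ("\n" if lines else ""))
        listing.write(f"./images/val2017/{img['file_name']}\n")
```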
|
|
|
|
|
### Test & Evaluation |
|
|
|
- Code snippet from [`onnx_inference.py`](onnx_inference.py) showing how to run inference:
|
```python
import cv2
import numpy as np
import onnxruntime

# pre_process, post_process, non_max_suppression, scale_coords, make_parser,
# names, Colors, and Annotator are defined elsewhere in onnx_inference.py.

args = make_parser().parse_args()
onnx_path = args.model
onnx_model = onnxruntime.InferenceSession(onnx_path)
# Precomputed grids used to decode the raw detection-head outputs.
grid = np.load("./grid.npy", allow_pickle=True)
anchor_grid = np.load("./anchor_grid.npy", allow_pickle=True)
path = args.image_path
new_path = args.output_path
conf_thres, iou_thres, classes, agnostic_nms, max_det = 0.25, 0.45, None, False, 1000

img0 = cv2.imread(path)
img = pre_process(img0)
onnx_input = {onnx_model.get_inputs()[0].name: img}
onnx_output = onnx_model.run(None, onnx_input)
onnx_output = post_process(onnx_output)
pred = non_max_suppression(
    onnx_output[0], conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det
)
colors = Colors()
det = pred[0]
im0 = img0.copy()
annotator = Annotator(im0, line_width=2, example=str(names))
if len(det):
    # Rescale boxes from img_size to im0 size
    det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

    # Write results
    for *xyxy, conf, cls in reversed(det):
        c = int(cls)  # integer class
        label = f"{names[c]} {conf:.2f}"
        annotator.box_label(xyxy, label, color=colors(c, True))

# Stream results
im0 = annotator.result()
cv2.imwrite(new_path, im0)
```
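
The snippet relies on `pre_process` and `post_process` helpers defined in the same file. As a rough guide to what they do, here is a hedged sketch of the usual YOLOv5 letterbox preprocessing and grid/anchor decode; the exact output shapes, strides, and padding color are assumptions, so defer to the helpers shipped with this model. `grid` and `anchor_grid` are the module-level arrays loaded from grid.npy and anchor_grid.npy above:

```python
import cv2
import numpy as np

def pre_process(img0, new_shape=640, color=(114, 114, 114)):
    """Letterbox to new_shape, BGR->RGB, HWC->CHW, scale to [0, 1]."""
    h, w = img0.shape[:2]
    r = min(new_shape / h, new_shape / w)            # resize ratio
    nh, nw = round(h * r), round(w * r)              # unpadded size
    img = cv2.resize(img0, (nw, nh), interpolation=cv2.INTER_LINEAR)
    top, left = (new_shape - nh) // 2, (new_shape - nw) // 2
    img = cv2.copyMakeBorder(img, top, new_shape - nh - top,
                             left, new_shape - nw - left,
                             cv2.BORDER_CONSTANT, value=color)
    img = img[:, :, ::-1].transpose(2, 0, 1)         # BGR->RGB, HWC->CHW
    return np.ascontiguousarray(img, dtype=np.float32)[None] / 255.0

def post_process(outputs, strides=(8, 16, 32)):
    """Decode raw head outputs, assumed (bs, na, ny, nx, 85) per level."""
    z = []
    for i, out in enumerate(outputs):
        y = 1.0 / (1.0 + np.exp(-out))               # sigmoid
        y[..., 0:2] = (y[..., 0:2] * 2 - 0.5 + grid[i]) * strides[i]  # xy
        y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * anchor_grid[i]         # wh
        z.append(y.reshape(y.shape[0], -1, y.shape[-1]))
    return [np.concatenate(z, axis=1)]               # (bs, n_boxes, 85)
```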
|
|
|
- Run inference for a single image |
|
```bash
python onnx_inference.py -m ./yolov5s_qat.onnx -i /Path/To/Your/Image --ipu --provider_config /Path/To/Your/Provider_config
```
|
*Note: __vaip_config.json__ is located in the Ryzen AI setup package (refer to [Installation](#installation)).*
|
- Test accuracy of the quantized model |
|
```bash
python onnx_eval.py -m ./yolov5s_qat.onnx --ipu --provider_config /Path/To/Your/Provider_config
```
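
Under the hood, the `--ipu` and `--provider_config` flags select the Vitis AI execution provider when the ONNX Runtime session is created. A minimal hedged sketch of that wiring, with the provider-option key taken from the Vitis AI EP documentation linked above:

```python
import onnxruntime

# Assumption: vaip_config.json comes from the Ryzen AI setup package.
session = onnxruntime.InferenceSession(
    "./yolov5s_qat.onnx",
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": "/Path/To/Your/vaip_config.json"}],
)
```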
|
|
|
### Performance |
|
|
|
|Metric |Accuracy on IPU|
| :----: | :----: |
|AP\@0.50:0.95|0.356|
|
|
|
|
|
## Citation

```bibtex
@software{glenn_jocher_2021_5563715,
  author    = {Glenn Jocher et al.},
  title     = {{ultralytics/yolov5: v6.0 - YOLOv5n 'Nano' models,
                Roboflow integration, TensorFlow export, OpenCV
                DNN support}},
  month     = oct,
  year      = 2021,
  publisher = {Zenodo},
  version   = {v6.0},
  doi       = {10.5281/zenodo.5563715},
  url       = {https://doi.org/10.5281/zenodo.5563715}
}
```
|
|