---
license: apache-2.0
---

# YOLOv5: Target Detection

YOLOv5 is a one-stage object detection framework. Its architecture consists of four main parts: a backbone built from a modified CSPNet, a high-resolution feature-fusion neck based on FPN (Feature Pyramid Network), a pooling module based on SPP (Spatial Pyramid Pooling), and three detection heads that detect targets of different sizes.
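To make the SPP module concrete, here is a minimal NumPy sketch of its core idea: the same feature map is max-pooled at several kernel sizes (stride 1, "same" padding) and the results are concatenated along the channel axis. This is an illustration only; the real YOLOv5 SPP block also wraps the pooling in 1x1 convolutions, which are omitted here.

```python
import numpy as np

def max_pool2d_same(x, k):
    """Stride-1 max pooling with 'same' padding on a (C, H, W) array."""
    c, h, w = x.shape
    p = k // 2
    padded = np.full((c, h + 2 * p, w + 2 * p), -np.inf, dtype=x.dtype)
    padded[:, p:p + h, p:p + w] = x
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[:, i, j] = padded[:, i:i + k, j:j + k].max(axis=(1, 2))
    return out

def spp(x, kernels=(5, 9, 13)):
    """SPP core: concatenate the input with max-pooled copies of itself.

    The kernel sizes (5, 9, 13) are the ones used by YOLOv5's SPP layer;
    spatial size is preserved, channels grow by a factor of len(kernels) + 1.
    """
    return np.concatenate([x] + [max_pool2d_same(x, k) for k in kernels], axis=0)

feat = np.random.rand(2, 8, 8).astype(np.float32)
out = spp(feat)
print(out.shape)  # (8, 8, 8): 2 channels -> 2 * 4, spatial size unchanged
```

Because each pooling window contains its own center pixel, every pooled channel is pointwise greater than or equal to the original feature map.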

The YOLOv5 model can be found here


## Source Model

The steps below follow the official YOLOv5 export tutorial to produce the source model in ONNX format.

The exported source model YOLOv5s.onnx can also be found here.

### Environment Preparation

```shell
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```

### Export to ONNX

```shell
python export.py --weights yolov5s.pt --include torchscript onnx --opset 12
```

## Performance

| Device | SoC | Runtime | Model | Size (pixels) | Inference Time (ms) | Precision | Compute Unit | Model Download |
|---|---|---|---|---|---|---|---|---|
| AidBox QCS6490 | QCS6490 | QNN | YOLOv5s (cutoff) | 640 | 6.7 | INT8 | NPU | model download |
| AidBox QCS6490 | QCS6490 | QNN | YOLOv5s (cutoff) | 640 | 15.2 | INT16 | NPU | model download |
| AidBox QCS6490 | QCS6490 | SNPE | YOLOv5s (cutoff) | 640 | 5.5 | INT8 | NPU | model download |
| AidBox QCS6490 | QCS6490 | SNPE | YOLOv5s (cutoff) | 640 | 13.4 | INT16 | NPU | model download |
| APLUX QCS8550 | QCS8550 | QNN | YOLOv5s (cutoff) | 640 | 4.1 | INT8 | NPU | model download |
| APLUX QCS8550 | QCS8550 | QNN | YOLOv5s (cutoff) | 640 | 13.4 | INT16 | NPU | model download |
| APLUX QCS8550 | QCS8550 | SNPE | YOLOv5s (cutoff) | 640 | 2.3 | INT8 | NPU | model download |
| APLUX QCS8550 | QCS8550 | SNPE | YOLOv5s (cutoff) | 640 | 5.8 | INT16 | NPU | model download |
| AidBox GS865 | QCS8250 | SNPE | YOLOv5s (cutoff) | 640 | 21 | INT8 | NPU | model download |
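A "cutoff" model has the final decode layers removed, so the application must turn the raw head outputs into boxes itself. The sketch below shows the standard YOLOv5 decode formulation in NumPy, assuming one head of shape `(num_anchors, H, W, 5 + num_classes)`; the anchor values shown are the stock YOLOv5s P3 (stride-8) anchors, which your converted model may or may not use.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_head(raw, anchors, stride):
    """Decode one raw YOLOv5 head output of shape (na, H, W, 5 + nc)
    into (na*H*W, 5 + nc) rows of (cx, cy, w, h, obj, cls...) in pixels.

    Uses the YOLOv5 v4+ formulation:
        xy = (sigmoid(t_xy) * 2 - 0.5 + grid) * stride
        wh = (sigmoid(t_wh) * 2) ** 2 * anchor
    """
    na, h, w, no = raw.shape
    gy, gx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    grid = np.stack((gx, gy), axis=-1)                    # (H, W, 2) cell offsets
    y = sigmoid(raw)
    xy = (y[..., 0:2] * 2.0 - 0.5 + grid) * stride
    wh = (y[..., 2:4] * 2.0) ** 2 * anchors.reshape(na, 1, 1, 2)
    return np.concatenate((xy, wh, y[..., 4:]), axis=-1).reshape(-1, no)

# Stock YOLOv5s stride-8 anchors (assumption: verify against your model's yaml).
anchors_p3 = np.array([[10, 13], [16, 30], [33, 23]], dtype=np.float32)

# All-zero logits decode to box centers at each cell center and wh == anchor.
boxes = decode_head(np.zeros((3, 2, 2, 6), dtype=np.float32), anchors_p3, stride=8)
print(boxes.shape)  # (12, 6)
```

Repeat the decode for each of the three heads (strides 8, 16, 32 with their own anchors), concatenate the rows, then apply confidence filtering and NMS.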

## Model Conversion

Demo models are converted with AIMO (AI Model Optimizer).

The demo model conversion steps on AIMO can be found below:

| Device | SoC | Runtime | Model | Size (pixels) | Precision | Compute Unit | AIMO Conversion Steps |
|---|---|---|---|---|---|---|---|
| AidBox QCS6490 | QCS6490 | QNN | YOLOv5s (cutoff) | 640 | INT8 | NPU | View Steps |
| AidBox QCS6490 | QCS6490 | QNN | YOLOv5s (cutoff) | 640 | INT16 | NPU | View Steps |
| AidBox QCS6490 | QCS6490 | SNPE | YOLOv5s (cutoff) | 640 | INT8 | NPU | View Steps |
| AidBox QCS6490 | QCS6490 | SNPE | YOLOv5s (cutoff) | 640 | INT16 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | QNN | YOLOv5s (cutoff) | 640 | INT8 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | QNN | YOLOv5s (cutoff) | 640 | INT16 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | SNPE | YOLOv5s (cutoff) | 640 | INT8 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | SNPE | YOLOv5s (cutoff) | 640 | INT16 | NPU | View Steps |
| AidBox GS865 | QCS8250 | SNPE | YOLOv5s (cutoff) | 640 | INT8 | NPU | View Steps |

## Inference

### Step 1: Convert model

a. Prepare the source model in ONNX format. The source model can be found here.

b. Log in to AIMO and convert the source model to the target format. The conversion settings can follow the AIMO Conversion Steps column in the Model Conversion table.

c. After the conversion task is done, download the target model file.

Note: you can skip the model conversion step and directly download a converted model from the Performance table.

### Step 2: Install AidLite SDK

The installation guide for the AidLite SDK can be found here.

### Step 3: Run demo program
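Whatever runtime the demo uses, the input frame must first go through YOLOv5's letterbox preprocessing: resize with the aspect ratio preserved, then pad to a square 640x640 canvas with gray (value 114). Below is a dependency-free NumPy sketch; the function name and the nearest-neighbour resize are illustrative (the official demos typically use OpenCV's bilinear `cv2.resize`).

```python
import numpy as np

def letterbox(img, new_shape=640, pad_value=114):
    """Resize keeping aspect ratio, then pad to (new_shape, new_shape).

    img: (H, W, 3) uint8 array. Returns the padded image plus
    (scale, pad_x, pad_y) needed to map detections back to the original.
    """
    h, w = img.shape[:2]
    r = min(new_shape / h, new_shape / w)
    nh, nw = int(round(h * r)), int(round(w * r))
    # Nearest-neighbour resize via index sampling (avoids a cv2 dependency).
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    pad_y = (new_shape - nh) // 2
    pad_x = (new_shape - nw) // 2
    canvas = np.full((new_shape, new_shape, 3), pad_value, dtype=img.dtype)
    canvas[pad_y:pad_y + nh, pad_x:pad_x + nw] = resized
    return canvas, r, pad_x, pad_y

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame
canvas, r, pad_x, pad_y = letterbox(frame)
print(canvas.shape, r, pad_x, pad_y)  # (640, 640, 3) 1.0 0 80
```

After letterboxing, scale pixel values to [0, 1] and transpose HWC to NCHW before feeding the model; map each detected box back to the original image with `x_orig = (x - pad_x) / r` and `y_orig = (y - pad_y) / r`.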