YOLOv8: Object Detection
The YOLO family is the canonical one-stage object detection approach: a single deep neural network pass both localizes and classifies objects, which makes it fast enough for real-time systems. YOLOv8 is the current flagship of the series, improving on earlier YOLO versions in both accuracy and speed.
The model can be found here.
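For orientation, here is a minimal sketch of running the stock YOLOv8s detector with the official ultralytics Python package. This is the upstream model, not the converted on-device artifact used below, and "bus.jpg" is a placeholder path.

```python
from ultralytics import YOLO

# Load pretrained YOLOv8s weights (downloaded automatically on first use).
model = YOLO("yolov8s.pt")

# Run detection on a sample image; "bus.jpg" is a placeholder.
results = model("bus.jpg")

for r in results:
    # Each result holds boxes as (x1, y1, x2, y2), confidences, and class ids.
    print(r.boxes.xyxy, r.boxes.conf, r.boxes.cls)
```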
CONTENTS
- Performance
- Model Conversion
- Inference
Performance
| Device | SoC | Runtime | Model | Size (pixels) | Inference Time (ms) | Precision | Compute Unit | Model Download |
|---|---|---|---|---|---|---|---|---|
| AidBox QCS6490 | QCS6490 | QNN | YOLOv8s (cutoff) | 640 | 11.1 | INT8 | NPU | model download |
| AidBox QCS6490 | QCS6490 | QNN | YOLOv8s (cutoff) | 640 | 24.8 | INT16 | NPU | model download |
| AidBox QCS6490 | QCS6490 | SNPE | YOLOv8s (cutoff) | 640 | 9.6 | INT8 | NPU | model download |
| AidBox QCS6490 | QCS6490 | SNPE | YOLOv8s (cutoff) | 640 | 22.1 | INT16 | NPU | model download |
| APLUX QCS8550 | QCS8550 | QNN | YOLOv8s (cutoff) | 640 | 8.7 | INT8 | NPU | model download |
| APLUX QCS8550 | QCS8550 | QNN | YOLOv8s (cutoff) | 640 | 20.3 | INT16 | NPU | model download |
| APLUX QCS8550 | QCS8550 | SNPE | YOLOv8s (cutoff) | 640 | 3.8 | INT8 | NPU | model download |
| APLUX QCS8550 | QCS8550 | SNPE | YOLOv8s (cutoff) | 640 | 9.3 | INT16 | NPU | model download |
| AidBox GS865 | QCS8250 | SNPE | YOLOv8s (cutoff) | 640 | 35 | INT8 | NPU | model download |
Model Conversion
The demo models are converted with AIMO (AI Model Optimizer).
The source model YOLOv8s.onnx can be found here.
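The hosted YOLOv8s.onnx can simply be downloaded from the link above. If you prefer to regenerate it yourself, an export along the following lines should produce an equivalent file (the exact flags used for the hosted model are an assumption here):

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")

# Export to ONNX at the 640-pixel input size used in the tables above;
# this writes yolov8s.onnx next to the weights file.
model.export(format="onnx", imgsz=640)
```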
The demo model conversion steps on AIMO can be found below:
| Device | SoC | Runtime | Model | Size (pixels) | Precision | Compute Unit | AIMO Conversion Steps |
|---|---|---|---|---|---|---|---|
| AidBox QCS6490 | QCS6490 | QNN | YOLOv8s (cutoff) | 640 | INT8 | NPU | View Steps |
| AidBox QCS6490 | QCS6490 | QNN | YOLOv8s (cutoff) | 640 | INT16 | NPU | View Steps |
| AidBox QCS6490 | QCS6490 | SNPE | YOLOv8s (cutoff) | 640 | INT8 | NPU | View Steps |
| AidBox QCS6490 | QCS6490 | SNPE | YOLOv8s (cutoff) | 640 | INT16 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | QNN | YOLOv8s (cutoff) | 640 | INT8 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | QNN | YOLOv8s (cutoff) | 640 | INT16 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | SNPE | YOLOv8s (cutoff) | 640 | INT8 | NPU | View Steps |
| APLUX QCS8550 | QCS8550 | SNPE | YOLOv8s (cutoff) | 640 | INT16 | NPU | View Steps |
| AidBox GS865 | QCS8250 | SNPE | YOLOv8s (cutoff) | 640 | INT8 | NPU | View Steps |
Inference
Step 1: Convert the model
a. Prepare the source model in ONNX format. The source model can be found here. (An optional local check of the ONNX file is sketched after these steps.)
b. Log in to AIMO and convert the source model to the target format, following the AIMO Conversion Steps column in the Model Conversion table above.
c. Once the conversion task is done, download the target model file.
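As mentioned in step a, it can help to verify the source ONNX model locally before uploading it to AIMO. Below is a minimal sketch using the onnxruntime package; the input name and shape shown in the comment are typical for a YOLOv8 export, but check what your file actually reports.

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov8s.onnx", providers=["CPUExecutionProvider"])

# Inspect the declared input; a YOLOv8 export typically reports
# something like name="images", shape=[1, 3, 640, 640].
inp = session.get_inputs()[0]
print(inp.name, inp.shape)

# Run one forward pass on random data to confirm the graph executes.
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
for out in outputs:
    print(out.shape)
```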
Step 2: Install the AidLite SDK
The AidLite SDK installation guide can be found here.
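The "(cutoff)" tag in the model names above presumably means the export is truncated before the post-processing head, so box decoding and NMS have to run on the CPU after NPU inference. The exact raw outputs depend on where the graph was cut. As a reference point, the sketch below decodes the standard full-head YOLOv8 output of shape (1, 84, 8400), i.e. 4 box values (cx, cy, w, h) plus 80 class scores with no separate objectness term, and applies class-agnostic NMS; adapt it to the actual cut point of your converted model.

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.45):
    """Greedy, class-agnostic NMS; boxes are (N, 4) as x1, y1, x2, y2."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the top box with all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-7)
        order = rest[iou < iou_thres]
    return np.array(keep, dtype=int)

def decode_yolov8(raw, conf_thres=0.25, iou_thres=0.45):
    """Decode raw output (1, 84, 8400): rows 0-3 = cx, cy, w, h; rows 4-83 = class scores."""
    pred = raw[0].T                      # (8400, 84)
    class_scores = pred[:, 4:]
    conf = class_scores.max(axis=1)      # best class confidence per candidate
    cls = class_scores.argmax(axis=1)
    mask = conf > conf_thres
    boxes, conf, cls = pred[mask, :4], conf[mask], cls[mask]
    # Convert center/size form to corner coordinates (input-pixel units).
    xy, wh = boxes[:, :2], boxes[:, 2:]
    boxes = np.concatenate([xy - wh / 2, xy + wh / 2], axis=1)
    keep = nms(boxes, conf, iou_thres)
    return boxes[keep], conf[keep], cls[keep]
```

Scale the resulting boxes back to the original image size according to whatever resize or letterbox preprocessing you applied before inference.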