|
--- |
|
license: apache-2.0 |
|
--- |
|
<h1>YOLOv8: Object Detection</h1>
|
|
|
<img src="https://cdn-uploads.huggingface.co/production/uploads/64c1fef5b9d81735a12c3fcc/7OYFgkPTO2Os98lS9Mvil.png" width=800> |
|
|
|
YOLO is the most representative family of one-stage object detection algorithms.
|
|
|
It uses a single deep neural network to locate and classify objects, runs fast, and is well suited to real-time systems. YOLOv8 is currently the most advanced model in the YOLO series, surpassing its predecessors in both accuracy and speed.
|
|
|
The original model repository can be found [here](https://github.com/ultralytics/ultralytics).
|
|
|
## CONTENTS |
|
- [Performance](#performance) |
|
- [Model Conversion](#model-conversion) |
|
- [Inference](#inference) |
|
|
|
## Performance
|
|
|
|Device|SoC|Runtime|Model|Size (pixels)|Inference Time (ms)|Precision|Compute Unit|Model Download|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AidBox QCS6490|QCS6490|QNN|YOLOv8s(cutoff)|640|11.1|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv8/blob/main/Models/QCS6490/cutoff_yolov8s_int8.qnn.serialized.bin)|
|AidBox QCS6490|QCS6490|QNN|YOLOv8s(cutoff)|640|24.8|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv8/blob/main/Models/QCS6490/cutoff_yolov8s_int16.qnn.serialized.bin)|
|AidBox QCS6490|QCS6490|SNPE|YOLOv8s(cutoff)|640|9.6|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv8/blob/main/Models/QCS6490/cutoff_yolov8s_int8_htp_snpe2.dlc)|
|AidBox QCS6490|QCS6490|SNPE|YOLOv8s(cutoff)|640|22.1|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv8/blob/main/Models/QCS6490/cutoff_yolov8s_int16_htp_snpe2.dlc)|
|APLUX QCS8550|QCS8550|QNN|YOLOv8s(cutoff)|640|8.7|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv8/blob/main/Models/QCS8550/cutoff_yolov8s_int8.qnn.serialized.bin)|
|APLUX QCS8550|QCS8550|QNN|YOLOv8s(cutoff)|640|20.3|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv8/blob/main/Models/QCS8550/cutoff_yolov8s_int16.qnn.serialized.bin)|
|APLUX QCS8550|QCS8550|SNPE|YOLOv8s(cutoff)|640|3.8|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv8/blob/main/Models/QCS8550/cutoff_yolov8s_int8_htp_snpe2.dlc)|
|APLUX QCS8550|QCS8550|SNPE|YOLOv8s(cutoff)|640|9.3|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv8/blob/main/Models/QCS8550/cutoff_yolov8s_int16_htp_snpe2.dlc)|
|AidBox GS865|QCS8250|SNPE|YOLOv8s(cutoff)|640|35|INT8|NPU|[model download]()|
|
|
|
## Model Conversion
|
|
|
The demo models are converted with [**AIMO (AI Model Optimizer)**](https://aidlux.com/en/product/aimo).
|
|
|
The source model **yolov8s.onnx** can be found [here](https://huggingface.co/aplux/YOLOv8/blob/main/yolov8s.onnx).
|
|
|
The demo model conversion steps on AIMO are listed below:
|
|
|
|Device|SoC|Runtime|Model|Size (pixels)|Precision|Compute Unit|AIMO Conversion Steps|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AidBox QCS6490|QCS6490|QNN|YOLOv8s(cutoff)|640|INT8|NPU|[View Steps](https://huggingface.co/aplux/YOLOv8/blob/main/AIMO/QCS6490/aimo_yolov8s_qnn_int8.png)|
|AidBox QCS6490|QCS6490|QNN|YOLOv8s(cutoff)|640|INT16|NPU|[View Steps](https://huggingface.co/aplux/YOLOv8/blob/main/AIMO/QCS6490/aimo_yolov8s_qnn_int16.png)|
|AidBox QCS6490|QCS6490|SNPE|YOLOv8s(cutoff)|640|INT8|NPU|[View Steps](https://huggingface.co/aplux/YOLOv8/blob/main/AIMO/QCS6490/aimo_yolov8s_snpe_int8.png)|
|AidBox QCS6490|QCS6490|SNPE|YOLOv8s(cutoff)|640|INT16|NPU|[View Steps](https://huggingface.co/aplux/YOLOv8/blob/main/AIMO/QCS6490/aimo_yolov8s_snpe_int16.png)|
|APLUX QCS8550|QCS8550|QNN|YOLOv8s(cutoff)|640|INT8|NPU|[View Steps](https://huggingface.co/aplux/YOLOv8/blob/main/AIMO/QCS8550/aimo_yolov8s_qnn_int8.png)|
|APLUX QCS8550|QCS8550|QNN|YOLOv8s(cutoff)|640|INT16|NPU|[View Steps](https://huggingface.co/aplux/YOLOv8/blob/main/AIMO/QCS8550/aimo_yolov8s_qnn_int16.png)|
|APLUX QCS8550|QCS8550|SNPE|YOLOv8s(cutoff)|640|INT8|NPU|[View Steps](https://huggingface.co/aplux/YOLOv8/blob/main/AIMO/QCS8550/aimo_yolov8s_snpe_int8.png)|
|APLUX QCS8550|QCS8550|SNPE|YOLOv8s(cutoff)|640|INT16|NPU|[View Steps](https://huggingface.co/aplux/YOLOv8/blob/main/AIMO/QCS8550/aimo_yolov8s_snpe_int16.png)|
|AidBox GS865|QCS8250|SNPE|YOLOv8s(cutoff)|640|INT8|NPU|[View Steps]()|
|
|
|
## Inference |
|
|
|
### Step 1: Convert the model
|
|
|
a. Prepare the source model in ONNX format. The source model can be downloaded [here](https://huggingface.co/aplux/YOLOv8/blob/main/yolov8s.onnx), or exported from the official weights as shown in the sketch after step c.
|
|
|
b. Log in to [AIMO](https://aidlux.com/en/product/aimo) and convert the source model to the target format. The conversion settings can follow the **AIMO Conversion Steps** column in the [Model Conversion](#model-conversion) table.
|
|
|
c. After the conversion task is done, download the target model file.
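
If you are starting from the official PyTorch weights rather than downloading the provided ONNX file, the source model for step a can be exported with the Ultralytics package. This is only a sketch: the `yolov8s.pt` weights name and the 640 input size follow the tables above, and the opset value is an assumption that may need to match AIMO's requirements.

```python
# Sketch: export the official YOLOv8s weights to ONNX for step a.
# Requires the Ultralytics package (pip install ultralytics).
from ultralytics import YOLO

model = YOLO("yolov8s.pt")                        # weights are downloaded automatically if missing
model.export(format="onnx", imgsz=640, opset=12)  # writes yolov8s.onnx next to the weights file
```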
|
|
|
### Step 2: Install AidLite SDK
|
|
|
The installation guide for the AidLite SDK can be found [here](https://huggingface.co/datasets/aplux/AIToolKit/blob/main/AidLite%20SDK%20Development%20Documents.md#installation).
|
|
|
### Step 3: Run the demo program
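
The on-device demo runs the converted model through the AidLite SDK; refer to the SDK documentation linked in Step 2 for the device-side API. As a lightweight sanity check before deployment, the sketch below runs the source **yolov8s.onnx** on CPU with `onnxruntime` and reproduces the usual YOLOv8 pre/post-processing (letterbox resize, confidence filtering, NMS). It is not the AidLite demo itself: the file name `test.jpg`, the thresholds, and the (1, 84, 8400) output layout of the exported ONNX model are assumptions, and the cutoff models in the tables above expose different output tensors, so the decode step will differ on device.

```python
# Sketch: CPU sanity check of YOLOv8 pre/post-processing with onnxruntime.
# This is NOT the AidLite on-device demo; it only verifies the pipeline.
import cv2
import numpy as np
import onnxruntime as ort

CONF_THRES, IOU_THRES, INPUT_SIZE = 0.25, 0.45, 640

def letterbox(img, size=INPUT_SIZE, pad=114):
    # Resize with unchanged aspect ratio and pad to a size x size square.
    h, w = img.shape[:2]
    r = size / max(h, w)
    nh, nw = round(h * r), round(w * r)
    canvas = np.full((size, size, 3), pad, dtype=np.uint8)
    canvas[:nh, :nw] = cv2.resize(img, (nw, nh))
    return canvas, r

img = cv2.imread("test.jpg")                        # BGR image of any size
inp, ratio = letterbox(img)
blob = cv2.cvtColor(inp, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
blob = np.ascontiguousarray(blob.transpose(2, 0, 1))[None]   # NCHW, (1, 3, 640, 640)

session = ort.InferenceSession("yolov8s.onnx", providers=["CPUExecutionProvider"])
out = session.run(None, {session.get_inputs()[0].name: blob})[0]

# Exported yolov8s.onnx output: (1, 84, 8400) = 4 box values (cx, cy, w, h) + 80 class scores.
pred = out[0].T                                     # (8400, 84)
scores = pred[:, 4:].max(axis=1)
class_ids = pred[:, 4:].argmax(axis=1)
keep = scores > CONF_THRES
boxes_cxcywh, scores, class_ids = pred[keep, :4], scores[keep], class_ids[keep]

# (cx, cy, w, h) -> (x, y, w, h) with the top-left corner, scaled back to the original image.
boxes = np.empty_like(boxes_cxcywh)
boxes[:, 0] = (boxes_cxcywh[:, 0] - boxes_cxcywh[:, 2] / 2) / ratio
boxes[:, 1] = (boxes_cxcywh[:, 1] - boxes_cxcywh[:, 3] / 2) / ratio
boxes[:, 2] = boxes_cxcywh[:, 2] / ratio
boxes[:, 3] = boxes_cxcywh[:, 3] / ratio

# Non-maximum suppression on the remaining boxes.
idx = cv2.dnn.NMSBoxes(boxes.tolist(), scores.tolist(), CONF_THRES, IOU_THRES)
for i in np.array(idx).flatten():
    x, y, w, h = boxes[i].astype(int)
    print(f"class={int(class_ids[i])} score={scores[i]:.2f} box=({x}, {y}, {w}, {h})")
```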
|
|