---
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/64c1fef5b9d81735a12c3fcc/sebNQgVO1hUapijWvVwTl.jpeg" width=600>
# YOLOv5: Object Detection
YOLOv5 is a one-stage object detection network. Its architecture consists of four main parts: a backbone built from a modified CSPNet, a feature-fusion neck based on FPN (Feature Pyramid Network), a pooling module based on SPP (Spatial Pyramid Pooling), and three detection heads that detect objects at different scales.
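As a rough illustration of that four-part layout (placeholder modules only, not the Ultralytics implementation), the forward pass can be sketched like this:
```python
import torch.nn as nn

class YOLOv5Sketch(nn.Module):
    """Schematic of the four-part layout; every module is a placeholder."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Identity()  # modified CSPNet feature extractor
        self.spp = nn.Identity()       # SPP pooling at the end of the backbone
        self.neck = nn.Identity()      # FPN feature fusion
        self.heads = nn.ModuleList(nn.Identity() for _ in range(3))  # 3 scales

    def forward(self, x):
        features = self.spp(self.backbone(x))
        fused = self.neck(features)
        return [head(fused) for head in self.heads]  # one output per scale
```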
The original YOLOv5 implementation can be found [here](https://github.com/ultralytics/yolov5).
## CONTENTS
- [Source Model](#source-model)
- [Performance](#performance)
- [Model Conversion](#model-conversion)
- [Inference](#inference)
## Source Model
Follow the [YOLOv5 export tutorial](https://docs.ultralytics.com/yolov5/tutorials/model_export/) to obtain the source model in ONNX format.
> The source model **yolov5s.onnx** can also be found [here](https://huggingface.co/aplux/YOLOv5/blob/main/yolov5s.onnx).
**Environment Preparation**
```bash
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
```
**Export to ONNX**
```bash
python export.py --weights yolov5s.pt --include torchscript onnx --opset 12
```
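After exporting, it is worth sanity-checking the ONNX file before uploading it for conversion. A minimal check with `onnxruntime` (an assumption; any ONNX-capable runtime works) might look like this:
```python
import numpy as np
import onnxruntime as ort

# Load the exported model and run a dummy 640x640 input through it.
session = ort.InferenceSession("yolov5s.onnx")
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = session.run(None, {input_name: dummy})
for out in outputs:
    print(out.shape)  # the default 640x640 yolov5s export yields (1, 25200, 85)
```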
## Performance
|Device|SoC|Runtime|Model|Size (pixels)|Inference Time (ms)|Precision|Compute Unit|Model Download|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AidBox QCS6490|QCS6490|QNN|YOLOv5s(cutoff)|640|6.7|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/Models/QCS6490/cutoff_yolov5s_int8.qnn.serialized.bin)|
|AidBox QCS6490|QCS6490|QNN|YOLOv5s(cutoff)|640|15.2|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/Models/QCS6490/cutoff_yolov5s_int16.qnn.serialized.bin)|
|AidBox QCS6490|QCS6490|SNPE|YOLOv5s(cutoff)|640|5.5|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/Models/QCS6490/cutoff_yolov5s_int8_htp_snpe2.dlc)|
|AidBox QCS6490|QCS6490|SNPE|YOLOv5s(cutoff)|640|13.4|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/Models/QCS6490/cutoff_yolov5s_int16_htp_snpe2.dlc)|
|APLUX QCS8550|QCS8550|QNN|YOLOv5s(cutoff)|640|4.1|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/Models/QCS8550/cutoff_yolov5s_640_int8.qnn.serialized.bin)|
|APLUX QCS8550|QCS8550|QNN|YOLOv5s(cutoff)|640|13.4|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/Models/QCS8550/cutoff_yolov5s_640_int16.qnn.serialized.bin)|
|APLUX QCS8550|QCS8550|SNPE|YOLOv5s(cutoff)|640|2.3|INT8|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/Models/QCS8550/cutoff_yolov5s_int8_htp_snpe2.dlc)|
|APLUX QCS8550|QCS8550|SNPE|YOLOv5s(cutoff)|640|5.8|INT16|NPU|[model download](https://huggingface.co/aidlux/YOLOv5/blob/main/Models/QCS8550/cutoff_yolov5s_int16_htp_snpe2.dlc)|
|AidBox GS865|QCS8250|SNPE|YOLOv5s(cutoff)|640|21|INT8|NPU|[model download]()|
## Model Conversion
The demo models were converted with [**AIMO (AI Model Optimizer)**](https://aidlux.com/en/product/aimo).
The AIMO conversion steps for each demo model are listed below:
|Device|SoC|Runtime|Model|Size (pixels)|Precision|Compute Unit|AIMO Conversion Steps|
|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
|AidBox QCS6490|QCS6490|QNN|YOLOv5s(cutoff)|640|INT8|NPU|[View Steps](https://huggingface.co/aplux/YOLOv5/blob/main/AIMO/QCS6490/aimo_yolov5s_qnn_int8.png)|
|AidBox QCS6490|QCS6490|QNN|YOLOv5s(cutoff)|640|INT16|NPU|[View Steps](https://huggingface.co/aplux/YOLOv5/blob/main/AIMO/QCS6490/aimo_yolov5s_qnn_int16.png)|
|AidBox QCS6490|QCS6490|SNPE|YOLOv5s(cutoff)|640|INT8|NPU|[View Steps](https://huggingface.co/aplux/YOLOv5/blob/main/AIMO/QCS6490/aimo_yolov5s_snpe_int8.png)|
|AidBox QCS6490|QCS6490|SNPE|YOLOv5s(cutoff)|640|INT16|NPU|[View Steps](https://huggingface.co/aplux/YOLOv5/blob/main/AIMO/QCS6490/aimo_yolov5s_snpe_int16.png)|
|APLUX QCS8550|QCS8550|QNN|YOLOv5s(cutoff)|640|INT8|NPU|[View Steps](https://huggingface.co/aplux/YOLOv5/blob/main/AIMO/QCS8550/aimo_yolov5s_qnn_int8.png)|
|APLUX QCS8550|QCS8550|QNN|YOLOv5s(cutoff)|640|INT16|NPU|[View Steps](https://huggingface.co/aplux/YOLOv5/blob/main/AIMO/QCS8550/aimo_yolov5s_qnn_int16.png)|
|APLUX QCS8550|QCS8550|SNPE|YOLOv5s(cutoff)|640|INT8|NPU|[View Steps](https://huggingface.co/aplux/YOLOv5/blob/main/AIMO/QCS8550/aimo_yolov5s_snpe_int8.png)|
|APLUX QCS8550|QCS8550|SNPE|YOLOv5s(cutoff)|640|INT16|NPU|[View Steps](https://huggingface.co/aplux/YOLOv5/blob/main/AIMO/QCS8550/aimo_yolov5s_snpe_int16.png)|
|AidBox GS865|QCS8250|SNPE|YOLOv5s(cutoff)|640|INT8|NPU|[View Steps]()|
## Inference
### Step 1: Convert Model
a. Prepare the source model in ONNX format. It can be downloaded [here](https://huggingface.co/aplux/YOLOv5/blob/main/yolov5s.onnx).
b. Log in to [AIMO](https://aidlux.com/en/product/aimo) and convert the source model to the target format, following the **AIMO Conversion Steps** column in the [Model Conversion](#model-conversion) table.
c. When the conversion task is done, download the target model file.
> Note: you can skip the conversion step and directly download a converted model from the [Performance](#performance) table.
### Step 2: Install AidLite SDK
The AidLite SDK installation guide can be found [here](https://huggingface.co/datasets/aplux/AIToolKit/blob/main/AidLite%20SDK%20Development%20Documents.md#installation).
### Step 3: Run the Demo Program
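The demo program itself depends on the AidLite SDK runtime API, so refer to the guide above for the exact inference calls. The pre- and post-processing around those calls can be sketched runtime-independently. The NumPy sketch below is an illustration under assumptions, not the shipped demo: it letterboxes an image to 640x640 and decodes a `(25200, 85)` YOLOv5 prediction array (`[cx, cy, w, h, obj, 80 class scores]`); the cutoff models emit raw per-scale feature maps instead, so they need an extra grid/anchor decode before this step.
```python
import cv2
import numpy as np

def letterbox(img, size=640, pad_value=114):
    """Resize keeping aspect ratio, then pad to a size x size canvas."""
    h, w = img.shape[:2]
    r = size / max(h, w)
    nh, nw = int(round(h * r)), int(round(w * r))
    canvas = np.full((size, size, 3), pad_value, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = cv2.resize(img, (nw, nh))
    return canvas, r, (left, top)

def postprocess(pred, conf_thres=0.25, iou_thres=0.45):
    """pred: (N, 85) array; returns kept boxes (xyxy), scores, class ids."""
    scores = pred[:, 4:5] * pred[:, 5:]          # objectness * class prob
    cls_ids = scores.argmax(1)
    conf = scores.max(1)
    m = conf > conf_thres
    boxes, conf, cls_ids = pred[m, :4], conf[m], cls_ids[m]
    xyxy = np.empty_like(boxes)                  # cxcywh -> xyxy
    xyxy[:, :2] = boxes[:, :2] - boxes[:, 2:] / 2
    xyxy[:, 2:] = boxes[:, :2] + boxes[:, 2:] / 2
    keep, order = [], conf.argsort()[::-1]       # greedy class-agnostic NMS
    while order.size:
        i, rest = order[0], order[1:]
        keep.append(i)
        lt = np.maximum(xyxy[i, :2], xyxy[rest, :2])
        rb = np.minimum(xyxy[i, 2:], xyxy[rest, 2:])
        inter = np.clip(rb - lt, 0, None).prod(1)
        areas = (xyxy[:, 2] - xyxy[:, 0]) * (xyxy[:, 3] - xyxy[:, 1])
        iou = inter / (areas[i] + areas[rest] - inter + 1e-9)
        order = rest[iou < iou_thres]
    return xyxy[keep], conf[keep], cls_ids[keep]
```
A real deployment would also map the kept boxes back to the original image coordinates using the `r` and `(left, top)` values returned by `letterbox`.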