qaihm-bot committed
Commit 104c5dc · verified · 1 Parent(s): 71e82da

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +51 -208
README.md CHANGED
@@ -1,10 +1,10 @@
  ---
  library_name: pytorch
  license: agpl-3.0
- pipeline_tag: object-detection
  tags:
  - real_time
  - android
+ pipeline_tag: object-detection

  ---
 
@@ -19,10 +19,7 @@ Ultralytics YOLOv8 is a machine learning model that predicts bounding boxes and
  This model is an implementation of YOLOv8-Detection found [here](https://github.com/ultralytics/ultralytics/tree/main/ultralytics/models/yolo/detect).


- This repository provides scripts to run YOLOv8-Detection on Qualcomm® devices.
- More details on model performance across various devices, can be found
- [here](https://aihub.qualcomm.com/models/yolov8_det).
-
+ More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/yolov8_det).

  ### Model Details
 
@@ -35,209 +32,36 @@ More details on model performance across various devices, can be found

  | Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
  |---|---|---|---|---|---|---|---|---|
- | YOLOv8-Detection | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 5.178 ms | 0 - 17 MB | FP16 | NPU | [YOLOv8-Detection.tflite](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.tflite) |
- | YOLOv8-Detection | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 5.26 ms | 5 - 19 MB | FP16 | NPU | [YOLOv8-Detection.so](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.so) |
- | YOLOv8-Detection | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 6.202 ms | 1 - 35 MB | FP16 | NPU | [YOLOv8-Detection.onnx](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.onnx) |
- | YOLOv8-Detection | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 3.714 ms | 0 - 44 MB | FP16 | NPU | [YOLOv8-Detection.tflite](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.tflite) |
- | YOLOv8-Detection | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 3.68 ms | 5 - 56 MB | FP16 | NPU | [YOLOv8-Detection.so](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.so) |
- | YOLOv8-Detection | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 4.316 ms | 5 - 65 MB | FP16 | NPU | [YOLOv8-Detection.onnx](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.onnx) |
- | YOLOv8-Detection | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 3.692 ms | 0 - 42 MB | FP16 | NPU | [YOLOv8-Detection.tflite](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.tflite) |
- | YOLOv8-Detection | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 2.995 ms | 5 - 52 MB | FP16 | NPU | Use Export Script |
- | YOLOv8-Detection | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 3.965 ms | 5 - 55 MB | FP16 | NPU | [YOLOv8-Detection.onnx](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.onnx) |
- | YOLOv8-Detection | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 5.148 ms | 0 - 17 MB | FP16 | NPU | [YOLOv8-Detection.tflite](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.tflite) |
- | YOLOv8-Detection | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 5.076 ms | 5 - 8 MB | FP16 | NPU | Use Export Script |
- | YOLOv8-Detection | SA7255P ADP | SA7255P | QNN | 70.855 ms | 1 - 9 MB | FP16 | NPU | Use Export Script |
- | YOLOv8-Detection | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 5.168 ms | 0 - 17 MB | FP16 | NPU | [YOLOv8-Detection.tflite](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.tflite) |
- | YOLOv8-Detection | SA8255 (Proxy) | SA8255P Proxy | QNN | 4.986 ms | 5 - 8 MB | FP16 | NPU | Use Export Script |
- | YOLOv8-Detection | SA8295P ADP | SA8295P | TFLITE | 9.951 ms | 0 - 35 MB | FP16 | NPU | [YOLOv8-Detection.tflite](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.tflite) |
- | YOLOv8-Detection | SA8295P ADP | SA8295P | QNN | 8.997 ms | 0 - 14 MB | FP16 | NPU | Use Export Script |
- | YOLOv8-Detection | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 5.159 ms | 0 - 16 MB | FP16 | NPU | [YOLOv8-Detection.tflite](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.tflite) |
- | YOLOv8-Detection | SA8650 (Proxy) | SA8650P Proxy | QNN | 5.073 ms | 5 - 7 MB | FP16 | NPU | Use Export Script |
- | YOLOv8-Detection | SA8775P ADP | SA8775P | TFLITE | 8.134 ms | 0 - 36 MB | FP16 | NPU | [YOLOv8-Detection.tflite](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.tflite) |
- | YOLOv8-Detection | SA8775P ADP | SA8775P | QNN | 8.015 ms | 0 - 10 MB | FP16 | NPU | Use Export Script |
- | YOLOv8-Detection | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 8.579 ms | 0 - 35 MB | FP16 | NPU | [YOLOv8-Detection.tflite](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.tflite) |
- | YOLOv8-Detection | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 7.708 ms | 5 - 44 MB | FP16 | NPU | Use Export Script |
- | YOLOv8-Detection | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 6.684 ms | 5 - 5 MB | FP16 | NPU | [YOLOv8-Detection.onnx](https://huggingface.co/qualcomm/YOLOv8-Detection/blob/main/YOLOv8-Detection.onnx) |
+ | YOLOv8-Detection | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 5.164 ms | 0 - 17 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 5.053 ms | 5 - 7 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 6.183 ms | 5 - 39 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 3.706 ms | 0 - 47 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 3.46 ms | 5 - 20 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 4.371 ms | 5 - 64 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 3.686 ms | 0 - 44 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 3.635 ms | 5 - 56 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 3.347 ms | 5 - 57 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | SA7255P ADP | SA7255P | TFLITE | 71.655 ms | 0 - 35 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | SA7255P ADP | SA7255P | QNN | 70.864 ms | 1 - 7 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 5.167 ms | 0 - 16 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | SA8255 (Proxy) | SA8255P Proxy | QNN | 5.013 ms | 5 - 7 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | SA8295P ADP | SA8295P | TFLITE | 9.939 ms | 0 - 28 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 5.173 ms | 0 - 19 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | SA8650 (Proxy) | SA8650P Proxy | QNN | 4.998 ms | 5 - 7 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | SA8775P ADP | SA8775P | TFLITE | 8.129 ms | 0 - 35 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | SA8775P ADP | SA8775P | QNN | 7.974 ms | 0 - 8 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | QCS8275 (Proxy) | QCS8275 Proxy | TFLITE | 71.655 ms | 0 - 35 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | QCS8275 (Proxy) | QCS8275 Proxy | QNN | 70.864 ms | 1 - 7 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 5.145 ms | 0 - 17 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 5.0 ms | 5 - 7 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | QCS9075 (Proxy) | QCS9075 Proxy | TFLITE | 8.129 ms | 0 - 35 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | QCS9075 (Proxy) | QCS9075 Proxy | QNN | 7.974 ms | 0 - 8 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 8.597 ms | 0 - 31 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 7.603 ms | 5 - 40 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 5.419 ms | 5 - 5 MB | FP16 | NPU | -- |
+ | YOLOv8-Detection | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 6.696 ms | 5 - 5 MB | FP16 | NPU | -- |
-
-
- ## Installation
-
- Install the package via pip:
- ```bash
- pip install "qai-hub-models[yolov8-det]"
- ```
-
-
- ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
-
- Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
- Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
-
- With this API token, you can configure your client to run models on
- cloud-hosted devices.
- ```bash
- qai-hub configure --api_token API_TOKEN
- ```
- Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
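To confirm the token was picked up, it can help to list the cloud-hosted devices visible to your client. The snippet below is a minimal sketch, assuming the `qai_hub` Python client installed above; `get_devices()` and the `name` attribute are assumptions about the client API rather than part of the packaged scripts.

```python
# Minimal connectivity check after `qai-hub configure` (illustrative sketch).
import qai_hub as hub

# Listing devices only succeeds if the API token was configured correctly.
devices = hub.get_devices()
print(f"Found {len(devices)} cloud-hosted devices, for example:")
for device in devices[:5]:
    print(" -", device.name)
```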
-
-
- ## Demo off target
-
- The package contains a simple end-to-end demo that downloads pre-trained
- weights and runs this model on a sample input.
-
- ```bash
- python -m qai_hub_models.models.yolov8_det.demo
- ```
-
- The above demo runs a reference implementation of pre-processing, model
- inference, and post-processing.
-
- **NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
- environment, add the following to your cell instead of the above.
- ```
- %run -m qai_hub_models.models.yolov8_det.demo
- ```
-
-
- ### Run model on a cloud-hosted device
-
- In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
- device. This script does the following:
- * Performance check on a cloud-hosted device
- * Downloads compiled assets that can be deployed on-device for Android.
- * Accuracy check between PyTorch and on-device outputs.
-
- ```bash
- python -m qai_hub_models.models.yolov8_det.export
- ```
- ```
- Profiling Results
- ------------------------------------------------------------
- YOLOv8-Detection
- Device                          : Samsung Galaxy S23 (13)
- Runtime                         : TFLITE
- Estimated inference time (ms)   : 5.2
- Estimated peak memory usage (MB): [0, 17]
- Total # Ops                     : 290
- Compute Unit(s)                 : NPU (290 ops)
- ```
-
-
- ## How does this work?
-
- This [export script](https://aihub.qualcomm.com/models/yolov8_det/qai_hub_models/models/YOLOv8-Detection/export.py)
- leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
- on-device. Let's go through each step below in detail:
-
- Step 1: **Compile model for on-device deployment**
-
- To compile a PyTorch model for on-device deployment, we first trace the model
- in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
-
- ```python
- import torch
-
- import qai_hub as hub
- from qai_hub_models.models.yolov8_det import Model
-
- # Load the model
- torch_model = Model.from_pretrained()
-
- # Device
- device = hub.Device("Samsung Galaxy S24")
-
- # Trace model
- input_shape = torch_model.get_input_spec()
- sample_inputs = torch_model.sample_inputs()
-
- pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
-
- # Compile model on a specific device
- compile_job = hub.submit_compile_job(
-     model=pt_model,
-     device=device,
-     input_specs=torch_model.get_input_spec(),
- )
-
- # Get target model to run on-device
- target_model = compile_job.get_target_model()
- ```
-
-
- Step 2: **Performance profiling on cloud-hosted device**
-
- After compiling the model in Step 1, it can be profiled on-device using the
- `target_model`. Note that this script runs the model on a device automatically
- provisioned in the cloud. Once the job is submitted, you can navigate to a
- provided job URL to view a variety of on-device performance metrics.
- ```python
- profile_job = hub.submit_profile_job(
-     model=target_model,
-     device=device,
- )
- ```
-
- Step 3: **Verify on-device accuracy**
-
- To verify the accuracy of the model on-device, you can run on-device inference
- on sample input data on the same cloud-hosted device.
- ```python
- input_data = torch_model.sample_inputs()
- inference_job = hub.submit_inference_job(
-     model=target_model,
-     device=device,
-     inputs=input_data,
- )
- on_device_output = inference_job.download_output_data()
- ```
- With the output of the model, you can compute metrics like PSNR and relative error,
- or spot-check the output against the expected output.
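For example, a small PSNR helper is enough for a first-pass comparison. The sketch below uses only NumPy and is not part of the export script; `torch_output` and `device_output` are placeholders for one matching pair of tensors, taken from running `torch_model` on the sample inputs and from `on_device_output` above.

```python
import numpy as np

def psnr(reference, test, peak=None):
    """Peak signal-to-noise ratio (dB) between two arrays of the same shape."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0.0:
        return float("inf")
    if peak is None:
        # Use the reference signal's largest magnitude as the peak value.
        peak = float(np.max(np.abs(reference)))
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage with one matching pair of output tensors:
# print(f"PSNR: {psnr(torch_output, device_output):.2f} dB")
```

A high PSNR (or a small relative error) between the PyTorch and on-device tensors indicates the compiled model is numerically faithful.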
-
- **Note**: This on-device profiling and inference requires access to Qualcomm®
- AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
-
-
- ## Run demo on a cloud-hosted device
-
- You can also run the demo on-device.
-
- ```bash
- python -m qai_hub_models.models.yolov8_det.demo --on-device
- ```
-
- **NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
- environment, add the following to your cell instead of the above.
- ```
- %run -m qai_hub_models.models.yolov8_det.demo -- --on-device
- ```
-
-
- ## Deploying compiled model to Android
-
- The models can be deployed using multiple runtimes:
- - TensorFlow Lite (`.tflite` export): [This tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a guide to deploy the .tflite model in an Android application.
-
- - QNN (`.so` export): This [sample app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html) provides instructions on how to use the `.so` shared library in an Android application.
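Before wiring the `.tflite` asset into an Android app, a quick desktop sanity check that the file loads and runs can save a debugging round trip. The sketch below is illustrative only: it assumes TensorFlow is installed and that the `YOLOv8-Detection.tflite` asset from this repository sits in the working directory; it is not part of the linked deployment tutorials.

```python
# Local smoke test for the downloaded TFLite asset (illustrative sketch).
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="YOLOv8-Detection.tflite")
interpreter.allocate_tensors()

# Feed random data with the declared shape and dtype, just to confirm the graph runs.
for detail in interpreter.get_input_details():
    dummy = np.random.random_sample(tuple(detail["shape"])).astype(detail["dtype"])
    interpreter.set_tensor(detail["index"], dummy)

interpreter.invoke()

for detail in interpreter.get_output_details():
    print(detail["name"], interpreter.get_tensor(detail["index"]).shape)
```

Random inputs only confirm the interpreter runs end to end; real pre- and post-processing should follow the reference implementation in the demo above.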
-
-
- ## View on Qualcomm® AI Hub
- Get more details on YOLOv8-Detection's performance across various devices [here](https://aihub.qualcomm.com/models/yolov8_det).
- Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)


  ## License
@@ -254,7 +78,26 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)


  ## Community
- * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
+ * Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
  * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).

+ ## Usage and Limitations
+
+ Model may not be used for or in connection with any of the following applications:
+
+ - Accessing essential private and public services and benefits;
+ - Administration of justice and democratic processes;
+ - Assessing or recognizing the emotional state of a person;
+ - Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
+ - Education and vocational training;
+ - Employment and workers management;
+ - Exploitation of the vulnerabilities of persons resulting in harmful behavior;
+ - General purpose social scoring;
+ - Law enforcement;
+ - Management and operation of critical infrastructure;
+ - Migration, asylum and border control management;
+ - Predictive policing;
+ - Real-time remote biometric identification in public spaces;
+ - Recommender systems of social media platforms;
+ - Scraping of facial images (from the internet or otherwise); and/or
+ - Subliminal manipulation