qaihm-bot committed
Commit: da87bb8
Parent: 9ee8bc4

Upload README.md with huggingface_hub

Files changed (1): README.md (+11 -5)
README.md CHANGED
@@ -32,10 +32,13 @@ More details on model performance across various devices, can be found
 - Model size: 22.2 MB
 
 
+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 13.093 ms | 20 - 65 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 12.869 ms | 3 - 20 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.so](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 13.047 ms | 20 - 22 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.tflite](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 12.852 ms | 4 - 19 MB | FP16 | NPU | [DeepLabV3-Plus-MobileNet.so](https://huggingface.co/qualcomm/DeepLabV3-Plus-MobileNet/blob/main/DeepLabV3-Plus-MobileNet.so)
+
 
 
 ## Installation
@@ -96,15 +99,17 @@ python -m qai_hub_models.models.deeplabv3_plus_mobilenet.export
 Profile Job summary of DeepLabV3-Plus-MobileNet
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 16.50 ms
+Estimated Inference Time: 16.51 ms
 Estimated Peak Memory Range: 3.02-3.02 MB
 Compute Units: NPU (124) | Total (124)
 
 
 ```
+
+
 ## How does this work?
 
-This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/DeepLabV3-Plus-MobileNet/export.py)
+This [export script](https://aihub.qualcomm.com/models/deeplabv3_plus_mobilenet/qai_hub_models/models/DeepLabV3-Plus-MobileNet/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Let's go through each step below in detail:
 
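As a companion to the profile summary in this hunk, here is a minimal sketch of how one might reproduce it from the command line. It assumes the usual qai-hub-models conventions: the pip extra name is inferred from the module path in the hunk header, and the `--device` flag is an assumption, not something shown in this diff.

```bash
# Minimal sketch, assuming qai-hub-models conventions (the extra name
# and the --device flag are assumptions, not taken from this diff).
pip install "qai-hub-models[deeplabv3_plus_mobilenet]"

# Export entry point named in the hunk header: compiles the model via
# Qualcomm AI Hub, profiles it on a cloud-hosted device, and prints a
# "Profile Job summary" like the one shown above.
python -m qai_hub_models.models.deeplabv3_plus_mobilenet.export \
    --device "Snapdragon X Elite CRD"
```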
@@ -181,6 +186,7 @@ spot check the output with expected output.
 AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
+
 ## Run demo on a cloud-hosted device
 
 You can also run the demo on-device.
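The demo invocation itself is not shown in this hunk; a hedged sketch follows, assuming the standard qai-hub-models demo entry point and its `--on-device` flag (an assumption, not confirmed by this diff).

```bash
# Run the demo locally on a sample image (default behavior).
python -m qai_hub_models.models.deeplabv3_plus_mobilenet.demo

# Run the same demo on a cloud-hosted device through AI Hub; the
# --on-device flag is assumed from qai-hub-models demo conventions.
python -m qai_hub_models.models.deeplabv3_plus_mobilenet.demo --on-device
```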
@@ -217,7 +223,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of DeepLabV3-Plus-MobileNet can be found
   [here](https://github.com/jfzhang95/pytorch-deeplab-xception/blob/master/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
 ## References
 * [Rethinking Atrous Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1706.05587)
 