qaihm-bot committed on
Commit
6b48648
1 Parent(s): b0a5f01

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -38,7 +38,8 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.585 ms | 0 - 2 MB | INT8 | NPU | [MobileNet-v3-Large-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Large-Quantized/blob/main/MobileNet-v3-Large-Quantized.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.357 ms | 0 - 3 MB | INT8 | NPU | [MobileNet-v3-Large-Quantized.tflite](https://huggingface.co/qualcomm/MobileNet-v3-Large-Quantized/blob/main/MobileNet-v3-Large-Quantized.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.623 ms | 0 - 7 MB | INT8 | NPU | [MobileNet-v3-Large-Quantized.so](https://huggingface.co/qualcomm/MobileNet-v3-Large-Quantized/blob/main/MobileNet-v3-Large-Quantized.so)
 
 
 ## Installation
@@ -98,10 +99,10 @@ python -m qai_hub_models.models.mobilenet_v3_large_quantized.export
 ```
 Profile Job summary of MobileNet-v3-Large-Quantized
 --------------------------------------------------
-Device: QCS8550 (Proxy) (12)
-Estimated Inference Time: 0.67 ms
-Estimated Peak Memory Range: 0.04-1.77 MB
-Compute Units: NPU (138) | Total (138)
+Device: Snapdragon X Elite CRD (11)
+Estimated Inference Time: 0.70 ms
+Estimated Peak Memory Range: 0.50-0.50 MB
+Compute Units: NPU (126) | Total (126)
 
 
 ```
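The second hunk's context line names the export script that produces these profile summaries. A minimal sketch of reproducing them, assuming `qai_hub_models` is installed and a Qualcomm AI Hub API token has been configured (the extras name and the token placeholder below are assumptions, not taken from this commit):

```shell
# Install the package with this model's dependencies
# (assumption: the extras name mirrors the module name in the diff above)
pip install "qai_hub_models[mobilenet_v3_large_quantized]"

# One-time setup: point the qai-hub client at your account
# (YOUR_API_TOKEN is a hypothetical placeholder)
qai-hub configure --api_token YOUR_API_TOKEN

# Run the export/profile script named in the second hunk header;
# it submits profile jobs and prints summaries like the ones in this diff
python -m qai_hub_models.models.mobilenet_v3_large_quantized.export
```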