shreyajn committed
Commit
264dd6b
1 Parent(s): 3a7e798

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +65 -33
README.md CHANGED
@@ -36,10 +36,10 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.837 ms | 0 - 2 MB | FP16 | NPU | [MediaPipePoseDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.011 ms | 0 - 2 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.884 ms | 0 - 15 MB | FP16 | NPU | [MediaPipePoseDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.so)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.083 ms | 0 - 16 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.793 ms | 0 - 14 MB | FP16 | NPU | [MediaPipePoseDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.839 ms | 0 - 174 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.tflite)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.851 ms | 0 - 102 MB | FP16 | NPU | [MediaPipePoseDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.so)
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.906 ms | 0 - 9 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.so)
 
 
 
@@ -100,17 +100,17 @@ python -m qai_hub_models.models.mediapipe_pose.export
 ```
 Profile Job summary of MediaPipePoseDetector
 --------------------------------------------------
- Device: SA8255 (Proxy) (13)
- Estimated Inference Time: 0.89 ms
- Estimated Peak Memory Range: 0.02-5.94 MB
- Compute Units: NPU (140) | Total (140)
+ Device: Snapdragon X Elite CRD (11)
+ Estimated Inference Time: 0.99 ms
+ Estimated Peak Memory Range: 1.61-1.61 MB
+ Compute Units: NPU (138) | Total (138)
 
 Profile Job summary of MediaPipePoseLandmarkDetector
 --------------------------------------------------
- Device: SA8255 (Proxy) (13)
- Estimated Inference Time: 1.08 ms
- Estimated Peak Memory Range: 0.02-11.66 MB
- Compute Units: NPU (292) | Total (292)
+ Device: Snapdragon X Elite CRD (11)
+ Estimated Inference Time: 1.11 ms
+ Estimated Peak Memory Range: 0.75-0.75 MB
+ Compute Units: NPU (290) | Total (290)
 
 
 ```
@@ -131,29 +131,49 @@ in memory using the `jit.trace` and then call the `submit_compile_job` API.
 import torch
 
 import qai_hub as hub
- from qai_hub_models.models.mediapipe_pose import Model
+ from qai_hub_models.models.mediapipe_pose import MediaPipePoseDetector, MediaPipePoseLandmarkDetector
 
 # Load the model
- torch_model = Model.from_pretrained()
+ pose_detector_model = MediaPipePoseDetector.from_pretrained()
+
+ pose_landmark_detector_model = MediaPipePoseLandmarkDetector.from_pretrained()
+
 
 # Device
 device = hub.Device("Samsung Galaxy S23")
 
+
 # Trace model
- input_shape = torch_model.get_input_spec()
- sample_inputs = torch_model.sample_inputs()
+ pose_detector_input_shape = pose_detector_model.get_input_spec()
+ pose_detector_sample_inputs = pose_detector_model.sample_inputs()
 
- pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
+ traced_pose_detector_model = torch.jit.trace(pose_detector_model, [torch.tensor(data[0]) for _, data in pose_detector_sample_inputs.items()])
 
 # Compile model on a specific device
- compile_job = hub.submit_compile_job(
-     model=pt_model,
+ pose_detector_compile_job = hub.submit_compile_job(
+     model=traced_pose_detector_model,
     device=device,
-     input_specs=torch_model.get_input_spec(),
+     input_specs=pose_detector_model.get_input_spec(),
 )
 
 # Get target model to run on-device
- target_model = compile_job.get_target_model()
+ pose_detector_target_model = pose_detector_compile_job.get_target_model()
+
+ # Trace model
+ pose_landmark_detector_input_shape = pose_landmark_detector_model.get_input_spec()
+ pose_landmark_detector_sample_inputs = pose_landmark_detector_model.sample_inputs()
+
+ traced_pose_landmark_detector_model = torch.jit.trace(pose_landmark_detector_model, [torch.tensor(data[0]) for _, data in pose_landmark_detector_sample_inputs.items()])
+
+ # Compile model on a specific device
+ pose_landmark_detector_compile_job = hub.submit_compile_job(
+     model=traced_pose_landmark_detector_model,
+     device=device,
+     input_specs=pose_landmark_detector_model.get_input_spec(),
+ )
+
+ # Get target model to run on-device
+ pose_landmark_detector_target_model = pose_landmark_detector_compile_job.get_target_model()
 
 ```
 
@@ -165,10 +185,16 @@ After compiling models from step 1. Models can be profiled model on-device using
 provisioned in the cloud. Once the job is submitted, you can navigate to a
 provided job URL to view a variety of on-device performance metrics.
 ```python
- profile_job = hub.submit_profile_job(
-     model=target_model,
-     device=device,
- )
+
+ pose_detector_profile_job = hub.submit_profile_job(
+     model=pose_detector_target_model,
+     device=device,
+ )
+
+ pose_landmark_detector_profile_job = hub.submit_profile_job(
+     model=pose_landmark_detector_target_model,
+     device=device,
+ )
 
 ```
 
@@ -177,14 +203,20 @@ Step 3: **Verify on-device accuracy**
 To verify the accuracy of the model on-device, you can run on-device inference
 on sample input data on the same cloud hosted device.
 ```python
- input_data = torch_model.sample_inputs()
- inference_job = hub.submit_inference_job(
-     model=target_model,
-     device=device,
-     inputs=input_data,
- )
-
- on_device_output = inference_job.download_output_data()
+ pose_detector_input_data = pose_detector_model.sample_inputs()
+ pose_detector_inference_job = hub.submit_inference_job(
+     model=pose_detector_target_model,
+     device=device,
+     inputs=pose_detector_input_data,
+ )
+ pose_detector_inference_job.download_output_data()
+ pose_landmark_detector_input_data = pose_landmark_detector_model.sample_inputs()
+ pose_landmark_detector_inference_job = hub.submit_inference_job(
+     model=pose_landmark_detector_target_model,
+     device=device,
+     inputs=pose_landmark_detector_input_data,
+ )
+ pose_landmark_detector_inference_job.download_output_data()
 
 ```
 With the output of the model, you can compute like PSNR, relative errors or
36
 
37
  | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
38
  | ---|---|---|---|---|---|---|---|
39
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.793 ms | 0 - 14 MB | FP16 | NPU | [MediaPipePoseDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.tflite)
40
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 0.839 ms | 0 - 174 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.tflite](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.tflite)
41
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.851 ms | 0 - 102 MB | FP16 | NPU | [MediaPipePoseDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseDetector.so)
42
+ | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.906 ms | 0 - 9 MB | FP16 | NPU | [MediaPipePoseLandmarkDetector.so](https://huggingface.co/qualcomm/MediaPipe-Pose-Estimation/blob/main/MediaPipePoseLandmarkDetector.so)
43
 
44
 
45
 
 
100
  ```
101
  Profile Job summary of MediaPipePoseDetector
102
  --------------------------------------------------
103
+ Device: Snapdragon X Elite CRD (11)
104
+ Estimated Inference Time: 0.99 ms
105
+ Estimated Peak Memory Range: 1.61-1.61 MB
106
+ Compute Units: NPU (138) | Total (138)
107
 
108
  Profile Job summary of MediaPipePoseLandmarkDetector
109
  --------------------------------------------------
110
+ Device: Snapdragon X Elite CRD (11)
111
+ Estimated Inference Time: 1.11 ms
112
+ Estimated Peak Memory Range: 0.75-0.75 MB
113
+ Compute Units: NPU (290) | Total (290)
114
 
115
 
116
  ```
 
131
  import torch
132
 
133
  import qai_hub as hub
134
+ from qai_hub_models.models.mediapipe_pose import MediaPipePoseDetector,MediaPipePoseLandmarkDetector
135
 
136
  # Load the model
137
+ pose_detector_model = MediaPipePoseDetector.from_pretrained()
138
+
139
+ pose_landmark_detector_model = MediaPipePoseLandmarkDetector.from_pretrained()
140
+
141
 
142
  # Device
143
  device = hub.Device("Samsung Galaxy S23")
144
 
145
+
146
  # Trace model
147
+ pose_detector_input_shape = pose_detector_model.get_input_spec()
148
+ pose_detector_sample_inputs = pose_detector_model.sample_inputs()
149
 
150
+ traced_pose_detector_model = torch.jit.trace(pose_detector_model, [torch.tensor(data[0]) for _, data in pose_detector_sample_inputs.items()])
151
 
152
  # Compile model on a specific device
153
+ pose_detector_compile_job = hub.submit_compile_job(
154
+ model=traced_pose_detector_model ,
155
  device=device,
156
+ input_specs=pose_detector_model.get_input_spec(),
157
  )
158
 
159
  # Get target model to run on-device
160
+ pose_detector_target_model = pose_detector_compile_job.get_target_model()
161
+
162
+ # Trace model
163
+ pose_landmark_detector_input_shape = pose_landmark_detector_model.get_input_spec()
164
+ pose_landmark_detector_sample_inputs = pose_landmark_detector_model.sample_inputs()
165
+
166
+ traced_pose_landmark_detector_model = torch.jit.trace(pose_landmark_detector_model, [torch.tensor(data[0]) for _, data in pose_landmark_detector_sample_inputs.items()])
167
+
168
+ # Compile model on a specific device
169
+ pose_landmark_detector_compile_job = hub.submit_compile_job(
170
+ model=traced_pose_landmark_detector_model ,
171
+ device=device,
172
+ input_specs=pose_landmark_detector_model.get_input_spec(),
173
+ )
174
+
175
+ # Get target model to run on-device
176
+ pose_landmark_detector_target_model = pose_landmark_detector_compile_job.get_target_model()
177
 
178
  ```
179
 
 
185
  provisioned in the cloud. Once the job is submitted, you can navigate to a
186
  provided job URL to view a variety of on-device performance metrics.
187
  ```python
188
+
189
+ pose_detector_profile_job = hub.submit_profile_job(
190
+ model=pose_detector_target_model,
191
+ device=device,
192
+ )
193
+
194
+ pose_landmark_detector_profile_job = hub.submit_profile_job(
195
+ model=pose_landmark_detector_target_model,
196
+ device=device,
197
+ )
198
 
199
  ```
200
 
 
203
  To verify the accuracy of the model on-device, you can run on-device inference
204
  on sample input data on the same cloud hosted device.
205
  ```python
206
+ pose_detector_input_data = pose_detector_model.sample_inputs()
207
+ pose_detector_inference_job = hub.submit_inference_job(
208
+ model=pose_detector_target_model,
209
+ device=device,
210
+ inputs=pose_detector_input_data,
211
+ )
212
+ pose_detector_inference_job.download_output_data()
213
+ pose_landmark_detector_input_data = pose_landmark_detector_model.sample_inputs()
214
+ pose_landmark_detector_inference_job = hub.submit_inference_job(
215
+ model=pose_landmark_detector_target_model,
216
+ device=device,
217
+ inputs=pose_landmark_detector_input_data,
218
+ )
219
+ pose_landmark_detector_inference_job.download_output_data()
220
 
221
  ```
222
  With the output of the model, you can compute like PSNR, relative errors or
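Once the on-device numbers look reasonable, the compiled target models can be saved locally for deployment. A short sketch, assuming the qai_hub `Model.download()` method; the destination filenames are placeholders:

```python
# Assumed qai_hub API; the filenames below are placeholders.
pose_detector_target_model.download("MediaPipePoseDetector.tflite")
pose_landmark_detector_target_model.download("MediaPipePoseLandmarkDetector.tflite")
```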