&&&& RUNNING TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=yolo_nas_pose_l_fp16.onnx --int8 --avgRuns=100 --duration=15 --saveEngine=yolo_nas_pose_l_fp16.onnx.int8.engine
[12/28/2023-19:27:02] [I] === Model Options ===
[12/28/2023-19:27:02] [I] Format: ONNX
[12/28/2023-19:27:02] [I] Model: yolo_nas_pose_l_fp16.onnx
[12/28/2023-19:27:02] [I] Output:
[12/28/2023-19:27:02] [I] === Build Options ===
[12/28/2023-19:27:02] [I] Max batch: explicit batch
[12/28/2023-19:27:02] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[12/28/2023-19:27:02] [I] minTiming: 1
[12/28/2023-19:27:02] [I] avgTiming: 8
[12/28/2023-19:27:02] [I] Precision: FP32+INT8
[12/28/2023-19:27:02] [I] LayerPrecisions:
[12/28/2023-19:27:02] [I] Calibration: Dynamic
[12/28/2023-19:27:02] [I] Refit: Disabled
[12/28/2023-19:27:02] [I] Sparsity: Disabled
[12/28/2023-19:27:02] [I] Safe mode: Disabled
[12/28/2023-19:27:02] [I] DirectIO mode: Disabled
[12/28/2023-19:27:02] [I] Restricted mode: Disabled
[12/28/2023-19:27:02] [I] Build only: Disabled
[12/28/2023-19:27:02] [I] Save engine: yolo_nas_pose_l_fp16.onnx.int8.engine
[12/28/2023-19:27:02] [I] Load engine:
[12/28/2023-19:27:02] [I] Profiling verbosity: 0
[12/28/2023-19:27:02] [I] Tactic sources: Using default tactic sources
[12/28/2023-19:27:02] [I] timingCacheMode: local
[12/28/2023-19:27:02] [I] timingCacheFile:
[12/28/2023-19:27:02] [I] Heuristic: Disabled
[12/28/2023-19:27:02] [I] Preview Features: Use default preview flags.
[12/28/2023-19:27:02] [I] Input(s)s format: fp32:CHW
[12/28/2023-19:27:02] [I] Output(s)s format: fp32:CHW
[12/28/2023-19:27:02] [I] Input build shapes: model
[12/28/2023-19:27:02] [I] Input calibration shapes: model
[12/28/2023-19:27:02] [I] === System Options ===
[12/28/2023-19:27:02] [I] Device: 0
[12/28/2023-19:27:02] [I] DLACore:
[12/28/2023-19:27:02] [I] Plugins:
[12/28/2023-19:27:02] [I] === Inference Options ===
[12/28/2023-19:27:02] [I] Batch: Explicit
[12/28/2023-19:27:02] [I] Input inference shapes: model
[12/28/2023-19:27:02] [I] Iterations: 10
[12/28/2023-19:27:02] [I] Duration: 15s (+ 200ms warm up)
[12/28/2023-19:27:02] [I] Sleep time: 0ms
[12/28/2023-19:27:02] [I] Idle time: 0ms
[12/28/2023-19:27:02] [I] Streams: 1
[12/28/2023-19:27:02] [I] ExposeDMA: Disabled
[12/28/2023-19:27:02] [I] Data transfers: Enabled
[12/28/2023-19:27:02] [I] Spin-wait: Disabled
[12/28/2023-19:27:02] [I] Multithreading: Disabled
[12/28/2023-19:27:02] [I] CUDA Graph: Disabled
[12/28/2023-19:27:02] [I] Separate profiling: Disabled
[12/28/2023-19:27:02] [I] Time Deserialize: Disabled
[12/28/2023-19:27:02] [I] Time Refit: Disabled
[12/28/2023-19:27:02] [I] NVTX verbosity: 0
[12/28/2023-19:27:02] [I] Persistent Cache Ratio: 0
[12/28/2023-19:27:02] [I] Inputs:
[12/28/2023-19:27:02] [I] === Reporting Options ===
[12/28/2023-19:27:02] [I] Verbose: Disabled
[12/28/2023-19:27:02] [I] Averages: 100 inferences
[12/28/2023-19:27:02] [I] Percentiles: 90,95,99
[12/28/2023-19:27:02] [I] Dump refittable layers: Disabled
[12/28/2023-19:27:02] [I] Dump output: Disabled
[12/28/2023-19:27:02] [I] Profile: Disabled
[12/28/2023-19:27:02] [I] Export timing to JSON file:
[12/28/2023-19:27:02] [I] Export output to JSON file:
[12/28/2023-19:27:02] [I] Export profile to JSON file:
[12/28/2023-19:27:02] [I]
[12/28/2023-19:27:02] [I] === Device Information ===
[12/28/2023-19:27:02] [I] Selected Device: Orin
[12/28/2023-19:27:02] [I] Compute Capability: 8.7
[12/28/2023-19:27:02] [I] SMs: 8
[12/28/2023-19:27:02] [I] Compute Clock Rate: 0.624 GHz
[12/28/2023-19:27:02] [I] Device Global Memory: 7471 MiB
[12/28/2023-19:27:02] [I] Shared Memory per SM: 164 KiB
[12/28/2023-19:27:02] [I] Memory Bus Width: 128 bits (ECC disabled)
[12/28/2023-19:27:02] [I] Memory Clock Rate: 0.624 GHz
[12/28/2023-19:27:02] [I]
[12/28/2023-19:27:02] [I] TensorRT version: 8.5.2
[12/28/2023-19:27:07] [I] [TRT] [MemUsageChange] Init CUDA: CPU +220, GPU +0, now: CPU 249, GPU 2837 (MiB)
[12/28/2023-19:27:11] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +302, GPU +285, now: CPU 574, GPU 3142 (MiB)
[12/28/2023-19:27:11] [I] Start parsing network model
[12/28/2023-19:27:12] [I] [TRT] ----------------------------------------------------------------
[12/28/2023-19:27:12] [I] [TRT] Input filename: yolo_nas_pose_l_fp16.onnx
[12/28/2023-19:27:12] [I] [TRT] ONNX IR version: 0.0.8
[12/28/2023-19:27:12] [I] [TRT] Opset version: 17
[12/28/2023-19:27:12] [I] [TRT] Producer name: pytorch
[12/28/2023-19:27:12] [I] [TRT] Producer version: 2.1.2
[12/28/2023-19:27:12] [I] [TRT] Domain:
[12/28/2023-19:27:12] [I] [TRT] Model version: 0
[12/28/2023-19:27:12] [I] [TRT] Doc string:
[12/28/2023-19:27:12] [I] [TRT] ----------------------------------------------------------------
[12/28/2023-19:27:13] [I] Finish parsing network model
[12/28/2023-19:27:13] [I] FP32 and INT8 precisions have been specified - more performance might be enabled by additionally specifying --fp16 or --best
&&&& FAILED TensorRT.trtexec [TensorRT v8502] # /usr/src/tensorrt/bin/trtexec --onnx=yolo_nas_pose_l_fp16.onnx --int8 --avgRuns=100 --duration=15 --saveEngine=yolo_nas_pose_l_fp16.onnx.int8.engine |
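
Note that the error message explaining the FAILED status appears to be cut off between the precision hint and the final line, so the root cause is not visible here. The log does show two relevant facts: calibration mode is "Dynamic" (no calibration cache was supplied for --int8), and trtexec itself hints that adding --fp16 or --best may help. A possible rerun following that hint is sketched below; the paths mirror the original command, and the output engine name and the acceptability of --best (which lets the builder choose among FP32/FP16/INT8 tactics) are assumptions, not something confirmed by this log:

```shell
# Sketch only: rebuild following trtexec's own precision hint.
# --best enables all precisions so the builder can pick the fastest tactics;
# the engine filename here is illustrative.
/usr/src/tensorrt/bin/trtexec \
  --onnx=yolo_nas_pose_l_fp16.onnx \
  --best \
  --avgRuns=100 \
  --duration=15 \
  --saveEngine=yolo_nas_pose_l_fp16.onnx.best.engine
```

If the failure is specifically calibration-related, an alternative is to keep --int8 and supply a calibration cache via --calib=<cache file>, but without the truncated error text that remains a guess.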
|
|