I expected that inference with ONNX Runtime would be faster than normal PyTorch inference, but it is much slower. I wonder how that could be.
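For context, here is a minimal sketch of the kind of comparison being described. The model name and the `model.onnx` path are assumptions for illustration, not details from the original post:

```python
import time

import onnxruntime as ort
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"  # assumption: substitute the actual model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

inputs = tokenizer("Benchmarking ONNX Runtime vs PyTorch.", return_tensors="pt")

# Measure average PyTorch latency on CPU (with one warm-up run)
with torch.no_grad():
    model(**inputs)
    start = time.perf_counter()
    for _ in range(100):
        model(**inputs)
    pt_ms = (time.perf_counter() - start) / 100 * 1000

# Measure average ONNX Runtime latency on an exported copy of the same model
# ("model.onnx" is an assumed path to that export)
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
ort_inputs = {k: v.numpy() for k, v in inputs.items()}
session.run(None, ort_inputs)  # warm-up
start = time.perf_counter()
for _ in range(100):
    session.run(None, ort_inputs)
ort_ms = (time.perf_counter() - start) / 100 * 1000

print(f"PyTorch: {pt_ms:.2f} ms/iter, ONNX Runtime: {ort_ms:.2f} ms/iter")
```

Timing both runtimes on identical inputs, with warm-up runs and matching execution providers, is usually the first step in diagnosing this kind of gap.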
Could you share the inference code you used?