Problem with the ONNX version and opset
I downloaded the ONNX version of the model and I'm having trouble simply opening it because of the opset. If possible, can you provide some insight into the ONNX version/opset used to export the model? Also, can you confirm that you're able to open the ONNX version and run inference with it?
I have Python 3.10.11, onnx==1.15.0, onnxruntime-gpu==1.16.0, torch==2.2.0+cu121.
This gives me an error:
import onnxruntime as ort
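# the opset-related error is raised here, when the session is created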
ort_session = ort.InferenceSession(<< model path here >>, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
Hi, I have checked the opset versions in the model schema: you need at least onnxruntime 1.17, because the model requires ai.onnx.ml v4.
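You can read the opset requirements straight from the model file with the onnx package; something roughly like this (the path is a placeholder):

import onnx

model = onnx.load("model.onnx")  # placeholder path
for opset in model.opset_import:
    # an empty domain string means the default ai.onnx domain
    print(opset.domain or "ai.onnx", opset.version)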
I do use onnxruntime(-gpu) both for the HF Space and locally, so I can confirm inference with the ONNX version works on my end.
I made a fresh venv before exporting the model, which explains why I hadn't noticed the problem.
From what I can see, my conversion scripts target opset 17, but onnxruntime adds a few unnecessary opset bumps when I run a round of basic graph optimizations on the exported model.
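For reference, that offline optimization pass looks roughly like this via onnxruntime's session options (this is just a sketch; file names are placeholders):

import onnxruntime as ort

sess_options = ort.SessionOptions()
# apply only the basic graph optimizations (constant folding, redundant node removal, ...)
sess_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_BASIC
# write the optimized graph back to disk
sess_options.optimized_model_filepath = "model.optimized.onnx"
ort.InferenceSession("model.onnx", sess_options, providers=["CPUExecutionProvider"])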
Thank you! This account was new, so I couldn't reply earlier.
I had to set up a fresh venv because of the dependencies, but I got it working after upgrading onnxruntime. Now it runs perfectly.
It might be helpful to list the minimum library requirements in a requirements.txt shipped with the model, or in the README. Also, adding a link to your tagger app to the README, specifically to the code (https://huggingface.co/spaces/SmilingWolf/wd-tagger/tree/main), could be a good way to redirect people with basic questions. I only noticed the app after everything was done; I don't think it would have solved the onnxruntime problem, but having a good reference would have made my coding a bit easier.
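For example, something like this would have saved me some guessing (the versions are illustrative, just what ended up working for me):

# requirements.txt (illustrative minimums)
onnx>=1.15.0
onnxruntime-gpu>=1.17.0  # needed for ai.onnx.ml v4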
I always assumed people found my tagger first, and models second, never the other way around.
I will amend the model READMEs to mention the correct dependencies and preprocessing steps.