How to deploy it to get the fastest QPS?

#10
by loovelj2 - opened

I want to deploy this model now. What is the fastest way to serve inference: Triton, TGI, or ONNX? Ideally it would run in Docker.

Beijing Academy of Artificial Intelligence org

Sorry, we have not conducted specific efficiency tests across different tools. You may refer to materials available in open-source communities, e.g., https://github.com/huggingface/text-embeddings-inference
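For reference, here is a minimal sketch of querying a text-embeddings-inference (TEI) server from Python once it is running. The Docker command in the comment, the image tag, the port mapping, and the model ID (`BAAI/bge-large-en-v1.5`) are illustrative assumptions, not values from this thread; check the TEI README for the image variant matching your hardware.

```python
# Minimal sketch: query a running TEI server's /embed endpoint.
# Assumes the server was started via Docker, for example
# (tag and model ID are placeholders; see the TEI README):
#   docker run --gpus all -p 8080:80 -v $PWD/data:/data \
#       ghcr.io/huggingface/text-embeddings-inference:latest \
#       --model-id BAAI/bge-large-en-v1.5
import requests

TEI_URL = "http://127.0.0.1:8080/embed"  # assumed host port mapping


def embed(texts: list[str]) -> list[list[float]]:
    """Send a batch of texts to TEI and return one vector per text."""
    resp = requests.post(TEI_URL, json={"inputs": texts}, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    vectors = embed(["What is deep learning?", "How fast is TEI?"])
    print(len(vectors), "embeddings of dimension", len(vectors[0]))
```

Batching requests client-side, as above, is generally how you push QPS up with TEI, since the server performs dynamic batching on its end.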
