A model card with a custom handler to deploy Microsoft's Kosmos-2 model as an inference endpoint. Enjoy!
Hit me up on X/Twitter @yanMachX and let me know what you're building!
Expected request payload:

```json
{
  "inputs": "you can just leave this empty; for some reason, the inference endpoint expects this field",
  "image": "iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAAABGdBTUEAALGPC"
}
```

The `image` field is a base64-encoded string representation of the image blob (truncated in the example above).
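A minimal sketch of how such a payload can be built; reading the image from `cat.png` is a placeholder assumption, so substitute your own file path:

```python
import base64
import json

def build_payload(image_bytes: bytes) -> str:
    """Serialize the request body the endpoint expects."""
    return json.dumps({
        "inputs": "",  # required by the endpoint, ignored by the handler
        # base64-encoded string representation of the image blob
        "image": base64.b64encode(image_bytes).decode("utf-8"),
    })

# Example usage (hypothetical path):
# with open("cat.png", "rb") as f:
#     body = build_payload(f.read())
```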
There is a Python script that provides an example API call to the inference endpoint.
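For reference, an example call could look like the sketch below, using only the standard library. `ENDPOINT_URL` and `HF_TOKEN` are placeholders for your own Inference Endpoint URL and Hugging Face access token, not values from this repository:

```python
import base64
import json
import urllib.request

ENDPOINT_URL = "https://your-endpoint.endpoints.huggingface.cloud"  # placeholder
HF_TOKEN = "hf_..."  # placeholder access token

def make_request(image_bytes: bytes) -> urllib.request.Request:
    """Build a POST request carrying the expected JSON payload."""
    body = json.dumps({
        "inputs": "",  # required field, ignored by the handler
        "image": base64.b64encode(image_bytes).decode("utf-8"),
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send it:
# with open("cat.png", "rb") as f:
#     response = urllib.request.urlopen(make_request(f.read()))
#     print(response.read().decode("utf-8"))
```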