---
license: apache-2.0
tags:
- FHE
- concrete-ml
---
<p align="center">
<!-- product name logo -->
<img width=600 src="https://cdn-uploads.huggingface.co/production/uploads/6286462340423ef48fb6c45e/ElX3V79ViRx0BUcCPVJQG.png">
<a href="https://github.com/zama-ai/concrete-ml"> πŸ“ Github</a> | <a href="https://docs.zama.ai/concrete-ml"> πŸ“’ Documentation</a> | <a href="https://zama.ai/community"> πŸ’› Community support</a> | <a href="https://github.com/zama-ai/awesome-zama"> πŸ“š FHE resources by Zama</a>
</p>
<hr>
# Iris classification with a QNN with Concrete ML
This repository enables Iris classification without the server ever seeing the inputs. Inputs are sent encrypted to the HF endpoint and classified by a small built-in neural network while remaining encrypted, thanks to fully homomorphic encryption (FHE). This is made possible by Zama's Concrete ML.
Concrete ML is Zama's open-source privacy-preserving machine learning package, built on FHE. We refer the reader to fhe.org or Zama's website for more information on FHE.
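For context, here is a minimal sketch of the client-side flow with Concrete ML's `FHEModelClient`, assuming the compiled model artifacts from this repository are available locally; the directory names and the sample values are illustrative, not part of this repository:
```python
import numpy as np
from concrete.ml.deployment import FHEModelClient

# Load the client artifacts shipped with the compiled model
# ("compiled_model" and "keys" are placeholder directory names)
client = FHEModelClient(path_dir="compiled_model", key_dir="keys")

# Generate the private keys (kept on the client) and the public evaluation keys
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()

# Encrypt one Iris sample (4 features) before sending it to the endpoint
sample = np.array([[5.1, 3.5, 1.4, 0.2]])
encrypted_input = client.quantize_encrypt_serialize(sample)

# The server runs the neural network on `encrypted_input` without decrypting it;
# only the client can decrypt the returned result:
# prediction = client.deserialize_decrypt_dequantize(encrypted_output)
```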
## Deploying a compiled model on HF inference endpoint
Deploying the model is straightforward:
- click on the 'Deploy' button in the HF interface
- choose "Inference endpoints"
- choose the right model repository
- (the remaining options are the usual HF endpoint settings; we refer you to their documentation for more information)
- click on 'Create endpoint'
After a few seconds of installation, your model is deployed.
## Using HF endpoints on privacy-preserving models
The final step is to use the endpoint. You should:
- if your inference endpoint is private, set an `HF_TOKEN` environment variable with your HF token
- edit `play_with_endpoint.py`
- replace `API_URL` with your endpoint URL
Finally, you'll be able to launch your application with `python play_with_endpoint.py`.
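Below is a minimal sketch of such a call, in the spirit of `play_with_endpoint.py`; the URL, payload format, and directory names are assumptions to adapt to your own endpoint and to the actual script in this repository:
```python
import os
import numpy as np
import requests
from concrete.ml.deployment import FHEModelClient

# Replace with the URL of your inference endpoint
API_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.cloud"
headers = {"Authorization": f"Bearer {os.environ.get('HF_TOKEN', '')}"}

# Client-side setup, as in the sketch above (directory names are placeholders)
client = FHEModelClient(path_dir="compiled_model", key_dir="keys")
client.generate_private_and_evaluation_keys()

# Encrypt a sample and send only the ciphertext to the endpoint
encrypted_input = client.quantize_encrypt_serialize(np.array([[5.1, 3.5, 1.4, 0.2]]))
response = requests.post(API_URL, headers=headers, data=encrypted_input)
response.raise_for_status()

# Decrypt the prediction locally: the private key never leaves the client
prediction = client.deserialize_decrypt_dequantize(response.content)
print(prediction)
```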