chore: clean a bit the README
README.md
# Iris classification with a QNN with Concrete ML
In this repository, we enable Iris classification without seeing the inputs! Inputs are sent encrypted to the HF endpoint and classified (with a built-in small neural network) without the server ever seeing them in the clear, thanks to fully homomorphic encryption (FHE). This is made possible by Zama's Concrete ML.
Concrete ML is Zama's open-source privacy-preserving ML package, built on FHE. We refer the reader to fhe.org or Zama's website for more information on FHE.
This directory was created from the template https://huggingface.co/zama-fhe/concrete-ml-template-alpha.
## Deploying a compiled model on HF inference endpoint
If you would like to deploy, it is very easy.
Now, this is the final step: using the entry point. You should:
- if your inference endpoint is private, set an environment variable `HF_TOKEN` with your HF token
- edit `play_with_endpoint.py`
- replace `API_URL` with your entry point URL
- replace the part between "# BEGIN: replace this part with your privacy-preserving application" and "# END: replace this part with your privacy-preserving application" with your application
Finally, you'll be able to launch your application with `python play_with_endpoint.py`.
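As a rough illustration of the steps above, a minimal client script could look like the sketch below. Note that the helper names (`build_headers`, `query`), the payload format, and the placeholder `API_URL` are assumptions for illustration, not the actual contents of `play_with_endpoint.py`:

```python
import json
import os
import urllib.request


def build_headers(token=None):
    """Build HTTP headers; the bearer token is only needed for private endpoints."""
    headers = {"Content-Type": "application/json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return headers


def query(api_url, payload, token=None):
    """POST a payload (e.g. serialized encrypted inputs) to the inference endpoint."""
    request = urllib.request.Request(
        api_url,
        data=json.dumps(payload).encode("utf-8"),
        headers=build_headers(token),
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())


if __name__ == "__main__":
    API_URL = "https://your-endpoint.endpoints.huggingface.cloud"  # replace with your entry point URL
    token = os.environ.get("HF_TOKEN")  # set only if your endpoint is private
    # BEGIN: replace this part with your privacy-preserving application
    # e.g. encrypt Iris features client-side with Concrete ML, send them
    # with query(API_URL, payload, token), then decrypt the returned prediction.
    # END: replace this part with your privacy-preserving application
```

The design point is that only encrypted data ever reaches `query`: encryption and decryption stay on the client side.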