jbilcke-hf (HF staff) committed
Commit 3db3a54
• 1 Parent(s): 82e30a4

improve readme

Files changed (1): README.md (+15 -6)
README.md CHANGED
@@ -19,12 +19,16 @@ it requires various components to run for the frontend, backend, LLM, SDXL etc.
 
 If you try to duplicate the project, you will see it requires some variables:
 
-- `HF_INFERENCE_ENDPOINT_URL`: This is the endpoint to call the LLM
-- `HF_API_TOKEN`: The Hugging Face token used to call the inference endpoint (if you intend to use a LLM hosted on Hugging Face)
-- `VIDEOCHAIN_API_URL`: This is the API that generates images
-- `VIDEOCHAIN_API_TOKEN`: Token used to call the rendering engine API (not used yet, but it's gonna be because [💸](https://en.wikipedia.org/wiki/No_such_thing_as_a_free_lunch))
-
-This is the architecture for the current production AI Comic Factory.
+- `LLM_ENGINE`: can be either "INFERENCE_API" or "INFERENCE_ENDPOINT"
+- `HF_API_TOKEN`: necessary if you decide to use an Inference API model or a custom inference endpoint
+- `HF_INFERENCE_ENDPOINT_URL`: necessary if you decide to use a custom inference endpoint
+- `RENDERING_ENGINE`: can only be "VIDEOCHAIN" for now, unless you code your custom solution
+- `VIDEOCHAIN_API_URL`: url to the VideoChain API server
+- `VIDEOCHAIN_API_TOKEN`: secret token to access the VideoChain API server
+
+Please read the `.env` default config file for more information.
+To customise a variable locally, you should create a `.env.local`
+(do not commit this file, as it will contain your secrets).
 
 -> If you intend to run it with local, cloud-hosted and/or proprietary models **you are going to need to code 👨‍💻**.
 
@@ -41,6 +45,8 @@ This is a new option added recently, where you can use one of the models from th
 To activate it, create a `.env.local` configuration file:
 
 ```bash
+LLM_ENGINE="INFERENCE_API"
+
 HF_API_TOKEN="Your Hugging Face token"
 
 # "codellama/CodeLlama-7b-hf" is used by default, but you can change this
@@ -53,7 +59,10 @@ HF_INFERENCE_API_MODEL="codellama/CodeLlama-7b-hf"
 If you would like to run the AI Comic Factory on a private LLM running on the Hugging Face Inference Endpoint service, create a `.env.local` configuration file:
 
 ```bash
+LLM_ENGINE="INFERENCE_ENDPOINT"
+
 HF_API_TOKEN="Your Hugging Face token"
+
 HF_INFERENCE_ENDPOINT_URL="path to your inference endpoint url"
 ```
 
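The two `.env.local` examples above differ only in `LLM_ENGINE`. As an illustration of how that switch could be consumed, here is a hypothetical TypeScript sketch (the function name `resolveLLMEndpoint` and the selection logic are assumptions, not code from the repository; only the variable names come from the README, and the hosted-model URL follows the standard `https://api-inference.huggingface.co/models/<model>` pattern of the Hugging Face Inference API):

```typescript
// Hypothetical sketch: pick the LLM endpoint from the env variables
// documented in the README. Not actual AI Comic Factory code.
type LLMEngine = "INFERENCE_API" | "INFERENCE_ENDPOINT";

function resolveLLMEndpoint(env: Record<string, string | undefined>): string {
  // Default to the hosted Inference API, mirroring the README's default model.
  const engine = (env.LLM_ENGINE ?? "INFERENCE_API") as LLMEngine;

  if (engine === "INFERENCE_ENDPOINT") {
    // A custom endpoint requires an explicit URL.
    if (!env.HF_INFERENCE_ENDPOINT_URL) {
      throw new Error("HF_INFERENCE_ENDPOINT_URL is required for INFERENCE_ENDPOINT");
    }
    return env.HF_INFERENCE_ENDPOINT_URL;
  }

  // INFERENCE_API: route through the hosted model.
  const model = env.HF_INFERENCE_API_MODEL ?? "codellama/CodeLlama-7b-hf";
  return `https://api-inference.huggingface.co/models/${model}`;
}
```

In both branches, `HF_API_TOKEN` would then be sent as a bearer token when calling the resolved URL.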