shishirpatil committed
Update README with the local inference update
README.md CHANGED
@@ -141,16 +141,16 @@ This is possible in OpenFunctions v2, because we ensure that the output includes
 
 ### End to End Example
 
-Run the example code in `[
+Run the example code in `[inference_hosted.py](https://github.com/ShishirPatil/gorilla/tree/main/openfunctions)` to see how the model works.
 
 ```bash
-python
+python inference_hosted.py
 ```
 
 Expected Output:
 
 ```bash
-(.py3) shishir@dhcp-132-64:~/Work/Gorilla/openfunctions/$ python
+(.py3) shishir@dhcp-132-64:~/Work/Gorilla/openfunctions/$ python inference_hosted.py
 --------------------
 Function call strings(s): get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')
 --------------------
@@ -242,6 +242,12 @@ def format_response(response: str):
 
 ```
 
+In the current directory, run the example code in `inference_local.py` to see how the model works.
+
+```bash
+python inference_local.py
+```
+
 **Note:** Use `get_prompt` and `format_response` only if you are hosting the model locally. If you are using the Berkeley-hosted models through the chat-completion API, we do this in the backend, so you don't have to. The model is supported in Hugging Face 🤗 Transformers and can be run locally:
 
 
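For orientation before opening the repo, here is a minimal sketch of what `inference_hosted.py` could look like, assuming the Berkeley-hosted models sit behind an OpenAI-compatible chat-completion endpoint as the note in the second hunk says. The `api_base` URL, model id, query, and function schema below are illustrative assumptions, not taken from this commit.

```python
# Hypothetical sketch of the hosted end-to-end call. Assumptions: an
# OpenAI-compatible endpoint at the api_base below and the model id
# "gorilla-openfunctions-v2"; uses the pre-1.0 openai client (e.g. openai==0.28).
import openai

openai.api_key = "EMPTY"  # assumed: the hosted endpoint does not check keys
openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"  # assumed endpoint

functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City and state, e.g. Boston, MA"}
        },
        "required": ["location"],
    },
}]

completion = openai.ChatCompletion.create(
    model="gorilla-openfunctions-v2",  # assumed model id
    temperature=0.0,
    messages=[{"role": "user", "content": "What's the weather like in Boston, MA and San Francisco, CA?"}],
    functions=functions,
)
print(completion.choices[0].message.content)
# Per the expected output above, this prints something like:
# get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')
```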
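The expected output is a comma-separated list of Python-style call strings. If you want those as structured data, one illustrative way to parse such a string is with Python's `ast` module; this helper is a sketch, not part of the repo:

```python
# Illustrative parser for output like
# "f(location='Boston, MA'), g(location='San Francisco, CA')".
import ast

def parse_calls(s: str):
    """Parse a call string into a list of (function_name, kwargs) pairs."""
    tree = ast.parse(s, mode="eval")
    # A comma-separated list parses as a tuple; a single call parses bare.
    nodes = tree.body.elts if isinstance(tree.body, ast.Tuple) else [tree.body]
    calls = []
    for node in nodes:
        if not isinstance(node, ast.Call):
            raise ValueError(f"expected a call, got {ast.dump(node)}")
        kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
        calls.append((node.func.id, kwargs))
    return calls

print(parse_calls(
    "get_current_weather(location='Boston, MA'), "
    "get_current_weather(location='San Francisco, CA')"
))
# [('get_current_weather', {'location': 'Boston, MA'}),
#  ('get_current_weather', {'location': 'San Francisco, CA'})]
```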
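And since the note says the model is supported in Hugging Face 🤗 Transformers, here is a minimal sketch of what `inference_local.py` might do. The Hugging Face model id is an assumption, and `get_prompt`/`format_response` are the README's own helpers (note the `def format_response(response: str):` in the second hunk's context), so this snippet presumes they are defined in the same file.

```python
# Hypothetical sketch of local inference with Hugging Face Transformers.
# Assumptions: the model id below, and that get_prompt/format_response
# (the helpers defined earlier in the README) are in scope.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gorilla-llm/gorilla-openfunctions-v2"  # assumed Hugging Face id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

query = "What's the weather like in Boston, MA and San Francisco, CA?"
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}]

# Build the model's prompt, then decode only the newly generated tokens.
prompt = get_prompt(query, functions=functions)  # README helper
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
raw = tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(format_response(raw))  # README helper cleans up the raw completion
```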