import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("""
# Llama-2-70b-chat-hf Discord Bot Powered by Gradio and Hugging Face Endpoints

Make sure you read the 'Special Considerations' section below first! 🦙

### First install the `gradio_client`

```bash
pip install gradio_client
```

### Then deploy to Discord in one line! ⚡️

```python
import gradio_client as grc
grc.Client("ysharma/Explore_llamav2_with_TGI").deploy_discord(to_id="llama2-70b-discord-bot")
```
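
If you have already created a Discord bot token, you can pass it to `deploy_discord` directly. This is only a sketch: the `discord_bot_token` and `hf_token` keyword arguments are assumptions about the `gradio_client` version this demo targets, and the token values are placeholders.

```python
import gradio_client as grc

client = grc.Client("ysharma/Explore_llamav2_with_TGI")
client.deploy_discord(
    to_id="llama2-70b-discord-bot",
    discord_bot_token="YOUR_DISCORD_BOT_TOKEN",  # placeholder, assumed parameter
    hf_token="YOUR_HF_TOKEN",                    # placeholder, assumed parameter
)
```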

""")
    with gr.Accordion(label="Special Considerations", open=False):
        gr.Markdown("""
This Discord bot will use a FREE Inference Endpoint provided by Hugging Face.
Hugging Face does not commit to paying for this endpoint in perpetuity, so there is no guarantee your bot will always work.
If you would like more control over the infrastructure backing the Llama 2 70B model, consider deploying your own inference endpoint.
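
For example, here is a minimal sketch of pointing the deployment at your own copy of this Space instead of the shared one. The Space id and token below are placeholders, and `deploy_discord` is assumed to work the same way as in the snippet above.

```python
import gradio_client as grc

# Connect to your own duplicate of the Space (placeholder id) with your token.
client = grc.Client("your-username/your-llama2-space", hf_token="YOUR_HF_TOKEN")
client.deploy_discord(to_id="your-llama2-discord-bot")
```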
"""
    )
    gr.Markdown("""
Note: As a derivative work of [Llama-2-70b-chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) by Meta, this demo is governed by the original [license](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI/blob/main/LICENSE.txt) and [acceptable use policy](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI/blob/main/USE_POLICY.md).
""")


demo.queue(concurrency_count=70).launch()