# `gradio_awsbr_mmchatbot`

This component enables multi-modal input for the Anthropic Claude v3 suite of models available from Amazon Bedrock.

## Installation

```bash
pip install gradio_awsbr_mmchatbot
```

## Usage

```python
import gradio as gr
from gradio_awsbr_mmchatbot import MultiModalChatbot
from gradio.data_classes import FileData
from bedrock_utils import MultimodalInputHandler


# Call the Anthropic Claude v3 Sonnet model with multi-modal input via the Bedrock boto3 client.
async def get_response(text, file):
    # If a file was uploaded, attach it to the user message; otherwise send text only.
    if file:
        userMsg = {"text": text, "files": [{"file": FileData(path=file)}]}
    else:
        userMsg = {"text": text, "files": []}

    # Accumulate the streamed response from Claude v3 Sonnet, yielding the
    # partial transcript after each chunk.
    llmResponse = ""
    handler = MultimodalInputHandler(text, file)
    async for x in handler.handleInput():
        llmResponse += x
        yield [[userMsg, {"text": llmResponse, "files": []}]]

    # Yield the complete response object once more, overwriting the messages in
    # the chatbot. This is not strictly necessary, since the loop above already
    # yields the response incrementally, but it guarantees a final full update.
    response = {"text": llmResponse, "files": []}
    yield [[userMsg, response]]


# Define the Gradio interface using the Blocks structure.
with gr.Blocks() as demo:
    # Give it a title.
    gr.Markdown("## Gradio - MultiModal Chatbot")
    # Define the Chat tab.
    with gr.Tab(label="Chat"):
        with gr.Row():
            with gr.Column(scale=3):
                # Instantiate our MultiModalChatbot component.
                chatBot = MultiModalChatbot(
                    height=700, render_markdown=True, bubble_full_width=True
                )
        with gr.Row():
            with gr.Column(scale=3):
                # Textbox for the user message.
                msg = gr.Textbox(
                    placeholder="What is the meaning of life?", show_label=False
                )
            with gr.Column(scale=1):
                # File upload input.
                fileInput = gr.File(label="Upload Files")
            with gr.Column(scale=1):
                # Invoke 'get_response' when the submit button is clicked.
                gr.Button("Submit", variant="primary").click(
                    get_response, inputs=[msg, fileInput], outputs=chatBot
                )
        # Same handler, triggered by pressing Enter inside the Textbox
        # instead of clicking the submit button.
        msg.submit(get_response, inputs=[msg, fileInput], outputs=chatBot)

if __name__ == "__main__":
    demo.queue().launch()
```

## `MultiModalChatbot`

### Initialization
| name | type | default | description |
|---|---|---|---|
| `value` | `list[list[str \| tuple[str] \| tuple[str \| pathlib.Path, str] \| None]] \| Callable \| None` | `None` | Default value to show in the chatbot. If callable, the function will be called whenever the app loads to set the initial value of the component. |
| `label` | `str \| None` | `None` | The label for this component. Appears above the component and is also used as the header if there is a table of examples for this component. If `None` and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to. |
| `every` | `float \| None` | `None` | If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. The event can be accessed (e.g. to cancel it) via this component's `.load_event` attribute. |
| `show_label` | `bool \| None` | `None` | If `True`, will display the label. |
| `container` | `bool` | `True` | If `True`, will place the component in a container, providing some extra padding around the border. |
| `scale` | `int \| None` | `None` | Relative size compared to adjacent components. For example, if components A and B are in a `Row` and A has `scale=2` while B has `scale=1`, A will be twice as wide as B. Should be an integer. `scale` applies in `Row`s, and to top-level components in `Blocks` where `fill_height=True`. |
| `min_width` | `int` | `160` | Minimum pixel width; will wrap if there is not sufficient screen space to satisfy this value. If a certain `scale` value results in this component being narrower than `min_width`, the `min_width` parameter will be respected first. |
| `visible` | `bool` | `True` | If `False`, the component will be hidden. |
| `elem_id` | `str \| None` | `None` | An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. |
| `elem_classes` | `list[str] \| str \| None` | `None` | An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. |
| `render` | `bool` | `True` | If `False`, the component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later. |
| `height` | `int \| str \| None` | `None` | The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. |
| `latex_delimiters` | `list[dict[str, str \| bool]] \| None` | `None` | A list of dicts of the form `{"left": open delimiter (str), "right": close delimiter (str), "display": whether to display on a new line (bool)}` that will be used to render LaTeX expressions. If not provided, `latex_delimiters` is set to `[{"left": "$$", "right": "$$", "display": True}]`, so only expressions enclosed in `$$` delimiters will be rendered as LaTeX, on a new line. Pass in an empty list to disable LaTeX rendering. For more information, see the [KaTeX documentation](https://katex.org/docs/autorender.html). |
| `rtl` | `bool` | `False` | If `True`, sets the direction of the rendered text to right-to-left. Default is `False`, which renders text left-to-right. |
| `show_share_button` | `bool \| None` | `None` | If `True`, will show a share icon in the corner of the component that allows the user to share outputs to Hugging Face Spaces Discussions. If `False`, the icon does not appear. If `None` (default), the icon appears if this Gradio app is launched on Spaces, but not otherwise. |
| `show_copy_button` | `bool` | `False` | If `True`, will show a copy button for each chatbot message. |
| `avatar_images` | `tuple[str \| pathlib.Path \| None, str \| pathlib.Path \| None] \| None` | `None` | Tuple of two avatar image paths or URLs for the user and bot (in that order). Pass `None` for either image to skip it. Must be within the working directory of the Gradio app or an external URL. |
| `sanitize_html` | `bool` | `True` | If `False`, will disable HTML sanitization for chatbot messages. This is not recommended, as it can lead to security vulnerabilities. |
| `render_markdown` | `bool` | `True` | If `False`, will disable Markdown rendering for chatbot messages. |
| `bubble_full_width` | `bool` | `True` | If `False`, the chat bubble will fit to the content of the message. If `True` (default), the chat bubble will be the full width of the component. |
| `line_breaks` | `bool` | `True` | If `True` (default), will enable GitHub-flavored Markdown line breaks in chatbot messages. If `False`, single new lines will be ignored. Only applies if `render_markdown` is `True`. |
| `likeable` | `bool` | `False` | Whether the chat messages display a like or dislike button. Set automatically by the `.like` method, but has to be present in the signature for it to show up in the config. |
| `layout` | `"panel" \| "bubble" \| None` | `None` | If `"panel"`, will display the chatbot in an LLM-style layout. If `"bubble"`, will display the chatbot with message bubbles, with the user and bot messages on alternating sides. Defaults to `"bubble"`. |
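For reference, the chatbot's displayed history is a list of `[user, bot]` message pairs, and the usage example above passes each message as a dict with `text` and `files` keys. A minimal sketch of assembling such a history as plain data (the `make_turn` helper is ours for illustration, not part of the package; in the app itself, uploaded files are wrapped in `FileData` objects):

```python
def make_turn(user_text, bot_text, user_files=None):
    """Build one [user, bot] message pair in the shape the usage example yields.

    `user_files` is a list of file paths; in a running app each entry would be
    wrapped as {"file": FileData(path=...)} before being handed to the chatbot.
    """
    user_msg = {"text": user_text, "files": list(user_files or [])}
    bot_msg = {"text": bot_text, "files": []}
    return [user_msg, bot_msg]


# Two turns of conversation: one with an attached file, one text-only.
history = [
    make_turn("What is in this image?", "It shows a diagram.", ["chart.png"]),
    make_turn("Summarize it.", "A quarterly sales breakdown."),
]
```

Yielding a full `history` list like this from the event handler replaces the chatbot's contents, which is how the streaming loop in the usage example updates the display chunk by chunk.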
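As noted in the `latex_delimiters` row above, the default configuration only renders display-mode `$$...$$` expressions. A sketch of a delimiter list that also enables inline `$...$` math, assuming KaTeX auto-render semantics:

```python
# Default: only $$...$$ blocks render as display-mode LaTeX on their own line.
default_delims = [{"left": "$$", "right": "$$", "display": True}]

# Extended: also render inline $...$ expressions. The longer "$$" delimiter
# is listed first so it is not consumed as two single "$" delimiters.
delims = [
    {"left": "$$", "right": "$$", "display": True},
    {"left": "$", "right": "$", "display": False},
]
```

This list would then be passed as `MultiModalChatbot(latex_delimiters=delims)`; passing `latex_delimiters=[]` disables LaTeX rendering entirely.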