[
{
"id": 0,
"parent": null,
"path": "03_building-with-blocks/02_controlling-layout.md",
"level": 1,
"title": "Controlling Layout",
"content": "By default, Components in Blocks are arranged vertically. Let's take a look at how we can rearrange Components. Under the hood, this layout structure uses the [flexbox model of web development](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Basic_Concepts_of_Flexbox)."
},
{
"id": 1,
"parent": 0,
"path": "03_building-with-blocks/02_controlling-layout.md",
"level": 2,
"title": "Rows",
"content": "Elements within a `with gr.Row` clause will all be displayed horizontally. For example, to display two Buttons side by side:\n\n```python\nwith gr.Blocks() as demo:\n    with gr.Row():\n        btn1 = gr.Button(\"Button 1\")\n        btn2 = gr.Button(\"Button 2\")\n```\n\nYou can set every element in a Row to have the same height. Configure this with the `equal_height` argument.\n\n```python\nwith gr.Blocks() as demo:\n    with gr.Row(equal_height=True):\n        textbox = gr.Textbox()\n        btn2 = gr.Button(\"Button 2\")\n```\n\nThe widths of elements in a Row can be controlled via a combination of the `scale` and `min_width` arguments that are present in every Component.\n\n- `scale` is an integer that defines how an element will take up space in a Row. If scale is set to `0`, the element will not expand to take up space. If scale is set to `1` or greater, the element will expand. Multiple elements in a row will expand proportionally to their scale. Below, `btn2` will expand twice as much as `btn1`, while `btn0` will not expand at all:\n\n```python\nwith gr.Blocks() as demo:\n    with gr.Row():\n        btn0 = gr.Button(\"Button 0\", scale=0)\n        btn1 = gr.Button(\"Button 1\", scale=1)\n        btn2 = gr.Button(\"Button 2\", scale=2)\n```\n\n- `min_width` will set the minimum width the element will take. The Row will wrap if there isn't sufficient space to satisfy all `min_width` values.\n\nLearn more about Rows in the [docs](https://gradio.app/docs/row)."
},
{
"id": 2,
"parent": 0,
"path": "03_building-with-blocks/02_controlling-layout.md",
"level": 2,
"title": "Columns and Nesting",
"content": "Components within a Column will be placed vertically atop each other. Since the vertical layout is the default layout for Blocks apps anyway, to be useful, Columns are usually nested within Rows. For example:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        text1 = gr.Textbox(label=\"t1\")\n        slider2 = gr.Textbox(label=\"s2\")\n        drop3 = gr.Dropdown([\"a\", \"b\", \"c\"], label=\"d3\")\n    with gr.Row():\n        with gr.Column(scale=1, min_width=300):\n            text1 = gr.Textbox(label=\"prompt 1\")\n            text2 = gr.Textbox(label=\"prompt 2\")\n            inbtw = gr.Button(\"Between\")\n            text4 = gr.Textbox(label=\"prompt 1\")\n            text5 = gr.Textbox(label=\"prompt 2\")\n        with gr.Column(scale=2, min_width=300):\n            img1 = gr.Image(\"images/cheetah.jpg\")\n            btn = gr.Button(\"Go\")\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_rows_and_columns\n\nSee how the first column has two Textboxes arranged vertically. The second column has an Image and Button arranged vertically. Notice how the relative widths of the two columns are set by the `scale` parameter. The column with twice the `scale` value takes up twice the width.\n\nLearn more about Columns in the [docs](https://gradio.app/docs/column)."
},
{
"id": 3,
"parent": null,
"path": "03_building-with-blocks/02_controlling-layout.md",
"level": 1,
"title": "Fill Browser Height / Width",
"content": "To make an app take the full width of the browser by removing the side padding, use `gr.Blocks(fill_width=True)`.\n\nTo make top-level Components expand to take the full height of the browser, use `fill_height` and apply `scale` to the expanding Components.\n\n```python\nimport gradio as gr\n\nwith gr.Blocks(fill_height=True) as demo:\n    gr.Chatbot(scale=1)\n    gr.Textbox(scale=0)\n```"
},
{
"id": 4,
"parent": 3,
"path": "03_building-with-blocks/02_controlling-layout.md",
"level": 2,
"title": "Dimensions",
"content": "Some components support setting height and width. These parameters accept either a number (interpreted as pixels) or a string. Using a string allows the direct application of any CSS unit to the encapsulating Block element.\n\nBelow is an example illustrating the use of viewport width (vw):\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    im = gr.ImageEditor(width=\"50vw\")\n\ndemo.launch()\n```"
},
{
"id": 5,
"parent": 3,
"path": "03_building-with-blocks/02_controlling-layout.md",
"level": 2,
"title": "Tabs and Accordions",
"content": "You can also create Tabs using the `with gr.Tab('tab_name'):` clause. Any component created inside of a `with gr.Tab('tab_name'):` context appears in that tab. Consecutive Tab clauses are grouped together so that a single tab can be selected at a time, and only the components within that Tab's context are shown.\n\nFor example:\n\n```py\nimport numpy as np\nimport gradio as gr\n\ndef flip_text(x):\n    return x[::-1]\n\ndef flip_image(x):\n    return np.fliplr(x)\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"Flip text or image files using this demo.\")\n    with gr.Tab(\"Flip Text\"):\n        text_input = gr.Textbox()\n        text_output = gr.Textbox()\n        text_button = gr.Button(\"Flip\")\n    with gr.Tab(\"Flip Image\"):\n        with gr.Row():\n            image_input = gr.Image()\n            image_output = gr.Image()\n        image_button = gr.Button(\"Flip\")\n\n    with gr.Accordion(\"Open for More!\", open=False):\n        gr.Markdown(\"Look at me...\")\n        temp_slider = gr.Slider(\n            0, 1,\n            value=0.1,\n            step=0.1,\n            interactive=True,\n            label=\"Slide me\",\n        )\n\n    text_button.click(flip_text, inputs=text_input, outputs=text_output)\n    image_button.click(flip_image, inputs=image_input, outputs=image_output)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_blocks_flipper\n\nAlso note the `gr.Accordion('label')` in this example. The Accordion is a layout that can be toggled open or closed. Like `Tabs`, it is a layout element that can selectively hide or show content. Any components that are defined inside of a `with gr.Accordion('label'):` clause will be hidden or shown when the accordion's toggle icon is clicked.\n\nLearn more about [Tabs](https://gradio.app/docs/tab) and [Accordions](https://gradio.app/docs/accordion) in the docs."
},
{
"id": 6,
"parent": 3,
"path": "03_building-with-blocks/02_controlling-layout.md",
"level": 2,
"title": "Visibility",
"content": "Both Components and Layout elements have a `visible` argument that can be set initially and also updated later. Setting the `visible` argument on a `gr.Column` can be used to show or hide a set of Components.\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    name_box = gr.Textbox(label=\"Name\")\n    age_box = gr.Number(label=\"Age\", minimum=0, maximum=100)\n    symptoms_box = gr.CheckboxGroup([\"Cough\", \"Fever\", \"Runny Nose\"])\n    submit_btn = gr.Button(\"Submit\")\n\n    with gr.Column(visible=False) as output_col:\n        diagnosis_box = gr.Textbox(label=\"Diagnosis\")\n        patient_summary_box = gr.Textbox(label=\"Patient Summary\")\n\n    def submit(name, age, symptoms):\n        return {\n            submit_btn: gr.Button(visible=False),\n            output_col: gr.Column(visible=True),\n            diagnosis_box: \"covid\" if \"Cough\" in symptoms else \"flu\",\n            patient_summary_box: f\"{name}, {age} y/o\",\n        }\n\n    submit_btn.click(\n        submit,\n        [name_box, age_box, symptoms_box],\n        [submit_btn, diagnosis_box, patient_summary_box, output_col],\n    )\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_blocks_form"
},
{
"id": 7,
"parent": 3,
"path": "03_building-with-blocks/02_controlling-layout.md",
"level": 2,
"title": "Defining and Rendering Components Separately",
"content": "In some cases, you might want to define components before you actually render them in your UI. For instance, you might want to show an examples section using `gr.Examples` above the corresponding `gr.Textbox` input. Since `gr.Examples` requires the input component object as a parameter, you will need to first define the input component, but then render it later, after you have defined the `gr.Examples` object.\n\nThe solution to this is to define the `gr.Textbox` outside of the `gr.Blocks()` scope and use the component's `.render()` method wherever you'd like it placed in the UI.\n\nHere's a full code example:\n\n```python\ninput_textbox = gr.Textbox()\n\nwith gr.Blocks() as demo:\n    gr.Examples([\"hello\", \"bonjour\", \"merhaba\"], input_textbox)\n    input_textbox.render()\n```"
},
{
"id": 8,
"parent": null,
"path": "03_building-with-blocks/03_state-in-blocks.md",
"level": 1,
"title": "State in Blocks",
"content": "We covered [State in Interfaces](https://gradio.app/interface-state); this guide takes a look at state in Blocks, which works mostly the same."
},
{
"id": 9,
"parent": 8,
"path": "03_building-with-blocks/03_state-in-blocks.md",
"level": 2,
"title": "Global State",
"content": "Global state in Blocks works the same as in Interface. Any variable created outside a function call is a reference shared between all users."
},
{
"id": 10,
"parent": 8,
"path": "03_building-with-blocks/03_state-in-blocks.md",
"level": 2,
"title": "Session State",
"content": "Gradio supports session **state**, where data persists across multiple submits within a page session, in Blocks apps as well. To reiterate, session data is _not_ shared between different users of your model. To store data in a session state, you need to do three things:\n\n1. Create a `gr.State()` object. If there is a default value for this stateful object, pass it into the constructor.\n2. In the event listener, put the `State` object as an input and output as needed.\n3. In the event listener function, add the variable to the input parameters and the return value.\n\nLet's take a look at a simple example. We have a simple checkout app below where you add items to a cart. You can also see the size of the cart.\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    cart = gr.State([])\n    items_to_add = gr.CheckboxGroup([\"Cereal\", \"Milk\", \"Orange Juice\", \"Water\"])\n\n    def add_items(new_items, previous_cart):\n        cart = previous_cart + new_items\n        return cart\n\n    gr.Button(\"Add Items\").click(add_items, [items_to_add, cart], cart)\n\n    cart_size = gr.Number(label=\"Cart Size\")\n    cart.change(lambda cart: len(cart), cart, cart_size)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_simple_state\n\nNotice how we do this with state:\n\n1. We store the cart items in a `gr.State()` object, initialized here to be an empty list.\n2. When adding items to the cart, the event listener uses the cart as both input and output - it returns the updated cart with all the items inside.\n3. We can attach a `.change` listener to cart, which uses the state variable as input as well.\n\nYou can think of `gr.State` as an invisible Component that can store any kind of value. Here, `cart` is not visible in the frontend but is used for calculations.\n\nThe `.change` listener for a state variable triggers after any event listener changes the value of a state variable. If the state variable holds a sequence (like a list, set, or dict), a change is triggered if any of the elements inside change. If it holds an object or primitive, a change is triggered if the **hash** of the value changes. So if you define a custom class and create a `gr.State` variable that is an instance of that class, make sure that the class includes a sensible `__hash__` implementation.\n\nThe value of a session State variable is cleared when the user refreshes the page. The value is stored in the app backend for 60 minutes after the user closes the tab (this can be configured by the `delete_cache` parameter in `gr.Blocks`).\n\nLearn more about `State` in the [docs](https://gradio.app/docs/gradio/state)."
},
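To make the `__hash__` advice for session state concrete, here is a minimal sketch (the `Cart` class is a hypothetical example, not part of Gradio): hashing the *contents* rather than the object identity means the hash changes whenever the items change, which is what content-based change detection needs.

```python
class Cart:
    """Hypothetical value object suitable for storing in gr.State."""

    def __init__(self, items=None):
        self.items = list(items or [])

    def __hash__(self):
        # Hash the contents, not the object identity, so that the hash
        # changes whenever the cart's items change.
        return hash(tuple(self.items))

    def __eq__(self, other):
        return isinstance(other, Cart) and self.items == other.items
```

With Python's default identity-based `__hash__`, mutating `self.items` would never change the hash, so a content change in the state would go undetected.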
{
"id": 11,
"parent": 8,
"path": "03_building-with-blocks/03_state-in-blocks.md",
"level": 2,
"title": "Local State",
"content": "Gradio also supports **local state**, where data persists in the browser's localStorage even after the page is refreshed or closed. This is useful for storing user preferences, settings, API keys, or other data that should persist across sessions. To use local state:\n\n1. Create a `gr.BrowserState()` object. You can optionally provide an initial default value and a key to identify the data in the browser's localStorage.\n2. Use it like a regular `gr.State` component in event listeners as inputs and outputs.\n\nHere's a simple example that saves a user's username and password across sessions:\n\n```py\nimport random\nimport string\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"Your Username and Password will get saved in the browser's local storage. \"\n                \"If you refresh the page, the values will be retained.\")\n    username = gr.Textbox(label=\"Username\")\n    password = gr.Textbox(label=\"Password\", type=\"password\")\n    btn = gr.Button(\"Generate Randomly\")\n    local_storage = gr.BrowserState([\"\", \"\"])\n\n    @btn.click(outputs=[username, password])\n    def generate_randomly():\n        u = \"\".join(random.choices(string.ascii_letters + string.digits, k=10))\n        p = \"\".join(random.choices(string.ascii_letters + string.digits, k=10))\n        return u, p\n\n    @demo.load(inputs=[local_storage], outputs=[username, password])\n    def load_from_local_storage(saved_values):\n        print(\"loading from local storage\", saved_values)\n        return saved_values[0], saved_values[1]\n\n    @gr.on([username.change, password.change], inputs=[username, password], outputs=[local_storage])\n    def save_to_local_storage(username, password):\n        return [username, password]\n\ndemo.launch()\n```"
},
{
"id": 12,
"parent": null,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 1,
"title": "Blocks and Event Listeners",
"content": "We briefly described the Blocks class in the [Quickstart](/main/guides/quickstart#custom-demos-with-gr-blocks) as a way to build custom demos. Let's dive deeper."
},
{
"id": 13,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Blocks Structure",
"content": "Take a look at the demo below.\n\n```py\nimport gradio as gr\n\n\ndef greet(name):\n    return \"Hello \" + name + \"!\"\n\n\nwith gr.Blocks() as demo:\n    name = gr.Textbox(label=\"Name\")\n    output = gr.Textbox(label=\"Output Box\")\n    greet_btn = gr.Button(\"Greet\")\n    greet_btn.click(fn=greet, inputs=name, outputs=output, api_name=\"greet\")\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_hello_blocks\n\n- First, note the `with gr.Blocks() as demo:` clause. The Blocks app code will be contained within this clause.\n- Next come the Components. These are the same Components used in `Interface`. However, instead of being passed to some constructor, Components are automatically added to the Blocks as they are created within the `with` clause.\n- Finally, the `click()` event listener. Event listeners define the data flow within the app. In the example above, the listener ties the two Textboxes together. The Textbox `name` acts as the input and Textbox `output` acts as the output to the `greet` method. This dataflow is triggered when the Button `greet_btn` is clicked. Like an Interface, an event listener can take multiple inputs or outputs.\n\nYou can also attach event listeners using decorators - skip the `fn` argument and assign `inputs` and `outputs` directly:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    name = gr.Textbox(label=\"Name\")\n    output = gr.Textbox(label=\"Output Box\")\n    greet_btn = gr.Button(\"Greet\")\n\n    @greet_btn.click(inputs=name, outputs=output)\n    def greet(name):\n        return \"Hello \" + name + \"!\"\n\nif __name__ == \"__main__\":\n    demo.launch()\n```"
},
{
"id": 14,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Event Listeners and Interactivity",
"content": "In the example above, you'll notice that you are able to edit Textbox `name`, but not Textbox `output`. This is because any Component that acts as an input to an event listener is made interactive. However, since Textbox `output` acts only as an output, Gradio determines that it should not be made interactive. You can override the default behavior and directly configure the interactivity of a Component with the boolean `interactive` keyword argument, e.g. `gr.Textbox(interactive=True)`.\n\n```python\noutput = gr.Textbox(label=\"Output\", interactive=True)\n```\n\n_Note_: What happens if a Gradio component is neither an input nor an output? If a component is constructed with a default value, then it is presumed to be displaying content and is rendered non-interactive. Otherwise, it is rendered interactive. Again, this behavior can be overridden by specifying a value for the `interactive` argument."
},
{
"id": 15,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Types of Event Listeners",
"content": "Take a look at the demo below:\n\n```py\nimport gradio as gr\n\ndef welcome(name):\n    return f\"Welcome to Gradio, {name}!\"\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\n        \"\"\"\n    # Hello World!\n    Start typing below to see the output.\n    \"\"\")\n    inp = gr.Textbox(placeholder=\"What is your name?\")\n    out = gr.Textbox()\n    inp.change(welcome, inp, out)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_blocks_hello\n\nInstead of being triggered by a click, the `welcome` function is triggered by typing in the Textbox `inp`. This is due to the `change()` event listener. Different Components support different event listeners. For example, the `Video` Component supports a `play()` event listener, triggered when a user presses play. See the [Docs](http://gradio.app/docs#components) for the event listeners for each Component."
},
{
"id": 16,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Multiple Data Flows",
"content": "A Blocks app is not limited to a single data flow the way Interfaces are. Take a look at the demo below:\n\n```py\nimport gradio as gr\n\ndef increase(num):\n    return num + 1\n\nwith gr.Blocks() as demo:\n    a = gr.Number(label=\"a\")\n    b = gr.Number(label=\"b\")\n    atob = gr.Button(\"a > b\")\n    btoa = gr.Button(\"b > a\")\n    atob.click(increase, a, b)\n    btoa.click(increase, b, a)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_reversible_flow\n\nNote that `a` can act as input to `b`, and vice versa! As your apps get more complex, you will have many data flows connecting various Components.\n\nHere's an example of a \"multi-step\" demo, where the output of one model (a speech-to-text model) gets fed into the next model (a sentiment classifier).\n\n```py\nfrom transformers import pipeline\n\nimport gradio as gr\n\nasr = pipeline(\"automatic-speech-recognition\", \"facebook/wav2vec2-base-960h\")\nclassifier = pipeline(\"text-classification\")\n\ndef speech_to_text(speech):\n    text = asr(speech)[\"text\"]  # type: ignore\n    return text\n\ndef text_to_sentiment(text):\n    return classifier(text)[0][\"label\"]  # type: ignore\n\ndemo = gr.Blocks()\n\nwith demo:\n    audio_file = gr.Audio(type=\"filepath\")\n    text = gr.Textbox()\n    label = gr.Label()\n\n    b1 = gr.Button(\"Recognize Speech\")\n    b2 = gr.Button(\"Classify Sentiment\")\n\n    b1.click(speech_to_text, inputs=audio_file, outputs=text)\n    b2.click(text_to_sentiment, inputs=text, outputs=label)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_blocks_speech_text_sentiment"
},
{
"id": 17,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Function Input List vs Dict",
"content": "The event listeners you've seen so far have a single input component. If you'd like to have multiple input components pass data to the function, you have two options for how the function can accept input component values:\n\n1. as a list of arguments, or\n2. as a single dictionary of values, keyed by the component\n\nLet's see an example of each:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    a = gr.Number(label=\"a\")\n    b = gr.Number(label=\"b\")\n    with gr.Row():\n        add_btn = gr.Button(\"Add\")\n        sub_btn = gr.Button(\"Subtract\")\n    c = gr.Number(label=\"sum\")\n\n    def add(num1, num2):\n        return num1 + num2\n\n    add_btn.click(add, inputs=[a, b], outputs=c)\n\n    def sub(data):\n        return data[a] - data[b]\n\n    sub_btn.click(sub, inputs={a, b}, outputs=c)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nBoth `add()` and `sub()` take `a` and `b` as inputs. However, the syntax is different between these listeners.\n\n1. To the `add_btn` listener, we pass the inputs as a list. The function `add()` takes each of these inputs as arguments. The value of `a` maps to the argument `num1`, and the value of `b` maps to the argument `num2`.\n2. To the `sub_btn` listener, we pass the inputs as a set (note the curly brackets!). The function `sub()` takes a single dictionary argument `data`, where the keys are the input components, and the values are the values of those components.\n\nWhich syntax you use is a matter of preference! For functions with many input components, option 2 may be easier to manage.\n\n$demo_calculator_list_and_dict"
},
{
"id": 18,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Function Return List vs Dict",
"content": "Similarly, you may return values for multiple output components either as:\n\n1. a list of values, or\n2. a dictionary keyed by the component\n\nLet's first see an example of (1), where we set the values of two output components by returning two values:\n\n```python\nwith gr.Blocks() as demo:\n    food_box = gr.Number(value=10, label=\"Food Count\")\n    status_box = gr.Textbox()\n\n    def eat(food):\n        if food > 0:\n            return food - 1, \"full\"\n        else:\n            return 0, \"hungry\"\n\n    gr.Button(\"Eat\").click(\n        fn=eat,\n        inputs=food_box,\n        outputs=[food_box, status_box]\n    )\n```\n\nAbove, each return statement returns two values corresponding to `food_box` and `status_box`, respectively.\n\nInstead of returning a list of values corresponding to each output component in order, you can also return a dictionary, with the key corresponding to the output component and the value as the new value. This also allows you to skip updating some output components.\n\n```python\nwith gr.Blocks() as demo:\n    food_box = gr.Number(value=10, label=\"Food Count\")\n    status_box = gr.Textbox()\n\n    def eat(food):\n        if food > 0:\n            return {food_box: food - 1, status_box: \"full\"}\n        else:\n            return {status_box: \"hungry\"}\n\n    gr.Button(\"Eat\").click(\n        fn=eat,\n        inputs=food_box,\n        outputs=[food_box, status_box]\n    )\n```\n\nNotice how when there is no food, we only update the `status_box` element. We skipped updating the `food_box` component.\n\nDictionary returns are helpful when an event listener affects many components on return, or conditionally affects some outputs and not others.\n\nKeep in mind that with dictionary returns, we still need to specify all possible outputs in the event listener."
},
{
"id": 19,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Updating Component Configurations",
"content": "The return value of an event listener function is usually the updated value of the corresponding output Component. Sometimes we want to update the configuration of the Component as well, such as the visibility. In this case, we return a new Component, setting the properties we want to change.\n\n```py\nimport gradio as gr\n\ndef change_textbox(choice):\n    if choice == \"short\":\n        return gr.Textbox(lines=2, visible=True)\n    elif choice == \"long\":\n        return gr.Textbox(lines=8, visible=True, value=\"Lorem ipsum dolor sit amet\")\n    else:\n        return gr.Textbox(visible=False)\n\nwith gr.Blocks() as demo:\n    radio = gr.Radio(\n        [\"short\", \"long\", \"none\"], label=\"What kind of essay would you like to write?\"\n    )\n    text = gr.Textbox(lines=2, interactive=True, show_copy_button=True)\n    radio.change(fn=change_textbox, inputs=radio, outputs=text)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_blocks_essay_simple\n\nSee how we can configure the Textbox itself by returning a new `gr.Textbox()`. The `value=` argument can still be used to update the value along with Component configuration. Any arguments we do not set will preserve their previous values."
},
{
"id": 20,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Not Changing a Component's Value",
"content": "In some cases, you may want to leave a component's value unchanged. Gradio includes a special function, `gr.skip()`, whose result can be returned from your function. Returning `gr.skip()` will keep the output component's value (or components' values) as is. Let us illustrate with an example:\n\n```py\nimport random\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        clear_button = gr.Button(\"Clear\")\n        skip_button = gr.Button(\"Skip\")\n        random_button = gr.Button(\"Random\")\n    numbers = [gr.Number(), gr.Number()]\n\n    clear_button.click(lambda: (None, None), outputs=numbers)\n    skip_button.click(lambda: [gr.skip(), gr.skip()], outputs=numbers)\n    random_button.click(lambda: (random.randint(0, 100), random.randint(0, 100)), outputs=numbers)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_skip\n\nNote the difference between returning `None` (which generally resets a component's value to an empty state) versus returning `gr.skip()`, which leaves the component value unchanged.\n\nTip: if you have multiple output components, and you want to leave all of their values unchanged, you can just return a single `gr.skip()` instead of returning a tuple of skips, one for each element."
},
{
"id": 21,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Running Events Consecutively",
"content": "You can also run events consecutively by using the `then` method of an event listener. This will run an event after the previous event has finished running. This is useful for running events that update components in multiple steps.\n\nFor example, in the chatbot example below, we first update the chatbot with the user message immediately, and then update the chatbot with the computer response after a simulated delay.\n\n```py\nimport gradio as gr\nimport random\nimport time\n\nwith gr.Blocks() as demo:\n    chatbot = gr.Chatbot()\n    msg = gr.Textbox()\n    clear = gr.Button(\"Clear\")\n\n    def user(user_message, history):\n        return \"\", history + [[user_message, None]]\n\n    def bot(history):\n        bot_message = random.choice([\"How are you?\", \"I love you\", \"I'm very hungry\"])\n        time.sleep(2)\n        history[-1][1] = bot_message\n        return history\n\n    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(\n        bot, chatbot, chatbot\n    )\n    clear.click(lambda: None, None, chatbot, queue=False)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_chatbot_consecutive\n\nThe `.then()` method of an event listener executes the subsequent event regardless of whether the previous event raised any errors. If you'd like to only run subsequent events if the previous event executed successfully, use the `.success()` method, which takes the same arguments as `.then()`."
},
{
"id": 22,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Binding Multiple Triggers to a Function",
"content": "Often, you may want to bind multiple triggers to the same function. For example, you may want to allow a user to click a submit button, or press enter to submit a form. You can do this using the `gr.on` method and passing a list of triggers to the `triggers` argument.\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    name = gr.Textbox(label=\"Name\")\n    output = gr.Textbox(label=\"Output Box\")\n    greet_btn = gr.Button(\"Greet\")\n    trigger = gr.Textbox(label=\"Trigger Box\")\n\n    def greet(name, evt_data: gr.EventData):\n        return \"Hello \" + name + \"!\", evt_data.target.__class__.__name__\n\n    def clear_name(evt_data: gr.EventData):\n        return \"\"\n\n    gr.on(\n        triggers=[name.submit, greet_btn.click],\n        fn=greet,\n        inputs=name,\n        outputs=[output, trigger],\n    ).then(clear_name, outputs=[name])\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_on_listener_basic\n\nYou can use decorator syntax as well:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    name = gr.Textbox(label=\"Name\")\n    output = gr.Textbox(label=\"Output Box\")\n    greet_btn = gr.Button(\"Greet\")\n\n    @gr.on(triggers=[name.submit, greet_btn.click], inputs=name, outputs=output)\n    def greet(name):\n        return \"Hello \" + name + \"!\"\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nYou can use `gr.on` to create \"live\" events by binding to the `change` event of components that implement it. If you do not specify any triggers, the function will automatically bind to the `change` events of all input components that have one (for example, `gr.Textbox` has a `change` event whereas `gr.Button` does not).\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    with gr.Row():\n        num1 = gr.Slider(1, 10)\n        num2 = gr.Slider(1, 10)\n        num3 = gr.Slider(1, 10)\n    output = gr.Number(label=\"Sum\")\n\n    @gr.on(inputs=[num1, num2, num3], outputs=output)\n    def sum(a, b, c):\n        return a + b + c\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n$demo_on_listener_live\n\nYou can follow `gr.on` with `.then`, just like any regular event listener. This handy method should save you from having to write a lot of repetitive code!"
},
{
"id": 23,
"parent": 12,
"path": "03_building-with-blocks/01_blocks-and-event-listeners.md",
"level": 2,
"title": "Binding a Component Value Directly to a Function of Other Components",
"content": "If you want to set a Component's value to always be a function of the value of other Components, you can use the following shorthand:\n\n```python\nwith gr.Blocks() as demo:\n    num1 = gr.Number()\n    num2 = gr.Number()\n    product = gr.Number(lambda a, b: a * b, inputs=[num1, num2])\n```\n\nThis is functionally the same as:\n\n```python\nwith gr.Blocks() as demo:\n    num1 = gr.Number()\n    num2 = gr.Number()\n    product = gr.Number()\n\n    gr.on(\n        [num1.change, num2.change, demo.load],\n        lambda a, b: a * b,\n        inputs=[num1, num2],\n        outputs=product\n    )\n```"
},
{
"id": 24,
"parent": null,
"path": "03_building-with-blocks/07_using-blocks-like-functions.md",
"level": 1,
"title": "Using Gradio Blocks Like Functions",
"content": "Tags: TRANSLATION, HUB, SPACES\n\n**Prerequisite**: This Guide builds on the Blocks Introduction. Make sure to [read that guide first](https://gradio.app/blocks-and-event-listeners)."
},
{
"id": 25,
"parent": 24,
"path": "03_building-with-blocks/07_using-blocks-like-functions.md",
"level": 2,
"title": "Introduction",
"content": "Did you know that apart from being a full-stack machine learning demo, a Gradio Blocks app is also a regular-old python function!?\n\nThis means that if you have a gradio Blocks (or Interface) app called `demo`, you can use `demo` like you would any python function.\n\nSo doing something like `output = demo(\"Hello\", \"friend\")` will run the first event defined in `demo` on the inputs \"Hello\" and \"friend\" and store the result in the variable `output`.\n\nIf I put you to sleep 🥱, please bear with me! By using apps like functions, you can seamlessly compose Gradio apps.\nThe following section will show how."
},
{
"id": 26,
"parent": 24,
"path": "03_building-with-blocks/07_using-blocks-like-functions.md",
"level": 2,
"title": "Treating Blocks like functions",
"content": "Let's say we have the following demo that translates english text to german text.\n\n```py\nimport gradio as gr\n\nfrom transformers import pipeline\n\npipe = pipeline(\"translation\", model=\"t5-base\")\n\ndef translate(text):\n return pipe(text)[0][\"translation_text\"] # type: ignore\n\nwith gr.Blocks() as demo:\n with gr.Row():\n with gr.Column():\n english = gr.Textbox(label=\"English text\")\n translate_btn = gr.Button(value=\"Translate\")\n with gr.Column():\n german = gr.Textbox(label=\"German Text\")\n\n translate_btn.click(translate, inputs=english, outputs=german, api_name=\"translate-to-german\")\n examples = gr.Examples(examples=[\"I went to the supermarket yesterday.\", \"Helen is a good swimmer.\"],\n inputs=[english])\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n\nI already went ahead and hosted it in Hugging Face spaces at [gradio/english_translator](https://huggingface.co/spaces/gradio/english_translator).\n\nYou can see the demo below as well:\n\n$demo_english_translator\n\nNow, let's say you have an app that generates english text, but you wanted to additionally generate german text.\n\nYou could either:\n\n1. Copy the source code of my english-to-german translation and paste it in your app.\n\n2. 
Load my english-to-german translation in your app and treat it like a normal python function.\n\nOption 1 technically always works, but it often introduces unwanted complexity.\n\nOption 2 lets you borrow the functionality you want without tightly coupling our apps.\n\nAll you have to do is call the `Blocks.load` class method in your source file.\nAfter that, you can use my translation app like a regular python function!\n\nThe following code snippet and demo shows how to use `Blocks.load`.\n\nNote that the variable `english_translator` is my english to german app, but its used in `generate_text` like a regular function.\n\n```py\nimport gradio as gr\n\nfrom transformers import pipeline\n\nenglish_translator = gr.load(name=\"spaces/gradio/english_translator\")\nenglish_generator = pipeline(\"text-generation\", model=\"distilgpt2\")\n\ndef generate_text(text):\n english_text = english_generator(text)[0][\"generated_text\"] # type: ignore\n german_text = english_translator(english_text)\n return english_text, german_text\n\nwith gr.Blocks() as demo:\n with gr.Row():\n with gr.Column():\n seed = gr.Text(label=\"Input Phrase\")\n with gr.Column():\n english = gr.Text(label=\"Generated English Text\")\n german = gr.Text(label=\"Generated German Text\")\n btn = gr.Button(\"Generate\")\n btn.click(generate_text, inputs=[seed], outputs=[english, german])\n gr.Examples([\"My name is Clara and I am\"], inputs=[seed])\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n\n$demo_generate_english_german"
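Under the hood, this composition does not depend on anything Gradio-specific: once an app can be called like a function, chaining it after another model is plain function composition. Here is a minimal sketch of the same `generate_text` flow, using hypothetical pure-Python stand-ins for the generator and the loaded translator app:

```python
# Sketch of the composition pattern above, with hypothetical stand-ins:
# in the real demo, english_generator is a transformers pipeline and
# english_translator is a Space loaded via gr.load(...).

def english_generator(text):
    # placeholder for the text-generation model
    return text + " going to the park."

def english_translator(text):
    # placeholder for the loaded translation app
    return "<de> " + text

def generate_text(text):
    english_text = english_generator(text)
    german_text = english_translator(english_text)
    return english_text, german_text

english, german = generate_text("My dog is")
print(english)  # My dog is going to the park.
print(german)   # <de> My dog is going to the park.
```

Swapping the stand-ins for the real pipeline and the loaded Space changes nothing about the structure of `generate_text`.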
},
{
"id": 27,
"parent": 24,
"path": "03_building-with-blocks/07_using-blocks-like-functions.md",
"level": 2,
"title": "How to control which function in the app to use",
"content": "If the app you are loading defines more than one function, you can specify which function to use\nwith the `fn_index` and `api_name` parameters.\n\nIn the code for our english to german demo, you'll see the following line:\n\n```python\ntranslate_btn.click(translate, inputs=english, outputs=german, api_name=\"translate-to-german\")\n```\n\nThe `api_name` gives this function a unique name in our app. You can use this name to tell gradio which\nfunction in the upstream space you want to use:\n\n```python\nenglish_generator(text, api_name=\"translate-to-german\")[0][\"generated_text\"]\n```\n\nYou can also use the `fn_index` parameter.\nImagine my app also defined an english to spanish translation function.\nIn order to use it in our text generation app, we would use the following code:\n\n```python\nenglish_generator(text, fn_index=1)[0][\"generated_text\"]\n```\n\nFunctions in gradio spaces are zero-indexed, so since the spanish translator would be the second function in my space,\nyou would use index 1."
},
{
"id": 28,
"parent": 24,
"path": "03_building-with-blocks/07_using-blocks-like-functions.md",
"level": 2,
"title": "Parting Remarks",
"content": "We showed how treating a Blocks app like a regular python helps you compose functionality across different apps.\nAny Blocks app can be treated like a function, but a powerful pattern is to `load` an app hosted on\n[Hugging Face Spaces](https://huggingface.co/spaces) prior to treating it like a function in your own app.\nYou can also load models hosted on the [Hugging Face Model Hub](https://huggingface.co/models) - see the [Using Hugging Face Integrations](/using_hugging_face_integrations) guide for an example.\n\nHappy building! ⚒️"
},
{
"id": 29,
"parent": null,
"path": "03_building-with-blocks/06_custom-CSS-and-JS.md",
"level": 1,
"title": "Customizing your demo with CSS and Javascript",
"content": "Gradio allows you to customize your demo in several ways. You can customize the layout of your demo, add custom HTML, and add custom theming as well. This tutorial will go beyond that and walk you through how to add custom CSS and JavaScript code to your demo in order to add custom styling, animations, custom UI functionality, analytics, and more."
},
{
"id": 30,
"parent": 29,
"path": "03_building-with-blocks/06_custom-CSS-and-JS.md",
"level": 2,
"title": "Adding custom CSS to your demo",
"content": "Gradio themes are the easiest way to customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Blocks` constructor. For example:\n\n```python\nwith gr.Blocks(theme=gr.themes.Glass()):\n ...\n```\n\nGradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. You can extend these themes or create your own themes from scratch - see the [Theming guide](/guides/theming-guide) for more details.\n\nFor additional styling ability, you can pass any CSS to your app using the `css=` kwarg. You can either the filepath to a CSS file, or a string of CSS code.\n\n**Warning**: The use of query selectors in custom JS and CSS is _not_ guaranteed to work across Gradio versions that bind to Gradio's own HTML elements as the Gradio HTML DOM may change. We recommend using query selectors sparingly.\n\nThe base class for the Gradio app is `gradio-container`, so here's an example that changes the background color of the Gradio app:\n\n```python\nwith gr.Blocks(css=\".gradio-container {background-color: red}\") as demo:\n ...\n```\n\nIf you'd like to reference external files in your css, preface the file path (which can be a relative or absolute path) with `\"file=\"`, for example:\n\n```python\nwith gr.Blocks(css=\".gradio-container {background: url('file=clouds.jpg')}\") as demo:\n ...\n```\n\nNote: By default, files in the host machine are not accessible to users running the Gradio app. As a result, you should make sure that any referenced files (such as `clouds.jpg` here) are either URLs or allowed via the `allow_list` parameter in `launch()`. Read more in our [section on Security and File Access](/main/guides/file-access)."
},
{
"id": 31,
"parent": 29,
"path": "03_building-with-blocks/06_custom-CSS-and-JS.md",
"level": 2,
"title": "The `elem_id` and `elem_classes` Arguments",
"content": "You can `elem_id` to add an HTML element `id` to any component, and `elem_classes` to add a class or list of classes. This will allow you to select elements more easily with CSS. This approach is also more likely to be stable across Gradio versions as built-in class names or ids may change (however, as mentioned in the warning above, we cannot guarantee complete compatibility between Gradio versions if you use custom CSS as the DOM elements may themselves change).\n\n```python\ncss = \"\"\"\n#warning {background-color: #FFCCCB}\n.feedback textarea {font-size: 24px !important}\n\"\"\"\n\nwith gr.Blocks(css=css) as demo:\n box1 = gr.Textbox(value=\"Good Job\", elem_classes=\"feedback\")\n box2 = gr.Textbox(value=\"Failure\", elem_id=\"warning\", elem_classes=\"feedback\")\n```\n\nThe CSS `#warning` ruleset will only target the second Textbox, while the `.feedback` ruleset will target both. Note that when targeting classes, you might need to put the `!important` selector to override the default Gradio styles."
},
{
"id": 32,
"parent": 29,
"path": "03_building-with-blocks/06_custom-CSS-and-JS.md",
"level": 2,
"title": "Adding custom JavaScript to your demo",
"content": "There are 3 ways to add javascript code to your Gradio demo:\n\n1. You can add JavaScript code as a string or as a filepath to the `js` parameter of the `Blocks` or `Interface` initializer. This will run the JavaScript code when the demo is first loaded.\n\nBelow is an example of adding custom js to show an animated welcome message when the demo first loads.\n\n```py\nimport gradio as gr\n\ndef welcome(name):\n return f\"Welcome to Gradio, {name}!\"\n\njs = \"\"\"\nfunction createGradioAnimation() {\n var container = document.createElement('div');\n container.id = 'gradio-animation';\n container.style.fontSize = '2em';\n container.style.fontWeight = 'bold';\n container.style.textAlign = 'center';\n container.style.marginBottom = '20px';\n\n var text = 'Welcome to Gradio!';\n for (var i = 0; i < text.length; i++) {\n (function(i){\n setTimeout(function(){\n var letter = document.createElement('span');\n letter.style.opacity = '0';\n letter.style.transition = 'opacity 0.5s';\n letter.innerText = text[i];\n\n container.appendChild(letter);\n\n setTimeout(function() {\n letter.style.opacity = '1';\n }, 50);\n }, i * 250);\n })(i);\n }\n\n var gradioContainer = document.querySelector('.gradio-container');\n gradioContainer.insertBefore(container, gradioContainer.firstChild);\n\n return 'Animation created';\n}\n\"\"\"\nwith gr.Blocks(js=js) as demo:\n inp = gr.Textbox(placeholder=\"What is your name?\")\n out = gr.Textbox()\n inp.change(welcome, inp, out)\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n$demo_blocks_js_load\n\nNote: You can also supply your custom js code as a file path. For example, if you have a file called `custom.js` in the same directory as your Python script, you can add it to your demo like so: `with gr.Blocks(js=\"custom.js\") as demo:`. Same goes for `Interface` (ex: `gr.Interface(..., js=\"custom.js\")`).\n\n2. 
When using `Blocks` and event listeners, events have a `js` argument that can take a JavaScript function as a string and treat it just like a Python event listener function. You can pass both a JavaScript function and a Python function (in which case the JavaScript function is run first) or only Javascript (and set the Python `fn` to `None`). Take a look at the code below:\n \n```py\nimport gradio as gr\n\nblocks = gr.Blocks()\n\nwith blocks as demo:\n subject = gr.Textbox(placeholder=\"subject\")\n verb = gr.Radio([\"ate\", \"loved\", \"hated\"])\n object = gr.Textbox(placeholder=\"object\")\n\n with gr.Row():\n btn = gr.Button(\"Create sentence.\")\n reverse_btn = gr.Button(\"Reverse sentence.\")\n foo_bar_btn = gr.Button(\"Append foo\")\n reverse_then_to_the_server_btn = gr.Button(\n \"Reverse sentence and send to server.\"\n )\n\n def sentence_maker(w1, w2, w3):\n return f\"{w1} {w2} {w3}\"\n\n output1 = gr.Textbox(label=\"output 1\")\n output2 = gr.Textbox(label=\"verb\")\n output3 = gr.Textbox(label=\"verb reversed\")\n output4 = gr.Textbox(label=\"front end process and then send to backend\")\n\n btn.click(sentence_maker, [subject, verb, object], output1)\n reverse_btn.click(\n None, [subject, verb, object], output2, js=\"(s, v, o) => o + ' ' + v + ' ' + s\"\n )\n verb.change(lambda x: x, verb, output3, js=\"(x) => [...x].reverse().join('')\")\n foo_bar_btn.click(None, [], subject, js=\"(x) => x + ' foo'\")\n\n reverse_then_to_the_server_btn.click(\n sentence_maker,\n [subject, verb, object],\n output4,\n js=\"(s, v, o) => [s, v, o].map(x => [...x].reverse().join(''))\",\n )\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n$demo_blocks_js_methods\n\n3. Lastly, you can add JavaScript code to the `head` param of the `Blocks` initializer. This will add the code to the head of the HTML document. 
For example, you can add Google Analytics to your demo like so (using the standard gtag snippet; replace the placeholder tracking ID with your own):\n\n```python\ngoogle_analytics_tracking_id = \"G-XXXXXXXXXX\"  # placeholder - use your own tracking ID\n\nhead = f\"\"\"\n<script async src=\"https://www.googletagmanager.com/gtag/js?id={google_analytics_tracking_id}\"></script>\n<script>\n  window.dataLayer = window.dataLayer || [];\n  function gtag(){{dataLayer.push(arguments);}}\n  gtag('js', new Date());\n  gtag('config', '{google_analytics_tracking_id}');\n</script>\n\"\"\"\n\nwith gr.Blocks(head=head) as demo:\n    ...  # demo code\n```\n\nThe `head` parameter accepts any HTML tags you would normally insert into the `<head>` of a page. For example, you can also include `<meta>` tags in `head`.\n\nNote that injecting custom HTML can affect browser behavior and compatibility (e.g. keyboard shortcuts). You should test your interface across different browsers and be mindful of how scripts may interact with browser defaults.\nHere's an example where pressing `Shift + s` triggers the `click` event of a specific `Button` component if the browser focus is _not_ on an input component (e.g. `Textbox` component):\n\n```python\nimport gradio as gr\n\nshortcut_js = \"\"\"\n<script>\nfunction shortcuts(e) {\n    switch (e.target.tagName.toLowerCase()) {\n        case \"input\":\n        case \"textarea\":\n            break;\n        default:\n            if (e.key.toLowerCase() === \"s\" && e.shiftKey) {\n                document.getElementById(\"my_btn\").click();\n            }\n    }\n}\ndocument.addEventListener(\"keypress\", shortcuts, false);\n</script>\n\"\"\"\n\nwith gr.Blocks(head=shortcut_js) as demo:\n    action_button = gr.Button(value=\"Name\", elem_id=\"my_btn\")\n    textbox = gr.Textbox()\n    action_button.click(lambda: \"button pressed\", None, textbox)\n\ndemo.launch()\n```"
},
{
"id": 33,
"parent": null,
"path": "03_building-with-blocks/04_dynamic-apps-with-render-decorator.md",
"level": 1,
"title": "Dynamic Apps with the Render Decorator",
"content": "The components and event listeners you define in a Blocks so far have been fixed - once the demo was launched, new components and listeners could not be added, and existing one could not be removed. \n\nThe `@gr.render` decorator introduces the ability to dynamically change this. Let's take a look."
},
{
"id": 34,
"parent": 33,
"path": "03_building-with-blocks/04_dynamic-apps-with-render-decorator.md",
"level": 2,
"title": "Dynamic Number of Components",
"content": "In the example below, we will create a variable number of Textboxes. When the user edits the input Textbox, we create a Textbox for each letter in the input. Try it out below:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n input_text = gr.Textbox(label=\"input\")\n\n @gr.render(inputs=input_text)\n def show_split(text):\n if len(text) == 0:\n gr.Markdown(\"## No Input Provided\")\n else:\n for letter in text:\n gr.Textbox(letter)\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n$demo_render_split_simple\n\nSee how we can now create a variable number of Textboxes using our custom logic - in this case, a simple `for` loop. The `@gr.render` decorator enables this with the following steps:\n\n1. Create a function and attach the @gr.render decorator to it.\n2. Add the input components to the `inputs=` argument of @gr.render, and create a corresponding argument in your function for each component. This function will automatically re-run on any change to a component.\n3. Add all components inside the function that you want to render based on the inputs.\n\nNow whenever the inputs change, the function re-runs, and replaces the components created from the previous function run with the latest run. Pretty straightforward! Let's add a little more complexity to this app:\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n input_text = gr.Textbox(label=\"input\")\n mode = gr.Radio([\"textbox\", \"button\"], value=\"textbox\")\n\n @gr.render(inputs=[input_text, mode], triggers=[input_text.submit])\n def show_split(text, mode):\n if len(text) == 0:\n gr.Markdown(\"## No Input Provided\")\n else:\n for letter in text:\n if mode == \"textbox\":\n gr.Textbox(letter)\n else:\n gr.Button(letter)\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n$demo_render_split\n\nBy default, `@gr.render` re-runs are triggered by the `.load` listener to the app and the `.change` listener to any input component provided. 
We can override this by explicitly setting the triggers in the decorator, as we have in this app to only trigger on `input_text.submit` instead. \nIf you are setting custom triggers, and you also want an automatic render at the start of the app, make sure to add `demo.load` to your list of triggers."
},
{
"id": 35,
"parent": 33,
"path": "03_building-with-blocks/04_dynamic-apps-with-render-decorator.md",
"level": 2,
"title": "Dynamic Event Listeners",
"content": "If you're creating components, you probably want to attach event listeners to them as well. Let's take a look at an example that takes in a variable number of Textbox as input, and merges all the text into a single box.\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n text_count = gr.State(1)\n add_btn = gr.Button(\"Add Box\")\n add_btn.click(lambda x: x + 1, text_count, text_count)\n\n @gr.render(inputs=text_count)\n def render_count(count):\n boxes = []\n for i in range(count):\n box = gr.Textbox(key=i, label=f\"Box {i}\")\n boxes.append(box)\n\n def merge(*args):\n return \" \".join(args)\n\n merge_btn.click(merge, boxes, output)\n\n merge_btn = gr.Button(\"Merge\")\n output = gr.Textbox(label=\"Merged Output\")\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n$demo_render_merge_simple\n\nLet's take a look at what's happening here:\n\n1. The state variable `text_count` is keeping track of the number of Textboxes to create. By clicking on the Add button, we increase `text_count` which triggers the render decorator.\n2. Note that in every single Textbox we create in the render function, we explicitly set a `key=` argument. This key allows us to preserve the value of this Component between re-renders. If you type in a value in a textbox, and then click the Add button, all the Textboxes re-render, but their values aren't cleared because the `key=` maintains the the value of a Component across a render.\n3. We've stored the Textboxes created in a list, and provide this list as input to the merge button event listener. Note that **all event listeners that use Components created inside a render function must also be defined inside that render function**. 
The event listener can still reference Components outside the render function, as we do here by referencing `merge_btn` and `output` which are both defined outside the render function.\n\nJust as with Components, whenever a function re-renders, the event listeners created from the previous render are cleared and the new event listeners from the latest run are attached. \n\nThis allows us to create highly customizable and complex interactions!"
},
{
"id": 36,
"parent": 33,
"path": "03_building-with-blocks/04_dynamic-apps-with-render-decorator.md",
"level": 2,
"title": "Putting it Together",
"content": "Let's look at two examples that use all the features above. First, try out the to-do list app below: \n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n\n tasks = gr.State([])\n new_task = gr.Textbox(label=\"Task Name\", autofocus=True)\n\n def add_task(tasks, new_task_name):\n return tasks + [{\"name\": new_task_name, \"complete\": False}], \"\"\n\n new_task.submit(add_task, [tasks, new_task], [tasks, new_task])\n\n @gr.render(inputs=tasks)\n def render_todos(task_list):\n complete = [task for task in task_list if task[\"complete\"]]\n incomplete = [task for task in task_list if not task[\"complete\"]]\n gr.Markdown(f\"### Incomplete Tasks ({len(incomplete)})\")\n for task in incomplete:\n with gr.Row():\n gr.Textbox(task['name'], show_label=False, container=False)\n done_btn = gr.Button(\"Done\", scale=0)\n def mark_done(task=task):\n task[\"complete\"] = True\n return task_list\n done_btn.click(mark_done, None, [tasks])\n\n delete_btn = gr.Button(\"Delete\", scale=0, variant=\"stop\")\n def delete(task=task):\n task_list.remove(task)\n return task_list\n delete_btn.click(delete, None, [tasks])\n\n gr.Markdown(f\"### Complete Tasks ({len(complete)})\")\n for task in complete:\n gr.Textbox(task['name'], show_label=False, container=False)\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n$demo_todo_list\n\nNote that almost the entire app is inside a single `gr.render` that reacts to the tasks `gr.State` variable. This variable is a nested list, which presents some complexity. If you design a `gr.render` to react to a list or dict structure, ensure you do the following:\n\n1. Any event listener that modifies a state variable in a manner that should trigger a re-render must set the state variable as an output. This lets Gradio know to check if the variable has changed behind the scenes. \n2. 
In a `gr.render`, if a variable in a loop is used inside an event listener function, that variable should be \"frozen\" via setting it to itself as a default argument in the function header. See how we have `task=task` in both `mark_done` and `delete`. This freezes the variable to its \"loop-time\" value.\n\nLet's take a look at one last example that uses everything we learned. Below is an audio mixer. Provide multiple audio tracks and mix them together.\n\n```py\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n track_count = gr.State(1)\n add_track_btn = gr.Button(\"Add Track\")\n\n add_track_btn.click(lambda count: count + 1, track_count, track_count)\n\n @gr.render(inputs=track_count)\n def render_tracks(count):\n audios = []\n volumes = []\n with gr.Row():\n for i in range(count):\n with gr.Column(variant=\"panel\", min_width=200):\n gr.Textbox(placeholder=\"Track Name\", key=f\"name-{i}\", show_label=False)\n track_audio = gr.Audio(label=f\"Track {i}\", key=f\"track-{i}\")\n track_volume = gr.Slider(0, 100, value=100, label=\"Volume\", key=f\"volume-{i}\")\n audios.append(track_audio)\n volumes.append(track_volume)\n\n def merge(data):\n sr, output = None, None\n for audio, volume in zip(audios, volumes):\n sr, audio_val = data[audio]\n volume_val = data[volume]\n final_track = audio_val * (volume_val / 100)\n if output is None:\n output = final_track\n else:\n min_shape = tuple(min(s1, s2) for s1, s2 in zip(output.shape, final_track.shape))\n trimmed_output = output[:min_shape[0], ...][:, :min_shape[1], ...] if output.ndim > 1 else output[:min_shape[0]]\n trimmed_final = final_track[:min_shape[0], ...][:, :min_shape[1], ...] 
if final_track.ndim > 1 else final_track[:min_shape[0]]\n output = trimmed_output + trimmed_final\n return (sr, output)\n\n merge_btn.click(merge, set(audios + volumes), output_audio)\n\n merge_btn = gr.Button(\"Merge Tracks\")\n output_audio = gr.Audio(label=\"Output\", interactive=False)\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n$demo_audio_mixer\n\nTwo things to note in this app:\n1. Here we provide `key=` to all the components! We need to do this so that if we add another track after setting the values for an existing track, our input values to the existing track do not get reset on re-render.\n2. When there are lots of components of different types and arbitrary counts passed to an event listener, it is easier to use the set and dictionary notation for inputs rather than list notation. Above, we make one large set of all the input `gr.Audio` and `gr.Slider` components when we pass the inputs to the `merge` function. In the function body we query the component values as a dict.\n\n`gr.render` expands Gradio's capabilities extensively - see what you can make out of it!"
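The `task=task` freezing trick used in `mark_done` and `delete` earlier is standard Python behavior rather than anything Gradio-specific: a default argument is evaluated once when the function is defined, while a free variable is only looked up when the function is eventually called. A quick standalone sketch:

```python
# Without freezing, every callback closes over the same loop variable,
# so they all see its final value by the time they are called.
callbacks = []
for task in ["a", "b", "c"]:
    callbacks.append(lambda: task)
print([cb() for cb in callbacks])  # ['c', 'c', 'c']

# Freezing the loop-time value as a default argument gives each
# callback its own copy.
frozen = []
for task in ["a", "b", "c"]:
    frozen.append(lambda task=task: task)
print([cb() for cb in frozen])  # ['a', 'b', 'c']
```

This is why, without the default-argument trick, every Delete button in the to-do list would act on the last task of the loop.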
},
{
"id": 37,
"parent": null,
"path": "08_custom-components/01_custom-components-in-five-minutes.md",
"level": 1,
"title": "Custom Components in 5 minutes",
"content": "Gradio includes the ability for developers to create their own custom components and use them in Gradio apps.You can publish your components as Python packages so that other users can use them as well.\n\nUsers will be able to use all of Gradio's existing functions, such as `gr.Blocks`, `gr.Interface`, API usage, themes, etc. with Custom Components. This guide will cover how to get started making custom components."
},
{
"id": 38,
"parent": 37,
"path": "08_custom-components/01_custom-components-in-five-minutes.md",
"level": 2,
"title": "Installation",
"content": "You will need to have:\n\n* Python 3.10+ (install here)\n* pip 21.3+ (`python -m pip install --upgrade pip`)\n* Node.js 20+ (install here)\n* npm 9+ (install here)\n* Gradio 5+ (`pip install --upgrade gradio`)"
},
{
"id": 39,
"parent": 37,
"path": "08_custom-components/01_custom-components-in-five-minutes.md",
"level": 2,
"title": "The Workflow",
"content": "The Custom Components workflow consists of 4 steps: create, dev, build, and publish.\n\n1. create: creates a template for you to start developing a custom component.\n2. dev: launches a development server with a sample app & hot reloading allowing you to easily develop your custom component\n3. build: builds a python package containing to your custom component's Python and JavaScript code -- this makes things official!\n4. publish: uploads your package to [PyPi](https://pypi.org/) and/or a sample app to [HuggingFace Spaces](https://hf.co/spaces).\n\nEach of these steps is done via the Custom Component CLI. You can invoke it with `gradio cc` or `gradio component`\n\nTip: Run `gradio cc --help` to get a help menu of all available commands. There are some commands that are not covered in this guide. You can also append `--help` to any command name to bring up a help page for that command, e.g. `gradio cc create --help`."
},
{
"id": 40,
"parent": 37,
"path": "08_custom-components/01_custom-components-in-five-minutes.md",
"level": 2,
"title": "1. create",
"content": "Bootstrap a new template by running the following in any working directory:\n\n```bash\ngradio cc create MyComponent --template SimpleTextbox\n```\n\nInstead of `MyComponent`, give your component any name.\n\nInstead of `SimpleTextbox`, you can use any Gradio component as a template. `SimpleTextbox` is actually a special component that a stripped-down version of the `Textbox` component that makes it particularly useful when creating your first custom component.\nSome other components that are good if you are starting out: `SimpleDropdown`, `SimpleImage`, or `File`.\n\nTip: Run `gradio cc show` to get a list of available component templates.\n\nThe `create` command will:\n\n1. Create a directory with your component's name in lowercase with the following structure:\n```directory\n- backend/ <- The python code for your custom component\n- frontend/ <- The javascript code for your custom component\n- demo/ <- A sample app using your custom component. Modify this to develop your component!\n- pyproject.toml <- Used to build the package and specify package metadata.\n```\n\n2. Install the component in development mode\n\nEach of the directories will have the code you need to get started developing!"
},
{
"id": 41,
"parent": 37,
"path": "08_custom-components/01_custom-components-in-five-minutes.md",
"level": 2,
"title": "2. dev",
"content": "Once you have created your new component, you can start a development server by `entering the directory` and running\n\n```bash\ngradio cc dev\n```\n\nYou'll see several lines that are printed to the console.\nThe most important one is the one that says:\n\n> Frontend Server (Go here): http://localhost:7861/\n\nThe port number might be different for you.\nClick on that link to launch the demo app in hot reload mode.\nNow, you can start making changes to the backend and frontend you'll see the results reflected live in the sample app!\nWe'll go through a real example in a later guide.\n\nTip: You don't have to run dev mode from your custom component directory. The first argument to `dev` mode is the path to the directory. By default it uses the current directory."
},
{
"id": 42,
"parent": 37,
"path": "08_custom-components/01_custom-components-in-five-minutes.md",
"level": 2,
"title": "3. build",
"content": "Once you are satisfied with your custom component's implementation, you can `build` it to use it outside of the development server.\n\nFrom your component directory, run:\n\n```bash\ngradio cc build\n```\n\nThis will create a `tar.gz` and `.whl` file in a `dist/` subdirectory.\nIf you or anyone installs that `.whl` file (`pip install `) they will be able to use your custom component in any gradio app!\n\nThe `build` command will also generate documentation for your custom component. This takes the form of an interactive space and a static `README.md`. You can disable this by passing `--no-generate-docs`. You can read more about the documentation generator in [the dedicated guide](https://gradio.app/guides/documenting-custom-components)."
},
{
"id": 43,
"parent": 37,
"path": "08_custom-components/01_custom-components-in-five-minutes.md",
"level": 2,
"title": "4. publish",
"content": "Right now, your package is only available on a `.whl` file on your computer.\nYou can share that file with the world with the `publish` command!\n\nSimply run the following command from your component directory:\n\n```bash\ngradio cc publish\n```\n\nThis will guide you through the following process:\n\n1. Upload your distribution files to PyPi. This is optional. If you decide to upload to PyPi, you will need a PyPI username and password. You can get one [here](https://pypi.org/account/register/).\n2. Upload a demo of your component to hugging face spaces. This is also optional.\n\n\nHere is an example of what publishing looks like:\n\n"
},
{
"id": 44,
"parent": 37,
"path": "08_custom-components/01_custom-components-in-five-minutes.md",
"level": 2,
"title": "Conclusion",
"content": "Now that you know the high-level workflow of creating custom components, you can go in depth in the next guides!\nAfter reading the guides, check out this [collection](https://huggingface.co/collections/gradio/custom-components-65497a761c5192d981710b12) of custom components on the HuggingFace Hub so you can learn from other's code.\n\nTip: If you want to start off from someone else's custom component see this [guide](./frequently-asked-questions#do-i-always-need-to-start-my-component-from-scratch)."
},
{
"id": 45,
"parent": null,
"path": "08_custom-components/02_key-component-concepts.md",
"level": 1,
"title": "Gradio Components: The Key Concepts",
"content": "In this section, we discuss a few important concepts when it comes to components in Gradio.\nIt's important to understand these concepts when developing your own component.\nOtherwise, your component may behave very different to other Gradio components!\n\nTip: You can skip this section if you are familiar with the internals of the Gradio library, such as each component's preprocess and postprocess methods."
},
{
"id": 46,
"parent": 45,
"path": "08_custom-components/02_key-component-concepts.md",
"level": 2,
"title": "Interactive vs Static",
"content": "Every component in Gradio comes in a `static` variant, and most come in an `interactive` version as well.\nThe `static` version is used when a component is displaying a value, and the user can **NOT** change that value by interacting with it. \nThe `interactive` version is used when the user is able to change the value by interacting with the Gradio UI.\n\nLet's see some examples:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Textbox(value=\"Hello\", interactive=True)\n gr.Textbox(value=\"Hello\", interactive=False)\n\ndemo.launch()\n\n```\nThis will display two textboxes.\nThe only difference: you'll be able to edit the value of the Gradio component on top, and you won't be able to edit the variant on the bottom (i.e. the textbox will be disabled).\n\nPerhaps a more interesting example is with the `Image` component:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Image(interactive=True)\n gr.Image(interactive=False)\n\ndemo.launch()\n```\n\nThe interactive version of the component is much more complex -- you can upload images or snap a picture from your webcam -- while the static version can only be used to display images.\n\nNot every component has a distinct interactive version. For example, the `gr.AnnotatedImage` only appears as a static version since there's no way to interactively change the value of the annotations or the image."
},
{
"id": 47,
"parent": 46,
"path": "08_custom-components/02_key-component-concepts.md",
"level": 3,
"title": "What you need to remember",
"content": "* Gradio will use the interactive version (if available) of a component if that component is used as the **input** to any event; otherwise, the static version will be used.\n\n* When you design custom components, you **must** accept the boolean interactive keyword in the constructor of your Python class. In the frontend, you **may** accept the `interactive` property, a `bool` which represents whether the component should be static or interactive. If you do not use this property in the frontend, the component will appear the same in interactive or static mode."
},
{
"id": 48,
"parent": 45,
"path": "08_custom-components/02_key-component-concepts.md",
"level": 2,
"title": "The value and how it is preprocessed/postprocessed",
"content": "The most important attribute of a component is its `value`.\nEvery component has a `value`.\nThe value that is typically set by the user in the frontend (if the component is interactive) or displayed to the user (if it is static). \nIt is also this value that is sent to the backend function when a user triggers an event, or returned by the user's function e.g. at the end of a prediction.\n\nSo this value is passed around quite a bit, but sometimes the format of the value needs to change between the frontend and backend. \nTake a look at this example:\n\n```python\nimport numpy as np\nimport gradio as gr\n\ndef sepia(input_img):\n sepia_filter = np.array([\n [0.393, 0.769, 0.189], \n [0.349, 0.686, 0.168], \n [0.272, 0.534, 0.131]\n ])\n sepia_img = input_img.dot(sepia_filter.T)\n sepia_img /= sepia_img.max()\n return sepia_img\n\ndemo = gr.Interface(sepia, gr.Image(width=200, height=200), \"image\")\ndemo.launch()\n```\n\nThis will create a Gradio app which has an `Image` component as the input and the output. \nIn the frontend, the Image component will actually **upload** the file to the server and send the **filepath** but this is converted to a `numpy` array before it is sent to a user's function. \nConversely, when the user returns a `numpy` array from their function, the numpy array is converted to a file so that it can be sent to the frontend and displayed by the `Image` component.\n\nTip: By default, the `Image` component sends numpy arrays to the python function because it is a common choice for machine learning engineers, though the Image component also supports other formats using the `type` parameter. Read the `Image` docs [here](https://www.gradio.app/docs/image) to learn more.\n\nEach component does two conversions:\n\n1. `preprocess`: Converts the `value` from the format sent by the frontend to the format expected by the python function. 
This usually involves going from a web-friendly **JSON** structure to a **python-native** data structure, like a `numpy` array or `PIL` image. The `Audio`, `Image` components are good examples of `preprocess` methods.\n\n2. `postprocess`: Converts the value returned by the python function to the format expected by the frontend. This usually involves going from a **python-native** data-structure, like a `PIL` image to a **JSON** structure."
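The two conversions can be sketched with a toy, framework-free class. This is illustrative only; `ToyImage` is a hypothetical stand-in, not an actual gradio class:

```python
# Illustrative sketch of the preprocess/postprocess pattern.
# ToyImage is a hypothetical stand-in, not an actual gradio class.

class ToyImage:
    def preprocess(self, payload: dict) -> list:
        # Frontend sends a web-friendly JSON payload; the python
        # function receives a plain, python-native matrix.
        return payload["pixels"]

    def postprocess(self, value: list) -> dict:
        # The function's return value is wrapped back into a
        # JSON-friendly structure for the frontend.
        return {"pixels": value}

comp = ToyImage()
matrix = comp.preprocess({"pixels": [[0.0, 1.0], [1.0, 0.0]]})
print(comp.postprocess(matrix))  # {'pixels': [[0.0, 1.0], [1.0, 0.0]]}
```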
},
{
"id": 49,
"parent": 48,
"path": "08_custom-components/02_key-component-concepts.md",
"level": 3,
"title": "What you need to remember",
"content": "* Every component must implement `preprocess` and `postprocess` methods. In the rare event that no conversion needs to happen, simply return the value as-is. `Textbox` and `Number` are examples of this. \n\n* As a component author, **YOU** control the format of the data displayed in the frontend as well as the format of the data someone using your component will receive. Think of an ergonomic data-structure a **python** developer will find intuitive, and control the conversion from a **Web-friendly JSON** data structure (and vice-versa) with `preprocess` and `postprocess.`"
},
{
"id": 50,
"parent": 45,
"path": "08_custom-components/02_key-component-concepts.md",
"level": 2,
"title": "The \"Example Version\" of a Component",
"content": "Gradio apps support providing example inputs -- and these are very useful in helping users get started using your Gradio app. \nIn `gr.Interface`, you can provide examples using the `examples` keyword, and in `Blocks`, you can provide examples using the special `gr.Examples` component.\n\nAt the bottom of this screenshot, we show a miniature example image of a cheetah that, when clicked, will populate the same image in the input Image component:\n\n![img](https://user-images.githubusercontent.com/1778297/277548211-a3cb2133-2ffc-4cdf-9a83-3e8363b57ea6.png)\n\n\nTo enable the example view, you must have the following two files in the top of the `frontend` directory:\n\n* `Example.svelte`: this corresponds to the \"example version\" of your component\n* `Index.svelte`: this corresponds to the \"regular version\"\n\nIn the backend, you typically don't need to do anything. The user-provided example `value` is processed using the same `.postprocess()` method described earlier. If you'd like to do process the data differently (for example, if the `.postprocess()` method is computationally expensive), then you can write your own `.process_example()` method for your custom component, which will be used instead. \n\nThe `Example.svelte` file and `process_example()` method will be covered in greater depth in the dedicated [frontend](./frontend) and [backend](./backend) guides respectively."
},
{
"id": 51,
"parent": 50,
"path": "08_custom-components/02_key-component-concepts.md",
"level": 3,
"title": "What you need to remember",
"content": "* If you expect your component to be used as input, it is important to define an \"Example\" view.\n* If you don't, Gradio will use a default one but it won't be as informative as it can be!"
},
{
"id": 52,
"parent": 45,
"path": "08_custom-components/02_key-component-concepts.md",
"level": 2,
"title": "Conclusion",
"content": "Now that you know the most important pieces to remember about Gradio components, you can start to design and build your own!"
},
{
"id": 53,
"parent": null,
"path": "08_custom-components/05_frontend.md",
"level": 1,
"title": "The Frontend 🌐⭐️",
"content": "This guide will cover everything you need to know to implement your custom component's frontend.\n\nTip: Gradio components use Svelte. Writing Svelte is fun! If you're not familiar with it, we recommend checking out their interactive [guide](https://learn.svelte.dev/tutorial/welcome-to-svelte)."
},
{
"id": 54,
"parent": 53,
"path": "08_custom-components/05_frontend.md",
"level": 2,
"title": "The directory structure ",
"content": "The frontend code should have, at minimum, three files:\n\n* `Index.svelte`: This is the main export and where your component's layout and logic should live.\n* `Example.svelte`: This is where the example view of the component is defined.\n\nFeel free to add additional files and subdirectories. \nIf you want to export any additional modules, remember to modify the `package.json` file\n\n```json\n\"exports\": {\n \".\": \"./Index.svelte\",\n \"./example\": \"./Example.svelte\",\n \"./package.json\": \"./package.json\"\n},\n```"
},
{
"id": 55,
"parent": 53,
"path": "08_custom-components/05_frontend.md",
"level": 2,
"title": "The Index.svelte file",
"content": "Your component should expose the following props that will be passed down from the parent Gradio application.\n\n```typescript\nimport type { LoadingStatus } from \"@gradio/statustracker\";\nimport type { Gradio } from \"@gradio/utils\";\n\nexport let gradio: Gradio<{\n event_1: never;\n event_2: never;\n}>;\n\nexport let elem_id = \"\";\nexport let elem_classes: string[] = [];\nexport let scale: number | null = null;\nexport let min_width: number | undefined = undefined;\nexport let loading_status: LoadingStatus | undefined = undefined;\nexport let mode: \"static\" | \"interactive\";\n```\n\n* `elem_id` and `elem_classes` allow Gradio app developers to target your component with custom CSS and JavaScript from the Python `Blocks` class.\n\n* `scale` and `min_width` allow Gradio app developers to control how much space your component takes up in the UI.\n\n* `loading_status` is used to display a loading status over the component when it is the output of an event.\n\n* `mode` is how the parent Gradio app tells your component whether the `interactive` or `static` version should be displayed.\n\n* `gradio`: The `gradio` object is created by the parent Gradio app. It stores some application-level configuration that will be useful in your component, like internationalization. You must use it to dispatch events from your component.\n\nA minimal `Index.svelte` file would look like:\n\n```svelte\n\n\n\n\t{#if loading_status}\n\t\t\n\t{/if}\n
{value}
\n\n```"
},
{
"id": 56,
"parent": 53,
"path": "08_custom-components/05_frontend.md",
"level": 2,
"title": "The Example.svelte file",
"content": "The `Example.svelte` file should expose the following props:\n\n```typescript\n export let value: string;\n export let type: \"gallery\" | \"table\";\n export let selected = false;\n export let index: number;\n```\n\n* `value`: The example value that should be displayed.\n\n* `type`: This is a variable that can be either `\"gallery\"` or `\"table\"` depending on how the examples are displayed. The `\"gallery\"` form is used when the examples correspond to a single input component, while the `\"table\"` form is used when a user has multiple input components, and the examples need to populate all of them. \n\n* `selected`: You can also adjust how the examples are displayed if a user \"selects\" a particular example by using the selected variable.\n\n* `index`: The current index of the selected value.\n\n* Any additional props your \"non-example\" component takes!\n\nThis is the `Example.svelte` file for the code `Radio` component:\n\n```svelte\n\n\n
\n\t{value}\n
\n\n\n```"
},
{
"id": 57,
"parent": 53,
"path": "08_custom-components/05_frontend.md",
"level": 2,
"title": "Handling Files",
"content": "If your component deals with files, these files **should** be uploaded to the backend server. \nThe `@gradio/client` npm package provides the `upload` and `prepare_files` utility functions to help you do this.\n\nThe `prepare_files` function will convert the browser's `File` datatype to gradio's internal `FileData` type.\nYou should use the `FileData` data in your component to keep track of uploaded files.\n\nThe `upload` function will upload an array of `FileData` values to the server.\n\nHere's an example of loading files from an `` element when its value changes.\n\n\n```svelte\n\n\n\n```\n\nThe component exposes a prop named `root`. \nThis is passed down by the parent gradio app and it represents the base url that the files will be uploaded to and fetched from.\n\nFor WASM support, you should get the upload function from the `Context` and pass that as the third parameter of the `upload` function.\n\n```typescript\n\n```"
},
{
"id": 58,
"parent": 53,
"path": "08_custom-components/05_frontend.md",
"level": 2,
"title": "Leveraging Existing Gradio Components",
"content": "Most of Gradio's frontend components are published on [npm](https://www.npmjs.com/), the javascript package repository.\nThis means that you can use them to save yourself time while incorporating common patterns in your component, like uploading files.\nFor example, the `@gradio/upload` package has `Upload` and `ModifyUpload` components for properly uploading files to the Gradio server. \nHere is how you can use them to create a user interface to upload and display PDF files.\n\n```svelte\n\n\n\n{#if value === null && interactive}\n \n \n \n{:else if value !== null}\n {#if interactive}\n \n {/if}\n \n{:else}\n \t\n{/if}\n```\n\nYou can also combine existing Gradio components to create entirely unique experiences.\nLike rendering a gallery of chatbot conversations. \nThe possibilities are endless, please read the documentation on our javascript packages [here](https://gradio.app/main/docs/js).\nWe'll be adding more packages and documentation over the coming weeks!"
},
{
"id": 59,
"parent": 53,
"path": "08_custom-components/05_frontend.md",
"level": 2,
"title": "Matching Gradio Core's Design System",
"content": "You can explore our component library via Storybook. You'll be able to interact with our components and see them in their various states.\n\nFor those interested in design customization, we provide the CSS variables consisting of our color palette, radii, spacing, and the icons we use - so you can easily match up your custom component with the style of our core components. This Storybook will be regularly updated with any new additions or changes.\n\n[Storybook Link](https://gradio.app/main/docs/js/storybook)"
},
{
"id": 60,
"parent": 53,
"path": "08_custom-components/05_frontend.md",
"level": 2,
"title": "Custom configuration",
"content": "If you want to make use of the vast vite ecosystem, you can use the `gradio.config.js` file to configure your component's build process. This allows you to make use of tools like tailwindcss, mdsvex, and more.\n\nCurrently, it is possible to configure the following:\n\nVite options:\n- `plugins`: A list of vite plugins to use.\n\nSvelte options:\n- `preprocess`: A list of svelte preprocessors to use.\n- `extensions`: A list of file extensions to compile to `.svelte` files.\n- `build.target`: The target to build for, this may be necessary to support newer javascript features. See the [esbuild docs](https://esbuild.github.io/api/#target) for more information.\n\nThe `gradio.config.js` file should be placed in the root of your component's `frontend` directory. A default config file is created for you when you create a new component. But you can also create your own config file, if one doesn't exist, and use it to customize your component's build process."
},
{
"id": 61,
"parent": 60,
"path": "08_custom-components/05_frontend.md",
"level": 3,
"title": "Example for a Vite plugin",
"content": "Custom components can use Vite plugins to customize the build process. Check out the [Vite Docs](https://vitejs.dev/guide/using-plugins.html) for more information. \n\nHere we configure [TailwindCSS](https://tailwindcss.com), a utility-first CSS framework. Setup is easiest using the version 4 prerelease. \n\n```\nnpm install tailwindcss@next @tailwindcss/vite@next\n```\n\nIn `gradio.config.js`:\n\n```typescript\nimport tailwindcss from \"@tailwindcss/vite\";\nexport default {\n plugins: [tailwindcss()]\n};\n```\n\nThen create a `style.css` file with the following content:\n\n```css\n@import \"tailwindcss\";\n```\n\nImport this file into `Index.svelte`. Note, that you need to import the css file containing `@import` and cannot just use a `\n```\n\nNow import `PdfUploadText.svelte` in your `\n\n
\n\t\n
\n\n\n```\n\n\nTip: Exercise for the reader - reduce the code duplication between `Index.svelte` and `Example.svelte` 😊\n\n\nYou will not be able to render examples until we make some changes to the backend code in the next step!"
},
{
"id": 84,
"parent": 73,
"path": "08_custom-components/07_pdf-component-example.md",
"level": 2,
"title": "Step 9: The backend",
"content": "The backend changes needed are smaller.\nWe're almost done!\n\nWhat we're going to do is:\n* Add `change` and `upload` events to our component.\n* Add a `height` property to let users control the height of the PDF.\n* Set the `data_model` of our component to be `FileData`. This is so that Gradio can automatically cache and safely serve any files that are processed by our component.\n* Modify the `preprocess` method to return a string corresponding to the path of our uploaded PDF.\n* Modify the `postprocess` to turn a path to a PDF created in an event handler to a `FileData`.\n\nWhen all is said an done, your component's backend code should look like this:\n\n```python\nfrom __future__ import annotations\nfrom typing import Any, Callable, TYPE_CHECKING\n\nfrom gradio.components.base import Component\nfrom gradio.data_classes import FileData\nfrom gradio import processing_utils\nif TYPE_CHECKING:\n from gradio.components import Timer\n\nclass PDF(Component):\n\n EVENTS = [\"change\", \"upload\"]\n\n data_model = FileData\n\n def __init__(self, value: Any = None, *,\n height: int | None = None,\n label: str | None = None, info: str | None = None,\n show_label: bool | None = None,\n container: bool = True,\n scale: int | None = None,\n min_width: int | None = None,\n interactive: bool | None = None,\n visible: bool = True,\n elem_id: str | None = None,\n elem_classes: list[str] | str | None = None,\n render: bool = True,\n load_fn: Callable[..., Any] | None = None,\n every: Timer | float | None = None):\n super().__init__(value, label=label, info=info,\n show_label=show_label, container=container,\n scale=scale, min_width=min_width,\n interactive=interactive, visible=visible,\n elem_id=elem_id, elem_classes=elem_classes,\n render=render, load_fn=load_fn, every=every)\n self.height = height\n\n def preprocess(self, payload: FileData) -> str:\n return payload.path\n\n def postprocess(self, value: str | None) -> FileData:\n if not value:\n return None\n return 
FileData(path=value)\n\n def example_payload(self):\n return \"https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf\"\n\n def example_value(self):\n return \"https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf\"\n```"
},
{
"id": 85,
"parent": 73,
"path": "08_custom-components/07_pdf-component-example.md",
"level": 2,
"title": "Step 10: Add a demo and publish!",
"content": "To test our backend code, let's add a more complex demo that performs Document Question and Answering with huggingface transformers.\n\nIn our `demo` directory, create a `requirements.txt` file with the following packages\n\n```\ntorch\ntransformers\npdf2image\npytesseract\n```\n\n\nTip: Remember to install these yourself and restart the dev server! You may need to install extra non-python dependencies for `pdf2image`. See [here](https://pypi.org/project/pdf2image/). Feel free to write your own demo if you have trouble.\n\n\n```python\nimport gradio as gr\nfrom gradio_pdf import PDF\nfrom pdf2image import convert_from_path\nfrom transformers import pipeline\nfrom pathlib import Path\n\ndir_ = Path(__file__).parent\n\np = pipeline(\n \"document-question-answering\",\n model=\"impira/layoutlm-document-qa\",\n)\n\ndef qa(question: str, doc: str) -> str:\n img = convert_from_path(doc)[0]\n output = p(img, question)\n return sorted(output, key=lambda x: x[\"score\"], reverse=True)[0]['answer']\n\n\ndemo = gr.Interface(\n qa,\n [gr.Textbox(label=\"Question\"), PDF(label=\"Document\")],\n gr.Textbox(),\n)\n\ndemo.launch()\n```\n\nSee our demo in action below!\n\n\n\nFinally lets build our component with `gradio cc build` and publish it with the `gradio cc publish` command!\nThis will guide you through the process of uploading your component to [PyPi](https://pypi.org/) and [HuggingFace Spaces](https://huggingface.co/spaces).\n\n\nTip: You may need to add the following lines to the `Dockerfile` of your HuggingFace Space.\n\n```Dockerfile\nRUN mkdir -p /tmp/cache/\nRUN chmod a+rwx -R /tmp/cache/\nRUN apt-get update && apt-get install -y poppler-utils tesseract-ocr\n\nENV TRANSFORMERS_CACHE=/tmp/cache/\n```"
},
{
"id": 86,
"parent": 73,
"path": "08_custom-components/07_pdf-component-example.md",
"level": 2,
"title": "Conclusion",
"content": "In order to use our new component in **any** gradio 4.0 app, simply install it with pip, e.g. `pip install gradio-pdf`. Then you can use it like the built-in `gr.File()` component (except that it will only accept and display PDF files).\n\nHere is a simple demo with the Blocks api:\n\n```python\nimport gradio as gr\nfrom gradio_pdf import PDF\n\nwith gr.Blocks() as demo:\n pdf = PDF(label=\"Upload a PDF\", interactive=True)\n name = gr.Textbox()\n pdf.upload(lambda f: f, pdf, name)\n\ndemo.launch()\n```\n\n\nI hope you enjoyed this tutorial!\nThe complete source code for our component is [here](https://huggingface.co/spaces/freddyaboulton/gradio_pdf/tree/main/src).\nPlease don't hesitate to reach out to the gradio community on the [HuggingFace Discord](https://discord.gg/hugging-face-879548962464493619) if you get stuck."
},
{
"id": 87,
"parent": null,
"path": "08_custom-components/04_backend.md",
"level": 1,
"title": "The Backend 🐍",
"content": "This guide will cover everything you need to know to implement your custom component's backend processing."
},
{
"id": 88,
"parent": 87,
"path": "08_custom-components/04_backend.md",
"level": 2,
"title": "Which Class to Inherit From",
"content": "All components inherit from one of three classes `Component`, `FormComponent`, or `BlockContext`.\nYou need to inherit from one so that your component behaves like all other gradio components.\nWhen you start from a template with `gradio cc create --template`, you don't need to worry about which one to choose since the template uses the correct one. \nFor completeness, and in the event that you need to make your own component from scratch, we explain what each class is for.\n\n* `FormComponent`: Use this when you want your component to be grouped together in the same `Form` layout with other `FormComponents`. The `Slider`, `Textbox`, and `Number` components are all `FormComponents`.\n* `BlockContext`: Use this when you want to place other components \"inside\" your component. This enabled `with MyComponent() as component:` syntax.\n* `Component`: Use this for all other cases.\n\nTip: If your component supports streaming output, inherit from the `StreamingOutput` class.\n\nTip: If you inherit from `BlockContext`, you also need to set the metaclass to be `ComponentMeta`. See example below.\n\n```python\nfrom gradio.blocks import BlockContext\nfrom gradio.component_meta import ComponentMeta\n\n\n\n\n@document()\nclass Row(BlockContext, metaclass=ComponentMeta):\n pass\n```"
},
{
"id": 89,
"parent": 87,
"path": "08_custom-components/04_backend.md",
"level": 2,
"title": "The methods you need to implement",
"content": "When you inherit from any of these classes, the following methods must be implemented.\nOtherwise the Python interpreter will raise an error when you instantiate your component!"
},
{
"id": 90,
"parent": 89,
"path": "08_custom-components/04_backend.md",
"level": 3,
"title": "`preprocess` and `postprocess`",
"content": "Explained in the [Key Concepts](./key-component-concepts#the-value-and-how-it-is-preprocessed-postprocessed) guide. \nThey handle the conversion from the data sent by the frontend to the format expected by the python function.\n\n```python\n def preprocess(self, x: Any) -> Any:\n \"\"\"\n Convert from the web-friendly (typically JSON) value in the frontend to the format expected by the python function.\n \"\"\"\n return x\n\n def postprocess(self, y):\n \"\"\"\n Convert from the data returned by the python function to the web-friendly (typically JSON) value expected by the frontend.\n \"\"\"\n return y\n```"
},
{
"id": 91,
"parent": 89,
"path": "08_custom-components/04_backend.md",
"level": 3,
"title": "`process_example`",
"content": "Takes in the original Python value and returns the modified value that should be displayed in the examples preview in the app. \nIf not provided, the `.postprocess()` method is used instead. Let's look at the following example from the `SimpleDropdown` component.\n\n```python\ndef process_example(self, input_data):\n return next((c[0] for c in self.choices if c[1] == input_data), None)\n```\n\nSince `self.choices` is a list of tuples corresponding to (`display_name`, `value`), this converts the value that a user provides to the display value (or if the value is not present in `self.choices`, it is converted to `None`)."
},
{
"id": 92,
"parent": 89,
"path": "08_custom-components/04_backend.md",
"level": 3,
"title": "`api_info`",
"content": "A JSON-schema representation of the value that the `preprocess` expects. \nThis powers api usage via the gradio clients. \nYou do **not** need to implement this yourself if you components specifies a `data_model`. \nThe `data_model` in the following section.\n\n```python\ndef api_info(self) -> dict[str, list[str]]:\n \"\"\"\n A JSON-schema representation of the value that the `preprocess` expects and the `postprocess` returns.\n \"\"\"\n pass\n```"
},
{
"id": 93,
"parent": 89,
"path": "08_custom-components/04_backend.md",
"level": 3,
"title": "`example_payload`",
"content": "An example payload for your component, e.g. something that can be passed into the `.preprocess()` method\nof your component. The example input is displayed in the `View API` page of a Gradio app that uses your custom component. \nMust be JSON-serializable. If your component expects a file, it is best to use a publicly accessible URL.\n\n```python\ndef example_payload(self) -> Any:\n \"\"\"\n The example inputs for this component for API usage. Must be JSON-serializable.\n \"\"\"\n pass\n```"
},
{
"id": 94,
"parent": 89,
"path": "08_custom-components/04_backend.md",
"level": 3,
"title": "`example_value`",
"content": "An example value for your component, e.g. something that can be passed into the `.postprocess()` method\nof your component. This is used as the example value in the default app that is created in custom component development.\n\n```python\ndef example_payload(self) -> Any:\n \"\"\"\n The example inputs for this component for API usage. Must be JSON-serializable.\n \"\"\"\n pass\n```"
},
{
"id": 95,
"parent": 89,
"path": "08_custom-components/04_backend.md",
"level": 3,
"title": "`flag`",
"content": "Write the component's value to a format that can be stored in the `csv` or `json` file used for flagging.\nYou do **not** need to implement this yourself if you components specifies a `data_model`. \nThe `data_model` in the following section.\n\n```python\ndef flag(self, x: Any | GradioDataModel, flag_dir: str | Path = \"\") -> str:\n pass\n```"
},
{
"id": 96,
"parent": 89,
"path": "08_custom-components/04_backend.md",
"level": 3,
"title": "`read_from_flag`",
"content": "Convert from the format stored in the `csv` or `json` file used for flagging to the component's python `value`.\nYou do **not** need to implement this yourself if you components specifies a `data_model`. \nThe `data_model` in the following section.\n\n```python\ndef read_from_flag(\n self,\n x: Any,\n) -> GradioDataModel | Any:\n \"\"\"\n Convert the data from the csv or jsonl file into the component state.\n \"\"\"\n return x\n```"
},
{
"id": 97,
"parent": 87,
"path": "08_custom-components/04_backend.md",
"level": 2,
"title": "The `data_model`",
"content": "The `data_model` is how you define the expected data format your component's value will be stored in the frontend.\nIt specifies the data format your `preprocess` method expects and the format the `postprocess` method returns.\nIt is not necessary to define a `data_model` for your component but it greatly simplifies the process of creating a custom component.\nIf you define a custom component you only need to implement four methods - `preprocess`, `postprocess`, `example_payload`, and `example_value`!\n\nYou define a `data_model` by defining a [pydantic model](https://docs.pydantic.dev/latest/concepts/models/#basic-model-usage) that inherits from either `GradioModel` or `GradioRootModel`.\n\nThis is best explained with an example. Let's look at the core `Video` component, which stores the video data as a JSON object with two keys `video` and `subtitles` which point to separate files.\n\n```python\nfrom gradio.data_classes import FileData, GradioModel\n\nclass VideoData(GradioModel):\n video: FileData\n subtitles: Optional[FileData] = None\n\nclass Video(Component):\n data_model = VideoData\n```\n\nBy adding these four lines of code, your component automatically implements the methods needed for API usage, the flagging methods, and example caching methods!\nIt also has the added benefit of self-documenting your code.\nAnyone who reads your component code will know exactly the data it expects.\n\nTip: If your component expects files to be uploaded from the frontend, your must use the `FileData` model! It will be explained in the following section. 
\n\nTip: Read the pydantic docs [here](https://docs.pydantic.dev/latest/concepts/models/#basic-model-usage).\n\nThe difference between a `GradioModel` and a `GradioRootModel` is that the `RootModel` will not serialize the data to a dictionary.\nFor example, the `Names` model will serialize the data to `{'names': ['freddy', 'pete']}` whereas the `NamesRoot` model will serialize it to `['freddy', 'pete']`.\n\n```python\nfrom typing import List\n\nclass Names(GradioModel):\n names: List[str]\n\nclass NamesRoot(GradioRootModel):\n root: List[str]\n```\n\nEven if your component does not expect a \"complex\" JSON data structure it can be beneficial to define a `GradioRootModel` so that you don't have to worry about implementing the API and flagging methods.\n\nTip: Use classes from the Python typing library to type your models. e.g. `List` instead of `list`."
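The serialization difference can be sketched with plain pydantic v2 models, which `GradioModel` and `GradioRootModel` build on (assumption: pydantic v2 is installed, and `BaseModel`/`RootModel` stand in for the gradio classes):

```python
# Sketch using plain pydantic v2; BaseModel / RootModel stand in for
# GradioModel / GradioRootModel to show the serialization difference.
from typing import List

from pydantic import BaseModel, RootModel

class Names(BaseModel):  # stand-in for GradioModel
    names: List[str]

class NamesRoot(RootModel[List[str]]):  # stand-in for GradioRootModel
    pass

# The BaseModel wraps the data in a dict keyed by field name;
# the RootModel serializes the root value directly.
print(Names(names=["freddy", "pete"]).model_dump())  # {'names': ['freddy', 'pete']}
print(NamesRoot(["freddy", "pete"]).model_dump())    # ['freddy', 'pete']
```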
},
{
"id": 98,
"parent": 87,
"path": "08_custom-components/04_backend.md",
"level": 2,
"title": "Handling Files",
"content": "If your component expects uploaded files as input, or returns saved files to the frontend, you **MUST** use the `FileData` to type the files in your `data_model`.\n\nWhen you use the `FileData`:\n\n* Gradio knows that it should allow serving this file to the frontend. Gradio automatically blocks requests to serve arbitrary files in the computer running the server.\n\n* Gradio will automatically place the file in a cache so that duplicate copies of the file don't get saved.\n\n* The client libraries will automatically know that they should upload input files prior to sending the request. They will also automatically download files.\n\nIf you do not use the `FileData`, your component will not work as expected!"
},
{
"id": 99,
"parent": 87,
"path": "08_custom-components/04_backend.md",
"level": 2,
"title": "Adding Event Triggers To Your Component",
"content": "The events triggers for your component are defined in the `EVENTS` class attribute.\nThis is a list that contains the string names of the events.\nAdding an event to this list will automatically add a method with that same name to your component!\n\nYou can import the `Events` enum from `gradio.events` to access commonly used events in the core gradio components.\n\nFor example, the following code will define `text_submit`, `file_upload` and `change` methods in the `MyComponent` class.\n\n```python\nfrom gradio.events import Events\nfrom gradio.components import FormComponent\n\nclass MyComponent(FormComponent):\n\n EVENTS = [\n \"text_submit\",\n \"file_upload\",\n Events.change\n ]\n```\n\n\nTip: Don't forget to also handle these events in the JavaScript code!"
},
{
"id": 100,
"parent": 87,
"path": "08_custom-components/04_backend.md",
"level": 2,
"title": "Conclusion",
"content": ""
},
{
"id": 101,
"parent": null,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 1,
"title": "Frequently Asked Questions",
"content": ""
},
{
"id": 102,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "What do I need to install before using Custom Components?",
"content": "Before using Custom Components, make sure you have Python 3.10+, Node.js v18+, npm 9+, and Gradio 4.0+ (preferably Gradio 5.0+) installed."
},
{
"id": 103,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "Are custom components compatible between Gradio 4.0 and 5.0?",
"content": "Custom components built with Gradio 5.0 should be compatible with Gradio 4.0. If you built your custom component in Gradio 4.0 you will have to rebuild your component to be compatible with Gradio 5.0. Simply follow these steps:\n1. Update the `@gradio/preview` package. `cd` into the `frontend` directory and run `npm update`.\n2. Modify the `dependencies` key in `pyproject.toml` to pin the maximum allowed Gradio version at version 5, e.g. `dependencies = [\"gradio>=4.0,<6.0\"]`.\n3. Run the build and publish commands"
},
{
"id": 104,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "What templates can I use to create my custom component?",
"content": "Run `gradio cc show` to see the list of built-in templates.\nYou can also start off from other's custom components!\nSimply `git clone` their repository and make your modifications."
},
{
"id": 105,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "What is the development server?",
"content": "When you run `gradio cc dev`, a development server will load and run a Gradio app of your choosing.\nThis is like when you run `python .py`, however the `gradio` command will hot reload so you can instantly see your changes."
},
{
"id": 106,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "The development server didn't work for me ",
"content": "**1. Check your terminal and browser console**\n\nMake sure there are no syntax errors or other obvious problems in your code. Exceptions triggered from python will be displayed in the terminal. Exceptions from javascript will be displayed in the browser console and/or the terminal.\n\n**2. Are you developing on Windows?**\n\nChrome on Windows will block the local compiled svelte files for security reasons. We recommend developing your custom component in the windows subsystem for linux (WSL) while the team looks at this issue.\n\n**3. Inspect the window.__GRADIO_CC__ variable**\n\nIn the browser console, print the `window.__GRADIO__CC` variable (just type it into the console). If it is an empty object, that means\nthat the CLI could not find your custom component source code. Typically, this happens when the custom component is installed in a different virtual environment than the one used to run the dev command. Please use the `--python-path` and `gradio-path` CLI arguments to specify the path of the python and gradio executables for the environment your component is installed in. For example, if you are using a virtualenv located at `/Users/mary/venv`, pass in `/Users/mary/bin/python` and `/Users/mary/bin/gradio` respectively.\n\nIf the `window.__GRADIO__CC` variable is not empty (see below for an example), then the dev server should be working correctly. \n\n![](https://gradio-builds.s3.amazonaws.com/demo-files/gradio_CC_DEV.png)\n\n**4. Make sure you are using a virtual environment**\nIt is highly recommended you use a virtual environment to prevent conflicts with other python dependencies installed in your system."
},
{
"id": 107,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "Do I always need to start my component from scratch?",
"content": "No! You can start off from an existing gradio component as a template, see the [five minute guide](./custom-components-in-five-minutes).\nYou can also start from an existing custom component if you'd like to tweak it further. Once you find the source code of a custom component you like, clone the code to your computer and run `gradio cc install`. Then you can run the development server to make changes.If you run into any issues, contact the author of the component by opening an issue in their repository. The [gallery](https://www.gradio.app/custom-components/gallery) is a good place to look for published components. For example, to start from the [PDF component](https://www.gradio.app/custom-components/gallery?id=freddyaboulton%2Fgradio_pdf), clone the space with `git clone https://huggingface.co/spaces/freddyaboulton/gradio_pdf`, `cd` into the `src` directory, and run `gradio cc install`."
},
{
"id": 108,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "Do I need to host my custom component on HuggingFace Spaces?",
"content": "You can develop and build your custom component without hosting or connecting to HuggingFace.\nIf you would like to share your component with the gradio community, it is recommended to publish your package to PyPi and host a demo on HuggingFace so that anyone can install it or try it out."
},
{
"id": 109,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "What methods are mandatory for implementing a custom component in Gradio?",
"content": "You must implement the `preprocess`, `postprocess`, `example_payload`, and `example_value` methods. If your component does not use a data model, you must also define the `api_info`, `flag`, and `read_from_flag` methods. Read more in the [backend guide](./backend)."
},
{
"id": 110,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "What is the purpose of a `data_model` in Gradio custom components?",
"content": "A `data_model` defines the expected data format for your component, simplifying the component development process and self-documenting your code. It streamlines API usage and example caching."
},
{
"id": 111,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "Why is it important to use `FileData` for components dealing with file uploads?",
"content": "Utilizing `FileData` is crucial for components that expect file uploads. It ensures secure file handling, automatic caching, and streamlined client library functionality."
},
{
"id": 112,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "How can I add event triggers to my custom Gradio component?",
"content": "You can define event triggers in the `EVENTS` class attribute by listing the desired event names, which automatically adds corresponding methods to your component."
},
{
"id": 113,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "Can I implement a custom Gradio component without defining a `data_model`?",
"content": "Yes, it is possible to create custom components without a `data_model`, but you are going to have to manually implement `api_info`, `flag`, and `read_from_flag` methods."
},
{
"id": 114,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "Are there sample custom components I can learn from?",
"content": "We have prepared this [collection](https://huggingface.co/collections/gradio/custom-components-65497a761c5192d981710b12) of custom components on the HuggingFace Hub that you can use to get started!"
},
{
"id": 115,
"parent": 101,
"path": "08_custom-components/06_frequently-asked-questions.md",
"level": 2,
"title": "How can I find custom components created by the Gradio community?",
"content": "We're working on creating a gallery to make it really easy to discover new custom components.\nIn the meantime, you can search for HuggingFace Spaces that are tagged as a `gradio-custom-component` [here](https://huggingface.co/search/full-text?q=gradio-custom-component&type=space)"
},
{
"id": 116,
"parent": null,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 1,
"title": "Documenting Custom Components",
"content": "In 4.15, we added a new `gradio cc docs` command to the Gradio CLI to generate rich documentation for your custom component. This command will generate documentation for users automatically, but to get the most out of it, you need to do a few things."
},
{
"id": 117,
"parent": 116,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 2,
"title": "How do I use it?",
"content": "The documentation will be generated when running `gradio cc build`. You can pass the `--no-generate-docs` argument to turn off this behaviour.\n\nThere is also a standalone `docs` command that allows for greater customisation. If you are running this command manually it should be run _after_ the `version` in your `pyproject.toml` has been bumped but before building the component.\n\nAll arguments are optional.\n\n```bash\ngradio cc docs\n path # The directory of the custom component.\n --demo-dir # Path to the demo directory.\n --demo-name # Name of the demo file\n --space-url # URL of the Hugging Face Space to link to\n --generate-space # create a documentation space.\n --no-generate-space # do not create a documentation space\n --readme-path # Path to the README.md file.\n --generate-readme # create a REAMDE.md file\n --no-generate-readme # do not create a README.md file\n --suppress-demo-check # suppress validation checks and warnings\n```"
},
{
"id": 118,
"parent": 116,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 2,
"title": "What gets generated?",
"content": "The `gradio cc docs` command will generate an interactive Gradio app and a static README file with various features. You can see an example here:\n\n- [Gradio app deployed on Hugging Face Spaces]()\n- [README.md rendered by GitHub]()\n\nThe README.md and space both have the following features:\n\n- A description.\n- Installation instructions.\n- A fully functioning code snippet.\n- Optional links to PyPi, GitHub, and Hugging Face Spaces.\n- API documentation including:\n - An argument table for component initialisation showing types, defaults, and descriptions.\n - A description of how the component affects the user's predict function.\n - A table of events and their descriptions.\n - Any additional interfaces or classes that may be used during initialisation or in the pre- or post- processors.\n\nAdditionally, the Gradio includes:\n\n- A live demo.\n- A richer, interactive version of the parameter tables.\n- Nicer styling!"
},
{
"id": 119,
"parent": 116,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 2,
"title": "What do I need to do?",
"content": "The documentation generator uses existing standards to extract the necessary information, namely Type Hints and Docstrings. There are no Gradio-specific APIs for documentation, so following best practices will generally yield the best results.\n\nIf you already use type hints and docstrings in your component source code, you don't need to do much to benefit from this feature, but there are some details that you should be aware of."
},
{
"id": 120,
"parent": 119,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 3,
"title": "Python version",
"content": "To get the best documentation experience, you need to use Python `3.10` or greater when generating documentation. This is because some introspection features used to generate the documentation were only added in `3.10`."
},
{
"id": 121,
"parent": 119,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 3,
"title": "Type hints",
"content": "Python type hints are used extensively to provide helpful information for users. \n\n \n What are type hints?\n\n\nIf you need to become more familiar with type hints in Python, they are a simple way to express what Python types are expected for arguments and return values of functions and methods. They provide a helpful in-editor experience, aid in maintenance, and integrate with various other tools. These types can be simple primitives, like `list` `str` `bool`; they could be more compound types like `list[str]`, `str | None` or `tuple[str, float | int]`; or they can be more complex types using utility classed like [`TypedDict`](https://peps.python.org/pep-0589/#abstract).\n\n[Read more about type hints in Python.](https://realpython.com/lessons/type-hinting/)\n\n\n"
},
{
"id": 122,
"parent": 121,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 4,
"title": "What do I need to add hints to?",
"content": "You do not need to add type hints to every part of your code. For the documentation to work correctly, you will need to add type hints to the following component methods:\n\n- `__init__` parameters should be typed.\n- `postprocess` parameters and return value should be typed.\n- `preprocess` parameters and return value should be typed.\n\nIf you are using `gradio cc create`, these types should already exist, but you may need to tweak them based on any changes you make."
},
{
"id": 123,
"parent": 122,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 5,
"title": "`__init__`",
"content": "Here, you only need to type the parameters. If you have cloned a template with `gradio` cc create`, these should already be in place. You will only need to add new hints for anything you have added or changed:\n\n```py\ndef __init__(\n self,\n value: str | None = None,\n *,\n sources: Literal[\"upload\", \"microphone\"] = \"upload,\n every: Timer | float | None = None,\n ...\n):\n ...\n```"
},
{
"id": 124,
"parent": 122,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 5,
"title": "`preprocess` and `postprocess`",
"content": "The `preprocess` and `postprocess` methods determine the value passed to the user function and the value that needs to be returned.\n\nEven if the design of your component is primarily as an input or an output, it is worth adding type hints to both the input parameters and the return values because Gradio has no way of limiting how components can be used.\n\nIn this case, we specifically care about:\n\n- The return type of `preprocess`.\n- The input type of `postprocess`.\n\n```py\ndef preprocess(\n self, payload: FileData | None # input is optional\n) -> tuple[int, str] | str | None:"
},
{
"id": 125,
"parent": null,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 1,
"title": "user function input is the preprocess return ▲",
"content": ""
},
{
"id": 126,
"parent": null,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 1,
"title": "user function output is the postprocess input ▼",
"content": "def postprocess(\n self, value: tuple[int, str] | None\n) -> FileData | bytes | None: # return is optional\n ...\n```"
},
{
"id": 127,
"parent": 126,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 3,
"title": "Docstrings",
"content": "Docstrings are also used extensively to extract more meaningful, human-readable descriptions of certain parts of the API.\n\n \n What are docstrings?\n\n\nIf you need to become more familiar with docstrings in Python, they are a way to annotate parts of your code with human-readable decisions and explanations. They offer a rich in-editor experience like type hints, but unlike type hints, they don't have any specific syntax requirements. They are simple strings and can take almost any form. The only requirement is where they appear. Docstrings should be \"a string literal that occurs as the first statement in a module, function, class, or method definition\".\n\n[Read more about Python docstrings.](https://peps.python.org/pep-0257/#what-is-a-docstring)\n\n\n\nWhile docstrings don't have any syntax requirements, we need a particular structure for documentation purposes.\n\nAs with type hint, the specific information we care about is as follows:\n\n- `__init__` parameter docstrings.\n- `preprocess` return docstrings.\n- `postprocess` input parameter docstrings.\n\nEverything else is optional.\n\nDocstrings should always take this format to be picked up by the documentation generator:"
},
{
"id": 128,
"parent": 127,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 4,
"title": "Classes",
"content": "```py\n\"\"\"\nA description of the class.\n\nThis can span multiple lines and can _contain_ *markdown*.\n\"\"\"\n```"
},
{
"id": 129,
"parent": 127,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 4,
"title": "Methods and functions ",
"content": "Markdown in these descriptions will not be converted into formatted text.\n\n```py\n\"\"\"\nParameters:\n param_one: A description for this parameter.\n param_two: A description for this parameter.\nReturns:\n A description for this return value.\n\"\"\"\n```"
},
{
"id": 130,
"parent": 126,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 3,
"title": "Events",
"content": "In custom components, events are expressed as a list stored on the `events` field of the component class. While we do not need types for events, we _do_ need a human-readable description so users can understand the behaviour of the event.\n\nTo facilitate this, we must create the event in a specific way.\n\nThere are two ways to add events to a custom component."
},
{
"id": 131,
"parent": 130,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 4,
"title": "Built-in events",
"content": "Gradio comes with a variety of built-in events that may be enough for your component. If you are using built-in events, you do not need to do anything as they already have descriptions we can extract:\n\n```py\nfrom gradio.events import Events\n\nclass ParamViewer(Component):\n ...\n\n EVENTS = [\n Events.change,\n Events.upload,\n ]\n```"
},
{
"id": 132,
"parent": 130,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 4,
"title": "Custom events",
"content": "You can define a custom event if the built-in events are unsuitable for your use case. This is a straightforward process, but you must create the event in this way for docstrings to work correctly:\n\n```py\nfrom gradio.events import Events, EventListener\n\nclass ParamViewer(Component):\n ...\n\n EVENTS = [\n Events.change,\n EventListener(\n \"bingbong\",\n doc=\"This listener is triggered when the user does a bingbong.\"\n )\n ]\n```"
},
{
"id": 133,
"parent": 126,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 3,
"title": "Demo",
"content": "The `demo/app.py`, often used for developing the component, generates the live demo and code snippet. The only strict rule here is that the `demo.launch()` command must be contained with a `__name__ == \"__main__\"` conditional as below:\n\n```py\nif __name__ == \"__main__\":\n demo.launch()\n```\n\nThe documentation generator will scan for such a clause and error if absent. If you are _not_ launching the demo inside the `demo/app.py`, then you can pass `--suppress-demo-check` to turn off this check."
},
{
"id": 134,
"parent": 133,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 4,
"title": "Demo recommendations",
"content": "Although there are no additional rules, there are some best practices you should bear in mind to get the best experience from the documentation generator.\n\nThese are only guidelines, and every situation is unique, but they are sound principles to remember."
},
{
"id": 135,
"parent": 134,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 5,
"title": "Keep the demo compact",
"content": "Compact demos look better and make it easier for users to understand what the demo does. Try to remove as many extraneous UI elements as possible to focus the users' attention on the core use case. \n\nSometimes, it might make sense to have a `demo/app.py` just for the docs and an additional, more complex app for your testing purposes. You can also create other spaces, showcasing more complex examples and linking to them from the main class docstring or the `pyproject.toml` description."
},
{
"id": 136,
"parent": 133,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 4,
"title": "Keep the code concise",
"content": "The 'getting started' snippet utilises the demo code, which should be as short as possible to keep users engaged and avoid confusion.\n\nIt isn't the job of the sample snippet to demonstrate the whole API; this snippet should be the shortest path to success for a new user. It should be easy to type or copy-paste and easy to understand. Explanatory comments should be brief and to the point."
},
{
"id": 137,
"parent": 133,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 4,
"title": "Avoid external dependencies",
"content": "As mentioned above, users should be able to copy-paste a snippet and have a fully working app. Try to avoid third-party library dependencies to facilitate this.\n\nYou should carefully consider any examples; avoiding examples that require additional files or that make assumptions about the environment is generally a good idea."
},
{
"id": 138,
"parent": 133,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 4,
"title": "Ensure the `demo` directory is self-contained",
"content": "Only the `demo` directory will be uploaded to Hugging Face spaces in certain instances, as the component will be installed via PyPi if possible. It is essential that this directory is self-contained and any files needed for the correct running of the demo are present."
},
{
"id": 139,
"parent": 126,
"path": "08_custom-components/09_documenting-custom-components.md",
"level": 3,
"title": "Additional URLs",
"content": "The documentation generator will generate a few buttons, providing helpful information and links to users. They are obtained automatically in some cases, but some need to be explicitly included in the `pyproject.yaml`. \n\n- PyPi Version and link - This is generated automatically.\n- GitHub Repository - This is populated via the `pyproject.toml`'s `project.urls.repository`.\n- Hugging Face Space - This is populated via the `pyproject.toml`'s `project.urls.space`.\n\nAn example `pyproject.toml` urls section might look like this:\n\n```toml\n[project.urls]\nrepository = \"https://github.com/user/repo-name\"\nspace = \"https://huggingface.co/spaces/user/space-name\"\n```"
},
{
"id": 140,
"parent": null,
"path": "08_custom-components/03_configuration.md",
"level": 1,
"title": "Configuring Your Custom Component",
"content": "The custom components workflow focuses on [convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) to reduce the number of decisions you as a developer need to make when developing your custom component.\nThat being said, you can still configure some aspects of the custom component package and directory.\nThis guide will cover how."
},
{
"id": 141,
"parent": 140,
"path": "08_custom-components/03_configuration.md",
"level": 2,
"title": "The Package Name",
"content": "By default, all custom component packages are called `gradio_` where `component-name` is the name of the component's python class in lowercase.\n\nAs an example, let's walkthrough changing the name of a component from `gradio_mytextbox` to `supertextbox`. \n\n1. Modify the `name` in the `pyproject.toml` file. \n\n```bash\n[project]\nname = \"supertextbox\"\n```\n\n2. Change all occurrences of `gradio_` in `pyproject.toml` to ``\n\n```bash\n[tool.hatch.build]\nartifacts = [\"/backend/supertextbox/templates\", \"*.pyi\"]\n\n[tool.hatch.build.targets.wheel]\npackages = [\"/backend/supertextbox\"]\n```\n\n3. Rename the `gradio_` directory in `backend/` to ``\n\n```bash\nmv backend/gradio_mytextbox backend/supertextbox\n```\n\n\nTip: Remember to change the import statement in `demo/app.py`!"
},
{
"id": 142,
"parent": 140,
"path": "08_custom-components/03_configuration.md",
"level": 2,
"title": "Top Level Python Exports",
"content": "By default, only the custom component python class is a top level export. \nThis means that when users type `from gradio_ import ...`, the only class that will be available is the custom component class.\nTo add more classes as top level exports, modify the `__all__` property in `__init__.py`\n\n```python\nfrom .mytextbox import MyTextbox\nfrom .mytextbox import AdditionalClass, additional_function\n\n__all__ = ['MyTextbox', 'AdditionalClass', 'additional_function']\n```"
},
{
"id": 143,
"parent": 140,
"path": "08_custom-components/03_configuration.md",
"level": 2,
"title": "Python Dependencies",
"content": "You can add python dependencies by modifying the `dependencies` key in `pyproject.toml`\n\n```bash\ndependencies = [\"gradio\", \"numpy\", \"PIL\"]\n```\n\n\nTip: Remember to run `gradio cc install` when you add dependencies!"
},
{
"id": 144,
"parent": 140,
"path": "08_custom-components/03_configuration.md",
"level": 2,
"title": "Javascript Dependencies",
"content": "You can add JavaScript dependencies by modifying the `\"dependencies\"` key in `frontend/package.json`\n\n```json\n\"dependencies\": {\n \"@gradio/atoms\": \"0.2.0-beta.4\",\n \"@gradio/statustracker\": \"0.3.0-beta.6\",\n \"@gradio/utils\": \"0.2.0-beta.4\",\n \"your-npm-package\": \"\"\n}\n```"
},
{
"id": 145,
"parent": 140,
"path": "08_custom-components/03_configuration.md",
"level": 2,
"title": "Directory Structure",
"content": "By default, the CLI will place the Python code in `backend` and the JavaScript code in `frontend`.\nIt is not recommended to change this structure since it makes it easy for a potential contributor to look at your source code and know where everything is.\nHowever, if you did want to this is what you would have to do:\n\n1. Place the Python code in the subdirectory of your choosing. Remember to modify the `[tool.hatch.build]` `[tool.hatch.build.targets.wheel]` in the `pyproject.toml` to match!\n\n2. Place the JavaScript code in the subdirectory of your choosing.\n\n2. Add the `FRONTEND_DIR` property on the component python class. It must be the relative path from the file where the class is defined to the location of the JavaScript directory.\n\n```python\nclass SuperTextbox(Component):\n FRONTEND_DIR = \"../../frontend/\"\n```\n\nThe JavaScript and Python directories must be under the same common directory!"
},
{
"id": 146,
"parent": 140,
"path": "08_custom-components/03_configuration.md",
"level": 2,
"title": "Conclusion",
"content": "Sticking to the defaults will make it easy for others to understand and contribute to your custom component.\nAfter all, the beauty of open source is that anyone can help improve your code!\nBut if you ever need to deviate from the defaults, you know how!"
},
{
"id": 147,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "How to Style the Gradio Dataframe",
"content": "Tags: DATAFRAME, STYLE, COLOR"
},
{
"id": 148,
"parent": 147,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 2,
"title": "Introduction",
"content": "Data visualization is a crucial aspect of data analysis and machine learning. The Gradio `DataFrame` component is a popular way to display tabular data (particularly data in the form of a `pandas` `DataFrame` object) within a web application. \n\nThis post will explore the recent enhancements in Gradio that allow users to integrate the styling options of pandas, e.g. adding colors to the DataFrame component, or setting the display precision of numbers. \n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-highlight.png)\n\nLet's dive in!\n\n**Prerequisites**: We'll be using the `gradio.Blocks` class in our examples.\nYou can [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners) if you are not already familiar with it. Also please make sure you are using the **latest version** version of Gradio: `pip install --upgrade gradio`."
},
{
"id": 149,
"parent": 147,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 2,
"title": "Overview",
"content": "The Gradio `DataFrame` component now supports values of the type `Styler` from the `pandas` class. This allows us to reuse the rich existing API and documentation of the `Styler` class instead of inventing a new style format on our own. Here's a complete example of how it looks:\n\n```python\nimport pandas as pd \nimport gradio as gr"
},
{
"id": 150,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Creating a sample dataframe",
"content": "df = pd.DataFrame({\n \"A\" : [14, 4, 5, 4, 1], \n \"B\" : [5, 2, 54, 3, 2], \n \"C\" : [20, 20, 7, 3, 8], \n \"D\" : [14, 3, 6, 2, 6], \n \"E\" : [23, 45, 64, 32, 23]\n})"
},
{
"id": 151,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Applying style to highlight the maximum value in each row",
"content": "styler = df.style.highlight_max(color = 'lightgreen', axis = 0)"
},
{
"id": 152,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Displaying the styled dataframe in Gradio",
"content": "with gr.Blocks() as demo:\n gr.DataFrame(styler)\n \ndemo.launch()\n```\n\nThe Styler class can be used to apply conditional formatting and styling to dataframes, making them more visually appealing and interpretable. You can highlight certain values, apply gradients, or even use custom CSS to style the DataFrame. The Styler object is applied to a DataFrame and it returns a new object with the relevant styling properties, which can then be previewed directly, or rendered dynamically in a Gradio interface.\n\nTo read more about the Styler object, read the official `pandas` documentation at: https://pandas.pydata.org/docs/user_guide/style.html\n\nBelow, we'll explore a few examples:"
},
{
"id": 153,
"parent": 152,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 2,
"title": "Highlighting Cells",
"content": "Ok, so let's revisit the previous example. We start by creating a `pd.DataFrame` object and then highlight the highest value in each row with a light green color:\n\n```python\nimport pandas as pd"
},
{
"id": 154,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Creating a sample dataframe",
"content": "df = pd.DataFrame({\n \"A\" : [14, 4, 5, 4, 1], \n \"B\" : [5, 2, 54, 3, 2], \n \"C\" : [20, 20, 7, 3, 8], \n \"D\" : [14, 3, 6, 2, 6], \n \"E\" : [23, 45, 64, 32, 23]\n})"
},
{
"id": 155,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Applying style to highlight the maximum value in each row",
"content": "styler = df.style.highlight_max(color = 'lightgreen', axis = 0)\n```\n\nNow, we simply pass this object into the Gradio `DataFrame` and we can visualize our colorful table of data in 4 lines of python:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Dataframe(styler)\n \ndemo.launch()\n```\n\nHere's how it looks:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-highlight.png)"
},
{
"id": 156,
"parent": 155,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 2,
"title": "Font Colors",
"content": "Apart from highlighting cells, you might want to color specific text within the cells. Here's how you can change text colors for certain columns:\n\n```python\nimport pandas as pd \nimport gradio as gr"
},
{
"id": 157,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Creating a sample dataframe",
"content": "df = pd.DataFrame({\n \"A\" : [14, 4, 5, 4, 1], \n \"B\" : [5, 2, 54, 3, 2], \n \"C\" : [20, 20, 7, 3, 8], \n \"D\" : [14, 3, 6, 2, 6], \n \"E\" : [23, 45, 64, 32, 23]\n})"
},
{
"id": 158,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Function to apply text color",
"content": "def highlight_cols(x): \n df = x.copy() \n df.loc[:, :] = 'color: purple'\n df[['B', 'C', 'E']] = 'color: green'\n return df"
},
{
"id": 159,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Applying the style function",
"content": "s = df.style.apply(highlight_cols, axis = None)"
},
{
"id": 160,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Displaying the styled dataframe in Gradio",
"content": "with gr.Blocks() as demo:\n gr.DataFrame(s)\n \ndemo.launch()\n```\n\nIn this script, we define a custom function highlight_cols that changes the text color to purple for all cells, but overrides this for columns B, C, and E with green. Here's how it looks:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-color.png)"
},
{
"id": 161,
"parent": 160,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 2,
"title": "Display Precision ",
"content": "Sometimes, the data you are dealing with might have long floating numbers, and you may want to display only a fixed number of decimals for simplicity. The pandas Styler object allows you to format the precision of numbers displayed. Here's how you can do this:\n\n```python\nimport pandas as pd\nimport gradio as gr"
},
{
"id": 162,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Creating a sample dataframe with floating numbers",
"content": "df = pd.DataFrame({\n \"A\" : [14.12345, 4.23456, 5.34567, 4.45678, 1.56789], \n \"B\" : [5.67891, 2.78912, 54.89123, 3.91234, 2.12345], \n # ... other columns\n})"
},
{
"id": 163,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Setting the precision of numbers to 2 decimal places",
"content": "s = df.style.format(\"{:.2f}\")"
},
{
"id": 164,
"parent": null,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 1,
"title": "Displaying the styled dataframe in Gradio",
"content": "with gr.Blocks() as demo:\n gr.DataFrame(s)\n \ndemo.launch()\n```\n\nIn this script, the format method of the Styler object is used to set the precision of numbers to two decimal places. Much cleaner now:\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-precision.png)"
},
{
"id": 165,
"parent": 164,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 2,
"title": "Note about Interactivity",
"content": "One thing to keep in mind is that the gradio `DataFrame` component only accepts `Styler` objects when it is non-interactive (i.e. in \"static\" mode). If the `DataFrame` component is interactive, then the styling information is ignored and instead the raw table values are shown instead. \n\nThe `DataFrame` component is by default non-interactive, unless it is used as an input to an event. In which case, you can force the component to be non-interactive by setting the `interactive` prop like this:\n\n```python\nc = gr.DataFrame(styler, interactive=False)\n```"
},
{
"id": 166,
"parent": 164,
"path": "10_other-tutorials/styling-the-gradio-dataframe.md",
"level": 2,
"title": "Conclusion 🎉",
"content": "This is just a taste of what's possible using the `gradio.DataFrame` component with the `Styler` class from `pandas`. Try it out and let us know what you think!"
},
{
"id": 167,
"parent": null,
"path": "10_other-tutorials/image-classification-in-pytorch.md",
"level": 1,
"title": "Image Classification in PyTorch",
"content": "Related spaces: https://huggingface.co/spaces/abidlabs/pytorch-image-classifier, https://huggingface.co/spaces/pytorch/ResNet, https://huggingface.co/spaces/pytorch/ResNext, https://huggingface.co/spaces/pytorch/SqueezeNet\nTags: VISION, RESNET, PYTORCH"
},
{
"id": 168,
"parent": 167,
"path": "10_other-tutorials/image-classification-in-pytorch.md",
"level": 2,
"title": "Introduction",
"content": "Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from autonomous vehicles to medical imaging.\n\nSuch models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in Python, and it will look like the demo on the bottom of the page.\n\nLet's get started!"
},
{
"id": 169,
"parent": 168,
"path": "10_other-tutorials/image-classification-in-pytorch.md",
"level": 3,
"title": "Prerequisites",
"content": "Make sure you have the `gradio` Python package already [installed](/getting_started). We will be using a pretrained image classification model, so you should also have `torch` installed."
},
{
"id": 170,
"parent": 167,
"path": "10_other-tutorials/image-classification-in-pytorch.md",
"level": 2,
"title": "Step 1 — Setting up the Image Classification Model",
"content": "First, we will need an image classification model. For this tutorial, we will use a pretrained Resnet-18 model, as it is easily downloadable from [PyTorch Hub](https://pytorch.org/hub/pytorch_vision_resnet/). You can use a different pretrained model or train your own.\n\n```python\nimport torch\n\nmodel = torch.hub.load('pytorch/vision:v0.6.0', 'resnet18', pretrained=True).eval()\n```\n\nBecause we will be using the model for inference, we have called the `.eval()` method."
},
{
"id": 171,
"parent": 167,
"path": "10_other-tutorials/image-classification-in-pytorch.md",
"level": 2,
"title": "Step 2 — Defining a `predict` function",
"content": "Next, we will need to define a function that takes in the _user input_, which in this case is an image, and returns the prediction. The prediction should be returned as a dictionary whose keys are class name and values are confidence probabilities. We will load the class names from this [text file](https://git.io/JJkYN).\n\nIn the case of our pretrained model, it will look like this:\n\n```python\nimport requests\nfrom PIL import Image\nfrom torchvision import transforms"
},
{
"id": 172,
"parent": null,
"path": "10_other-tutorials/image-classification-in-pytorch.md",
"level": 1,
"title": "Download human-readable labels for ImageNet.",
"content": "response = requests.get(\"https://git.io/JJkYN\")\nlabels = response.text.split(\"\\n\")\n\ndef predict(inp):\n inp = transforms.ToTensor()(inp).unsqueeze(0)\n with torch.no_grad():\n prediction = torch.nn.functional.softmax(model(inp)[0], dim=0)\n confidences = {labels[i]: float(prediction[i]) for i in range(1000)}\n return confidences\n```\n\nLet's break this down. The function takes one parameter:\n\n- `inp`: the input image as a `PIL` image\n\nThen, the function converts the image to a PIL Image and then eventually a PyTorch `tensor`, passes it through the model, and returns:\n\n- `confidences`: the predictions, as a dictionary whose keys are class labels and whose values are confidence probabilities"
},
{
"id": 173,
"parent": 172,
"path": "10_other-tutorials/image-classification-in-pytorch.md",
"level": 2,
"title": "Step 3 — Creating a Gradio Interface",
"content": "Now that we have our predictive function set up, we can create a Gradio Interface around it.\n\nIn this case, the input component is a drag-and-drop image component. To create this input, we use `Image(type=\"pil\")` which creates the component and handles the preprocessing to convert that to a `PIL` image.\n\nThe output component will be a `Label`, which displays the top labels in a nice form. Since we don't want to show all 1,000 class labels, we will customize it to show only the top 3 images by constructing it as `Label(num_top_classes=3)`.\n\nFinally, we'll add one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples. The code for Gradio looks like this:\n\n```python\nimport gradio as gr\n\ngr.Interface(fn=predict,\n inputs=gr.Image(type=\"pil\"),\n outputs=gr.Label(num_top_classes=3),\n examples=[\"lion.jpg\", \"cheetah.jpg\"]).launch()\n```\n\nThis produces the following interface, which you can try right here in your browser (try uploading your own examples!):\n\n\n\n\n---\n\nAnd you're done! That's all the code you need to build a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!"
},
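{
"id": 9002,
"parent": 173,
"path": "10_other-tutorials/image-classification-in-pytorch.md",
"level": 3,
"title": "What `num_top_classes=3` means",
"content": "To make the `Label(num_top_classes=3)` behavior concrete, here is a plain-Python sketch (with made-up confidence values) of selecting the three largest entries from a `confidences` dictionary like the one `predict` returns. The `Label` component does this for you; the snippet only illustrates the selection:\n\n```python
# Hypothetical confidences, shaped like the output of predict()
confidences = {'lion': 0.82, 'cheetah': 0.11, 'tiger': 0.04, 'tabby': 0.02}

# Keep only the three highest-confidence labels, largest first
top3 = dict(sorted(confidences.items(), key=lambda kv: kv[1], reverse=True)[:3])
```"
},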
{
"id": 174,
"parent": null,
"path": "10_other-tutorials/using-flagging.md",
"level": 1,
"title": "Using Flagging",
"content": "Related spaces: https://huggingface.co/spaces/gradio/calculator-flagging-crowdsourced, https://huggingface.co/spaces/gradio/calculator-flagging-options, https://huggingface.co/spaces/gradio/calculator-flag-basic\nTags: FLAGGING, DATA"
},
{
"id": 175,
"parent": 174,
"path": "10_other-tutorials/using-flagging.md",
"level": 2,
"title": "Introduction",
"content": "When you demo a machine learning model, you might want to collect data from users who try the model, particularly data points in which the model is not behaving as expected. Capturing these \"hard\" data points is valuable because it allows you to improve your machine learning model and make it more reliable and robust.\n\nGradio simplifies the collection of this data by including a **Flag** button with every `Interface`. This allows a user or tester to easily send data back to the machine where the demo is running. In this Guide, we discuss more about how to use the flagging feature, both with `gradio.Interface` as well as with `gradio.Blocks`."
},
{
"id": 176,
"parent": 174,
"path": "10_other-tutorials/using-flagging.md",
"level": 2,
"title": "The **Flag** button in `gradio.Interface`",
"content": "Flagging with Gradio's `Interface` is especially easy. By default, underneath the output components, there is a button marked **Flag**. When a user testing your model sees input with interesting output, they can click the flag button to send the input and output data back to the machine where the demo is running. The sample is saved to a CSV log file (by default). If the demo involves images, audio, video, or other types of files, these are saved separately in a parallel directory and the paths to these files are saved in the CSV file.\n\nThere are [four parameters](https://gradio.app/docs/interface#initialization) in `gradio.Interface` that control how flagging works. We will go over them in greater detail.\n\n- `flagging_mode`: this parameter can be set to either `\"manual\"` (default), `\"auto\"`, or `\"never\"`.\n - `manual`: users will see a button to flag, and samples are only flagged when the button is clicked.\n - `auto`: users will not see a button to flag, but every sample will be flagged automatically.\n - `never`: users will not see a button to flag, and no sample will be flagged.\n- `flagging_options`: this parameter can be either `None` (default) or a list of strings.\n - If `None`, then the user simply clicks on the **Flag** button and no additional options are shown.\n - If a list of strings are provided, then the user sees several buttons, corresponding to each of the strings that are provided. For example, if the value of this parameter is `[\"Incorrect\", \"Ambiguous\"]`, then buttons labeled **Flag as Incorrect** and **Flag as Ambiguous** appear. 
This only applies if `flagging_mode` is `\"manual\"`.\n - The chosen option is then logged along with the input and output.\n- `flagging_dir`: this parameter takes a string.\n - It represents what to name the directory where flagged data is stored.\n- `flagging_callback`: this parameter takes an instance of a subclass of the `FlaggingCallback` class\n - Using this parameter allows you to write custom code that gets run when the flag button is clicked\n - By default, this is set to an instance of `gr.JSONLogger`"
},
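{
"id": 9003,
"parent": 176,
"path": "10_other-tutorials/using-flagging.md",
"level": 3,
"title": "Sketch: the shape of a custom flagging callback",
"content": "To illustrate what a `flagging_callback` looks like, here is a simplified, hypothetical logger that collects flagged samples in memory instead of writing a CSV. It only mirrors the two methods a `FlaggingCallback` subclass provides (`setup` and `flag`); check the Gradio docs for the exact signatures before subclassing for real:\n\n```python
# A toy stand-in for a FlaggingCallback subclass (not Gradio's actual API)
class InMemoryLogger:
    def setup(self, components, flagging_dir):
        self.rows = []

    def flag(self, flag_data, flag_option=None, username=None):
        self.rows.append((flag_data, flag_option))
        return len(self.rows)  # number of samples flagged so far
```"
},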
{
"id": 177,
"parent": 174,
"path": "10_other-tutorials/using-flagging.md",
"level": 2,
"title": "What happens to flagged data?",
"content": "Within the directory provided by the `flagging_dir` argument, a JSON file will log the flagged data.\n\nHere's an example: The code below creates the calculator interface embedded below it:\n\n```python\nimport gradio as gr\n\n\ndef calculator(num1, operation, num2):\n if operation == \"add\":\n return num1 + num2\n elif operation == \"subtract\":\n return num1 - num2\n elif operation == \"multiply\":\n return num1 * num2\n elif operation == \"divide\":\n return num1 / num2\n\n\niface = gr.Interface(\n calculator,\n [\"number\", gr.Radio([\"add\", \"subtract\", \"multiply\", \"divide\"]), \"number\"],\n \"number\",\n allow_flagging=\"manual\"\n)\n\niface.launch()\n```\n\n\n\nWhen you click the flag button above, the directory where the interface was launched will include a new flagged subfolder, with a csv file inside it. This csv file includes all the data that was flagged.\n\n```directory\n+-- flagged/\n| +-- logs.csv\n```\n\n_flagged/logs.csv_\n\n```csv\nnum1,operation,num2,Output,timestamp\n5,add,7,12,2022-01-31 11:40:51.093412\n6,subtract,1.5,4.5,2022-01-31 03:25:32.023542\n```\n\nIf the interface involves file data, such as for Image and Audio components, folders will be created to store those flagged data as well. For example an `image` input to `image` output interface will create the following structure.\n\n```directory\n+-- flagged/\n| +-- logs.csv\n| +-- image/\n| | +-- 0.png\n| | +-- 1.png\n| +-- Output/\n| | +-- 0.png\n| | +-- 1.png\n```\n\n_flagged/logs.csv_\n\n```csv\nim,Output timestamp\nim/0.png,Output/0.png,2022-02-04 19:49:58.026963\nim/1.png,Output/1.png,2022-02-02 10:40:51.093412\n```\n\nIf you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. 
Users will have to select one of these choices when flagging, and the option will be saved as an additional column to the CSV.\n\nIf we go back to the calculator example, the following code will create the interface embedded below it.\n\n```python\niface = gr.Interface(\n calculator,\n [\"number\", gr.Radio([\"add\", \"subtract\", \"multiply\", \"divide\"]), \"number\"],\n \"number\",\n flagging_mode=\"manual\",\n flagging_options=[\"wrong sign\", \"off by one\", \"other\"]\n)\n\niface.launch()\n```\n\n\n\nWhen users click the flag button, the csv file will now include a column indicating the selected option.\n\n_flagged/logs.csv_\n\n```csv\nnum1,operation,num2,Output,flag,timestamp\n5,add,7,-12,wrong sign,2022-02-04 11:40:51.093412\n6,subtract,1.5,3.5,off by one,2022-02-04 11:42:32.062512\n```"
},
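{
"id": 9004,
"parent": 177,
"path": "10_other-tutorials/using-flagging.md",
"level": 3,
"title": "Reading the flagged CSV back",
"content": "Since flagged data lands in an ordinary CSV file, it is easy to load for later analysis. A minimal sketch with the standard-library `csv` module, using hypothetical rows shaped like the logs above:\n\n```python
import csv

# Stand-in for open('flagged/logs.csv'); DictReader accepts any iterable of lines
log_lines = [
    'num1,operation,num2,Output,flag,timestamp',
    '5,add,7,-12,wrong sign,2022-02-04 11:40:51.093412',
]
rows = list(csv.DictReader(log_lines))
```"
},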
{
"id": 178,
"parent": 174,
"path": "10_other-tutorials/using-flagging.md",
"level": 2,
"title": "Flagging with Blocks",
"content": "What about if you are using `gradio.Blocks`? On one hand, you have even more flexibility\nwith Blocks -- you can write whatever Python code you want to run when a button is clicked,\nand assign that using the built-in events in Blocks.\n\nAt the same time, you might want to use an existing `FlaggingCallback` to avoid writing extra code.\nThis requires two steps:\n\n1. You have to run your callback's `.setup()` somewhere in the code prior to the\n first time you flag data\n2. When the flagging button is clicked, then you trigger the callback's `.flag()` method,\n making sure to collect the arguments correctly and disabling the typical preprocessing.\n\nHere is an example with an image sepia filter Blocks demo that lets you flag\ndata using the default `CSVLogger`:\n\n```py\nimport numpy as np\nimport gradio as gr\n\ndef sepia(input_img, strength):\n sepia_filter = strength * np.array(\n [[0.393, 0.769, 0.189], [0.349, 0.686, 0.168], [0.272, 0.534, 0.131]]\n ) + (1-strength) * np.identity(3)\n sepia_img = input_img.dot(sepia_filter.T)\n sepia_img /= sepia_img.max()\n return sepia_img\n\ncallback = gr.CSVLogger()\n\nwith gr.Blocks() as demo:\n with gr.Row():\n with gr.Column():\n img_input = gr.Image()\n strength = gr.Slider(0, 1, 0.5)\n img_output = gr.Image()\n with gr.Row():\n btn = gr.Button(\"Flag\")\n\n # This needs to be called at some point prior to the first call to callback.flag()\n callback.setup([img_input, strength, img_output], \"flagged_data_points\")\n\n img_input.change(sepia, [img_input, strength], img_output)\n strength.change(sepia, [img_input, strength], img_output)\n\n # We can choose which components to flag -- in this case, we'll flag all of them\n btn.click(lambda *args: callback.flag(list(args)), [img_input, strength, img_output], None, preprocess=False)\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n$demo_blocks_flag"
},
{
"id": 179,
"parent": 174,
"path": "10_other-tutorials/using-flagging.md",
"level": 2,
"title": "Privacy",
"content": "Important Note: please make sure your users understand when the data they submit is being saved, and what you plan on doing with it. This is especially important when you use `flagging_mode=auto` (when all of the data submitted through the demo is being flagged)"
},
{
"id": 180,
"parent": 179,
"path": "10_other-tutorials/using-flagging.md",
"level": 3,
"title": "That's all! Happy building :)",
"content": ""
},
{
"id": 181,
"parent": null,
"path": "10_other-tutorials/theming-guide.md",
"level": 1,
"title": "Theming",
"content": "Tags: THEMES"
},
{
"id": 182,
"parent": 181,
"path": "10_other-tutorials/theming-guide.md",
"level": 2,
"title": "Introduction",
"content": "Gradio features a built-in theming engine that lets you customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Blocks` or `Interface` constructor. For example:\n\n```python\nwith gr.Blocks(theme=gr.themes.Soft()) as demo:\n ...\n```\n\n
\n\n
\n\nGradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. These are:\n\n\n* `gr.themes.Base()` - the `\"base\"` theme sets the primary color to blue but otherwise has minimal styling, making it particularly useful as a base for creating new, custom themes.\n* `gr.themes.Default()` - the `\"default\"` Gradio 5 theme, with a vibrant orange primary color and gray secondary color.\n* `gr.themes.Origin()` - the `\"origin\"` theme is most similar to Gradio 4 styling. Colors, especially in light mode, are more subdued than the Gradio 5 default theme.\n* `gr.themes.Citrus()` - the `\"citrus\"` theme uses a yellow primary color, highlights form elements that are in focus, and includes fun 3D effects when buttons are clicked.\n* `gr.themes.Monochrome()` - the `\"monochrome\"` theme uses a black primary and white secondary color, and uses serif-style fonts, giving the appearance of a black-and-white newspaper. \n* `gr.themes.Soft()` - the `\"soft\"` theme uses a purpose primary color and white secondary color. It also increases the border radii and around buttons and form elements and highlights labels.\n* `gr.themes.Glass()` - the `\"glass\"` theme has a blue primary color and a transclucent gray secondary color. The theme also uses vertical gradients to create a glassy effect.\n* `gr.themes.Ocean()` - the `\"ocean\"` theme has a blue-green primary color and gray secondary color. The theme also uses horizontal gradients, especially for buttons and some form elements.\n\n\nEach of these themes set values for hundreds of CSS variables. You can use prebuilt themes as a starting point for your own custom themes, or you can create your own themes from scratch. Let's take a look at each approach."
},
{
"id": 183,
"parent": 181,
"path": "10_other-tutorials/theming-guide.md",
"level": 2,
"title": "Using the Theme Builder",
"content": "The easiest way to build a theme is using the Theme Builder. To launch the Theme Builder locally, run the following code:\n\n```python\nimport gradio as gr\n\ngr.themes.builder()\n```\n\n$demo_theme_builder\n\nYou can use the Theme Builder running on Spaces above, though it runs much faster when you launch it locally via `gr.themes.builder()`.\n\nAs you edit the values in the Theme Builder, the app will preview updates in real time. You can download the code to generate the theme you've created so you can use it in any Gradio app.\n\nIn the rest of the guide, we will cover building themes programmatically."
},
{
"id": 184,
"parent": 181,
"path": "10_other-tutorials/theming-guide.md",
"level": 2,
"title": "Extending Themes via the Constructor",
"content": "Although each theme has hundreds of CSS variables, the values for most these variables are drawn from 8 core variables which can be set through the constructor of each prebuilt theme. Modifying these 8 arguments allows you to quickly change the look and feel of your app."
},
{
"id": 185,
"parent": 184,
"path": "10_other-tutorials/theming-guide.md",
"level": 3,
"title": "Core Colors",
"content": "The first 3 constructor arguments set the colors of the theme and are `gradio.themes.Color` objects. Internally, these Color objects hold brightness values for the palette of a single hue, ranging from 50, 100, 200..., 800, 900, 950. Other CSS variables are derived from these 3 colors.\n\nThe 3 color constructor arguments are:\n\n- `primary_hue`: This is the color draws attention in your theme. In the default theme, this is set to `gradio.themes.colors.orange`.\n- `secondary_hue`: This is the color that is used for secondary elements in your theme. In the default theme, this is set to `gradio.themes.colors.blue`.\n- `neutral_hue`: This is the color that is used for text and other neutral elements in your theme. In the default theme, this is set to `gradio.themes.colors.gray`.\n\nYou could modify these values using their string shortcuts, such as\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(primary_hue=\"red\", secondary_hue=\"pink\")) as demo:\n ...\n```\n\nor you could use the `Color` objects directly, like this:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(primary_hue=gr.themes.colors.red, secondary_hue=gr.themes.colors.pink)) as demo:\n ...\n```\n\n
\n\n
\n\nPredefined colors are:\n\n- `slate`\n- `gray`\n- `zinc`\n- `neutral`\n- `stone`\n- `red`\n- `orange`\n- `amber`\n- `yellow`\n- `lime`\n- `green`\n- `emerald`\n- `teal`\n- `cyan`\n- `sky`\n- `blue`\n- `indigo`\n- `violet`\n- `purple`\n- `fuchsia`\n- `pink`\n- `rose`\n\nYou could also create your own custom `Color` objects and pass them in."
},
{
"id": 186,
"parent": 184,
"path": "10_other-tutorials/theming-guide.md",
"level": 3,
"title": "Core Sizing",
"content": "The next 3 constructor arguments set the sizing of the theme and are `gradio.themes.Size` objects. Internally, these Size objects hold pixel size values that range from `xxs` to `xxl`. Other CSS variables are derived from these 3 sizes.\n\n- `spacing_size`: This sets the padding within and spacing between elements. In the default theme, this is set to `gradio.themes.sizes.spacing_md`.\n- `radius_size`: This sets the roundedness of corners of elements. In the default theme, this is set to `gradio.themes.sizes.radius_md`.\n- `text_size`: This sets the font size of text. In the default theme, this is set to `gradio.themes.sizes.text_md`.\n\nYou could modify these values using their string shortcuts, such as\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(spacing_size=\"sm\", radius_size=\"none\")) as demo:\n ...\n```\n\nor you could use the `Size` objects directly, like this:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(spacing_size=gr.themes.sizes.spacing_sm, radius_size=gr.themes.sizes.radius_none)) as demo:\n ...\n```\n\n
\n\n
\n\nThe predefined size objects are:\n\n- `radius_none`\n- `radius_sm`\n- `radius_md`\n- `radius_lg`\n- `spacing_sm`\n- `spacing_md`\n- `spacing_lg`\n- `text_sm`\n- `text_md`\n- `text_lg`\n\nYou could also create your own custom `Size` objects and pass them in."
},
{
"id": 187,
"parent": 184,
"path": "10_other-tutorials/theming-guide.md",
"level": 3,
"title": "Core Fonts",
"content": "The final 2 constructor arguments set the fonts of the theme. You can pass a list of fonts to each of these arguments to specify fallbacks. If you provide a string, it will be loaded as a system font. If you provide a `gradio.themes.GoogleFont`, the font will be loaded from Google Fonts.\n\n- `font`: This sets the primary font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont(\"IBM Plex Sans\")`.\n- `font_mono`: This sets the monospace font of the theme. In the default theme, this is set to `gradio.themes.GoogleFont(\"IBM Plex Mono\")`.\n\nYou could modify these values such as the following:\n\n```python\nwith gr.Blocks(theme=gr.themes.Default(font=[gr.themes.GoogleFont(\"Inconsolata\"), \"Arial\", \"sans-serif\"])) as demo:\n ...\n```\n\n
\n\n
"
},
{
"id": 188,
"parent": 181,
"path": "10_other-tutorials/theming-guide.md",
"level": 2,
"title": "Extending Themes via `.set()`",
"content": "You can also modify the values of CSS variables after the theme has been loaded. To do so, use the `.set()` method of the theme object to get access to the CSS variables. For example:\n\n```python\ntheme = gr.themes.Default(primary_hue=\"blue\").set(\n loader_color=\"#FF0000\",\n slider_color=\"#FF0000\",\n)\n\nwith gr.Blocks(theme=theme) as demo:\n ...\n```\n\nIn the example above, we've set the `loader_color` and `slider_color` variables to `#FF0000`, despite the overall `primary_color` using the blue color palette. You can set any CSS variable that is defined in the theme in this manner.\n\nYour IDE type hinting should help you navigate these variables. Since there are so many CSS variables, let's take a look at how these variables are named and organized."
},
{
"id": 189,
"parent": 188,
"path": "10_other-tutorials/theming-guide.md",
"level": 3,
"title": "CSS Variable Naming Conventions",
"content": "CSS variable names can get quite long, like `button_primary_background_fill_hover_dark`! However they follow a common naming convention that makes it easy to understand what they do and to find the variable you're looking for. Separated by underscores, the variable name is made up of:\n\n1. The target element, such as `button`, `slider`, or `block`.\n2. The target element type or sub-element, such as `button_primary`, or `block_label`.\n3. The property, such as `button_primary_background_fill`, or `block_label_border_width`.\n4. Any relevant state, such as `button_primary_background_fill_hover`.\n5. If the value is different in dark mode, the suffix `_dark`. For example, `input_border_color_focus_dark`.\n\nOf course, many CSS variable names are shorter than this, such as `table_border_color`, or `input_shadow`."
},
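{
"id": 9005,
"parent": 189,
"path": "10_other-tutorials/theming-guide.md",
"level": 4,
"title": "Decomposing a variable name",
"content": "The convention above can be checked mechanically. A small sketch that splits a variable name into its parts and detects the dark-mode suffix (an illustration of the naming scheme, not an API Gradio exposes):\n\n```python
name = 'button_primary_background_fill_hover_dark'

parts = name.split('_')
is_dark = parts[-1] == 'dark'  # dark-mode variant?
target = parts[0]              # the target element, e.g. 'button'
```"
},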
{
"id": 190,
"parent": 188,
"path": "10_other-tutorials/theming-guide.md",
"level": 3,
"title": "CSS Variable Organization",
"content": "Though there are hundreds of CSS variables, they do not all have to have individual values. They draw their values by referencing a set of core variables and referencing each other. This allows us to only have to modify a few variables to change the look and feel of the entire theme, while also getting finer control of individual elements that we may want to modify."
},
{
"id": 191,
"parent": 190,
"path": "10_other-tutorials/theming-guide.md",
"level": 4,
"title": "Referencing Core Variables",
"content": "To reference one of the core constructor variables, precede the variable name with an asterisk. To reference a core color, use the `*primary_`, `*secondary_`, or `*neutral_` prefix, followed by the brightness value. For example:\n\n```python\ntheme = gr.themes.Default(primary_hue=\"blue\").set(\n button_primary_background_fill=\"*primary_200\",\n button_primary_background_fill_hover=\"*primary_300\",\n)\n```\n\nIn the example above, we've set the `button_primary_background_fill` and `button_primary_background_fill_hover` variables to `*primary_200` and `*primary_300`. These variables will be set to the 200 and 300 brightness values of the blue primary color palette, respectively.\n\nSimilarly, to reference a core size, use the `*spacing_`, `*radius_`, or `*text_` prefix, followed by the size value. For example:\n\n```python\ntheme = gr.themes.Default(radius_size=\"md\").set(\n button_primary_border_radius=\"*radius_xl\",\n)\n```\n\nIn the example above, we've set the `button_primary_border_radius` variable to `*radius_xl`. This variable will be set to the `xl` setting of the medium radius size range."
},
{
"id": 192,
"parent": 190,
"path": "10_other-tutorials/theming-guide.md",
"level": 4,
"title": "Referencing Other Variables",
"content": "Variables can also reference each other. For example, look at the example below:\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill=\"#FF0000\",\n button_primary_background_fill_hover=\"#FF0000\",\n button_primary_border=\"#FF0000\",\n)\n```\n\nHaving to set these values to a common color is a bit tedious. Instead, we can reference the `button_primary_background_fill` variable in the `button_primary_background_fill_hover` and `button_primary_border` variables, using a `*` prefix.\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill=\"#FF0000\",\n button_primary_background_fill_hover=\"*button_primary_background_fill\",\n button_primary_border=\"*button_primary_background_fill\",\n)\n```\n\nNow, if we change the `button_primary_background_fill` variable, the `button_primary_background_fill_hover` and `button_primary_border` variables will automatically update as well.\n\nThis is particularly useful if you intend to share your theme - it makes it easy to modify the theme without having to change every variable.\n\nNote that dark mode variables automatically reference each other. For example:\n\n```python\ntheme = gr.themes.Default().set(\n button_primary_background_fill=\"#FF0000\",\n button_primary_background_fill_dark=\"#AAAAAA\",\n button_primary_border=\"*button_primary_background_fill\",\n button_primary_border_dark=\"*button_primary_background_fill_dark\",\n)\n```\n\n`button_primary_border_dark` will draw its value from `button_primary_background_fill_dark`, because dark mode always draw from the dark version of the variable."
},
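{
"id": 9006,
"parent": 190,
"path": "10_other-tutorials/theming-guide.md",
"level": 4,
"title": "How `*` references behave",
"content": "Conceptually, a `*`-prefixed value is a pointer to another variable. The toy resolver below (not Gradio's implementation) shows the idea, including chained references:\n\n```python
# Hypothetical variable table; '*' marks a reference to another variable
theme_vars = {
    'button_primary_background_fill': '#FF0000',
    'button_primary_background_fill_hover': '*button_primary_background_fill',
    'button_primary_border': '*button_primary_background_fill_hover',
}

def resolve(name, variables):
    value = variables[name]
    while isinstance(value, str) and value.startswith('*'):
        value = variables[value[1:]]  # follow the reference
    return value
```"
},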
{
"id": 193,
"parent": 181,
"path": "10_other-tutorials/theming-guide.md",
"level": 2,
"title": "Creating a Full Theme",
"content": "Let's say you want to create a theme from scratch! We'll go through it step by step - you can also see the source of prebuilt themes in the gradio source repo for reference - [here's the source](https://github.com/gradio-app/gradio/blob/main/gradio/themes/monochrome.py) for the Monochrome theme.\n\nOur new theme class will inherit from `gradio.themes.Base`, a theme that sets a lot of convenient defaults. Let's make a simple demo that creates a dummy theme called Seafoam, and make a simple app that uses it.\n\n```py\nimport gradio as gr\nfrom gradio.themes.base import Base\nimport time\n\nclass Seafoam(Base):\n pass\n\nseafoam = Seafoam()\n\nwith gr.Blocks(theme=seafoam) as demo:\n textbox = gr.Textbox(label=\"Name\")\n slider = gr.Slider(label=\"Count\", minimum=0, maximum=100, step=1)\n with gr.Row():\n button = gr.Button(\"Submit\", variant=\"primary\")\n clear = gr.Button(\"Clear\")\n output = gr.Textbox(label=\"Output\")\n\n def repeat(name, count):\n time.sleep(3)\n return name * count\n\n button.click(repeat, [textbox, slider], output)\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n\n
\n\n
\n\nThe Base theme is very barebones, and uses `gr.themes.Blue` as it primary color - you'll note the primary button and the loading animation are both blue as a result. Let's change the defaults core arguments of our app. We'll overwrite the constructor and pass new defaults for the core constructor arguments.\n\nWe'll use `gr.themes.Emerald` as our primary color, and set secondary and neutral hues to `gr.themes.Blue`. We'll make our text larger using `text_lg`. We'll use `Quicksand` as our default font, loaded from Google Fonts.\n\n```py\nfrom __future__ import annotations\nfrom typing import Iterable\nimport gradio as gr\nfrom gradio.themes.base import Base\nfrom gradio.themes.utils import colors, fonts, sizes\nimport time\n\nclass Seafoam(Base):\n def __init__(\n self,\n *,\n primary_hue: colors.Color | str = colors.emerald,\n secondary_hue: colors.Color | str = colors.blue,\n neutral_hue: colors.Color | str = colors.gray,\n spacing_size: sizes.Size | str = sizes.spacing_md,\n radius_size: sizes.Size | str = sizes.radius_md,\n text_size: sizes.Size | str = sizes.text_lg,\n font: fonts.Font\n | str\n | Iterable[fonts.Font | str] = (\n fonts.GoogleFont(\"Quicksand\"),\n \"ui-sans-serif\",\n \"sans-serif\",\n ),\n font_mono: fonts.Font\n | str\n | Iterable[fonts.Font | str] = (\n fonts.GoogleFont(\"IBM Plex Mono\"),\n \"ui-monospace\",\n \"monospace\",\n ),\n ):\n super().__init__(\n primary_hue=primary_hue,\n secondary_hue=secondary_hue,\n neutral_hue=neutral_hue,\n spacing_size=spacing_size,\n radius_size=radius_size,\n text_size=text_size,\n font=font,\n font_mono=font_mono,\n )\n\nseafoam = Seafoam()\n\nwith gr.Blocks(theme=seafoam) as demo:\n textbox = gr.Textbox(label=\"Name\")\n slider = gr.Slider(label=\"Count\", minimum=0, maximum=100, step=1)\n with gr.Row():\n button = gr.Button(\"Submit\", variant=\"primary\")\n clear = gr.Button(\"Clear\")\n output = gr.Textbox(label=\"Output\")\n\n def repeat(name, count):\n time.sleep(3)\n return name * count\n\n 
button.click(repeat, [textbox, slider], output)\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n\n
\n\n
\n\nSee how the primary button and the loading animation are now green? These CSS variables are tied to the `primary_hue` variable.\n\nLet's modify the theme a bit more directly. We'll call the `set()` method to overwrite CSS variable values explicitly. We can use any CSS logic, and reference our core constructor arguments using the `*` prefix.\n\n```py\nfrom __future__ import annotations\nfrom typing import Iterable\nimport gradio as gr\nfrom gradio.themes.base import Base\nfrom gradio.themes.utils import colors, fonts, sizes\nimport time\n\nclass Seafoam(Base):\n def __init__(\n self,\n *,\n primary_hue: colors.Color | str = colors.emerald,\n secondary_hue: colors.Color | str = colors.blue,\n neutral_hue: colors.Color | str = colors.blue,\n spacing_size: sizes.Size | str = sizes.spacing_md,\n radius_size: sizes.Size | str = sizes.radius_md,\n text_size: sizes.Size | str = sizes.text_lg,\n font: fonts.Font\n | str\n | Iterable[fonts.Font | str] = (\n fonts.GoogleFont(\"Quicksand\"),\n \"ui-sans-serif\",\n \"sans-serif\",\n ),\n font_mono: fonts.Font\n | str\n | Iterable[fonts.Font | str] = (\n fonts.GoogleFont(\"IBM Plex Mono\"),\n \"ui-monospace\",\n \"monospace\",\n ),\n ):\n super().__init__(\n primary_hue=primary_hue,\n secondary_hue=secondary_hue,\n neutral_hue=neutral_hue,\n spacing_size=spacing_size,\n radius_size=radius_size,\n text_size=text_size,\n font=font,\n font_mono=font_mono,\n )\n super().set(\n body_background_fill=\"repeating-linear-gradient(45deg, *primary_200, *primary_200 10px, *primary_50 10px, *primary_50 20px)\",\n body_background_fill_dark=\"repeating-linear-gradient(45deg, *primary_800, *primary_800 10px, *primary_900 10px, *primary_900 20px)\",\n button_primary_background_fill=\"linear-gradient(90deg, *primary_300, *secondary_400)\",\n button_primary_background_fill_hover=\"linear-gradient(90deg, *primary_200, *secondary_300)\",\n button_primary_text_color=\"white\",\n button_primary_background_fill_dark=\"linear-gradient(90deg, 
*primary_600, *secondary_800)\",\n slider_color=\"*secondary_300\",\n slider_color_dark=\"*secondary_600\",\n block_title_text_weight=\"600\",\n block_border_width=\"3px\",\n block_shadow=\"*shadow_drop_lg\",\n button_primary_shadow=\"*shadow_drop_lg\",\n button_large_padding=\"32px\",\n )\n\nseafoam = Seafoam()\n\nwith gr.Blocks(theme=seafoam) as demo:\n textbox = gr.Textbox(label=\"Name\")\n slider = gr.Slider(label=\"Count\", minimum=0, maximum=100, step=1)\n with gr.Row():\n button = gr.Button(\"Submit\", variant=\"primary\")\n clear = gr.Button(\"Clear\")\n output = gr.Textbox(label=\"Output\")\n\n def repeat(name, count):\n time.sleep(3)\n return name * count\n\n button.click(repeat, [textbox, slider], output)\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```\n\n
\n\nLook how fun our theme looks now! With just a few variable changes, our theme looks completely different.\n\nYou may find it helpful to explore the [source code of the other prebuilt themes](https://github.com/gradio-app/gradio/blob/main/gradio/themes) to see how they modified the base theme. You can also find your browser's Inspector useful to select elements from the UI and see what CSS variables are being used in the styles panel."
},
{
"id": 194,
"parent": 181,
"path": "10_other-tutorials/theming-guide.md",
"level": 2,
"title": "Sharing Themes",
"content": "Once you have created a theme, you can upload it to the Hugging Face Hub to let others view it, use it, and build off of it!"
},
{
"id": 195,
"parent": 194,
"path": "10_other-tutorials/theming-guide.md",
"level": 3,
"title": "Uploading a Theme",
"content": "There are two ways to upload a theme: via the theme class instance or via the command line. We will cover both, using the previously created `seafoam` theme.\n\n- Via the class instance\n\nEach theme instance has a method called `push_to_hub` that we can use to upload a theme to the Hugging Face Hub.\n\n```python\nseafoam.push_to_hub(repo_name=\"seafoam\",\n                    version=\"0.0.1\",\n                    hf_token=\"\")\n```\n\n- Via the command line\n\nFirst, save the theme to disk:\n\n```python\nseafoam.dump(filename=\"seafoam.json\")\n```\n\nThen use the `upload_theme` command:\n\n```bash\nupload_theme \\\n\"seafoam.json\" \\\n\"seafoam\" \\\n--version \"0.0.1\" \\\n--hf_token \"\"\n```\n\nIn order to upload a theme, you must have a Hugging Face account and pass your [Access Token](https://huggingface.co/docs/huggingface_hub/quick-start#login)\nas the `hf_token` argument. However, if you log in via the [Hugging Face command line](https://huggingface.co/docs/huggingface_hub/quick-start#login) (which comes installed with `gradio`),\nyou can omit the `hf_token` argument.\n\nThe `version` argument lets you specify a valid [semantic version](https://www.geeksforgeeks.org/introduction-semantic-versioning/) string for your theme.\nThat way your users are able to specify which version of your theme they want to use in their apps. This also lets you publish updates to your theme without worrying\nabout changing how previously created apps look. The `version` argument is optional; if omitted, the next patch version is automatically applied."
},
{
"id": 196,
"parent": 194,
"path": "10_other-tutorials/theming-guide.md",
"level": 3,
"title": "Theme Previews",
"content": "By calling `push_to_hub` or `upload_theme`, the theme assets will be stored in a [Hugging Face Space](https://huggingface.co/docs/hub/spaces-overview).\n\nThe theme preview for our seafoam theme is here: [seafoam preview](https://huggingface.co/spaces/gradio/seafoam)."
},
{
"id": 197,
"parent": 194,
"path": "10_other-tutorials/theming-guide.md",
"level": 3,
"title": "Discovering Themes",
"content": "The [Theme Gallery](https://huggingface.co/spaces/gradio/theme-gallery) shows all public Gradio themes. After you publish your theme, it will automatically appear in the gallery within a couple of minutes.\n\nYou can sort themes by the number of likes on their Space or by how recently they were created, and you can toggle theme previews between light and dark mode."
},
{
"id": 198,
"parent": 194,
"path": "10_other-tutorials/theming-guide.md",
"level": 3,
"title": "Downloading",
"content": "To use a theme from the Hub, use the `from_hub` method on the `ThemeClass` and pass it to your app:\n\n```python\nmy_theme = gr.Theme.from_hub(\"gradio/seafoam\")\n\nwith gr.Blocks(theme=my_theme) as demo:\n    ....\n```\n\nYou can also pass the theme string directly to `Blocks` or `Interface` (`gr.Blocks(theme=\"gradio/seafoam\")`).\n\nYou can pin your app to an upstream theme version by using semantic versioning expressions.\n\nFor example, the following would ensure that the theme we load from the `seafoam` repo is between versions `0.0.1` and `0.1.0`:\n\n```python\nwith gr.Blocks(theme=\"gradio/seafoam@>=0.0.1,<0.1.0\") as demo:\n    ....\n```\n\nEnjoy creating your own themes! If you make one you're proud of, please share it with the world by uploading it to the Hub!\nIf you tag us on [Twitter](https://twitter.com/gradio) we can give your theme a shout out!\n\n"
},
{
"id": 199,
"parent": null,
"path": "10_other-tutorials/01_using-hugging-face-integrations.md",
"level": 1,
"title": "Using Hugging Face Integrations",
"content": "Related spaces: https://huggingface.co/spaces/gradio/en2es\nTags: HUB, SPACES, EMBED\n\nContributed by Omar Sanseviero 🦙"
},
{
"id": 200,
"parent": 199,
"path": "10_other-tutorials/01_using-hugging-face-integrations.md",
"level": 2,
"title": "Introduction",
"content": "The Hugging Face Hub is a central platform that has hundreds of thousands of [models](https://huggingface.co/models), [datasets](https://huggingface.co/datasets) and [demos](https://huggingface.co/spaces) (also known as Spaces). \n\nGradio has multiple features that make it extremely easy to leverage existing models and Spaces on the Hub. This guide walks through these features."
},
{
"id": 201,
"parent": 199,
"path": "10_other-tutorials/01_using-hugging-face-integrations.md",
"level": 2,
"title": "Demos with the Hugging Face Inference Endpoints",
"content": "Hugging Face has a service called [Serverless Inference Endpoints](https://huggingface.co/docs/api-inference/index), which allows you to send HTTP requests to models on the Hub. The API includes a generous free tier, and you can switch to [dedicated Inference Endpoints](https://huggingface.co/inference-endpoints/dedicated) when you want to use it in production. Gradio integrates directly with Serverless Inference Endpoints so that you can create a demo simply by specifying a model's name (e.g. `Helsinki-NLP/opus-mt-en-es`), like this:\n\n```python\nimport gradio as gr\n\ndemo = gr.load(\"Helsinki-NLP/opus-mt-en-es\", src=\"models\")\n\ndemo.launch()\n```\n\nFor any Hugging Face model supported by Inference Endpoints, Gradio automatically infers the expected input and output and makes the underlying server calls, so you don't have to worry about defining the prediction function.\n\nNotice that we just specify the model name and state that the `src` should be `models` (Hugging Face's Model Hub). There is no need to install any dependencies (except `gradio`), since you are not loading the model on your computer.\n\nYou might notice that the first inference takes a little longer. This happens because the Inference Endpoint is loading the model on the server. You get some benefits afterward:\n\n- The inference will be much faster.\n- The server caches your requests.\n- You get built-in automatic scaling."
},
{
"id": 202,
"parent": 199,
"path": "10_other-tutorials/01_using-hugging-face-integrations.md",
"level": 2,
"title": "Hosting your Gradio demos on Spaces",
"content": "[Hugging Face Spaces](https://hf.co/spaces) allows anyone to host their Gradio demos for free, and uploading your Gradio demo takes only a couple of minutes. You can head to [hf.co/new-space](https://huggingface.co/new-space), select the Gradio SDK, create an `app.py` file, and voila! You have a demo you can share with anyone else. To learn more, read [this guide on how to host on Hugging Face Spaces using the website](https://huggingface.co/blog/gradio-spaces).\n\nAlternatively, you can create a Space programmatically using the [huggingface_hub client library](https://huggingface.co/docs/huggingface_hub/index). Here's an example:\n\n```python\nfrom huggingface_hub import (\n    create_repo,\n    get_full_repo_name,\n    upload_file,\n)\ncreate_repo(name=target_space_name, token=hf_token, repo_type=\"space\", space_sdk=\"gradio\")\nrepo_name = get_full_repo_name(model_id=target_space_name, token=hf_token)\nfile_url = upload_file(\n    path_or_fileobj=\"file.txt\",\n    path_in_repo=\"app.py\",\n    repo_id=repo_name,\n    repo_type=\"space\",\n    token=hf_token,\n)\n```\n\nHere, `create_repo` creates a Gradio Space with the target name under a specific account, using that account's Write Token. `repo_name` gets the full repo name of the related repo. Finally, `upload_file` uploads a file to the repo under the name `app.py`."
},
{
"id": 203,
"parent": 199,
"path": "10_other-tutorials/01_using-hugging-face-integrations.md",
"level": 2,
"title": "Loading demos from Spaces",
"content": "You can also use and remix existing Gradio demos on Hugging Face Spaces. For example, you could take two existing Gradio demos on Spaces, place them in separate tabs, and create a new demo. You can run this new demo locally or upload it to Spaces, allowing endless possibilities to remix and create new demos!\n\nHere's an example that does exactly that:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    with gr.Tab(\"Translate to Spanish\"):\n        gr.load(\"gradio/en2es\", src=\"spaces\")\n    with gr.Tab(\"Translate to French\"):\n        gr.load(\"abidlabs/en2fr\", src=\"spaces\")\n\ndemo.launch()\n```\n\nNotice that we use `gr.load()`, the same method we used to load models using Inference Endpoints. However, here we specify that the `src` is `spaces` (Hugging Face Spaces).\n\nNote: loading a Space in this way may result in slight differences from the original Space. In particular, any attributes that apply to the entire Blocks, such as the theme or custom CSS/JS, will not be loaded. You can copy these properties from the Space you are loading into your own `Blocks` object."
},
{
"id": 204,
"parent": 199,
"path": "10_other-tutorials/01_using-hugging-face-integrations.md",
"level": 2,
"title": "Demos with the `Pipeline` in `transformers`",
"content": "Hugging Face's popular `transformers` library has a very easy-to-use abstraction, [`pipeline()`](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/pipelines#transformers.pipeline), that handles most of the complex code to offer a simple API for common tasks. By specifying the task and an (optional) model, you can build a demo around an existing model with a few lines of Python:\n\n```python\nimport gradio as gr\n\nfrom transformers import pipeline\n\npipe = pipeline(\"translation\", model=\"Helsinki-NLP/opus-mt-en-es\")\n\ndef predict(text):\n    return pipe(text)[0][\"translation_text\"]\n\ndemo = gr.Interface(\n    fn=predict,\n    inputs='text',\n    outputs='text',\n)\n\ndemo.launch()\n```\n\nBut `gradio` actually makes it even easier to convert a `pipeline` to a demo, simply by using the `gradio.Interface.from_pipeline` method, which skips the need to specify the input and output components:\n\n```python\nfrom transformers import pipeline\nimport gradio as gr\n\npipe = pipeline(\"translation\", model=\"Helsinki-NLP/opus-mt-en-es\")\n\ndemo = gr.Interface.from_pipeline(pipe)\ndemo.launch()\n```\n\nThe previous code produces the following interface, which you can try right here in your browser:\n\n"
},
{
"id": 205,
"parent": 199,
"path": "10_other-tutorials/01_using-hugging-face-integrations.md",
"level": 2,
"title": "Recap",
"content": "That's it! Let's recap the various ways Gradio and Hugging Face work together:\n\n1. You can build a demo around Inference Endpoints without having to load the model, by using `gr.load()`.\n2. You can host your Gradio demo on Hugging Face Spaces, either using the GUI or entirely in Python.\n3. You can load demos from Hugging Face Spaces to remix and create new Gradio demos using `gr.load()`.\n4. You can convert a `transformers` pipeline into a Gradio demo using `from_pipeline()`.\n\n🤗"
},
{
"id": 206,
"parent": null,
"path": "10_other-tutorials/named-entity-recognition.md",
"level": 1,
"title": "Named-Entity Recognition",
"content": "Related spaces: https://huggingface.co/spaces/rajistics/biobert_ner_demo, https://huggingface.co/spaces/abidlabs/ner, https://huggingface.co/spaces/rajistics/Financial_Analyst_AI\nTags: NER, TEXT, HIGHLIGHT"
},
{
"id": 207,
"parent": 206,
"path": "10_other-tutorials/named-entity-recognition.md",
"level": 2,
"title": "Introduction",
"content": "Named-entity recognition (NER), also known as token classification or text tagging, is the task of taking a sentence and classifying every word (or \"token\") into different categories, such as names of people or names of locations, or different parts of speech.\n\nFor example, given the sentence:\n\n> Does Chicago have any Pakistani restaurants?\n\nA named-entity recognition algorithm may identify:\n\n- \"Chicago\" as a **location**\n- \"Pakistani\" as an **ethnicity**\n\nand so on.\n\nUsing `gradio` (specifically the `HighlightedText` component), you can easily build a web demo of your NER model and share that with the rest of your team.\n\nHere is an example of a demo that you'll be able to build:\n\n$demo_ner_pipeline\n\nThis tutorial will show how to take a pretrained NER model and deploy it with a Gradio interface. We will show two different ways to use the `HighlightedText` component -- depending on your NER model, either of these two ways may be easier to learn!"
},
{
"id": 208,
"parent": 207,
"path": "10_other-tutorials/named-entity-recognition.md",
"level": 3,
"title": "Prerequisites",
"content": "Make sure you have the `gradio` Python package already [installed](/getting_started). You will also need a pretrained named-entity recognition model. You can use your own model; in this tutorial, we will use one from the `transformers` library."
},
{
"id": 209,
"parent": 207,
"path": "10_other-tutorials/named-entity-recognition.md",
"level": 3,
"title": "Approach 1: List of Entity Dictionaries",
"content": "Many named-entity recognition models output a list of dictionaries. Each dictionary consists of an _entity_, a \"start\" index, and an \"end\" index. This is, for example, how NER models in the `transformers` library operate:\n\n```py\nfrom transformers import pipeline\nner_pipeline = pipeline(\"ner\")\nner_pipeline(\"Does Chicago have any Pakistani restaurants\")\n```\n\nOutput:\n\n```bash\n[{'entity': 'I-LOC',\n  'score': 0.9988978,\n  'index': 2,\n  'word': 'Chicago',\n  'start': 5,\n  'end': 12},\n {'entity': 'I-MISC',\n  'score': 0.9958592,\n  'index': 5,\n  'word': 'Pakistani',\n  'start': 22,\n  'end': 31}]\n```\n\nIf you have such a model, it is very easy to hook it up to Gradio's `HighlightedText` component. All you need to do is pass this **list of entities**, together with the **original text**, as a dictionary with the keys `\"entities\"` and `\"text\"` respectively.\n\nHere is a complete example:\n\n```py\nfrom transformers import pipeline\n\nimport gradio as gr\n\nner_pipeline = pipeline(\"ner\")\n\nexamples = [\n    \"Does Chicago have any stores and does Joe live here?\",\n]\n\ndef ner(text):\n    output = ner_pipeline(text)\n    return {\"text\": text, \"entities\": output}\n\ndemo = gr.Interface(ner,\n                    gr.Textbox(placeholder=\"Enter sentence here...\"),\n                    gr.HighlightedText(),\n                    examples=examples)\n\nif __name__ == \"__main__\":\n    demo.launch()\n\n```\n$demo_ner_pipeline"
},
{
"id": 210,
"parent": 207,
"path": "10_other-tutorials/named-entity-recognition.md",
"level": 3,
"title": "Approach 2: List of Tuples",
"content": "An alternative way to pass data into the `HighlightedText` component is a list of tuples. The first element of each tuple should be the word or words that are being classified into a particular entity. The second element should be the entity label (or `None` if they should be unlabeled). The `HighlightedText` component automatically strings together the words and labels to display the entities.\n\nIn some cases, this can be easier than the first approach. Here is a demo showing this approach using spaCy's part-of-speech tagger:\n\n```py\nimport gradio as gr\nimport os\nos.system('python -m spacy download en_core_web_sm')\nimport spacy\nfrom spacy import displacy\n\nnlp = spacy.load(\"en_core_web_sm\")\n\ndef text_analysis(text):\n    doc = nlp(text)\n    html = displacy.render(doc, style=\"dep\", page=True)\n    html = (\n        \"<div style='max-width:100%; max-height:360px; overflow:auto'>\"\n        + html\n        + \"</div>\"\n    )\n    pos_count = {\n        \"char_count\": len(text),\n        \"token_count\": 0,\n    }\n    pos_tokens = []\n\n    for token in doc:\n        pos_tokens.extend([(token.text, token.pos_), (\" \", None)])\n\n    return pos_tokens, pos_count, html\n\ndemo = gr.Interface(\n    text_analysis,\n    gr.Textbox(placeholder=\"Enter sentence here...\"),\n    [\"highlight\", \"json\", \"html\"],\n    examples=[\n        [\"What a beautiful morning for a walk!\"],\n        [\"It was the best of times, it was the worst of times.\"],\n    ],\n)\n\ndemo.launch()\n\n```\n$demo_text_analysis\n\n---\n\nAnd you're done! That's all you need to know to build a web-based GUI for your NER model.\n\nFun tip: you can share your NER demo instantly with others simply by setting `share=True` in `launch()`."
},
{
"id": 211,
"parent": null,
"path": "10_other-tutorials/wrapping-layouts.md",
"level": 1,
"title": "Wrapping Layouts",
"content": "Tags: LAYOUTS"
},
{
"id": 212,
"parent": 211,
"path": "10_other-tutorials/wrapping-layouts.md",
"level": 2,
"title": "Introduction",
"content": "Gradio features [blocks](https://www.gradio.app/docs/blocks) to easily lay out applications. To use this feature, you need to stack or nest layout components and create a hierarchy with them. This isn't difficult to implement and maintain for small projects, but as a project grows more complex, this component hierarchy becomes difficult to maintain and reuse.\n\nIn this guide, we are going to explore how we can wrap the layout classes to create more maintainable and easy-to-read applications without sacrificing flexibility."
},
{
"id": 213,
"parent": 211,
"path": "10_other-tutorials/wrapping-layouts.md",
"level": 2,
"title": "Example",
"content": "We are going to follow the implementation from [this Hugging Face Space example](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts)."
},
{
"id": 214,
"parent": 211,
"path": "10_other-tutorials/wrapping-layouts.md",
"level": 2,
"title": "Implementation",
"content": "The wrapping utility has two important classes: the ```LayoutBase``` class and the ```Application``` class.\n\nFor brevity, we are going to look only at their ```render``` and ```attach_event``` functions. You can see the full implementation in [the example code](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts/blob/main/app.py).\n\nSo let's start with the ```LayoutBase``` class."
},
{
"id": 215,
"parent": 214,
"path": "10_other-tutorials/wrapping-layouts.md",
"level": 3,
"title": "LayoutBase Class",
"content": "1. Render Function\n\n Let's look at the ```render``` function in the ```LayoutBase``` class:\n\n```python"
},
{
"id": 216,
"parent": null,
"path": "10_other-tutorials/wrapping-layouts.md",
"level": 1,
"title": "other LayoutBase implementations",
"content": "def render(self) -> None:\n    with self.main_layout:\n        for renderable in self.renderables:\n            renderable.render()\n\n    self.main_layout.render()\n```\nThis is a little confusing at first, but if you consider the default implementation, it becomes easy to understand.\nLet's look at an example:\n\nIn the default implementation, this is what we're doing:\n\n```python\nwith Row():\n    left_textbox = Textbox(value=\"left_textbox\")\n    right_textbox = Textbox(value=\"right_textbox\")\n```\n\nNow, pay attention to the Textbox variables. These variables' ```render``` parameter is true by default. So as we use the ```with``` syntax and create these variables, their ```render``` functions are called under the ```with``` syntax.\n\nWe know the render function is called in the constructor, from the implementation in the ```gradio.blocks.Block``` class:\n\n```python\nclass Block:\n    # constructor parameters are omitted for brevity\n    def __init__(self, ...):\n        # other assignments\n\n        if render:\n            self.render()\n```\n\nSo our implementation looks like this:\n\n```python"
},
{
"id": 217,
"parent": null,
"path": "10_other-tutorials/wrapping-layouts.md",
"level": 1,
"title": "self.main_layout -> Row()",
"content": "with self.main_layout:\n    left_textbox.render()\n    right_textbox.render()\n```\n\nWhat this means is that by calling the components' render functions under the ```with``` syntax, we are actually simulating the default implementation.\n\nSo now let's consider two nested ```with```s to see how the outer one works. For this, let's expand our example with the ```Tab``` component:\n\n```python\nwith Tab():\n    with Row():\n        first_textbox = Textbox(value=\"first_textbox\")\n        second_textbox = Textbox(value=\"second_textbox\")\n```\n\nPay attention to the Row and Tab components this time. We have created the Textbox variables above and added them to Row with the ```with``` syntax. Now we need to add the Row component to the Tab component. You can see that the Row component is created with default parameters, so its render parameter is true; that's why its render function is going to be executed under the Tab component's ```with``` syntax.\n\nTo mimic this implementation, we need to call the ```render``` function of the ```main_layout``` variable after the ```with``` syntax of the ```main_layout``` variable.\n\nSo the implementation looks like this:\n\n```python\nwith tab_main_layout:\n    with row_main_layout:\n        first_textbox.render()\n        second_textbox.render()\n\n    row_main_layout.render()\n\ntab_main_layout.render()\n```\n\nThe default implementation and our implementation are the same, but we are using the render function ourselves. So it requires a little work.\n\nNow, let's take a look at the ```attach_event``` function.\n\n2. Attach Event Function\n\n    The function is left unimplemented because it is specific to each class, so each subclass has to implement its own `attach_event` function:\n\n```python\n    # other LayoutBase implementations\n\n    def attach_event(self, block_dict: Dict[str, Block]) -> None:\n        raise NotImplementedError\n```\n\nCheck out the ```block_dict``` variable in the ```Application``` class's ```attach_event``` function."
},
{
"id": 218,
"parent": 217,
"path": "10_other-tutorials/wrapping-layouts.md",
"level": 3,
"title": "Application Class",
"content": "1. Render Function\n\n```python\n    # other Application implementations\n\n    def _render(self):\n        with self.app:\n            for child in self.children:\n                child.render()\n\n        self.app.render()\n```\n\nFrom the explanation of the ```LayoutBase``` class's ```render``` function, we can understand the ```child.render``` part.\n\nSo let's look at the bottom part: why are we calling the ```app``` variable's ```render``` function? It's important to call this function because, if we look at the implementation in the ```gradio.blocks.Blocks``` class, we can see that it is adding the components and event functions into the root component. To put it another way, it is creating and structuring the gradio application.\n\n2. Attach Event Function\n\n    Let's see how we can attach events to components:\n\n```python\n    # other Application implementations\n\n    def _attach_event(self):\n        block_dict: Dict[str, Block] = {}\n\n        for child in self.children:\n            block_dict.update(child.global_children_dict)\n\n        with self.app:\n            for child in self.children:\n                try:\n                    child.attach_event(block_dict=block_dict)\n                except NotImplementedError:\n                    print(f\"{child.name}'s attach_event is not implemented\")\n```\n\nYou can see why the ```global_children_dict``` is used in the ```LayoutBase``` class from the example code. With this, all the components in the application are gathered into one dictionary, so any component can access all the other components by name.\n\nThe ```with``` syntax is used here again to attach events to components. If we look at the ```__exit__``` function in the ```gradio.blocks.Blocks``` class, we can see that it is calling the ```attach_load_events``` function, which is used for setting event triggers on components. So we have to use the ```with``` syntax to trigger the ```__exit__``` function.\n\nOf course, we can call ```attach_load_events``` without using the ```with``` syntax, but the function needs a ```Context.root_block```, and it is set in the ```__enter__``` function.
So we used the ```with``` syntax here rather than calling the function ourselves."
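The gather-then-dispatch pattern inside ```_attach_event``` can be exercised without gradio at all. Below is a dependency-free sketch under stated assumptions: ```StubChild``` and ```attach_all``` are invented stand-ins, not gradio or guide classes, and they mimic only the parts of the pattern discussed above:

```python
# "StubChild" mimics a wrapped layout that exposes global_children_dict
# and attach_event; it is NOT a gradio class.
class StubChild:
    def __init__(self, name, implemented):
        self.name = name
        self.global_children_dict = {name: f"<block:{name}>"}
        self._implemented = implemented
        self.seen = None

    def attach_event(self, block_dict):
        if not self._implemented:
            raise NotImplementedError
        # a real layout would wire gradio events here; we just record
        # that this child could see every component in the application
        self.seen = sorted(block_dict)

def attach_all(children):
    # 1) gather every component into one dict, keyed by name,
    #    so any child can reference any other child's blocks
    block_dict = {}
    for child in children:
        block_dict.update(child.global_children_dict)
    # 2) dispatch, tolerating children that don't implement attach_event
    skipped = []
    for child in children:
        try:
            child.attach_event(block_dict=block_dict)
        except NotImplementedError:
            skipped.append(child.name)
    return skipped

children = [StubChild("row", True), StubChild("tab", False)]
print(attach_all(children))   # → ['tab']
print(children[0].seen)       # → ['row', 'tab']
```

The try/except around each child is the same design choice as in the guide: a layout that defines no events simply gets skipped instead of crashing the whole application.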
},
{
"id": 219,
"parent": 217,
"path": "10_other-tutorials/wrapping-layouts.md",
"level": 2,
"title": "Conclusion",
"content": "In this guide, we saw\n\n- How we can wrap the layouts\n- How components are rendered\n- How we can structure our application with wrapped layout classes\n\nBecause the classes used in this guide are used for demonstration purposes, they may still not be totally optimized or modular. But that would make the guide much longer!\n\nI hope this guide helps you gain another view of the layout classes and gives you an idea about how you can use them for your needs. See the full implementation of our example [here](https://huggingface.co/spaces/WoWoWoWololo/wrapping-layouts/blob/main/app.py)."
},
{
"id": 220,
"parent": null,
"path": "10_other-tutorials/developing-faster-with-reload-mode.md",
"level": 1,
"title": "Developing Faster with Auto-Reloading",
"content": "**Prerequisite**: This Guide requires you to know about Blocks. Make sure to [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners).\n\nThis guide covers auto reloading, reloading in a Python IDE, and using gradio with Jupyter Notebooks."
},
{
"id": 221,
"parent": 220,
"path": "10_other-tutorials/developing-faster-with-reload-mode.md",
"level": 2,
"title": "Why Auto-Reloading?",
"content": "When you are building a Gradio demo, particularly out of Blocks, you may find it cumbersome to keep re-running your code to test your changes.\n\nTo make it faster and more convenient to write your code, we've made it easier to \"reload\" your Gradio apps instantly when you are developing in a **Python IDE** (like VS Code, Sublime Text, PyCharm, and so on) or generally running your Python code from the terminal. We've also developed an analogous \"magic command\" that allows you to re-run cells faster if you use **Jupyter Notebooks** (or any similar environment like Colab).\n\nThis short Guide will cover both of these methods, so no matter how you write Python, you'll leave knowing how to build Gradio apps faster."
},
{
"id": 222,
"parent": 220,
"path": "10_other-tutorials/developing-faster-with-reload-mode.md",
"level": 2,
"title": "Python IDE Reload 🔥",
"content": "If you are building Gradio Blocks using a Python IDE, your file of code (let's name it `run.py`) might look something like this:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n    gr.Markdown(\"# Greetings from Gradio!\")\n    inp = gr.Textbox(placeholder=\"What is your name?\")\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: f\"Welcome, {x}!\",\n               inputs=inp,\n               outputs=out)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nThe problem is that anytime you want to make a change to your layout, events, or components, you have to close and rerun your app by writing `python run.py`.\n\nInstead of doing this, you can run your code in **reload mode** by changing 1 word: `python` to `gradio`:\n\nIn the terminal, run `gradio run.py`. That's it!\n\nNow, you'll see something like this:\n\n```bash\nWatching: '/Users/freddy/sources/gradio/gradio', '/Users/freddy/sources/gradio/demo/'\n\nRunning on local URL: http://127.0.0.1:7860\n```\n\nThe important part here is the line that says `Watching...`. What's happening here is that Gradio will be observing the directory where `run.py` lives, and if the file changes, it will automatically rerun the file for you. So you can focus on writing your code, and your Gradio demo will refresh automatically 🥳\n\nTip: the `gradio` command does not detect the parameters passed to the `launch()` method because the `launch()` method is never called in reload mode. For example, setting `auth` or `show_error` in `launch()` will not be reflected in the app.\n\nThere is one important thing to keep in mind when using reload mode: Gradio specifically looks for a Gradio Blocks/Interface demo called `demo` in your code. If you have named your demo something else, you will need to pass the name of your demo to the `gradio` command. 
So if your `run.py` file looked like this:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as my_demo:\n    gr.Markdown(\"# Greetings from Gradio!\")\n    inp = gr.Textbox(placeholder=\"What is your name?\")\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: f\"Welcome, {x}!\",\n               inputs=inp,\n               outputs=out)\n\nif __name__ == \"__main__\":\n    my_demo.launch()\n```\n\nThen you would launch it in reload mode like this: `gradio run.py --demo-name=my_demo`.\n\nBy default, Gradio assumes that scripts use UTF-8 encoding. **For reload mode**, if your script uses an encoding other than UTF-8 (such as cp1252), make sure you do the following:\n\n1. Add an encoding declaration to your Python script, for example: `# -*- coding: cp1252 -*-`\n2. Confirm that your code editor saves the file in that encoding.\n3. Run reload mode like this: `gradio run.py --encoding cp1252`\n\n🔥 If your application accepts command line arguments, you can pass them in as well. Here's an example:\n\n```python\nimport gradio as gr\nimport argparse\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--name\", type=str, default=\"User\")\nargs, unknown = parser.parse_known_args()\n\nwith gr.Blocks() as demo:\n    gr.Markdown(f\"# Greetings {args.name}!\")\n    inp = gr.Textbox()\n    out = gr.Textbox()\n\n    inp.change(fn=lambda x: x, inputs=inp, outputs=out)\n\nif __name__ == \"__main__\":\n    demo.launch()\n```\n\nWhich you could run like this: `gradio run.py --name Gretel`\n\nAs a small aside, this auto-reloading happens if you change your `run.py` source code or the Gradio source code, meaning this can be useful if you decide to [contribute to Gradio itself](https://github.com/gradio-app/gradio/blob/main/CONTRIBUTING.md) ✅"
},
{
"id": 223,
"parent": 220,
"path": "10_other-tutorials/developing-faster-with-reload-mode.md",
"level": 2,
"title": "Controlling the Reload 🎛️",
"content": "By default, reload mode will re-run your entire script for every change you make.\nBut there are some cases where this is not desirable.\nFor example, loading a machine learning model should probably only happen once, to save time. There are also some Python libraries that use C or Rust extensions and throw errors when they are reloaded, like `numpy` and `tiktoken`.\n\nIn these situations, you can place code that you do not want to be re-run inside an `if gr.NO_RELOAD:` code block. Here's an example of how you can use it to load a transformers model only once during the development process.\n\nTip: The value of `gr.NO_RELOAD` is `True`. So you don't have to change your script when you are done developing and want to run it in production. Simply run the file with `python` instead of `gradio`.\n\n```python\nimport gradio as gr\n\nif gr.NO_RELOAD:\n    from transformers import pipeline\n    pipe = pipeline(\"text-classification\", model=\"cardiffnlp/twitter-roberta-base-sentiment-latest\")\n\ndemo = gr.Interface(lambda s: pipe(s), gr.Textbox(), gr.Label())\n\nif __name__ == \"__main__\":\n    demo.launch()\n```"
},
{
"id": 224,
"parent": 220,
"path": "10_other-tutorials/developing-faster-with-reload-mode.md",
"level": 2,
"title": "Jupyter Notebook Magic 🔮",
"content": "What if you use Jupyter Notebooks (or Colab Notebooks, etc.) to develop code? We've got something for you too!\n\nWe've developed a **magic command** that will create and run a Blocks demo for you. To use this, load the gradio extension at the top of your notebook:\n\n`%load_ext gradio`\n\nThen, in the cell where you are developing your Gradio demo, simply write the magic command **`%%blocks`** at the top, and then write the layout and components like you would normally:\n\n```py\n%%blocks\n\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"# Greetings from Gradio!\")\n inp = gr.Textbox()\n out = gr.Textbox()\n\n inp.change(fn=lambda x: x, inputs=inp, outputs=out)\n```\n\nNotice that:\n\n- You do not need to launch your demo — Gradio does that for you automatically!\n\n- Every time you rerun the cell, Gradio will re-render your app on the same port and using the same underlying web server. This means you'll see your changes _much, much faster_ than if you were rerunning the cell normally.\n\nHere's what it looks like in a Jupyter notebook:\n\n![](https://gradio-builds.s3.amazonaws.com/demo-files/jupyter_reload.gif)\n\n🪄 This works in colab notebooks too! [Here's a colab notebook](https://colab.research.google.com/drive/1zAuWoiTIb3O2oitbtVb2_ekv1K6ggtC1?usp=sharing) where you can see the Blocks magic in action. Try making some changes and re-running the cell with the Gradio code!\n\nThe Notebook Magic is now the author's preferred way of building Gradio demos. Regardless of how you write Python code, we hope either of these methods will give you a much better development experience using Gradio.\n\n---"
},
{
"id": 225,
"parent": 220,
"path": "10_other-tutorials/developing-faster-with-reload-mode.md",
"level": 2,
"title": "Next Steps",
"content": "Now that you know how to develop quickly using Gradio, start building your own!\n\nIf you are looking for inspiration, try exploring demos other people have built with Gradio, [browse public Hugging Face Spaces](http://hf.space/) 🤗"
},
{
"id": 226,
"parent": null,
"path": "10_other-tutorials/creating-a-realtime-dashboard-from-google-sheets.md",
"level": 1,
"title": "Creating a Real-Time Dashboard from Google Sheets",
"content": "Tags: TABULAR, DASHBOARD, PLOTS\n\n[Google Sheets](https://www.google.com/sheets/about/) are an easy way to store tabular data in the form of spreadsheets. With Gradio and pandas, it's easy to read data from public or private Google Sheets and then display the data or plot it. In this blog post, we'll build a small _real-time_ dashboard, one that updates when the data in the Google Sheets updates.\n\nBuilding the dashboard itself will just be 9 lines of Python code using Gradio, and our final dashboard will look like this:\n\n\n\n**Prerequisites**: This Guide uses [Gradio Blocks](/guides/quickstart/#blocks-more-flexibility-and-control), so make sure you are familiar with the Blocks class.\n\nThe process is a little different depending on whether you are working with a publicly accessible or a private Google Sheet. We'll cover both, so let's get started!"
},
{
"id": 227,
"parent": 226,
"path": "10_other-tutorials/creating-a-realtime-dashboard-from-google-sheets.md",
"level": 2,
"title": "Public Google Sheets",
"content": "Building a dashboard from a public Google Sheet is very easy, thanks to the [`pandas` library](https://pandas.pydata.org/):\n\n1\\. Get the URL of the Google Sheet that you want to use. To do this, simply go to the Google Sheet, click on the \"Share\" button in the top-right corner, and then click on the \"Get shareable link\" button. This will give you a URL that looks something like this:\n\n```html\nhttps://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0\n```\n\n2\\. Now, let's modify this URL and then use it to read the data from the Google Sheet into a pandas DataFrame. (In the code below, replace the `URL` variable with the URL of your public Google Sheet):\n\n```python\nimport pandas as pd\n\nURL = \"https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0\"\ncsv_url = URL.replace('/edit#gid=', '/export?format=csv&gid=')\n\ndef get_data():\n return pd.read_csv(csv_url)\n```\n\n3\\. The data query is a function, which means that it's easy to display it in real time using the `gr.DataFrame` component, or plot it in real time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) you would like the component to refresh. Here's the Gradio code:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"# 📈 Real-Time Line Plot\")\n with gr.Row():\n with gr.Column():\n gr.DataFrame(get_data, every=gr.Timer(5))\n with gr.Column():\n gr.LinePlot(get_data, every=gr.Timer(5), x=\"Date\", y=\"Sales\", y_title=\"Sales ($ millions)\", overlay_point=True, width=500, height=500)\n\ndemo.queue().launch() # Run the demo with queuing enabled\n```\n\nAnd that's it! You have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet."
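The URL rewrite in step 2 is just a string substitution that swaps the interactive `/edit#gid=` suffix for Google's CSV export endpoint, keeping the `gid` so the right worksheet tab is exported. A small sketch of the same transformation, wrapped in a helper for clarity (the sheet ID is the example one from above):

```python
def to_csv_url(edit_url: str) -> str:
    # Replace the interactive editor suffix with the CSV export endpoint,
    # preserving the gid query parameter that identifies the worksheet tab.
    return edit_url.replace('/edit#gid=', '/export?format=csv&gid=')

URL = "https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0"
csv_url = to_csv_url(URL)
print(csv_url)
```

Passing `csv_url` to `pd.read_csv` then downloads the sheet contents on every call, which is what makes the `get_data` function above re-fetch live data.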
},
{
"id": 228,
"parent": 226,
"path": "10_other-tutorials/creating-a-realtime-dashboard-from-google-sheets.md",
"level": 2,
"title": "Private Google Sheets",
"content": "For private Google Sheets, the process requires a little more work, but not that much! The key difference is that now, you must authenticate yourself to authorize access to the private Google Sheets."
},
{
"id": 229,
"parent": 228,
"path": "10_other-tutorials/creating-a-realtime-dashboard-from-google-sheets.md",
"level": 3,
"title": "Authentication",
"content": "To authenticate yourself, obtain credentials from Google Cloud. Here's [how to set up Google Cloud credentials](https://developers.google.com/workspace/guides/create-credentials):\n\n1\\. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)\n\n2\\. In the Cloud Console, click on the hamburger menu in the top-left corner and select \"APIs & Services\" from the menu. If you do not have an existing project, you will need to create one.\n\n3\\. Then, click the \"+ Enabled APIs & services\" button, which allows you to enable specific services for your project. Search for \"Google Sheets API\", click on it, and click the \"Enable\" button. If you see the \"Manage\" button, then Google Sheets is already enabled, and you're all set.\n\n4\\. In the APIs & Services menu, click on the \"Credentials\" tab and then click on the \"Create credentials\" button.\n\n5\\. In the \"Create credentials\" dialog, select \"Service account key\" as the type of credentials to create, and give it a name. **Note down the email address of the service account.**\n\n6\\. After selecting the service account, select the \"JSON\" key type and then click on the \"Create\" button. This will download the JSON key file containing your credentials to your computer. It will look something like this:\n\n```json\n{\n\t\"type\": \"service_account\",\n\t\"project_id\": \"your project\",\n\t\"private_key_id\": \"your private key id\",\n\t\"private_key\": \"private key\",\n\t\"client_email\": \"email\",\n\t\"client_id\": \"client id\",\n\t\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n\t\"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n\t\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n\t\"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/email_id\"\n}\n```"
},
{
"id": 230,
"parent": 228,
"path": "10_other-tutorials/creating-a-realtime-dashboard-from-google-sheets.md",
"level": 3,
"title": "Querying",
"content": "Once you have the credentials `.json` file, you can use the following steps to query your Google Sheet:\n\n1\\. Click on the \"Share\" button in the top-right corner of the Google Sheet. Share the Google Sheet with the email address of the service account from Step 5 of the Authentication subsection (this step is important!). Then click on the \"Get shareable link\" button. This will give you a URL that looks something like this:\n\n```html\nhttps://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0\n```\n\n2\\. Install the [`gspread` library](https://docs.gspread.org/en/v5.7.0/), which makes it easy to work with the [Google Sheets API](https://developers.google.com/sheets/api/guides/concepts) in Python, by running this in the terminal: `pip install gspread`\n\n3\\. Write a function to load the data from the Google Sheet, like this (replace the `URL` variable with the URL of your private Google Sheet):\n\n```python\nimport gspread\nimport pandas as pd"
},
{
"id": 231,
"parent": null,
"path": "10_other-tutorials/creating-a-realtime-dashboard-from-google-sheets.md",
"level": 1,
"title": "Authenticate with Google and get the sheet",
"content": "URL = 'https://docs.google.com/spreadsheets/d/1_91Vps76SKOdDQ8cFxZQdgjTJiz23375sAT7vPvaj4k/edit#gid=0'\n\ngc = gspread.service_account(\"path/to/key.json\")\nsh = gc.open_by_url(URL)\nworksheet = sh.sheet1\n\ndef get_data():\n values = worksheet.get_all_values()\n df = pd.DataFrame(values[1:], columns=values[0])\n return df\n\n```\n\n4\\. The data query is a function, which means that it's easy to display it in real time using the `gr.DataFrame` component, or plot it in real time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. Here's the Gradio code:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as demo:\n gr.Markdown(\"# 📈 Real-Time Line Plot\")\n with gr.Row():\n with gr.Column():\n gr.DataFrame(get_data, every=gr.Timer(5))\n with gr.Column():\n gr.LinePlot(get_data, every=gr.Timer(5), x=\"Date\", y=\"Sales\", y_title=\"Sales ($ millions)\", overlay_point=True, width=500, height=500)\n\ndemo.queue().launch() # Run the demo with queuing enabled\n```\n\nYou now have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet."
},
{
"id": 232,
"parent": 231,
"path": "10_other-tutorials/creating-a-realtime-dashboard-from-google-sheets.md",
"level": 2,
"title": "Conclusion",
"content": "And that's all there is to it! With just a few lines of code, you can use `gradio` and other libraries to read data from a public or private Google Sheet and then display and plot the data in a real-time dashboard."
},
{
"id": 233,
"parent": null,
"path": "10_other-tutorials/create-your-own-friends-with-a-gan.md",
"level": 1,
"title": "Create Your Own Friends with a GAN",
"content": "Related spaces: https://huggingface.co/spaces/NimaBoscarino/cryptopunks, https://huggingface.co/spaces/nateraw/cryptopunks-generator\nTags: GAN, IMAGE, HUB\n\nContributed by Nima Boscarino and Nate Raw"
},
{
"id": 234,
"parent": 233,
"path": "10_other-tutorials/create-your-own-friends-with-a-gan.md",
"level": 2,
"title": "Introduction",
"content": "It seems that cryptocurrencies, [NFTs](https://www.nytimes.com/interactive/2022/03/18/technology/nft-guide.html), and the web3 movement are all the rage these days! Digital assets are being listed on marketplaces for astounding amounts of money, and just about every celebrity is debuting their own NFT collection. While your crypto assets [may be taxable, such as in Canada](https://www.canada.ca/en/revenue-agency/programs/about-canada-revenue-agency-cra/compliance/digital-currency/cryptocurrency-guide.html), today we'll explore some fun and tax-free ways to generate your own assortment of procedurally generated [CryptoPunks](https://www.larvalabs.com/cryptopunks).\n\nGenerative Adversarial Networks, often known just as _GANs_, are a specific class of deep-learning models that are designed to learn from an input dataset to create (_generate!_) new material that is convincingly similar to elements of the original training set. Famously, the website [thispersondoesnotexist.com](https://thispersondoesnotexist.com/) went viral with lifelike, yet synthetic, images of people generated with a model called StyleGAN2. GANs have gained traction in the machine learning world, and are now being used to generate all sorts of images, text, and even [music](https://salu133445.github.io/musegan/)!\n\nToday we'll briefly look at the high-level intuition behind GANs, and then we'll build a small demo around a pre-trained GAN to see what all the fuss is about. Here's a [peek](https://nimaboscarino-cryptopunks.hf.space) at what we're going to be putting together."
},
{
"id": 235,
"parent": 234,
"path": "10_other-tutorials/create-your-own-friends-with-a-gan.md",
"level": 3,
"title": "Prerequisites",
"content": "Make sure you have the `gradio` Python package already [installed](/getting_started). To use the pretrained model, also install `torch` and `torchvision`."
},
{
"id": 236,
"parent": 233,
"path": "10_other-tutorials/create-your-own-friends-with-a-gan.md",
"level": 2,
"title": "GANs: a very brief introduction",
"content": "Originally proposed in [Goodfellow et al. 2014](https://arxiv.org/abs/1406.2661), GANs are made up of neural networks which compete with the intention of outsmarting each other. One network, known as the _generator_, is responsible for generating images. The other network, the _discriminator_, receives an image at a time from the generator along with a **real** image from the training data set. The discriminator then has to guess: which image is the fake?\n\nThe generator is constantly training to create images which are trickier for the discriminator to identify, while the discriminator raises the bar for the generator every time it correctly detects a fake. As the networks engage in this competitive (_adversarial!_) relationship, the images that get generated improve to the point where they become indistinguishable to human eyes!\n\nFor a more in-depth look at GANs, you can take a look at [this excellent post on Analytics Vidhya](https://www.analyticsvidhya.com/blog/2021/06/a-detailed-explanation-of-gan-with-implementation-using-tensorflow-and-keras/) or this [PyTorch tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html). For now, though, we'll dive into a demo!"
},
{
"id": 237,
"parent": 233,
"path": "10_other-tutorials/create-your-own-friends-with-a-gan.md",
"level": 2,
"title": "Step 1 — Create the Generator model",
"content": "To generate new images with a GAN, you only need the generator model. There are many different architectures that the generator could use, but for this demo we'll use a pretrained GAN generator model with the following architecture:\n\n```python\nfrom torch import nn\n\nclass Generator(nn.Module):\n # Refer to the link below for explanations about nc, nz, and ngf\n # https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html#inputs\n def __init__(self, nc=4, nz=100, ngf=64):\n super(Generator, self).__init__()\n self.network = nn.Sequential(\n nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),\n nn.BatchNorm2d(ngf * 4),\n nn.ReLU(True),\n nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),\n nn.BatchNorm2d(ngf * 2),\n nn.ReLU(True),\n nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),\n nn.BatchNorm2d(ngf),\n nn.ReLU(True),\n nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),\n nn.Tanh(),\n )\n\n def forward(self, input):\n output = self.network(input)\n return output\n```\n\nWe're taking the generator from [this repo by @teddykoker](https://github.com/teddykoker/cryptopunks-gan/blob/main/train.py#L90), where you can also see the original discriminator model structure.\n\nAfter instantiating the model, we'll load in the weights from the Hugging Face Hub, stored at [nateraw/cryptopunks-gan](https://huggingface.co/nateraw/cryptopunks-gan):\n\n```python\nfrom huggingface_hub import hf_hub_download\nimport torch\n\nmodel = Generator()\nweights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')\nmodel.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu'))) # Use 'cuda' if you have a GPU available\n```"
},
{
"id": 238,
"parent": 233,
"path": "10_other-tutorials/create-your-own-friends-with-a-gan.md",
"level": 2,
"title": "Step 2 — Defining a `predict` function",
"content": "The `predict` function is the key to making Gradio work! Whatever inputs we choose through the Gradio interface will get passed through our `predict` function, which should operate on the inputs and generate outputs that we can display with Gradio output components. For GANs it's common to pass random noise into our model as the input, so we'll generate a tensor of random numbers and pass that through the model. We can then use `torchvision`'s `save_image` function to save the output of the model as a `png` file, and return the file name:\n\n```python\nfrom torchvision.utils import save_image\n\ndef predict(seed):\n num_punks = 4\n torch.manual_seed(seed)\n z = torch.randn(num_punks, 100, 1, 1)\n punks = model(z)\n save_image(punks, \"punks.png\", normalize=True)\n return 'punks.png'\n```\n\nWe're giving our `predict` function a `seed` parameter, so that we can fix the random tensor generation with a seed. We'll then be able to reproduce punks if we want to see them again by passing in the same seed.\n\n_Note!_ Our model needs an input tensor of dimensions 100x1x1 to do a single inference, or (BatchSize)x100x1x1 for generating a batch of images. In this demo we'll start by generating 4 punks at a time."
},
{
"id": 239,
"parent": 233,
"path": "10_other-tutorials/create-your-own-friends-with-a-gan.md",
"level": 2,
"title": "Step 3 — Creating a Gradio interface",
"content": "At this point you can even run the code you have with `predict()`, and you'll find your freshly generated punks in your file system at `./punks.png`. To make a truly interactive demo, though, we'll build out a simple interface with Gradio. Our goals here are to:\n\n- Set a slider input so users can choose the \"seed\" value\n- Use an image component for our output to showcase the generated punks\n- Use our `predict()` to take the seed and generate the images\n\nWith `gr.Interface()`, we can define all of that with a single function call:\n\n```python\nimport gradio as gr\n\ngr.Interface(\n predict,\n inputs=[\n gr.Slider(0, 1000, label='Seed', value=42),\n ],\n outputs=\"image\",\n).launch()\n```"
},
{
"id": 240,
"parent": 233,
"path": "10_other-tutorials/create-your-own-friends-with-a-gan.md",
"level": 2,
"title": "Step 4 — Even more punks!",
"content": "Generating 4 punks at a time is a good start, but maybe we'd like to control how many we want to make each time. Adding more inputs to our Gradio interface is as simple as adding another item to the `inputs` list that we pass to `gr.Interface`:\n\n```python\ngr.Interface(\n predict,\n inputs=[\n gr.Slider(0, 1000, label='Seed', value=42),\n gr.Slider(4, 64, label='Number of Punks', step=1, value=10), # Adding another slider!\n ],\n outputs=\"image\",\n).launch()\n```\n\nThe new input will be passed to our `predict()` function, so we have to make some changes to that function to accept a new parameter:\n\n```python\ndef predict(seed, num_punks):\n torch.manual_seed(seed)\n z = torch.randn(num_punks, 100, 1, 1)\n punks = model(z)\n save_image(punks, \"punks.png\", normalize=True)\n return 'punks.png'\n```\n\nWhen you relaunch your interface, you should see a second slider that'll let you control the number of punks!"
},
{
"id": 241,
"parent": 233,
"path": "10_other-tutorials/create-your-own-friends-with-a-gan.md",
"level": 2,
"title": "Step 5 - Polishing it up",
"content": "Your Gradio app is pretty much good to go, but you can add a few extra things to really make it ready for the spotlight ✨\n\nWe can add some examples that users can easily try out by adding this to the `gr.Interface`:\n\n```python\ngr.Interface(\n # ...\n # keep everything as it is, and then add\n examples=[[123, 15], [42, 29], [456, 8], [1337, 35]],\n cache_examples=True, # caching examples is optional\n).launch()\n```\n\nThe `examples` parameter takes a list of lists, where the items in each sublist appear in the same order as the `inputs` we've listed. So in our case, `[seed, num_punks]`. Give it a try!\n\nYou can also try adding a `title`, `description`, and `article` to the `gr.Interface`. Each of those parameters accepts a string, so try it out and see what happens 👀 `article` will also accept HTML, as [explored in a previous guide](/guides/key-features/#descriptive-content)!\n\nWhen you're all done, you may end up with something like [this](https://nimaboscarino-cryptopunks.hf.space).\n\nFor reference, here is our full code:\n\n```python\nimport torch\nfrom torch import nn\nfrom huggingface_hub import hf_hub_download\nfrom torchvision.utils import save_image\nimport gradio as gr\n\nclass Generator(nn.Module):\n # Refer to the link below for explanations about nc, nz, and ngf\n # https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html#inputs\n def __init__(self, nc=4, nz=100, ngf=64):\n super(Generator, self).__init__()\n self.network = nn.Sequential(\n nn.ConvTranspose2d(nz, ngf * 4, 3, 1, 0, bias=False),\n nn.BatchNorm2d(ngf * 4),\n nn.ReLU(True),\n nn.ConvTranspose2d(ngf * 4, ngf * 2, 3, 2, 1, bias=False),\n nn.BatchNorm2d(ngf * 2),\n nn.ReLU(True),\n nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 0, bias=False),\n nn.BatchNorm2d(ngf),\n nn.ReLU(True),\n nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),\n nn.Tanh(),\n )\n\n def forward(self, input):\n output = self.network(input)\n return output\n\nmodel = Generator()\nweights_path = hf_hub_download('nateraw/cryptopunks-gan', 'generator.pth')\nmodel.load_state_dict(torch.load(weights_path, map_location=torch.device('cpu'))) # Use 'cuda' if you have a GPU available\n\ndef predict(seed, num_punks):\n torch.manual_seed(seed)\n z = torch.randn(num_punks, 100, 1, 1)\n punks = model(z)\n save_image(punks, \"punks.png\", normalize=True)\n return 'punks.png'\n\ngr.Interface(\n predict,\n inputs=[\n gr.Slider(0, 1000, label='Seed', value=42),\n gr.Slider(4, 64, label='Number of Punks', step=1, value=10),\n ],\n outputs=\"image\",\n examples=[[123, 15], [42, 29], [456, 8], [1337, 35]],\n cache_examples=True,\n).launch()\n```\n\n---\n\nCongratulations! You've built out your very own GAN-powered CryptoPunks generator, with a fancy Gradio interface that makes it easy for anyone to use. Now you can [scour the Hub for more GANs](https://huggingface.co/models?other=gan) (or train your own) and continue making even more awesome demos 🤗"
},
{
"id": 242,
"parent": null,
"path": "10_other-tutorials/setting-up-a-demo-for-maximum-performance.md",
"level": 1,
"title": "Setting Up a Demo for Maximum Performance",
"content": "Tags: CONCURRENCY, LATENCY, PERFORMANCE\n\nLet's say that your Gradio demo goes _viral_ on social media -- you have lots of users trying it out simultaneously, and you want to provide your users with the best possible experience or, in other words, minimize the amount of time that each user has to wait in the queue to see their prediction.\n\nHow can you configure your Gradio demo to handle the most traffic? In this Guide, we dive into some of the parameters of Gradio's `.queue()` method as well as some other related parameters, and discuss how to set these parameters in a way that allows you to serve lots of users simultaneously with minimal latency.\n\nThis is an advanced guide, so make sure you know the basics of Gradio already, such as [how to create and launch a Gradio Interface](https://gradio.app/guides/quickstart/). Most of the information in this Guide is relevant whether you are hosting your demo on [Hugging Face Spaces](https://hf.space) or on your own server."
},
{
"id": 243,
"parent": 242,
"path": "10_other-tutorials/setting-up-a-demo-for-maximum-performance.md",
"level": 2,
"title": "Overview of Gradio's Queueing System",
"content": "By default, every Gradio demo includes a built-in queuing system that scales to thousands of requests. When a user of your app submits a request (i.e. submits an input to your function), Gradio adds the request to the queue, and requests are processed in order, generally speaking (this is not exactly true, as discussed below). When the user's request has finished processing, the Gradio server returns the result back to the user using server-sent events (SSE). The SSE protocol has several advantages over simply using HTTP POST requests: \n\n(1) They do not time out -- most browsers raise a timeout error if they do not get a response to a POST request after a short period of time (e.g. 1 min). This can be a problem if your inference function takes longer than 1 minute to run or if many people are trying out your demo at the same time, resulting in increased latency.\n\n(2) They allow the server to send multiple updates to the frontend. This means, for example, that the server can send a real-time ETA of how long your prediction will take to complete.\n\nTo configure the queue, simply call the `.queue()` method before launching an `Interface`, `TabbedInterface`, `ChatInterface` or any `Blocks`. Here's an example:\n\n```py\nimport gradio as gr\n\napp = gr.Interface(lambda x:x, \"image\", \"image\")\napp.queue() # <-- Sets up a queue with default parameters\napp.launch()\n```\n\n**How Requests are Processed from the Queue**\n\nWhen a Gradio server is launched, a pool of threads is used to execute requests from the queue. By default, the maximum size of this thread pool is `40` (which is the default inherited from FastAPI, on which the Gradio server is based). However, this does *not* mean that 40 requests are always processed in parallel from the queue. \n\nInstead, Gradio uses a **single-function-single-worker** model by default. This means that each worker thread is only assigned a single function from among all of the functions that could be part of your Gradio app. This ensures that you do not see, for example, out-of-memory errors, due to multiple workers calling a machine learning model at the same time. Suppose you have 3 functions in your Gradio app: A, B, and C, and you see the following sequence of 7 requests come in from users using your app:\n\n```\n1 2 3 4 5 6 7\n-------------\nA B A A C B A\n```\n\nInitially, 3 workers will get dispatched to handle requests 1, 2, and 5 (corresponding to functions: A, B, C). As soon as any of these workers finishes, it will start processing the next queued request of the same function type, e.g. the worker that finished processing request 1 will start processing request 3, and so on.\n\nIf you want to change this behavior, there are several parameters that can be used to configure the queue and help reduce latency. Let's go through them one-by-one."
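The dispatch described above can be sketched as a toy simulation: each distinct function gets at most one worker, so the requests that run first are the first arrival of each function, while later arrivals wait behind their own function's worker. This is an illustration of the scheduling idea, not Gradio's actual implementation:

```python
from collections import OrderedDict

# Arrival order of 7 requests, labeled by which function they call
requests = ["A", "B", "A", "A", "C", "B", "A"]

def initial_dispatch(reqs):
    # Single-function-single-worker: the first pending request of each
    # distinct function is dispatched immediately; the rest queue up
    # behind the worker for their own function.
    first_seen = OrderedDict()
    for position, fn in enumerate(reqs, start=1):
        first_seen.setdefault(fn, position)
    return list(first_seen.values())

print(initial_dispatch(requests))  # [1, 2, 5]
```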
},
{
"id": 244,
"parent": 243,
"path": "10_other-tutorials/setting-up-a-demo-for-maximum-performance.md",
"level": 3,
"title": "The `default_concurrency_limit` parameter in `queue()`",
"content": "The first parameter we will explore is the `default_concurrency_limit` parameter in `queue()`. This controls how many workers can execute the same event. By default, this is set to `1`, but you can set it to a higher integer: `2`, `10`, or even `None` (in the last case, there is no limit besides the total number of available workers). \n\nThis is useful, for example, if your Gradio app does not call any resource-intensive functions. If your app only queries external APIs, then you can set the `default_concurrency_limit` much higher. Increasing this parameter can **linearly multiply the capacity of your server to handle requests**.\n\nSo why not set this parameter much higher all the time? Keep in mind that since requests are processed in parallel, each request will consume memory to store the data and weights for processing. This means that you might get out-of-memory errors if you increase the `default_concurrency_limit` too high. You may also start to get diminishing returns if the `default_concurrency_limit` is too high because of costs of switching between different worker threads.\n\n**Recommendation**: Increase the `default_concurrency_limit` parameter as high as you can while you continue to see performance gains or until you hit memory limits on your machine. You can [read about Hugging Face Spaces machine specs here](https://huggingface.co/docs/hub/spaces-overview)."
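The "linearly multiply" claim above is simple arithmetic: if one worker serves a request every `t` seconds, then `n` parallel workers drain roughly `n` requests per `t` seconds, so a new arrival at queue position `p` waits about `ceil(p / n) * t`. A back-of-the-envelope sketch (the function name and numbers are illustrative, not a Gradio API):

```python
import math

def estimated_wait(position_in_queue: int, concurrency: int, seconds_per_request: float) -> float:
    # Requests ahead of you are drained `concurrency` at a time,
    # with each "wave" taking roughly one service time.
    return math.ceil(position_in_queue / concurrency) * seconds_per_request

# 30 requests ahead of you, 5 seconds per prediction:
print(estimated_wait(30, 1, 5.0))   # default_concurrency_limit=1
print(estimated_wait(30, 10, 5.0))  # default_concurrency_limit=10
```

This ignores memory pressure and thread-switching overhead, which is exactly why the recommendation above says to raise the limit only while you keep seeing real gains.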
},
{
"id": 245,
"parent": 243,
"path": "10_other-tutorials/setting-up-a-demo-for-maximum-performance.md",
"level": 3,
"title": "The `concurrency_limit` parameter in events",
"content": "You can also set the number of requests that can be processed in parallel for each event individually. These take priority over the `default_concurrency_limit` parameter described previously.\n\nTo do this, set the `concurrency_limit` parameter of any event listener, e.g. `btn.click(..., concurrency_limit=20)` or in the `Interface` or `ChatInterface` classes: e.g. `gr.Interface(..., concurrency_limit=20)`. By default, this parameter is set to the global `default_concurrency_limit`."
},
{
"id": 246,
"parent": 243,
"path": "10_other-tutorials/setting-up-a-demo-for-maximum-performance.md",
"level": 3,
"title": "The `max_threads` parameter in `launch()`",
"content": "If your demo uses non-async functions, e.g. `def` instead of `async def`, they will be run in a threadpool. This threadpool has a default size of 40, meaning that only 40 threads can run your non-async functions at a time. If you are running into this limit, you can increase the threadpool size by passing a larger `max_threads` value to `launch()`.\n\nTip: You should use async functions whenever possible to increase the number of concurrent requests your app can handle. Quick functions that are not CPU-bound are good candidates to be written as `async`. This [guide](https://fastapi.tiangolo.com/async/) is a good primer on the concept."
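To illustrate the tip above, this is roughly what moving a quick, non-CPU-bound function to `async` looks like: the async version yields control while it waits instead of occupying one of the 40 pool threads. The `asyncio.sleep(0)` here is a stand-in for an awaited network call:

```python
import asyncio

# Blocking version: occupies a threadpool thread for its full duration
def greet_sync(name):
    return f"Hello, {name}!"

# Async version: runs on the event loop and yields control while "waiting",
# so one thread can interleave many of these concurrently
async def greet(name):
    await asyncio.sleep(0)  # stand-in for e.g. an awaited HTTP request
    return f"Hello, {name}!"

print(asyncio.run(greet("Gradio")))  # Hello, Gradio!
```

Gradio accepts `async def` functions anywhere it accepts regular functions, so the change is usually just to the function body.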
},
{
"id": 247,
"parent": 243,
"path": "10_other-tutorials/setting-up-a-demo-for-maximum-performance.md",
"level": 3,
"title": "The `max_size` parameter in `queue()`",
"content": "A more blunt way to reduce the wait times is simply to prevent too many people from joining the queue in the first place. You can set the maximum number of requests that the queue processes using the `max_size` parameter of `queue()`. If a request arrives when the queue is already of the maximum size, it will not be allowed to join the queue and instead, the user will receive an error saying that the queue is full and to try again. By default, `max_size=None`, meaning that there is no limit to the number of users that can join the queue.\n\nParadoxically, setting a `max_size` can often improve user experience because it prevents users from being dissuaded by very long queue wait times. Users who are more interested and invested in your demo will keep trying to join the queue, and will be able to get their results faster.\n\n**Recommendation**: For a better user experience, set a `max_size` that is reasonable given your expectations of how long users might be willing to wait for a prediction."
},
{
"id": 248,
"parent": 243,
"path": "10_other-tutorials/setting-up-a-demo-for-maximum-performance.md",
"level": 3,
"title": "The `max_batch_size` parameter in events",
"content": "Another way to increase the parallelism of your Gradio demo is to write your function so that it can accept **batches** of inputs. Most deep learning models can process batches of samples more efficiently than processing individual samples.\n\nIf you write your function to process a batch of samples, Gradio will automatically batch incoming requests together and pass them into your function as a batch of samples. You need to set `batch` to `True` (by default it is `False`) and set a `max_batch_size` (by default it is `4`) based on the maximum number of samples your function is able to handle. These two parameters can be passed into `gr.Interface()` or to an event in Blocks such as `.click()`.\n\nWhile batching is conceptually similar to having workers process requests in parallel, it is often _faster_ than raising the `default_concurrency_limit` for deep learning models. The downside is that you might need to adapt your function a little bit to accept batches of samples instead of individual samples.\n\nHere's an example of a function that does _not_ accept a batch of inputs -- it processes a single input at a time:\n\n```py\ndef trim_words(word, length):\n return word[:int(length)]\n\n```\n\nHere's the same function rewritten to take in a batch of samples:\n\n```py\ndef trim_words(words, lengths):\n trimmed_words = []\n for w, l in zip(words, lengths):\n trimmed_words.append(w[:int(l)])\n return [trimmed_words]\n\n```\n\nThe second function can be used with `batch=True` and an appropriate `max_batch_size` parameter.\n\n**Recommendation**: If possible, write your function to accept batches of samples, and then set `batch` to `True` and the `max_batch_size` as high as possible based on your machine's memory limits."
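Calling the batched version directly shows the contract Gradio expects: each argument is a list with one entry per request in the batch, and the return value is a list of output lists (one per output component). A quick check of the `trim_words` example from above:

```python
def trim_words(words, lengths):
    # Batched signature: `words` and `lengths` are parallel lists,
    # one entry per request that Gradio grouped into this batch
    trimmed_words = []
    for w, l in zip(words, lengths):
        trimmed_words.append(w[:int(l)])
    # One outer list per output component; the inner list holds one
    # result per sample in the batch
    return [trimmed_words]

print(trim_words(["gradio", "queue"], [4, 2]))  # [['grad', 'qu']]
```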
},
{
"id": 249,
"parent": 242,
"path": "10_other-tutorials/setting-up-a-demo-for-maximum-performance.md",
"level": 2,
"title": "Upgrading your Hardware (GPUs, TPUs, etc.)",
"content": "If you have done everything above, and your demo is still not fast enough, you can upgrade the hardware that your model is running on. Changing the model from running on CPUs to running on GPUs will usually provide a 10x-50x increase in inference time for deep learning models.\n\nIt is particularly straightforward to upgrade your Hardware on Hugging Face Spaces. Simply click on the \"Settings\" tab in your Space and choose the Space Hardware you'd like.\n\n![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/spaces-gpu-settings.png)\n\nWhile you might need to adapt portions of your machine learning inference code to run on a GPU (here's a [handy guide](https://cnvrg.io/pytorch-cuda/) if you are using PyTorch), Gradio is completely agnostic to the choice of hardware and will work completely fine if you use it with CPUs, GPUs, TPUs, or any other hardware!\n\nNote: your GPU memory is different than your CPU memory, so if you upgrade your hardware,\nyou might need to adjust the value of the `default_concurrency_limit` parameter described above."
},
{
"id": 250,
"parent": 242,
"path": "10_other-tutorials/setting-up-a-demo-for-maximum-performance.md",
"level": 2,
"title": "Conclusion",
"content": "Congratulations! You know how to set up a Gradio demo for maximum performance. Good luck on your next viral demo!"
},
{
"id": 251,
"parent": null,
"path": "10_other-tutorials/installing-gradio-in-a-virtual-environment.md",
"level": 1,
"title": "Installing Gradio in a Virtual Environment",
"content": "Tags: INSTALLATION\n\nIn this guide, we will describe step-by-step how to install `gradio` within a virtual environment. This guide will cover both Windows and MacOS/Linux systems."
},
{
"id": 252,
"parent": 251,
"path": "10_other-tutorials/installing-gradio-in-a-virtual-environment.md",
"level": 2,
"title": "Virtual Environments",
"content": "A virtual environment in Python is a self-contained directory that holds a Python installation for a particular version of Python, along with a number of additional packages. This environment is isolated from the main Python installation and other virtual environments. Each environment can have its own independent set of installed Python packages, which allows you to maintain different versions of libraries for different projects without conflicts.\n\n\nUsing virtual environments ensures that you can work on multiple Python projects on the same machine without any conflicts. This is particularly useful when different projects require different versions of the same library. It also simplifies dependency management and enhances reproducibility, as you can easily share the requirements of your project with others."
},
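A quick way to check, from Python itself, whether you are currently running inside a virtual environment is to compare `sys.prefix` with `sys.base_prefix` (the two differ inside a venv):

```python
import sys

def in_virtualenv() -> bool:
    # Inside a venv, sys.prefix points at the environment directory,
    # while sys.base_prefix points at the interpreter it was created from.
    return sys.prefix != sys.base_prefix

print("Inside a virtual environment:", in_virtualenv())
```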
{
"id": 253,
"parent": 251,
"path": "10_other-tutorials/installing-gradio-in-a-virtual-environment.md",
"level": 2,
"title": "Installing Gradio on Windows",
"content": "To install Gradio on a Windows system in a virtual environment, follow these steps:\n\n1. **Install Python**: Ensure you have Python 3.10 or higher installed. You can download it from [python.org](https://www.python.org/). You can verify the installation by running `python --version` or `python3 --version` in Command Prompt.\n\n\n2. **Create a Virtual Environment**:\n Open Command Prompt and navigate to your project directory. Then create a virtual environment using the following command:\n\n ```bash\n python -m venv gradio-env\n ```\n\n This command creates a new directory `gradio-env` in your project folder, containing a fresh Python installation.\n\n3. **Activate the Virtual Environment**:\n To activate the virtual environment, run:\n\n ```bash\n .\\gradio-env\\Scripts\\activate\n ```\n\n Your command prompt should now indicate that you are working inside `gradio-env`. Note: you can choose a different name than `gradio-env` for your virtual environment in this step.\n\n\n4. **Install Gradio**:\n Now, you can install Gradio using pip:\n\n ```bash\n pip install gradio\n ```\n\n5. **Verification**:\n To verify the installation, run `python` and then type:\n\n ```python\n import gradio as gr\n print(gr.__version__)\n ```\n\n This will display the installed version of Gradio."
},
{
"id": 254,
"parent": 251,
"path": "10_other-tutorials/installing-gradio-in-a-virtual-environment.md",
"level": 2,
"title": "Installing Gradio on MacOS/Linux",
"content": "The installation steps on MacOS and Linux are similar to Windows but with some differences in commands.\n\n1. **Install Python**:\n Python usually comes pre-installed on MacOS and most Linux distributions. You can verify the installation by running `python --version` in the terminal (note that depending on how Python is installed, you might have to use `python3` instead of `python` throughout these steps). \n \n Ensure you have Python 3.10 or higher installed. If you do not have it installed, you can download it from [python.org](https://www.python.org/). \n\n2. **Create a Virtual Environment**:\n Open Terminal and navigate to your project directory. Then create a virtual environment using:\n\n ```bash\n python -m venv gradio-env\n ```\n\n Note: you can choose a different name than `gradio-env` for your virtual environment in this step.\n\n3. **Activate the Virtual Environment**:\n To activate the virtual environment on MacOS/Linux, use:\n\n ```bash\n source gradio-env/bin/activate\n ```\n\n4. **Install Gradio**:\n With the virtual environment activated, install Gradio using pip:\n\n ```bash\n pip install gradio\n ```\n\n5. **Verification**:\n To verify the installation, run `python` and then type:\n\n ```python\n import gradio as gr\n print(gr.__version__)\n ```\n\n This will display the installed version of Gradio.\n\nBy following these steps, you can successfully install Gradio in a virtual environment on your operating system, ensuring a clean and managed workspace for your Python projects."
},
{
"id": 255,
"parent": null,
"path": "10_other-tutorials/running-gradio-on-your-web-server-with-nginx.md",
"level": 1,
"title": "Running a Gradio App on your Web Server with Nginx",
"content": "Tags: DEPLOYMENT, WEB SERVER, NGINX"
},
{
"id": 256,
"parent": 255,
"path": "10_other-tutorials/running-gradio-on-your-web-server-with-nginx.md",
"level": 2,
"title": "Introduction",
"content": "Gradio is a Python library that allows you to quickly create customizable web apps for your machine learning models and data processing pipelines. Gradio apps can be deployed on [Hugging Face Spaces](https://hf.space) for free.\n\nIn some cases though, you might want to deploy a Gradio app on your own web server. You might already be using [Nginx](https://www.nginx.com/), a highly performant web server, to serve your website (say `https://www.example.com`), and you want to attach Gradio to a specific subpath on your website (e.g. `https://www.example.com/gradio-demo`).\n\nIn this Guide, we will guide you through the process of running a Gradio app behind Nginx on your own web server to achieve this.\n\n**Prerequisites**\n\n1. A Linux web server with [Nginx installed](https://www.nginx.com/blog/setting-up-nginx/) and [Gradio installed](/quickstart)\n2. A working Gradio app saved as a python file on your web server"
},
{
"id": 257,
"parent": 255,
"path": "10_other-tutorials/running-gradio-on-your-web-server-with-nginx.md",
"level": 2,
"title": "Editing your Nginx configuration file",
"content": "1. Start by editing the Nginx configuration file on your web server. By default, this is located at: `/etc/nginx/nginx.conf`\n\nIn the `http` block, add the following line to include server block configurations from a separate file:\n\n```bash\ninclude /etc/nginx/sites-enabled/*;\n```\n\n2. Create a new file in the `/etc/nginx/sites-available` directory (create the directory if it does not already exist), using a filename that represents your app, for example: `sudo nano /etc/nginx/sites-available/my_gradio_app`\n\n3. Paste the following into your file editor:\n\n```bash\nserver {\n listen 80;\n server_name example.com www.example.com; # Change this to your domain name\n\n location /gradio-demo/ { # Change this if you'd like to server your Gradio app on a different path\n proxy_pass http://127.0.0.1:7860/; # Change this if your Gradio app will be running on a different port\n proxy_buffering off;\n proxy_redirect off;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"upgrade\";\n proxy_set_header Host $host;\n proxy_set_header X-Forwarded-Host $host;\n proxy_set_header X-Forwarded-Proto $scheme;\n }\n}\n```\n\n\nTip: Setting the `X-Forwarded-Host` and `X-Forwarded-Proto` headers is important as Gradio uses these, in conjunction with the `root_path` parameter discussed below, to construct the public URL that your app is being served on. Gradio uses the public URL to fetch various static assets. If these headers are not set, your Gradio app may load in a broken state.\n\n*Note:* The `$host` variable does not include the host port. If you are serving your Gradio application on a raw IP address and port, you should use the `$http_host` variable instead, in these lines:\n\n```bash\n proxy_set_header Host $host;\n proxy_set_header X-Forwarded-Host $host;\n```"
},
{
"id": 258,
"parent": 255,
"path": "10_other-tutorials/running-gradio-on-your-web-server-with-nginx.md",
"level": 2,
"title": "Run your Gradio app on your web server",
"content": "1. Before you launch your Gradio app, you'll need to set the `root_path` to be the same as the subpath that you specified in your nginx configuration. This is necessary for Gradio to run on any subpath besides the root of the domain.\n\n *Note:* Instead of a subpath, you can also provide a complete URL for `root_path` (beginning with `http` or `https`) in which case the `root_path` is treated as an absolute URL instead of a URL suffix (but in this case, you'll need to update the `root_path` if the domain changes).\n\nHere's a simple example of a Gradio app with a custom `root_path` corresponding to the Nginx configuration above.\n\n```python\nimport gradio as gr\nimport time\n\ndef test(x):\ntime.sleep(4)\nreturn x\n\ngr.Interface(test, \"textbox\", \"textbox\").queue().launch(root_path=\"/gradio-demo\")\n```\n\n2. Start a `tmux` session by typing `tmux` and pressing enter (optional)\n\nIt's recommended that you run your Gradio app in a `tmux` session so that you can keep it running in the background easily\n\n3. Then, start your Gradio app. Simply type in `python` followed by the name of your Gradio python file. By default, your app will run on `localhost:7860`, but if it starts on a different port, you will need to update the nginx configuration file above."
},
{
"id": 259,
"parent": 255,
"path": "10_other-tutorials/running-gradio-on-your-web-server-with-nginx.md",
"level": 2,
"title": "Restart Nginx",
"content": "1. If you are in a tmux session, exit by typing CTRL+B (or CMD+B), followed by the \"D\" key.\n\n2. Finally, restart nginx by running `sudo systemctl restart nginx`.\n\nAnd that's it! If you visit `https://example.com/gradio-demo` on your browser, you should see your Gradio app running there"
},
{
"id": 260,
"parent": null,
"path": "10_other-tutorials/creating-a-dashboard-from-supabase-data.md",
"level": 1,
"title": "Create a Dashboard from Supabase Data",
"content": "Tags: TABULAR, DASHBOARD, PLOTS\n\n[Supabase](https://supabase.com/) is a cloud-based open-source backend that provides a PostgreSQL database, authentication, and other useful features for building web and mobile applications. In this tutorial, you will learn how to read data from Supabase and plot it in **real-time** on a Gradio Dashboard.\n\n**Prerequisites:** To start, you will need a free Supabase account, which you can sign up for here: [https://app.supabase.com/](https://app.supabase.com/)\n\nIn this end-to-end guide, you will learn how to:\n\n- Create tables in Supabase\n- Write data to Supabase using the Supabase Python Client\n- Visualize the data in a real-time dashboard using Gradio\n\nIf you already have data on Supabase that you'd like to visualize in a dashboard, you can skip the first two sections and go directly to [visualizing the data](#visualize-the-data-in-a-real-time-gradio-dashboard)!"
},
{
"id": 261,
"parent": 260,
"path": "10_other-tutorials/creating-a-dashboard-from-supabase-data.md",
"level": 2,
"title": "Create a table in Supabase",
"content": "First of all, we need some data to visualize. Following this [excellent guide](https://supabase.com/blog/loading-data-supabase-python), we'll create fake commerce data and put it in Supabase.\n\n1\\. Start by creating a new project in Supabase. Once you're logged in, click the \"New Project\" button\n\n2\\. Give your project a name and database password. You can also choose a pricing plan (for our purposes, the Free Tier is sufficient!)\n\n3\\. You'll be presented with your API keys while the database spins up (can take up to 2 minutes).\n\n4\\. Click on \"Table Editor\" (the table icon) in the left pane to create a new table. We'll create a single table called `Product`, with the following schema:\n\n
\n
\n
product_id
int8
\n
inventory_count
int8
\n
price
float8
\n
product_name
varchar
\n
\n
\n\n5\\. Click Save to save the table schema.\n\nOur table is now ready!"
},
{
"id": 262,
"parent": 260,
"path": "10_other-tutorials/creating-a-dashboard-from-supabase-data.md",
"level": 2,
"title": "Write data to Supabase",
"content": "The next step is to write data to a Supabase dataset. We will use the Supabase Python library to do this.\n\n6\\. Install `supabase` by running the following command in your terminal:\n\n```bash\npip install supabase\n```\n\n7\\. Get your project URL and API key. Click the Settings (gear icon) on the left pane and click 'API'. The URL is listed in the Project URL box, while the API key is listed in Project API keys (with the tags `service_role`, `secret`)\n\n8\\. Now, run the following Python script to write some fake data to the table (note you have to put the values of `SUPABASE_URL` and `SUPABASE_SECRET_KEY` from step 7):\n\n```python\nimport supabase"
},
{
"id": 263,
"parent": null,
"path": "10_other-tutorials/creating-a-dashboard-from-supabase-data.md",
"level": 1,
"title": "Initialize the Supabase client",
"content": "client = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')"
},
{
"id": 264,
"parent": null,
"path": "10_other-tutorials/creating-a-dashboard-from-supabase-data.md",
"level": 1,
"title": "Define the data to write",
"content": "import random\n\nmain_list = []\nfor i in range(10):\n value = {'product_id': i,\n 'product_name': f\"Item {i}\",\n 'inventory_count': random.randint(1, 100),\n 'price': random.random()*100\n }\n main_list.append(value)"
},
{
"id": 265,
"parent": null,
"path": "10_other-tutorials/creating-a-dashboard-from-supabase-data.md",
"level": 1,
"title": "Write the data to the table",
"content": "data = client.table('Product').insert(main_list).execute()\n```\n\nReturn to your Supabase dashboard and refresh the page, you should now see 10 rows populated in the `Product` table!"
},
{
"id": 266,
"parent": 265,
"path": "10_other-tutorials/creating-a-dashboard-from-supabase-data.md",
"level": 2,
"title": "Visualize the Data in a Real-Time Gradio Dashboard",
"content": "Finally, we will read the data from the Supabase dataset using the same `supabase` Python library and create a realtime dashboard using `gradio`.\n\nNote: We repeat certain steps in this section (like creating the Supabase client) in case you did not go through the previous sections. As described in Step 7, you will need the project URL and API Key for your database.\n\n9\\. Write a function that loads the data from the `Product` table and returns it as a pandas Dataframe:\n\n```python\nimport supabase\nimport pandas as pd\n\nclient = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')\n\ndef read_data():\n response = client.table('Product').select(\"*\").execute()\n df = pd.DataFrame(response.data)\n return df\n```\n\n10\\. Create a small Gradio Dashboard with 2 Barplots that plots the prices and inventories of all of the items every minute and updates in real-time:\n\n```python\nimport gradio as gr\n\nwith gr.Blocks() as dashboard:\n with gr.Row():\n gr.BarPlot(read_data, x=\"product_id\", y=\"price\", title=\"Prices\", every=gr.Timer(60))\n gr.BarPlot(read_data, x=\"product_id\", y=\"inventory_count\", title=\"Inventory\", every=gr.Timer(60))\n\ndashboard.queue().launch()\n```\n\nNotice that by passing in a function to `gr.BarPlot()`, we have the BarPlot query the database as soon as the web app loads (and then again every 60 seconds because of the `every` parameter). Your final dashboard should look something like this:\n\n"
},
{
"id": 267,
"parent": 265,
"path": "10_other-tutorials/creating-a-dashboard-from-supabase-data.md",
"level": 2,
"title": "Conclusion",
"content": "That's it! In this tutorial, you learned how to write data to a Supabase dataset, and then read that data and plot the results as bar plots. If you update the data in the Supabase database, you'll notice that the Gradio dashboard will update within a minute.\n\nTry adding more plots and visualizations to this example (or with a different dataset) to build a more complex dashboard!"
},
{
"id": 268,
"parent": null,
"path": "10_other-tutorials/how-to-use-3D-model-component.md",
"level": 1,
"title": "How to Use the 3D Model Component",
"content": "Related spaces: https://huggingface.co/spaces/gradio/Model3D, https://huggingface.co/spaces/gradio/PIFu-Clothed-Human-Digitization, https://huggingface.co/spaces/gradio/dpt-depth-estimation-3d-obj\nTags: VISION, IMAGE"
},
{
"id": 269,
"parent": 268,
"path": "10_other-tutorials/how-to-use-3D-model-component.md",
"level": 2,
"title": "Introduction",
"content": "3D models are becoming more popular in machine learning and make for some of the most fun demos to experiment with. Using `gradio`, you can easily build a demo of your 3D image model and share it with anyone. The Gradio 3D Model component accepts 3 file types including: _.obj_, _.glb_, & _.gltf_.\n\nThis guide will show you how to build a demo for your 3D image model in a few lines of code; like the one below. Play around with 3D object by clicking around, dragging and zooming:\n\n"
},
{
"id": 270,
"parent": 269,
"path": "10_other-tutorials/how-to-use-3D-model-component.md",
"level": 3,
"title": "Prerequisites",
"content": "Make sure you have the `gradio` Python package already [installed](https://gradio.app/guides/quickstart)."
},
{
"id": 271,
"parent": 268,
"path": "10_other-tutorials/how-to-use-3D-model-component.md",
"level": 2,
"title": "Taking a Look at the Code",
"content": "Let's take a look at how to create the minimal interface above. The prediction function in this case will just return the original 3D model mesh, but you can change this function to run inference on your machine learning model. We'll take a look at more complex examples below.\n\n```python\nimport gradio as gr\nimport os\n\n\ndef load_mesh(mesh_file_name):\n return mesh_file_name\n\n\ndemo = gr.Interface(\n fn=load_mesh,\n inputs=gr.Model3D(),\n outputs=gr.Model3D(\n clear_color=[0.0, 0.0, 0.0, 0.0], label=\"3D Model\"),\n examples=[\n [os.path.join(os.path.dirname(__file__), \"files/Bunny.obj\")],\n [os.path.join(os.path.dirname(__file__), \"files/Duck.glb\")],\n [os.path.join(os.path.dirname(__file__), \"files/Fox.gltf\")],\n [os.path.join(os.path.dirname(__file__), \"files/face.obj\")],\n ],\n)\n\nif __name__ == \"__main__\":\n demo.launch()\n```\n\nLet's break down the code above:\n\n`load_mesh`: This is our 'prediction' function and for simplicity, this function will take in the 3D model mesh and return it.\n\nCreating the Interface:\n\n- `fn`: the prediction function that is used when the user clicks submit. In our case this is the `load_mesh` function.\n- `inputs`: create a model3D input component. The input expects an uploaded file as a {str} filepath.\n- `outputs`: create a model3D output component. The output component also expects a file as a {str} filepath.\n - `clear_color`: this is the background color of the 3D model canvas. Expects RGBa values.\n - `label`: the label that appears on the top left of the component.\n- `examples`: list of 3D model files. The 3D model component can accept _.obj_, _.glb_, & _.gltf_ file types.\n- `cache_examples`: saves the predicted output for the examples, to save time on inference."
},
{
"id": 272,
"parent": 268,
"path": "10_other-tutorials/how-to-use-3D-model-component.md",
"level": 2,
"title": "Exploring a more complex Model3D Demo:",
"content": "Below is a demo that uses the DPT model to predict the depth of an image and then uses 3D Point Cloud to create a 3D object. Take a look at the [app.py](https://huggingface.co/spaces/gradio/dpt-depth-estimation-3d-obj/blob/main/app.py) file for a peek into the code and the model prediction function.\n\n\n---\n\nAnd you're done! That's all the code you need to build an interface for your Model3D model. Here are some references that you may find useful:\n\n- Gradio's [\"Getting Started\" guide](https://gradio.app/getting_started/)\n- The first [3D Model Demo](https://huggingface.co/spaces/gradio/Model3D) and [complete code](https://huggingface.co/spaces/gradio/Model3D/tree/main) (on Hugging Face Spaces)"
},
{
"id": 273,
"parent": null,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 1,
"title": "Gradio and ONNX on Hugging Face",
"content": "Related spaces: https://huggingface.co/spaces/onnx/EfficientNet-Lite4\nTags: ONNX, SPACES\nContributed by Gradio and the ONNX team"
},
{
"id": 274,
"parent": 273,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 2,
"title": "Introduction",
"content": "In this Guide, we'll walk you through:\n\n- Introduction of ONNX, ONNX model zoo, Gradio, and Hugging Face Spaces\n- How to setup a Gradio demo for EfficientNet-Lite4\n- How to contribute your own Gradio demos for the ONNX organization on Hugging Face\n\nHere's an [example](https://onnx-efficientnet-lite4.hf.space/) of an ONNX model."
},
{
"id": 275,
"parent": 273,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 2,
"title": "What is the ONNX Model Zoo?",
"content": "Open Neural Network Exchange ([ONNX](https://onnx.ai/)) is an open standard format for representing machine learning models. ONNX is supported by a community of partners who have implemented it in many frameworks and tools. For example, if you have trained a model in TensorFlow or PyTorch, you can convert it to ONNX easily, and from there run it on a variety of devices using an engine/compiler like ONNX Runtime.\n\nThe [ONNX Model Zoo](https://github.com/onnx/models) is a collection of pre-trained, state-of-the-art models in the ONNX format contributed by community members. Accompanying each model are Jupyter notebooks for model training and running inference with the trained model. The notebooks are written in Python and include links to the training dataset as well as references to the original paper that describes the model architecture."
},
{
"id": 276,
"parent": 273,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 2,
"title": "What are Hugging Face Spaces & Gradio?",
"content": ""
},
{
"id": 277,
"parent": 276,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 3,
"title": "Gradio",
"content": "Gradio lets users demo their machine learning models as a web app all in python code. Gradio wraps a python function into a user interface and the demos can be launched inside jupyter notebooks, colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.\n\nGet started [here](https://gradio.app/getting_started)"
},
{
"id": 278,
"parent": 276,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 3,
"title": "Hugging Face Spaces",
"content": "Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos. Spaces can be public or private and the workflow is similar to github repos. There are over 2000+ spaces currently on Hugging Face. Learn more about spaces [here](https://huggingface.co/spaces/launch)."
},
{
"id": 279,
"parent": 276,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 3,
"title": "Hugging Face Models",
"content": "Hugging Face Model Hub also supports ONNX models and ONNX models can be filtered through the [ONNX tag](https://huggingface.co/models?library=onnx&sort=downloads)"
},
{
"id": 280,
"parent": 273,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 2,
"title": "How did Hugging Face help the ONNX Model Zoo?",
"content": "There are a lot of Jupyter notebooks in the ONNX Model Zoo for users to test models. Previously, users needed to download the models themselves and run those notebooks locally for testing. With Hugging Face, the testing process can be much simpler and more user-friendly. Users can easily try certain ONNX Model Zoo model on Hugging Face Spaces and run a quick demo powered by Gradio with ONNX Runtime, all on cloud without downloading anything locally. Note, there are various runtimes for ONNX, e.g., [ONNX Runtime](https://github.com/microsoft/onnxruntime), [MXNet](https://github.com/apache/incubator-mxnet)."
},
{
"id": 281,
"parent": 273,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 2,
"title": "What is the role of ONNX Runtime?",
"content": "ONNX Runtime is a cross-platform inference and training machine-learning accelerator. It makes live Gradio demos with ONNX Model Zoo model on Hugging Face possible.\n\nONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc. ONNX Runtime is compatible with different hardware, drivers, and operating systems, and provides optimal performance by leveraging hardware accelerators where applicable alongside graph optimizations and transforms. For more information please see the [official website](https://onnxruntime.ai/)."
},
{
"id": 282,
"parent": 273,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 2,
"title": "Setting up a Gradio Demo for EfficientNet-Lite4",
"content": "EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite models. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU. To learn more read the [model card](https://github.com/onnx/models/tree/main/vision/classification/efficientnet-lite4)\n\nHere we walk through setting up a example demo for EfficientNet-Lite4 using Gradio\n\nFirst we import our dependencies and download and load the efficientnet-lite4 model from the onnx model zoo. Then load the labels from the labels_map.txt file. We then setup our preprocessing functions, load the model for inference, and setup the inference function. Finally, the inference function is wrapped into a gradio interface for a user to interact with. See the full code below.\n\n```python\nimport numpy as np\nimport math\nimport matplotlib.pyplot as plt\nimport cv2\nimport json\nimport gradio as gr\nfrom huggingface_hub import hf_hub_download\nfrom onnx import hub\nimport onnxruntime as ort"
},
{
"id": 283,
"parent": null,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 1,
"title": "loads ONNX model from ONNX Model Zoo",
"content": "model = hub.load(\"efficientnet-lite4\")"
},
{
"id": 284,
"parent": null,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 1,
"title": "loads the labels text file",
"content": "labels = json.load(open(\"labels_map.txt\", \"r\"))"
},
{
"id": 285,
"parent": null,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 1,
"title": "sets image file dimensions to 224x224 by resizing and cropping image from center",
"content": "def pre_process_edgetpu(img, dims):\n output_height, output_width, _ = dims\n img = resize_with_aspectratio(img, output_height, output_width, inter_pol=cv2.INTER_LINEAR)\n img = center_crop(img, output_height, output_width)\n img = np.asarray(img, dtype='float32')\n # converts jpg pixel value from [0 - 255] to float array [-1.0 - 1.0]\n img -= [127.0, 127.0, 127.0]\n img /= [128.0, 128.0, 128.0]\n return img"
},
{
"id": 286,
"parent": null,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 1,
"title": "resizes the image with a proportional scale",
"content": "def resize_with_aspectratio(img, out_height, out_width, scale=87.5, inter_pol=cv2.INTER_LINEAR):\n height, width, _ = img.shape\n new_height = int(100. * out_height / scale)\n new_width = int(100. * out_width / scale)\n if height > width:\n w = new_width\n h = int(new_height * height / width)\n else:\n h = new_height\n w = int(new_width * width / height)\n img = cv2.resize(img, (w, h), interpolation=inter_pol)\n return img"
},
{
"id": 287,
"parent": null,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 1,
"title": "crops the image around the center based on given height and width",
"content": "def center_crop(img, out_height, out_width):\n height, width, _ = img.shape\n left = int((width - out_width) / 2)\n right = int((width + out_width) / 2)\n top = int((height - out_height) / 2)\n bottom = int((height + out_height) / 2)\n img = img[top:bottom, left:right]\n return img\n\n\nsess = ort.InferenceSession(model)\n\ndef inference(img):\n img = cv2.imread(img)\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n img = pre_process_edgetpu(img, (224, 224, 3))\n\n img_batch = np.expand_dims(img, axis=0)\n\n results = sess.run([\"Softmax:0\"], {\"images:0\": img_batch})[0]\n result = reversed(results[0].argsort()[-5:])\n resultdic = {}\n for r in result:\n resultdic[labels[str(r)]] = float(results[0][r])\n return resultdic\n\ntitle = \"EfficientNet-Lite4\"\ndescription = \"EfficientNet-Lite 4 is the largest variant and most accurate of the set of EfficientNet-Lite model. It is an integer-only quantized model that produces the highest accuracy of all of the EfficientNet models. It achieves 80.4% ImageNet top-1 accuracy, while still running in real-time (e.g. 30ms/image) on a Pixel 4 CPU.\"\nexamples = [['catonnx.jpg']]\ngr.Interface(inference, gr.Image(type=\"filepath\"), \"label\", title=title, description=description, examples=examples).launch()\n```"
},
{
"id": 288,
"parent": 287,
"path": "10_other-tutorials/Gradio-and-ONNX-on-Hugging-Face.md",
"level": 2,
"title": "How to contribute Gradio demos on HF spaces using ONNX models",
"content": "- Add model to the [onnx model zoo](https://github.com/onnx/models/blob/main/.github/PULL_REQUEST_TEMPLATE.md)\n- Create an account on Hugging Face [here](https://huggingface.co/join).\n- See list of models left to add to ONNX organization, please refer to the table with the [Models list](https://github.com/onnx/models#models)\n- Add Gradio Demo under your username, see this [blog post](https://huggingface.co/blog/gradio-spaces) for setting up Gradio Demo on Hugging Face.\n- Request to join ONNX Organization [here](https://huggingface.co/onnx).\n- Once approved transfer model from your username to ONNX organization\n- Add a badge for model in model table, see examples in [Models list](https://github.com/onnx/models#models)"
},
{
"id": 289,
"parent": null,
"path": "10_other-tutorials/plot-component-for-maps.md",
"level": 1,
"title": "How to Use the Plot Component for Maps",
"content": "Tags: PLOTS, MAPS"
},
{
"id": 290,
"parent": 289,
"path": "10_other-tutorials/plot-component-for-maps.md",
"level": 2,
"title": "Introduction",
"content": "This guide explains how you can use Gradio to plot geographical data on a map using the `gradio.Plot` component. The Gradio `Plot` component works with Matplotlib, Bokeh and Plotly. Plotly is what we will be working with in this guide. Plotly allows developers to easily create all sorts of maps with their geographical data. Take a look [here](https://plotly.com/python/maps/) for some examples."
},
{
"id": 291,
"parent": 289,
"path": "10_other-tutorials/plot-component-for-maps.md",
"level": 2,
"title": "Overview",
"content": "We will be using the New York City Airbnb dataset, which is hosted on kaggle [here](https://www.kaggle.com/datasets/dgomonov/new-york-city-airbnb-open-data). I've uploaded it to the Hugging Face Hub as a dataset [here](https://huggingface.co/datasets/gradio/NYC-Airbnb-Open-Data) for easier use and download. Using this data we will plot Airbnb locations on a map output and allow filtering based on price and location. Below is the demo that we will be building. ⚡️\n\n$demo_map_airbnb"
},
{
"id": 292,
"parent": 289,
"path": "10_other-tutorials/plot-component-for-maps.md",
"level": 2,
"title": "Step 1 - Loading CSV data 💾",
"content": "Let's start by loading the Airbnb NYC data from the Hugging Face Hub.\n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"gradio/NYC-Airbnb-Open-Data\", split=\"train\")\ndf = dataset.to_pandas()\n\ndef filter_map(min_price, max_price, boroughs):\n new_df = df[(df['neighbourhood_group'].isin(boroughs)) &\n (df['price'] > min_price) & (df['price'] < max_price)]\n names = new_df[\"name\"].tolist()\n prices = new_df[\"price\"].tolist()\n text_list = [(names[i], prices[i]) for i in range(0, len(names))]\n```\n\nIn the code above, we first load the CSV data into a pandas DataFrame. We then define `filter_map`, the function we will use as the prediction function for the Gradio app. It accepts a minimum price, a maximum price, and a list of boroughs, which we use to filter the DataFrame into `new_df`. Finally, we build `text_list`, pairing the name and price of each Airbnb listing to use as labels on the map."
},
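To see what the boolean-mask filter inside `filter_map` does, here is a self-contained sketch on a few invented rows (the column names mirror the Airbnb dataset; the data is made up purely for illustration):

```python
import pandas as pd

# Toy stand-in for the Airbnb DataFrame; the rows are invented.
df = pd.DataFrame({
    "name": ["Cozy loft", "Sunny studio", "Shared room"],
    "neighbourhood_group": ["Brooklyn", "Manhattan", "Queens"],
    "price": [300, 1200, 80],
})

# Same filtering logic as filter_map: keep rows in the chosen
# boroughs whose price falls strictly between the two bounds.
new_df = df[(df["neighbourhood_group"].isin(["Brooklyn", "Queens"])) &
            (df["price"] > 100) & (df["price"] < 1000)]

# Pair each surviving name with its price (cast to plain int).
text_list = [(n, int(p)) for n, p in zip(new_df["name"], new_df["price"])]
print(text_list)  # only "Cozy loft" passes both the borough and price filters
```

Each comparison produces a boolean Series, and `&` combines them element-wise, so the mask selects exactly the rows satisfying all three conditions.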
{
"id": 293,
"parent": 289,
"path": "10_other-tutorials/plot-component-for-maps.md",
"level": 2,
"title": "Step 2 - Map Figure 🌐",
"content": "Plotly makes it easy to work with maps. Let's take a look below how we can create a map figure.\n\n```python\nimport plotly.graph_objects as go\n\nfig = go.Figure(go.Scattermapbox(\n customdata=text_list,\n lat=new_df['latitude'].tolist(),\n lon=new_df['longitude'].tolist(),\n mode='markers',\n marker=go.scattermapbox.Marker(\n size=6\n ),\n hoverinfo=\"text\",\n hovertemplate='Name: %{customdata[0]} Price: $%{customdata[1]}'\n ))\n\nfig.update_layout(\n mapbox_style=\"open-street-map\",\n hovermode='closest',\n mapbox=dict(\n bearing=0,\n center=go.layout.mapbox.Center(\n lat=40.67,\n lon=-73.90\n ),\n pitch=0,\n zoom=9\n ),\n)\n```\n\nAbove, we create a scatter plot on mapbox by passing it our list of latitudes and longitudes to plot markers. We also pass in our custom data of names and prices for additional info to appear on every marker we hover over. Next we use `update_layout` to specify other map settings such as zoom, and centering.\n\nMore info [here](https://plotly.com/python/scattermapbox/) on scatter plots using Mapbox and Plotly."
},
{
"id": 294,
"parent": 289,
"path": "10_other-tutorials/plot-component-for-maps.md",
"level": 2,
"title": "Step 3 - Gradio App ⚡️",
"content": "We will use two `gr.Number` components and a `gr.CheckboxGroup` to allow users of our app to specify price ranges and borough locations. We will then use the `gr.Plot` component as an output for our Plotly + Mapbox map we created earlier.\n\n```python\nwith gr.Blocks() as demo:\n with gr.Column():\n with gr.Row():\n min_price = gr.Number(value=250, label=\"Minimum Price\")\n max_price = gr.Number(value=1000, label=\"Maximum Price\")\n boroughs = gr.CheckboxGroup(choices=[\"Queens\", \"Brooklyn\", \"Manhattan\", \"Bronx\", \"Staten Island\"], value=[\"Queens\", \"Brooklyn\"], label=\"Select Boroughs:\")\n btn = gr.Button(value=\"Update Filter\")\n map = gr.Plot()\n demo.load(filter_map, [min_price, max_price, boroughs], map)\n btn.click(filter_map, [min_price, max_price, boroughs], map)\n```\n\nWe layout these components using the `gr.Column` and `gr.Row` and we'll also add event triggers for when the demo first loads and when our \"Update Filter\" button is clicked in order to trigger the map to update with our new filters.\n\nThis is what the full demo code looks like:\n\n```py\n# type: ignore\nimport gradio as gr\nimport plotly.graph_objects as go\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"gradio/NYC-Airbnb-Open-Data\", split=\"train\")\ndf = dataset.to_pandas()\n\ndef filter_map(min_price, max_price, boroughs):\n\n filtered_df = df[(df['neighbourhood_group'].isin(boroughs)) &\n (df['price'] > min_price) & (df['price'] < max_price)]\n names = filtered_df[\"name\"].tolist()\n prices = filtered_df[\"price\"].tolist()\n text_list = [(names[i], prices[i]) for i in range(0, len(names))]\n fig = go.Figure(go.Scattermapbox(\n customdata=text_list,\n lat=filtered_df['latitude'].tolist(),\n lon=filtered_df['longitude'].tolist(),\n mode='markers',\n marker=go.scattermapbox.Marker(\n size=6\n ),\n hoverinfo=\"text\",\n hovertemplate='Name: %{customdata[0]} Price: $%{customdata[1]}'\n ))\n\n fig.update_layout(\n 
mapbox_style=\"open-street-map\",\n hovermode='closest',\n mapbox=dict(\n bearing=0,\n center=go.layout.mapbox.Center(\n lat=40.67,\n lon=-73.90\n ),\n pitch=0,\n zoom=9\n ),\n )\n\n return fig\n\nwith gr.Blocks() as demo:\n with gr.Column():\n with gr.Row():\n min_price = gr.Number(value=250, label=\"Minimum Price\")\n max_price = gr.Number(value=1000, label=\"Maximum Price\")\n boroughs = gr.CheckboxGroup(choices=[\"Queens\", \"Brooklyn\", \"Manhattan\", \"Bronx\", \"Staten Island\"], value=[\"Queens\", \"Brooklyn\"], label=\"Select Boroughs:\")\n btn = gr.Button(value=\"Update Filter\")\n map = gr.Plot()\n demo.load(filter_map, [min_price, max_price, boroughs], map)\n btn.click(filter_map, [min_price, max_price, boroughs], map)\n\nif __name__ == \"__main__\":\n demo.launch()\n\n```"
},
{
"id": 295,
"parent": 289,
"path": "10_other-tutorials/plot-component-for-maps.md",
"level": 2,
"title": "Step 4 - Deployment 🤗",
"content": "If you run the code above, your app will start running locally.\nYou can even get a temporary shareable link by passing the `share=True` parameter to `launch`.\n\nBut what if you want to a permanent deployment solution?\nLet's deploy our Gradio app to the free HuggingFace Spaces platform.\n\nIf you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations)."
},
{
"id": 296,
"parent": 289,
"path": "10_other-tutorials/plot-component-for-maps.md",
"level": 2,
"title": "Conclusion 🎉",
"content": "And you're all done! That's all the code you need to build a map demo.\n\nHere's a link to the [Map demo](https://huggingface.co/spaces/gradio/map_airbnb) and its [complete code](https://huggingface.co/spaces/gradio/map_airbnb/blob/main/run.py) (on Hugging Face Spaces)."
},
{
"id": 297,
"parent": null,
"path": "10_other-tutorials/image-classification-with-vision-transformers.md",
"level": 1,
"title": "Image Classification with Vision Transformers",
"content": "Related spaces: https://huggingface.co/spaces/abidlabs/vision-transformer\nTags: VISION, TRANSFORMERS, HUB"
},
{
"id": 298,
"parent": 297,
"path": "10_other-tutorials/image-classification-with-vision-transformers.md",
"level": 2,
"title": "Introduction",
"content": "Image classification is a central task in computer vision. Building better classifiers to classify what object is present in a picture is an active area of research, as it has applications stretching from facial recognition to manufacturing quality control.\n\nState-of-the-art image classifiers are based on the _transformers_ architectures, originally popularized for NLP tasks. Such architectures are typically called vision transformers (ViT). Such models are perfect to use with Gradio's _image_ input component, so in this tutorial we will build a web demo to classify images using Gradio. We will be able to build the whole web application in a **single line of Python**, and it will look like the demo on the bottom of the page.\n\nLet's get started!"
},
{
"id": 299,
"parent": 298,
"path": "10_other-tutorials/image-classification-with-vision-transformers.md",
"level": 3,
"title": "Prerequisites",
"content": "Make sure you have the `gradio` Python package already [installed](/getting_started)."
},
{
"id": 300,
"parent": 297,
"path": "10_other-tutorials/image-classification-with-vision-transformers.md",
"level": 2,
"title": "Step 1 — Choosing a Vision Image Classification Model",
"content": "First, we will need an image classification model. For this tutorial, we will use a model from the [Hugging Face Model Hub](https://huggingface.co/models?pipeline_tag=image-classification). The Hub contains thousands of models covering dozens of different machine learning tasks.\n\nExpand the Tasks category on the left sidebar and select \"Image Classification\" as our task of interest. You will then see all of the models on the Hub that are designed to classify images.\n\nAt the time of writing, the most popular one is `google/vit-base-patch16-224`, which has been trained on ImageNet images at a resolution of 224x224 pixels. We will use this model for our demo."
},
{
"id": 301,
"parent": 297,
"path": "10_other-tutorials/image-classification-with-vision-transformers.md",
"level": 2,
"title": "Step 2 — Loading the Vision Transformer Model with Gradio",
"content": "When using a model from the Hugging Face Hub, we do not need to define the input or output components for the demo. Similarly, we do not need to be concerned with the details of preprocessing or postprocessing.\nAll of these are automatically inferred from the model tags.\n\nBesides the import statement, it only takes a single line of Python to load and launch the demo.\n\nWe use the `gr.Interface.load()` method and pass in the path to the model including the `huggingface/` to designate that it is from the Hugging Face Hub.\n\n```python\nimport gradio as gr\n\ngr.Interface.load(\n \"huggingface/google/vit-base-patch16-224\",\n examples=[\"alligator.jpg\", \"laptop.jpg\"]).launch()\n```\n\nNotice that we have added one more parameter, the `examples`, which allows us to prepopulate our interfaces with a few predefined examples.\n\nThis produces the following interface, which you can try right here in your browser. When you input an image, it is automatically preprocessed and sent to the Hugging Face Hub API, where it is passed through the model and returned as a human-interpretable prediction. Try uploading your own image!\n\n\n\n---\n\nAnd you're done! In one line of code, you have built a web demo for an image classifier. If you'd like to share with others, try setting `share=True` when you `launch()` the Interface!"
},
{
"id": 302,
"parent": null,
"path": "10_other-tutorials/Gradio-and-Wandb-Integration.md",
"level": 1,
"title": "Gradio and W&B Integration",
"content": "Related spaces: https://huggingface.co/spaces/akhaliq/JoJoGAN\nTags: WANDB, SPACES\nContributed by Gradio team"
},
{
"id": 303,
"parent": 302,
"path": "10_other-tutorials/Gradio-and-Wandb-Integration.md",
"level": 2,
"title": "Introduction",
"content": "In this Guide, we'll walk you through:\n\n- Introduction of Gradio, and Hugging Face Spaces, and Wandb\n- How to setup a Gradio demo using the Wandb integration for JoJoGAN\n- How to contribute your own Gradio demos after tracking your experiments on wandb to the Wandb organization on Hugging Face"
},
{
"id": 304,
"parent": 302,
"path": "10_other-tutorials/Gradio-and-Wandb-Integration.md",
"level": 2,
"title": "What is Wandb?",
"content": "Weights and Biases (W&B) allows data scientists and machine learning scientists to track their machine learning experiments at every stage, from training to production. Any metric can be aggregated over samples and shown in panels in a customizable and searchable dashboard, like below:\n\n"
},
{
"id": 305,
"parent": 302,
"path": "10_other-tutorials/Gradio-and-Wandb-Integration.md",
"level": 2,
"title": "What are Hugging Face Spaces & Gradio?",
"content": ""
},
{
"id": 306,
"parent": 305,
"path": "10_other-tutorials/Gradio-and-Wandb-Integration.md",
"level": 3,
"title": "Gradio",
"content": "Gradio lets users demo their machine learning models as a web app, all in a few lines of Python. Gradio wraps any Python function (such as a machine learning model's inference function) into a user interface and the demos can be launched inside jupyter notebooks, colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.\n\nGet started [here](https://gradio.app/getting_started)"
},
{
"id": 307,
"parent": 305,
"path": "10_other-tutorials/Gradio-and-Wandb-Integration.md",
"level": 3,
"title": "Hugging Face Spaces",
"content": "Hugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos. Spaces can be public or private and the workflow is similar to github repos. There are over 2000+ spaces currently on Hugging Face. Learn more about spaces [here](https://huggingface.co/spaces/launch)."
},
{
"id": 308,
"parent": 302,
"path": "10_other-tutorials/Gradio-and-Wandb-Integration.md",
"level": 2,
"title": "Setting up a Gradio Demo for JoJoGAN",
"content": "Now, let's walk you through how to do this on your own. For the purposes of this tutorial, we'll assume that you're new to W&B and Gradio.\n\nLet's get started!\n\n1. Create a W&B account\n\n Follow [these quick instructions](https://app.wandb.ai/login) to create your free account if you don’t have one already. It shouldn't take more than a couple of minutes. Once you're done (or if you already have an account), we'll run a quick Colab next.\n\n2. Open the Colab and install Gradio and W&B\n\n We'll follow along with the Colab provided in the JoJoGAN repo, with some minor modifications to use Wandb and Gradio more effectively.\n\n [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)\n\n Install Gradio and Wandb at the top:\n\n ```sh\n pip install gradio wandb\n ```\n\n3. Fine-tune StyleGAN with W&B experiment tracking\n\n This next step will open a W&B dashboard to track your experiments, and a Gradio panel, hosted on Hugging Face Spaces, with a dropdown menu of pretrained models to choose from. 
Here's the code you need for that:\n\n ```python\n alpha = 1.0\n alpha = 1-alpha\n\n preserve_color = True\n num_iter = 100\n log_interval = 50\n\n samples = []\n column_names = [\"Reference (y)\", \"Style Code(w)\", \"Real Face Image(x)\"]\n\n wandb.init(project=\"JoJoGAN\")\n config = wandb.config\n config.num_iter = num_iter\n config.preserve_color = preserve_color\n wandb.log(\n {\"Style reference\": [wandb.Image(transforms.ToPILImage()(target_im))]},\n step=0)\n\n # load discriminator for perceptual loss\n discriminator = Discriminator(1024, 2).eval().to(device)\n ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)\n discriminator.load_state_dict(ckpt[\"d\"], strict=False)\n\n # reset generator\n del generator\n generator = deepcopy(original_generator)\n\n g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))\n\n # Which layers to swap for generating a family of plausible real images -> fake image\n if preserve_color:\n id_swap = [9,11,15,16,17]\n else:\n id_swap = list(range(7, generator.n_latent))\n\n for idx in tqdm(range(num_iter)):\n mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)\n in_latent = latents.clone()\n in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]\n\n img = generator(in_latent, input_is_latent=True)\n\n with torch.no_grad():\n real_feat = discriminator(targets)\n fake_feat = discriminator(img)\n\n loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)])/len(fake_feat)\n\n wandb.log({\"loss\": loss}, step=idx)\n if idx % log_interval == 0:\n generator.eval()\n my_sample = generator(my_w, input_is_latent=True)\n generator.train()\n my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))\n wandb.log(\n {\"Current stylization\": [wandb.Image(my_sample)]},\n step=idx)\n table_data = [\n 
wandb.Image(transforms.ToPILImage()(target_im)),\n wandb.Image(img),\n wandb.Image(my_sample),\n ]\n samples.append(table_data)\n\n g_optim.zero_grad()\n loss.backward()\n g_optim.step()\n\n out_table = wandb.Table(data=samples, columns=column_names)\n wandb.log({\"Current Samples\": out_table})\n ```\n4. Save, Download, and Load Model\n\n Here's how to save and download your model.\n\n ```python\n from PIL import Image\n import torch\n torch.backends.cudnn.benchmark = True\n from torchvision import transforms, utils\n from util import *\n import math\n import random\n import numpy as np\n from torch import nn, autograd, optim\n from torch.nn import functional as F\n from tqdm import tqdm\n import lpips\n from model import *\n from e4e_projection import projection as e4e_projection\n \n from copy import deepcopy\n import imageio\n \n import os\n import sys\n import torchvision.transforms as transforms\n from argparse import Namespace\n from e4e.models.psp import pSp\n from util import *\n from huggingface_hub import hf_hub_download\n from google.colab import files\n \n torch.save({\"g\": generator.state_dict()}, \"your-model-name.pt\")\n \n files.download('your-model-name.pt')\n \n latent_dim = 512\n device=\"cuda\"\n model_path_s = hf_hub_download(repo_id=\"akhaliq/jojogan-stylegan2-ffhq-config-f\", filename=\"stylegan2-ffhq-config-f.pt\")\n original_generator = Generator(1024, latent_dim, 8, 2).to(device)\n ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage)\n original_generator.load_state_dict(ckpt[\"g_ema\"], strict=False)\n mean_latent = original_generator.mean_latent(10000)\n \n generator = deepcopy(original_generator)\n \n ckpt = torch.load(\"/content/JoJoGAN/your-model-name.pt\", map_location=lambda storage, loc: storage)\n generator.load_state_dict(ckpt[\"g\"], strict=False)\n generator.eval()\n \n plt.rcParams['figure.dpi'] = 150\n \n transform = transforms.Compose(\n [\n transforms.Resize((1024, 1024)),\n transforms.ToTensor(),\n 
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n ]\n )\n \n def inference(img):\n img.save('out.jpg')\n aligned_face = align_face('out.jpg')\n \n my_w = e4e_projection(aligned_face, \"out.pt\", device).unsqueeze(0)\n with torch.no_grad():\n my_sample = generator(my_w, input_is_latent=True)\n \n npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()\n imageio.imwrite('filename.jpeg', npimage)\n return 'filename.jpeg'\n ```\n\n5. Build a Gradio Demo\n\n ```python\n import gradio as gr\n \n title = \"JoJoGAN\"\n description = \"Gradio Demo for JoJoGAN: One Shot Face Stylization. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below.\"\n \n demo = gr.Interface(\n inference,\n gr.Image(type=\"pil\"),\n gr.Image(type=\"file\"),\n title=title,\n description=description\n )\n \n demo.launch(share=True)\n ```\n\n6. Integrate Gradio into your W&B Dashboard\n\n The last step, integrating your Gradio demo with your W&B dashboard, is just one extra line:\n\n ```python\n demo.integrate(wandb=wandb)\n ```\n\n Once you call `integrate`, a demo will be created and you can embed it into your dashboard or report.\n\n Outside of W&B, anyone can embed Gradio demos hosted on HF Spaces directly into their blogs, websites, and documentation using web components and the `gradio-app` tag:\n \n ```html\n \n ```\n\n7. (Optional) Embed W&B plots in your Gradio App\n\n It's also possible to embed W&B plots within Gradio apps. To do so, you can create a W&B Report of your plots and\n embed it within your Gradio app inside a `gr.HTML` block.\n\n The Report will need to be public and you will need to wrap the URL within an iFrame like this:\n\n ```python\n import gradio as gr\n \n def wandb_report(url):\n iframe = f'