Model Card Generator Interface: Crafting Clear Insights into AI Models

Community Article Published September 27, 2024

Introduction

Machine learning models are increasingly deployed to drive decisions that impact everything from business strategies to healthcare outcomes. With this growing influence comes the critical need for transparency and accountability. Model cards provide a structured way to document a model’s capabilities, fairness, and ethical considerations.

In this blog, we will explore the Model Card Generator Interface—a tool designed to simplify the creation of these vital reports. The Model Card Generator Interface enables users to effortlessly create interactive HTML reports or static Markdown reports that showcase detailed insights into models without any coding required.

Model Card UI GIF

Why Model Card Generator?

Imagine creating your ultimate video game character. You spend countless hours building and perfecting your character. How do you showcase all that effort? You need a character profile—a detailed, visually engaging summary that highlights strengths, weaknesses, and how the character might perform in various scenarios. Similarly, think of the Model Card Generator as the character profile of your machine learning model. It helps your audience answer critical questions about your model: who can use it and how, where does it excel, and what are its limitations?

Character Card

Whether you are sharing your model with fellow developers, stakeholders, or end-users, the Model Card Generator transforms complex information into a user-friendly, interactive, and engaging format, making it easy for everyone to grasp the capabilities and limitations of your machine learning creation.

Running the UI

There are two ways to get the Model Card Generator UI running:

Method 1: Running Model Card UI locally

  • Step 1: Clone the XAI GitHub Repository to your local machine using the following command:
git clone https://github.com/Intel/intel-xai-tools.git 
  • Step 2: After cloning the repository, navigate to the Model Card UI directory:
cd intel-xai-tools/model_card_gen/model_card_ui 
  • Step 3: Set Up Your Virtual Environment and Install Dependencies

Before running the UI, you should set up a virtual environment. Here's how you can do it using virtualenv:

python3 -m virtualenv <virtual environment name>
source <virtual environment name>/bin/activate

Replace <virtual environment name> with the name you want for the virtual environment. These commands create and activate a new virtual environment with that name.

Next, install the required Python packages with the following command:

pip install -r requirements.txt 
  • Step 4: Run the Streamlit Application

With your environment ready and dependencies installed, you can now launch the Streamlit application:

streamlit run home.py 

This command starts the Streamlit server. You will see output similar to this:

Collecting usage statistics. To deactivate, set browser.gatherUsageStats to false. 


You can now view your Streamlit app in your browser. 
Network URL: http://<network-ip>:8501  
External URL: http://<external-ip>:8501 

Here, <network-ip> and <external-ip> are placeholders for the actual network and external IP addresses of your server. To access your Streamlit application, open http://localhost:8501 if you are on the same machine as the server, or use the Network or External URL if you are connecting from a different machine.

Method 2: Running Model Card UI Using Docker

  • Step 1: Clone the XAI GitHub Repository to your local machine
git clone https://github.com/Intel/intel-xai-tools.git 
  • Step 2: After cloning the repository, navigate to the docker directory:
cd intel-xai-tools/docker  
  • Step 3: Build the Docker Image

You can build the Docker image with the following command:

docker compose build model_card_gen_ui 
  • Step 4: Verify the Docker Image

To ensure the image has been built successfully, check your Docker images:

docker images | grep -i mcg-ui 

In your terminal, you should see output similar to this:

intel/ai-tools                               intel-ai-safety-1.1.0-mcg-ui       ab0521fc99ef   About an hour ago     2.7GB
  • Step 5: Run the Model Card Generator UI

To run the Model Card Generator UI, use the docker run command:

docker run --rm -p 8051:8051 --name mcg-ui intel/ai-tools:intel-ai-safety-1.1.0-mcg-ui 

This command runs the container and makes the UI accessible through port 8051.

  • Step 6: Access the UI

Finally, to access the Model Card Generator UI, navigate to <HOST_NAME>:8051 in your web browser, replacing <HOST_NAME> with the name or IP address of the server where the container is running.

By using either of these methods, you can effectively run the Model Card UI and start creating detailed model cards for your machine learning models.

Getting Started with UI

There are two ways to fill in the details of a Model Card:

  1. Upload an existing Model Card in JSON format; see the example JSON files for a reference template (an illustrative sketch of such a file also appears after this list). Upon uploading the Model Card JSON, the fields will be automatically populated with the information extracted from the file.

  2. Manually fill your model card details by selecting the respective sections from the sidebar.
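If you prefer to start from JSON, the following sketch shows one way to assemble a minimal model card file in Python, using the field names described in the sections below (model_details, model_parameters, considerations). The exact schema accepted by the tool may differ, so treat this as an illustrative starting point and compare it against the example JSON files; all values shown are placeholders.

import json

# Illustrative model card skeleton built from the field names documented below.
# Compare against the tool's example JSON files; the exact nesting it expects may differ.
model_card = {
    "model_details": {
        "name": "sentiment-classifier",             # Model Name
        "path": "/models/sentiment-classifier/v1",  # Model Path
        "overview": "Binary sentiment classifier for product reviews.",
        "owners": [
            {"name": "ML Platform Team", "contact": "ml-platform@example.com"}
        ],
        "version": {"name": "1.0", "date": "2024-09-27", "diff": "Initial release."},
        "licenses": [{"identifier": "Apache-2.0"}],
        "references": [{"reference": "https://github.com/Intel/intel-xai-tools"}],
    },
    "model_parameters": {
        "model_architecture": "Transformer encoder fine-tuned for binary classification.",
        "input_format": "UTF-8 text, up to 512 tokens",
        "output_format": "Probability of the positive class",
    },
    "considerations": {
        "users": "Data scientists and product analysts",
        "use_cases": "Routing and prioritizing customer feedback",
        "limitations": "English-language reviews only",
    },
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)

Once saved, a file like model_card.json can be uploaded through the UI, and the recognized fields should populate automatically.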

Model Card Generator Sections

The Model Card template is divided into four sections: Model Details, Model Parameters, Considerations, and Quantitative (Performance) Analysis.

Note: Each field is titled with the following format: <UI field name>: <JSON variable name>

Model Details: model_details

This section contains the information corresponding to the model metadata.

Model Name: name

Provide the name of the Model.

Model Path: path

Provide the path where the model is stored and can be accessed.

Model Card Overview: overview

Provide a brief description or summary of the model card.

Model Documentation: documentation

This section contains the model's general information, including its usage and version, as well as details about its implementation, specifying whether it is based on a borrowed architecture or an original design. Any disclaimers or copyrights should also be noted here. Additionally, details regarding the datasets used for training, fine-tuning, and validation should be included. Wherever possible, provide links or references.

Model Owners: owners

List the individuals or teams that own the model. You can select the number of owners from the drop-down list. For each owner, provide information in one or both of the following fields:

  • Name of the owner: name
    Name of the Model owner.

  • Contact of the owner: contact
    The contact information for the model owner. This could be an individual email address or a team mailing list.

Model Version: version

Information regarding the Model Version including the following fields:

  • Version Name: name
    Version Name of the model.

  • Version Date: date
    The date when the model version was released.

  • Difference from the previous version: diff
    The changes from the previous model version.

Licenses: licenses

List the name or specify a custom license for the model. You can choose the number of licenses from the drop-down list. For each license, provide information in one or both of the following fields:

  • Identifier: identifier
    Provide a standard SPDX license identifier, or proprietary for an unlicensed model.

  • Custom License text: custom_text
    Mention the custom license for the model.

References: references

List the links that provide more information about the model. You can select the number of references from the drop-down list. For each reference, provide information in the following field:

  • Reference: reference
    Links providing more information about the model. You can link to foundational research, technical documentation, or other materials that may be useful to your audience.

Citations: citations

List the details on how to cite this model card. You can choose the number of citations from the drop-down list. For each citation, provide information in one or both of the following fields:

  • Style: style
    The citation style, such as MLA, APA, Chicago, or IEEE.

  • Citation: citation
    The citation used to reference the model.

Model Overview Graphics: graphics

Static graphics illustrating the overview of the model.

  • Uploaded Graphic: collection
    Upload static graphics (in PNG format) to illustrate the model overview. When using the UI, the graphic or image name (name) is automatically extracted from the uploaded file's name, and the uploaded image (image) is encoded as a base64 string (a sketch of this encoding appears after this list).

  • Graphics Description: description
    Provide the description for the collection of the overview graphics.
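Because the UI stores uploaded graphics as base64 strings, a hand-written model card JSON can embed an image the same way. The snippet below is a minimal sketch of that encoding; the file name is a placeholder, and the exact nesting of the graphics fields expected by the tool is an assumption worth checking against the example JSON files.

import base64
import json

# Read a PNG and encode it as a base64 string, mirroring what the UI does for uploads.
with open("model_overview.png", "rb") as f:  # placeholder file name
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

# Assumed layout for the graphics section: a description plus a collection of named images.
graphics = {
    "description": "Overview of the training pipeline.",
    "collection": [
        {"name": "model_overview.png", "image": encoded_image}
    ],
}

print(json.dumps(graphics)[:120], "...")  # preview the serialized structure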

Model Parameters: model_parameters

This section includes details about the parameters used to construct the model. It is helpful for users interested in the model development process.

Model Architecture: model_architecture

Contains the architecture of the Model.

Input Data Format: input_format

Contains the data format for inputs to the model.

Input Format Map: input_format_map

List the data format for inputs to the model, in key-value format.

Output Data Format: output_format

Contains the data format for outputs of the model.

Output Format Map: output_format_map

List the data format for outputs of the model, in key-value format.

Considerations: considerations

This section details the model's applications, its foreseeable users, and the considerations that should be taken into account concerning the model's construction, training, and application.

Users: users

Mention or list the intended users of the model, which may include researchers, developers, and/or clients. Additionally, consider providing information about the downstream users expected to interact with or be impacted by the model.

Use Cases: use_cases

Mention or list the intended use cases of the model. Also mention out-of-scope use cases.

Limitations: limitations

Mention or list the known limitations of the model. This may include technical limitations or conditions that may degrade model performance.

Tradeoffs: tradeoffs

Describe the accuracy/performance tradeoffs for the model.

Ethical Considerations: ethical_considerations

Mention or list the ethical risks associated with the application of the model. You can select the number of ethical risks from the drop-down list. For each risk, you have the option to provide information in one or both of the following fields:

  • Name of Risk: name
    Mention the ethical risk involved.

  • Mitigation Strategy: mitigation_strategy
    For the risk mentioned, provide a mitigation strategy you have implemented or suggest to users.

Performance Quantitative Analysis:

This section provides details regarding the model performance metrics being reported.

Performance Graphics: graphics

Static Graphics illustrating the model performance.

  • Uploaded Graphic: collection
    Upload static graphics (in PNG format) illustrating the model performance. When using the UI, the graphic or image name (name) is automatically extracted from the uploaded file's name, and the uploaded image (image) is encoded as a base64 string.

  • Graphics Description: description
    Provide the description for the collection of the performance graphics.

The Model Card Generator enables users to upload model performance metrics in CSV format, categorized by different thresholds or groups. In response, it automatically generates interactive plots for HTML model cards or static plots for Markdown model cards. For the expected CSV file format, see the example CSV files and the accompanying step-by-step guide on creating them.
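The exact column layout the tool expects is defined by that example CSV; as a rough illustration of how such a file might be produced, the sketch below sweeps classification thresholds with scikit-learn and writes one row of metrics per threshold. The column names and data here are hypothetical placeholders, not the tool's required schema.

import csv
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical ground-truth labels and predicted positive-class probabilities.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 1])
y_prob = np.array([0.1, 0.8, 0.65, 0.3, 0.9, 0.45, 0.7, 0.55, 0.2, 0.85])

# One row of metrics per threshold; column names are illustrative only.
with open("metrics_by_threshold.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["threshold", "accuracy", "precision", "recall"])
    for threshold in np.arange(0.1, 1.0, 0.1):
        y_pred = (y_prob >= threshold).astype(int)
        writer.writerow([
            round(float(threshold), 1),
            accuracy_score(y_true, y_pred),
            precision_score(y_true, y_pred, zero_division=0),
            recall_score(y_true, y_pred, zero_division=0),
        ])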

Model Card Metrics Graph

Metrics By Threshold:

The "Metrics by Threshold" feature allows users to visually explore how metric values change with different classification probability thresholds. It assists in selecting an optimal threshold based on performance trade-offs. Additionally, the plots help identify thresholds that produce extreme metric values, which may indicate overfitting or other issues, guiding users to make informed decisions on model tuning to achieve their specific goals. The Overall Metric Performance charts further provide a comprehensive overview of all metrics' performance across varying thresholds.

Model Card Metrics by Threshold Graph

Metrics By Group:

"Metrics by Group" is used to organize and display a model's performance metrics by distinct groups or subcategories within the data. This is particularly useful for analyzing the model's performance across various segments or classes within your dataset, which is essential for understanding model behavior in different contexts and identifying biases.

Model Card Metrics by Group Graph

Next Steps:

Now that we have seen how to work with the Model Card Generator UI, go ahead and try experimenting with our interface! After creating your Model Card, you can view it based on the chosen template type and export it as JSON, HTML, or Markdown.

We continually add more use cases and templates to our Model Card Generator. In the meantime, we welcome your feedback on our Model Card Interface UI and suggestions for expanding the Model Card Generator to meet your needs.

To improve the usability of our tool, we plan to host the Interface on a public platform, providing users with an intuitive UI to generate Model Cards, bypassing the need for command-line interactions entirely. Stay tuned for this update, as we strive to make the process of documenting and understanding machine learning models even more seamless for everyone.

If you are interested in the code behind our Model Card Generator and its UI, or if you want to adapt our UI for your purposes, visit our GitHub Repository.

Conclusion:

In summary, Model Cards are essential for transparently presenting the strengths and limitations of machine learning models, much like how a detailed character profile is crucial for understanding a video game character. The Model Card Generator Interface is pivotal in crafting these informative reports, ensuring that individuals, regardless of their technical expertise, can appreciate and use these advanced AI tools responsibly.

Acknowledgments

I would like to thank my colleagues Tyler Wilbers, Daniel De León, and Abolfazl Shahbazi for their contributions and for helping to review this blog.