Quickstart
Installing MLflow
You install MLflow by running:
# Install MLflow
pip install mlflow

# Install MLflow with extra ML libraries and 3rd-party tools
pip install mlflow[extras]

# Install a lightweight version of MLflow
pip install mlflow-skinny

To install the MLflow R API from CRAN:

install.packages("mlflow")
Note
MLflow works on MacOS. If you run into issues with the default system Python on MacOS, try
installing Python 3 through the Homebrew package manager using
installing Python 3 through the Homebrew package manager using brew install python. (In that case, install MLflow with pip3 install mlflow.)
Note
To use certain MLflow modules and functionality (ML model persistence/inference,
artifact storage options, etc), you may need to install extra libraries. For example, the
mlflow.tensorflow module requires TensorFlow to be installed. See
https://github.com/mlflow/mlflow/blob/master/EXTRA_DEPENDENCIES.rst for more details.
Note
When using MLflow skinny, you may need to install additional dependencies if you wish to use
certain MLflow modules and functionalities. For example, usage of SQL-based storage for
MLflow Tracking (e.g. mlflow.set_tracking_uri("sqlite:///my.db")) requires
pip install mlflow-skinny sqlalchemy alembic sqlparse. If using MLflow skinny for serving,
a minimally functional installation would require pip install mlflow-skinny flask.
At this point we recommend you follow the tutorial for a walk-through on how you
can leverage MLflow in your daily workflow.
Downloading the Quickstart
Download the quickstart code by cloning MLflow via git clone https://github.com/mlflow/mlflow,
and cd into the examples subdirectory of the repository. We’ll use this working directory for
running the quickstart.
We avoid running directly from our clone of MLflow as doing so would cause the tutorial to
use MLflow from source, rather than your PyPI installation of MLflow.
Using the Tracking API
The MLflow Tracking API lets you log metrics and artifacts (files) from your data
science code and see a history of your runs. You can try it out by writing a simple Python script
as follows (this example is also included in quickstart/mlflow_tracking.py):
import os
from random import random, randint
from mlflow import log_metric, log_param, log_artifacts

if __name__ == "__main__":
    # Log a parameter (key-value pair)
    log_param("param1", randint(0, 100))

    # Log a metric; metrics can be updated throughout the run
    log_metric("foo", random())
    log_metric("foo", random() + 1)
    log_metric("foo", random() + 2)

    # Log an artifact (output file)
    if not os.path.exists("outputs"):
        os.makedirs("outputs")
    with open("outputs/test.txt", "w") as f:
        f.write("hello world!")
    log_artifacts("outputs")

In R, the equivalent is:

library(mlflow)

# Log a parameter (key-value pair)
mlflow_log_param("param1", 5)

# Log a metric; metrics can be updated throughout the run
mlflow_log_metric("foo", 1)
mlflow_log_metric("foo", 2)
mlflow_log_metric("foo", 3)

# Log an artifact (output file)
writeLines("Hello world!", "output.txt")
mlflow_log_artifact("output.txt")
Viewing the Tracking UI
By default, wherever you run your program, the tracking API writes data to files in a local
./mlruns directory. You can then run MLflow’s Tracking UI:

mlflow ui

or, in R:

mlflow_ui()
and view it at http://localhost:5000.
Note
If you see the message [CRITICAL] WORKER TIMEOUT in the MLflow UI or error logs, try using http://localhost:5000 instead of http://127.0.0.1:5000.
Running MLflow Projects
MLflow allows you to package code and its dependencies as a project that can be run in a
reproducible fashion on other data. Each project includes its code and an MLproject file that
defines its dependencies (for example, the Python environment) as well as what commands can be run in the
project and what arguments they take.
You can easily run existing projects with the mlflow run command, which runs a project from
either a local directory or a GitHub URI:
mlflow run sklearn_elasticnet_wine -P alpha=0.5

mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=5.0
There’s a sample project in tutorial, including an MLproject file that
specifies its dependencies. If you haven’t configured a tracking server,
projects log their Tracking API data in the local mlruns directory so you can see these
runs using mlflow ui.
Note
By default mlflow run installs all dependencies using virtualenv.
To run a project without using virtualenv, you can provide the --env-manager=local option to
mlflow run. In this case, you must ensure that the necessary dependencies are already installed
in your Python environment.
For more information, see MLflow Projects.
Saving and Serving Models
MLflow includes a generic MLmodel format for saving models from a variety of tools in diverse
flavors. For example, many models can be served as Python functions, so an MLmodel file can
declare how each model should be interpreted as a Python function in order to let various tools
serve it. MLflow also includes tools for running such models locally and exporting them to Docker
containers or commercial serving platforms.
To illustrate this functionality, the mlflow.sklearn package can log scikit-learn models as
MLflow artifacts and then load them again for serving. There is an example training application in
sklearn_logistic_regression/train.py that you can run as follows:
python sklearn_logistic_regression/train.py
When you run the example, it outputs an MLflow run ID for that experiment. If you look at
mlflow ui, you will also see that the run saved a model folder containing an MLmodel
description file and a pickled scikit-learn model. You can pass the run ID and the path of the model
within the artifacts directory (here “model”) to various tools. For example, MLflow includes a
simple REST server for python-based models:
mlflow models serve -m runs:/<RUN_ID>/model
Note
By default the server runs on port 5000. If that port is already in use, use the --port option to
specify a different port. For example: mlflow models serve -m runs:/<RUN_ID>/model --port 1234
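In addition to serving the model over REST, you can load it back into Python through the pyfunc flavor. A minimal sketch, where the run ID placeholder and the single feature column "x" mirror the serving example below:

import mlflow.pyfunc
import pandas as pd

# Replace <RUN_ID> with the run ID printed by the training script.
model = mlflow.pyfunc.load_model("runs:/<RUN_ID>/model")

# Score a small DataFrame locally, without starting a server.
print(model.predict(pd.DataFrame({"x": [1, -1]})))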
Once you have started the server, you can pass it some sample data and see the
predictions.
The following example uses curl to send a JSON-serialized pandas DataFrame with the split
orientation to the model server. For more information about the input data formats accepted by
the pyfunc model server, see the MLflow deployment tools documentation.
curl -d '{"dataframe_split": {"columns": ["x"], "data": [[1], [-1]]}}' -H 'Content-Type: application/json' -X POST localhost:5000/invocations
which returns the model's predictions.
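The same request can also be sent from Python. A minimal sketch using the requests library, assuming the server above is listening on localhost:5000:

import requests

payload = {"dataframe_split": {"columns": ["x"], "data": [[1], [-1]]}}
response = requests.post(
    "http://localhost:5000/invocations",
    json=payload,
    headers={"Content-Type": "application/json"},
)
print(response.json())  # the model's predictions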
For more information, see MLflow Models.
Logging to a Remote Tracking Server
In the examples above, MLflow logs data to the local filesystem of the machine it’s running on.
To manage results centrally or share them across a team, you can configure MLflow to log to a remote
tracking server. To get access to a remote tracking server:
Launch a Tracking Server on a Remote Machine
Launch a tracking server on a remote machine.
You can then log to the remote tracking server by
setting the MLFLOW_TRACKING_URI environment variable to your server’s URI, or
by adding the following to the start of your program:
import mlflow

mlflow.set_tracking_uri("http://YOUR-SERVER:4040")
mlflow.set_experiment("my-experiment")

In R:

library(mlflow)
install_mlflow()
mlflow_set_tracking_uri("http://YOUR-SERVER:4040")
mlflow_set_experiment("/my-experiment")
Log to Databricks Community Edition
Alternatively, sign up for Databricks Community Edition,
a free service that includes a hosted tracking server. Note that
Community Edition is intended for quick experimentation rather than production use cases.
After signing up, run databricks configure to create a credentials file for MLflow, specifying
https://community.cloud.databricks.com as the host.
To log to the Community Edition server, set the MLFLOW_TRACKING_URI environment variable
to “databricks”, or add the following to the start of your program:
import mlflow

mlflow.set_tracking_uri("databricks")
# Note: on Databricks, the experiment name passed to set_experiment must be a valid path
# in the workspace, like '/Users/<your-username>/my-experiment'. See
# https://docs.databricks.com/user-guide/workspace.html for more info.
mlflow.set_experiment("/my-experiment")

In R:

library(mlflow)
install_mlflow()
mlflow_set_tracking_uri("databricks")
# Note: on Databricks, the experiment name passed to mlflow_set_experiment must be a
# valid path in the workspace, like '/Users/<your-username>/my-experiment'. See
# https://docs.databricks.com/user-guide/workspace.html for more info.
mlflow_set_experiment("/my-experiment")
MLflow Tracking
The MLflow Tracking component is an API and UI for logging parameters, code versions, metrics, and output files
when running your machine learning code and for later visualizing the results.
MLflow Tracking lets you log and query experiments using the Python, REST, R, and Java APIs.
Table of Contents
Concepts
Where Runs Are Recorded
How Runs and Artifacts are Recorded
Scenario 1: MLflow on localhost
Scenario 2: MLflow on localhost with SQLite
Scenario 3: MLflow on localhost with Tracking Server
Scenario 4: MLflow with remote Tracking Server, backend and artifact stores
Scenario 5: MLflow Tracking Server enabled with proxied artifact storage access
Scenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access
Logging Data to Runs
Logging Functions
Launching Multiple Runs in One Program
Performance Tracking with Metrics
Visualizing Metrics
Automatic Logging
Scikit-learn
Keras
Gluon
XGBoost
LightGBM
Statsmodels
Spark
Fastai
Pytorch
Organizing Runs in Experiments
Managing Experiments and Runs with the Tracking Service API
Tracking UI
Querying Runs Programmatically
MLflow Tracking Servers
Storage
Networking
Using the Tracking Server for proxied artifact access
Logging to a Tracking Server
System Tags
Concepts
MLflow Tracking is organized around the concept of runs, which are executions of some piece of
data science code. Each run records the following information:
Git commit hash used for the run, if it was run from an MLflow Project.
Start and end time of the run
Name of the file to launch the run, or the project name and entry point for the run
if run from an MLflow Project.
Key-value input parameters of your choice. Both keys and values are strings.
Key-value metrics, where the value is numeric. Each metric can be updated throughout the
course of the run (for example, to track how your model’s loss function is converging), and
MLflow records and lets you visualize the metric’s full history.
Output files in any format. For example, you can record images (for example, PNGs), models
(for example, a pickled scikit-learn model), and data files (for example, a
Parquet file) as artifacts.
You can record runs using MLflow Python, R, Java, and REST APIs from anywhere you run your code. For
example, you can record them in a standalone program, on a remote cloud machine, or in an
interactive notebook. If you record runs in an MLflow Project, MLflow
remembers the project URI and source version.
You can optionally organize runs into experiments, which group together runs for a
specific task. You can create an experiment using the mlflow experiments CLI, with
mlflow.create_experiment(), or using the corresponding REST parameters. The MLflow API and
UI let you create and search for experiments.
Once your runs have been recorded, you can query them using the Tracking UI or the MLflow
API.
Where Runs Are Recorded
MLflow runs can be recorded to local files, to a SQLAlchemy compatible database, or remotely
to a tracking server. By default, the MLflow Python API logs runs locally to files in an mlruns directory wherever you
ran your program. You can then run mlflow ui to see the logged runs.
To log runs remotely, set the MLFLOW_TRACKING_URI environment variable to a tracking server’s URI or
call mlflow.set_tracking_uri().
There are different kinds of remote tracking URIs:
Local file path (specified as file:/my/local/dir), where data is just directly stored locally.
Database encoded as <dialect>+<driver>://<username>:<password>@<host>:<port>/<database>. MLflow supports the dialects mysql, mssql, sqlite, and postgresql. For more details, see SQLAlchemy database uri.
HTTP server (specified as https://my-server:5000), which is a server hosting an MLflow tracking server.
Databricks workspace (specified as databricks or as databricks://<profileName>, a Databricks CLI profile).
Refer to Access the MLflow tracking server from outside Databricks [AWS]
[Azure], or the quickstart to
easily get started with hosted MLflow on Databricks Community Edition.
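As an illustration, each kind of URI can be passed to mlflow.set_tracking_uri(); the paths, server address, and profile name below are placeholders, and only the last call in a program takes effect:

import mlflow

mlflow.set_tracking_uri("file:/my/local/dir")          # local file path
mlflow.set_tracking_uri("sqlite:///mlflow.db")         # SQLAlchemy-compatible database
mlflow.set_tracking_uri("https://my-server:5000")      # remote tracking server
mlflow.set_tracking_uri("databricks://<profileName>")  # Databricks CLI profile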
How Runs and Artifacts are Recorded
As mentioned above, MLflow runs can be recorded to local files, to a SQLAlchemy compatible database, or remotely to a tracking server. MLflow artifacts can be persisted to local files
and a variety of remote file storage solutions. For storing runs and artifacts, MLflow uses two components for storage: backend store and artifact store. While the backend store persists
MLflow entities (runs, parameters, metrics, tags, notes, metadata, etc), the artifact store persists artifacts
(files, models, images, in-memory objects, or model summary, etc).
The MLflow server can be configured with an artifacts HTTP proxy, passing artifact requests through the tracking server
to store and retrieve artifacts without having to interact with underlying object store services.
Usage of the proxied artifact access feature is described in Scenarios 5 and 6 below.
The MLflow client can interface with a variety of backend and artifact storage configurations.
Here are six common configuration scenarios:
Scenario 1: MLflow on localhost
Many developers run MLflow on their local machine, where both the backend and artifact store share a directory
on the local filesystem—./mlruns—as shown in the diagram. The MLflow client directly interfaces with an
instance of a FileStore and LocalArtifactRepository.
In this simple scenario, the MLflow client uses the following interfaces to record MLflow entities and artifacts:
An instance of a LocalArtifactRepository (to store artifacts)
An instance of a FileStore (to save MLflow entities)
Scenario 2: MLflow on localhost with SQLite
Many users also run MLflow on their local machines with a SQLAlchemy-compatible database: SQLite. In this case, artifacts
are stored under the local ./mlruns directory, and MLflow entities are inserted in a SQLite database file mlruns.db.
In this scenario, the MLflow client uses the following interfaces to record MLflow entities and artifacts:
An instance of a LocalArtifactRepository (to save artifacts)
An instance of an SQLAlchemyStore (to store MLflow entities to a SQLite file mlruns.db)
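A minimal sketch of this scenario: point the tracking URI at a local SQLite file before logging. The file name and the logged values are only examples:

import mlflow

# MLflow entities go to the SQLite file; artifacts still go to ./mlruns by default.
mlflow.set_tracking_uri("sqlite:///mlruns.db")

with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("rmse", 0.78)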
Scenario 3: MLflow on localhost with Tracking Server
Similar to scenario 1 but a tracking server is launched, listening for REST request calls at the default port 5000.
The arguments supplied to the mlflow server <args> dictate what backend and artifact stores are used.
The default is local FileStore. For example, if a user launched a tracking server as
mlflow server --backend-store-uri sqlite:///mydb.sqlite, then SQLite would be used for backend storage instead.
As in scenario 1, MLflow uses a local mlruns filesystem directory as a backend store and artifact store. With a tracking
server running, the MLflow client interacts with the tracking server via REST requests, as shown in the diagram.
Command to run the tracking server in this configuration:

mlflow server --backend-store-uri file:///path/to/mlruns --no-serve-artifacts
To store all runs’ MLflow entities, the MLflow client interacts with the tracking server via a series of REST requests:
Part 1a and b:
The MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities
The Tracking Server creates an instance of a FileStore to save MLflow entities and writes directly to the local mlruns directory
For the artifacts, the MLflow client interacts with the tracking server via a REST request:
Part 2a, b, and c:
The MLflow client uses RestStore to send a REST request to fetch the artifact store URI location
The Tracking Server responds with an artifact store URI location
The MLflow client creates an instance of a LocalArtifactRepository and saves artifacts to the local filesystem location specified by the artifact store URI (a subdirectory of mlruns)
Scenario 4: MLflow with remote Tracking Server, backend and artifact stores
MLflow also supports distributed architectures, where the tracking server, backend store, and artifact store
reside on remote hosts. This example scenario depicts an architecture with a remote MLflow Tracking Server,
a Postgres database for backend entity storage, and an S3 bucket for artifact storage.
Command to run the tracking server in this configuration:

mlflow server --backend-store-uri postgresql://user:password@postgres:5432/mlflowdb --default-artifact-root s3://bucket_name --host remote_host --no-serve-artifacts
To record all runs’ MLflow entities, the MLflow client interacts with the tracking server via a series of REST requests:
Part 1a and b:
The MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities
The Tracking Server creates an instance of an SQLAlchemyStore and connects to the remote host to
insert MLflow entities in the database
For artifact logging, the MLflow client interacts with the remote Tracking Server and artifact storage host:
Part 2a, b, and c:
The MLflow client uses RestStore to send a REST request to fetch the artifact store URI location from the Tracking Server
The Tracking Server responds with an artifact store URI location (an S3 storage URI in this case)
The MLflow client creates an instance of an S3ArtifactRepository, connects to the remote AWS host using the
boto client libraries, and uploads the artifacts to the S3 bucket URI location
FileStore,
RestStore,
and
SQLAlchemyStore are
concrete implementations of the abstract class
AbstractStore,
and the
LocalArtifactRepository and
S3ArtifactRepository are
concrete implementations of the abstract class
ArtifactRepository.
Scenario 5: MLflow Tracking Server enabled with proxied artifact storage access
MLflow’s Tracking Server supports utilizing the host as a proxy server for operations involving artifacts.
Once configured with the appropriate access requirements, an administrator can start the tracking server to enable
assumed-role operations involving the saving, loading, or listing of model artifacts, images, documents, and files.
This eliminates the need to allow end users to have direct path access to a remote object store (e.g., s3, adls, gcs, hdfs) for artifact handling and eliminates the
need for an end-user to provide access credentials to interact with an underlying object store.
Command to run the tracking server in this configuration:

mlflow server \
  --backend-store-uri postgresql://user:password@postgres:5432/mlflowdb \
  # Artifact access is enabled through the proxy URI 'mlflow-artifacts:/',
  # giving users access to this location without having to manage credentials
  # or permissions.
  --artifacts-destination s3://bucket_name \
  --host remote_host
Enabling the Tracking Server to perform proxied artifact access in order to route client artifact requests to an object store location:
Part 1a and b:
The MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities
The Tracking Server creates an instance of an SQLAlchemyStore and connects to the remote host for inserting
tracking information in the database (i.e., metrics, parameters, tags, etc.)
Part 1c and d:
Retrieval requests by the client return information from the configured SQLAlchemyStore table
Part 2a and b:
Logging events for artifacts are made by the client using the HttpArtifactRepository to write files to MLflow Tracking Server
The Tracking Server then writes these files to the configured object store location with assumed role authentication
Part 2c and d:
Retrieving artifacts from the configured backend store for a user request is done with the same authorized authentication that was configured at server start
Artifacts are passed to the end user through the Tracking Server through the interface of the HttpArtifactRepository
Note
When an experiment is created, the artifact storage location from the configuration of the tracking server is logged in the experiment’s metadata.
When enabling proxied artifact storage, any existing experiments that were created while operating a tracking server in
non-proxied mode will continue to use a non-proxied artifact location. In order to use proxied artifact logging, a new experiment must be created.
If the intention of enabling a tracking server in --serve-artifacts mode is to eliminate the need for a client to have authentication to
the underlying storage, new experiments should be created for use by clients so that the tracking server can handle authentication after this migration.
Warning
The MLflow artifact proxied access service enables users to have an assumed role of access to all artifacts that are accessible to the Tracking Server.
Administrators who are enabling this feature should ensure that the access level granted to the Tracking Server for artifact
operations meets all security requirements prior to enabling the Tracking Server to operate in a proxied file handling role.
Scenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access
MLflow’s Tracking Server can be used in an exclusive artifact proxied artifact handling role. Specifying the
--artifacts-only flag restricts an MLflow server instance to only serve artifact-related API requests by proxying to an underlying object store.
Note
Starting a Tracking Server with the --artifacts-only parameter will disable all Tracking Server functionality apart from API calls related to saving, loading, or listing artifacts.
Creating runs, logging metrics or parameters, and accessing other attributes about experiments are all not permitted in this mode.
Command to run the tracking server in this configuration:

mlflow server --artifacts-destination s3://bucket_name --artifacts-only --host remote_host
Running an MLflow server in --artifacts-only mode:
Part 1a and b:
The MLflow client will interact with the Tracking Server using the HttpArtifactRepository interface.
Listing artifacts associated with a run will be conducted from the Tracking Server using the access credentials set at server startup
Saving of artifacts will transmit the files to the Tracking Server which will then write the files to the file store using credentials set at server start.
Part 1c and d:
Listing of artifact responses will pass from the file store through the Tracking Server to the client
Loading of artifacts will utilize the access credentials of the MLflow Tracking Server to acquire the files which are then passed on to the client
Note
If migrating from Scenario 5 to Scenario 6 due to request volumes, it is important to perform two validations:
Ensure that the new tracking server that is operating in --artifacts-only mode has access permissions to the
location set by --artifacts-destination that the former multi-role tracking server had.
The former multi-role tracking server that was serving artifacts must have the --serve-artifacts argument disabled.
Warning
Operating the Tracking Server in proxied artifact access mode by setting the parameter --serve-artifacts during server start, even in --artifacts-only mode,
will give access to artifacts residing on the object store to any user that has authentication to access the Tracking Server. Ensure that any per-user
security posture that you are required to maintain is applied accordingly to the proxied access that the Tracking Server will have in this mode
of operation.
Logging Data to Runs
You can log data to runs using the MLflow Python, R, Java, or REST API. This section
shows the Python API.
In this section:
Logging Functions
Launching Multiple Runs in One Program
Performance Tracking with Metrics
Visualizing Metrics
Logging Functions
mlflow.set_tracking_uri() connects to a tracking URI. You can also set the
MLFLOW_TRACKING_URI environment variable to have MLflow find a URI from there. In both cases,
the URI can either be an HTTP/HTTPS URI for a remote server, a database connection string, or a
local path to log data to a directory. The URI defaults to mlruns.
mlflow.get_tracking_uri() returns the current tracking URI.
mlflow.create_experiment() creates a new experiment and returns its ID. Runs can be
launched under the experiment by passing the experiment ID to mlflow.start_run.
mlflow.set_experiment() sets an experiment as active. If the experiment does not exist,
creates a new experiment. If you do not specify an experiment in mlflow.start_run(), new
runs are launched under this experiment.
mlflow.start_run() returns the currently active run (if one exists), or starts a new run
and returns a mlflow.ActiveRun object usable as a context manager for the
current run. You do not need to call start_run explicitly: calling one of the logging functions
with no active run automatically starts a new one.
Note
If the argument run_name is not set within mlflow.start_run(), a unique run name will be generated for each run.
mlflow.end_run() ends the currently active run, if any, taking an optional run status.
mlflow.active_run() returns a mlflow.entities.Run object corresponding to the
currently active run, if any.
Note: You cannot access currently-active run attributes
(parameters, metrics, etc.) through the run returned by mlflow.active_run. In order to access
such attributes, use the MlflowClient as follows:
client = mlflow.MlflowClient()
data = client.get_run(mlflow.active_run().info.run_id).data
mlflow.last_active_run() returns a mlflow.entities.Run object corresponding to the
currently active run, if any. Otherwise, it returns a mlflow.entities.Run object corresponding
to the last run started from the current Python process that reached a terminal status (i.e. FINISHED, FAILED, or KILLED).
mlflow.log_param() logs a single key-value param in the currently active run. The key and
value are both strings. Use mlflow.log_params() to log multiple params at once.
mlflow.log_metric() logs a single key-value metric. The value must always be a number.
MLflow remembers the history of values for each metric. Use mlflow.log_metrics() to log
multiple metrics at once.
mlflow.set_tag() sets a single key-value tag in the currently active run. The key and
value are both strings. Use mlflow.set_tags() to set multiple tags at once.
mlflow.log_artifact() logs a local file or directory as an artifact, optionally taking an
artifact_path to place it in within the run’s artifact URI. Run artifacts can be organized into
directories, so you can place the artifact in a directory this way.
mlflow.log_artifacts() logs all the files in a given directory as artifacts, again taking
an optional artifact_path.
mlflow.get_artifact_uri() returns the URI that artifacts from the current run should be
logged to.
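A short sketch that exercises several of these functions together; the parameter, metric, and tag names are arbitrary examples:

import os
import mlflow

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("loss", 0.35)
    mlflow.log_metric("loss", 0.28)  # the full metric history is kept

    mlflow.set_tag("team", "data-science")

    os.makedirs("outputs", exist_ok=True)
    with open("outputs/notes.txt", "w") as f:
        f.write("run notes")
    mlflow.log_artifacts("outputs", artifact_path="notes")

    print(mlflow.get_artifact_uri())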
Launching Multiple Runs in One Program
Sometimes you want to launch multiple MLflow runs in the same program: for example, maybe you are
performing a hyperparameter search locally or your experiments are just very fast to run. This is
easy to do because the ActiveRun object returned by mlflow.start_run() is a Python
context manager. You can “scope” each run to
just one block of code as follows:
with mlflow.start_run():
    mlflow.log_param("x", 1)
    mlflow.log_metric("y", 2)
    ...
The run remains open throughout the with statement, and is automatically closed when the
statement exits, even if it exits due to an exception.
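For example, a simple local hyperparameter sweep can start one run per candidate value; the grid and the logged score below are placeholders:

import mlflow

for alpha in [0.1, 0.5, 1.0]:
    with mlflow.start_run():
        mlflow.log_param("alpha", alpha)
        # ... train a model with this alpha and compute a validation score ...
        mlflow.log_metric("val_score", 1.0 - alpha)  # placeholder value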
Performance Tracking with Metrics
You log MLflow metrics with log methods in the Tracking API. The log methods support two alternative methods for distinguishing metric values on the x-axis: timestamp and step. | {
"url": "https://mlflow.org/docs/latest/tracking.html#automatic-logging"
} |
2eb3b7e954eb-11 | timestamp is an optional long value that represents the time that the metric was logged. timestamp defaults to the current time. step is an optional integer that represents any measurement of training progress (number of training iterations, number of epochs, and so on). step defaults to 0 and has the following requirements and properties:
Must be a valid 64-bit integer value.
Can be negative.
Can be out of order in successive write calls. For example, (1, 3, 2) is a valid sequence.
Can have “gaps” in the sequence of values specified in successive write calls. For example, (1, 5, 75, -20) is a valid sequence.
If you specify both a timestamp and a step, metrics are recorded against both axes independently.
Examples
with mlflow.start_run():
    for epoch in range(0, 3):
        mlflow.log_metric(key="quality", value=2 * epoch, step=epoch)
MlflowClient client = new MlflowClient();
RunInfo run = client.createRun();
for (int epoch = 0; epoch < 3; epoch ++) {
client.logMetric(run.getRunId(), "quality", 2 * epoch, System.currentTimeMillis(), epoch);
}
Visualizing Metrics
Here is an example plot of the quick start tutorial with the step x-axis and two timestamp axes:
X-axis step
X-axis wall time - graphs the absolute time each metric was logged
X-axis relative time - graphs the time relative to the first metric logged, for each run
Automatic Logging
Automatic logging allows you to log metrics, parameters, and models without the need for explicit log statements.
There are two ways to use autologging:
Call mlflow.autolog() before your training code. This will enable autologging for each supported library you have installed as soon as you import it.
Use library-specific autolog calls for each library you use in your code. See below for examples.
The following libraries support autologging:
Scikit-learn
Keras
Gluon
XGBoost
LightGBM
Statsmodels
Spark
Fastai
Pytorch
For flavors that automatically save models as an artifact, additional files for dependency management are logged.
You can access the most recent autolog run through the mlflow.last_active_run() function. Here’s a short sklearn autolog example that makes use of this function:
import mlflow

from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

mlflow.autolog()

db = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(db.data, db.target)

# Create and train models.
rf = RandomForestRegressor(n_estimators=100, max_depth=6, max_features=3)
rf.fit(X_train, y_train)

# Use the model to make predictions on the test dataset.
predictions = rf.predict(X_test)

autolog_run = mlflow.last_active_run()
Scikit-learn
Call mlflow.sklearn.autolog() before your training code to enable automatic logging of sklearn metrics, params, and models.
See example usage here.
Autologging for estimators (e.g. LinearRegression) and meta estimators (e.g. Pipeline) creates a single run and logs:
- Metrics: training score obtained by estimator.score
- Parameters: parameters obtained by estimator.get_params
- Tags: class name; fully qualified class name
- Artifacts: fitted estimator
Autologging for parameter search estimators (e.g. GridSearchCV) creates a single parent run and nested child runs containing the following data:

Parent run:
- Metrics: training score
- Parameters: parameter search estimator’s parameters; best parameter combination
- Tags: class name; fully qualified class name
- Artifacts: fitted parameter search estimator; fitted best estimator; search results CSV file

Child runs (one per parameter combination):
- Metrics: CV test score for each parameter combination
- Parameters: each parameter combination
- Tags: class name; fully qualified class name
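A small sketch of this behavior, assuming scikit-learn is installed; the estimator and parameter grid are arbitrary:

import mlflow
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

mlflow.sklearn.autolog()

X, y = load_iris(return_X_y=True)
search = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=3)

# Fitting creates one parent run plus a nested child run per parameter combination.
search.fit(X, y)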
Keras
Call mlflow.tensorflow.autolog() before your training code to enable automatic logging of metrics and parameters. See example usages with Keras and
TensorFlow.
Note that only tensorflow>=2.3 is supported.
The respective metrics associated with tf.estimator and EarlyStopping are automatically logged.
As an example, try running the MLflow TensorFlow examples.
Autologging captures the following information:

tf.keras:
- Metrics: training loss; validation loss; user-specified metrics
- Parameters: fit() parameters; optimizer name; learning rate; epsilon
- Artifacts: model summary on training start; MLflow Model (Keras model); TensorBoard logs on training end

tf.keras.callbacks.EarlyStopping:
- Metrics: metrics from the EarlyStopping callbacks, for example stopped_epoch, restored_epoch, restore_best_weight, etc.
- Parameters: fit() parameters from EarlyStopping, for example min_delta, patience, baseline, restore_best_weights, etc.
If no active run exists when autolog() captures data, MLflow will automatically create a run to log information to.
Also, MLflow will then automatically end the run once training ends via calls to tf.keras.fit().
If a run already exists when autolog() captures data, MLflow will log to that run but not automatically end that run after training.
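A minimal sketch, assuming tensorflow>=2.3 is installed; the model and data are toy placeholders:

import mlflow
import numpy as np
import tensorflow as tf

mlflow.tensorflow.autolog()

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(64, 4)
y = np.random.rand(64, 1)

# fit() parameters, per-epoch losses, and the model are logged automatically.
model.fit(X, y, epochs=3, batch_size=16, validation_split=0.2)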
Gluon
Call mlflow.gluon.autolog() before your training code to enable automatic logging of metrics and parameters.
See example usages with Gluon .
Autologging captures the following information:

Gluon:
- Metrics: training loss; validation loss; user-specified metrics
- Parameters: number of layers; optimizer name; learning rate; epsilon
- Artifacts: MLflow Model (Gluon model) on training end
XGBoost
Call mlflow.xgboost.autolog() before your training code to enable automatic logging of metrics and parameters.
Autologging captures the following information:

XGBoost:
- Metrics: user-specified metrics
- Parameters: xgboost.train parameters
- Artifacts: MLflow Model (XGBoost model) with model signature on training end; feature importance; input example
If early stopping is activated, metrics at the best iteration will be logged as an extra step/iteration.
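A minimal sketch, assuming the xgboost package is installed; the data and training parameters are toy placeholders:

import mlflow
import numpy as np
import xgboost as xgb

mlflow.xgboost.autolog()

dtrain = xgb.DMatrix(np.random.rand(100, 5), label=np.random.randint(0, 2, 100))

with mlflow.start_run():
    # Training parameters, metrics, feature importance, and the model are logged.
    xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)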
LightGBM
Call mlflow.lightgbm.autolog() before your training code to enable automatic logging of metrics and parameters.
Autologging captures the following information:

LightGBM:
- Metrics: user-specified metrics
- Parameters: lightgbm.train parameters
- Artifacts: MLflow Model (LightGBM model) with model signature on training end; feature importance; input example
If early stopping is activated, metrics at the best iteration will be logged as an extra step/iteration.
Statsmodels
Call mlflow.statsmodels.autolog() before your training code to enable automatic logging of metrics and parameters.
Autologging captures the following information:

Statsmodels:
- Metrics: user-specified metrics
- Parameters: statsmodels.base.model.Model.fit parameters
- Artifacts: MLflow Model (statsmodels.base.wrapper.ResultsWrapper) on training end
Note
Each model subclass that overrides fit expects and logs its own parameters.
Spark
Initialize a SparkSession with the mlflow-spark JAR attached (e.g.
SparkSession.builder.config("spark.jars.packages", "org.mlflow.mlflow-spark")) and then
call mlflow.spark.autolog() to enable automatic logging of Spark datasource
information at read-time, without the need for explicit
log statements. Note that autologging of Spark ML (MLlib) models is not yet supported.
Autologging captures the following information:

Spark:
- Tags: a single tag containing the source path, version, and format; the tag contains one line per datasource
Note
Moreover, Spark datasource autologging occurs asynchronously - as such, it’s possible (though unlikely) to see race conditions when launching short-lived MLflow runs that result in datasource information not being logged.
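A minimal sketch, assuming PySpark is available; the mlflow-spark package version and the CSV path are placeholders you would substitute for your environment:

import mlflow.spark
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Substitute the mlflow-spark artifact version that matches your MLflow release.
    .config("spark.jars.packages", "org.mlflow:mlflow-spark:<version>")
    .getOrCreate()
)

mlflow.spark.autolog()

# The datasource path, version, and format are recorded as a run tag at read time.
df = spark.read.csv("data.csv", header=True)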
Fastai
Call mlflow.fastai.autolog() before your training code to enable automatic logging of metrics and parameters.
See an example usage with Fastai.
Autologging captures the following information:

fastai:
- Metrics: user-specified metrics
- Parameters: optimizer data logged as parameters (for example, epochs, lr, opt_func, etc.); parameters of the EarlyStoppingCallback and OneCycleScheduler callbacks
- Artifacts: model checkpoints logged to a ‘models’ directory; MLflow Model (fastai Learner model) on training end; model summary text
Pytorch
Call mlflow.pytorch.autolog() before your Pytorch Lightning training code to enable automatic logging of metrics, parameters, and models. See example usages here. Note
that currently, Pytorch autologging supports only models trained using Pytorch Lightning.
Autologging is triggered on calls to pytorch_lightning.trainer.Trainer.fit and captures the following information:

pytorch_lightning.trainer.Trainer:
- Metrics: training loss; validation loss; average_test_accuracy; user-defined metrics
- Parameters: fit() parameters; optimizer name; learning rate; epsilon
- Artifacts: model summary on training start; MLflow Model (Pytorch model) on training end

pytorch_lightning.callbacks.earlystopping:
- Metrics: training loss; validation loss; average_test_accuracy; user-defined metrics; metrics from the EarlyStopping callbacks, for example stopped_epoch, restored_epoch, restore_best_weight, etc.
- Parameters: fit() parameters; optimizer name; learning rate; epsilon; parameters from the EarlyStopping callbacks, for example min_delta, patience, baseline, restore_best_weights, etc.
- Artifacts: model summary on training start; MLflow Model (Pytorch model) on training end; best Pytorch model checkpoint, if training stops due to the early stopping callback
If no active run exists when autolog() captures data, MLflow will automatically create a run to log information, ending the run once
the call to pytorch_lightning.trainer.Trainer.fit() completes.
If a run already exists when autolog() captures data, MLflow will log to that run but not automatically end that run after training.
Note
Parameters not explicitly passed by users (parameters that use default values) while using pytorch_lightning.trainer.Trainer.fit() are not currently automatically logged
In case of a multi-optimizer scenario (such as usage of autoencoder), only the parameters for the first optimizer are logged
Organizing Runs in Experiments
You can create experiments using the Command-Line Interface (mlflow experiments) or the
mlflow.create_experiment() Python API. You can pass the experiment name for an individual run
using the CLI (for example, mlflow run ... --experiment-name [name]) or the MLFLOW_EXPERIMENT_NAME
environment variable; alternatively, pass the experiment ID via the --experiment-id flag or the
MLFLOW_EXPERIMENT_ID environment variable.

# Set the experiment via environment variables
export MLFLOW_EXPERIMENT_NAME=fraud-detection

mlflow experiments create --experiment-name fraud-detection

# Launch a run. The experiment is inferred from the MLFLOW_EXPERIMENT_NAME environment
# variable, or from the --experiment-name parameter passed to the MLflow CLI (the latter
# taking precedence)
with mlflow.start_run():
    mlflow.log_param("a", 1)
    mlflow.log_metric("b", 2)
Managing Experiments and Runs with the Tracking Service API
MLflow provides a more detailed Tracking Service API for managing experiments and runs directly,
which is available through the client SDK in the mlflow.client module.
This makes it possible to query data about past runs, log additional information about them, create experiments,
add tags to a run, and more.
Example
from mlflow.tracking import MlflowClient

client = MlflowClient()
experiments = client.search_experiments()  # returns a list of mlflow.entities.Experiment

# returns mlflow.entities.Run
run = client.create_run(experiments[0].experiment_id)

client.log_param(run.info.run_id, "hello", "world")
client.set_terminated(run.info.run_id)
Adding Tags to Runs
The MlflowClient.set_tag() function lets you add custom tags to runs. A tag can only have a single unique value mapped to it at a time. For example:
client.set_tag(run.info.run_id, "tag_key", "tag_value")

Important
Do not use the prefix mlflow. (e.g. mlflow.note) for a tag. This prefix is reserved for use by MLflow. See System Tags for a list of reserved tag keys.
Tracking UI
The Tracking UI lets you visualize, search and compare runs, as well as download run artifacts or
metadata for analysis in other tools. If you log runs to a local mlruns directory,
run mlflow ui in the directory above it, and it loads the corresponding runs.
Alternatively, the MLflow tracking server serves the same UI and enables remote storage of run artifacts.
In that case, you can view the UI using URL http://<ip address of your MLflow tracking server>:5000 in your browser from any
machine, including any remote machine that can connect to your tracking server.
The UI contains the following key features:
Experiment-based run listing and comparison (including run comparison across multiple experiments)
Searching for runs by parameter or metric value
Visualizing run metrics
Downloading run results
Querying Runs Programmatically
You can access all of the functions in the Tracking UI programmatically. This makes it easy to do several common tasks:
Query and compare runs using any data analysis tool of your choice, for example, pandas.
Determine the artifact URI for a run to feed some of its artifacts into a new run when executing a workflow. For an example of querying runs and constructing a multistep workflow, see the MLflow Multistep Workflow Example project.
Load artifacts from past runs as MLflow Models. For an example of training, exporting, and loading a model, and predicting using the model, see the MLflow TensorFlow example.
Run automated parameter search algorithms, where you query the metrics from various runs to submit new ones. For an example of running automated parameter search algorithms, see the MLflow Hyperparameter Tuning Example project.
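For instance, mlflow.search_runs() returns run data as a pandas DataFrame that you can filter and sort with ordinary pandas operations; the experiment, metric, and parameter names below are placeholders:

import mlflow

# One row per run, with metrics.* and params.* columns.
runs = mlflow.search_runs(experiment_names=["my-experiment"])
best = runs.sort_values("metrics.val_score", ascending=False).head(5)
print(best[["run_id", "metrics.val_score", "params.alpha"]])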
MLflow Tracking Servers
In this section:
Storage
Backend Stores
Artifact Stores
File store performance
Deletion Behavior
SQLAlchemy Options
Networking
Using the Tracking Server for proxied artifact access
Optionally using a Tracking Server instance exclusively for artifact handling
Logging to a Tracking Server
Tracking Server versioning
You run an MLflow tracking server using mlflow server. An example configuration for a server is:
mlflow server --backend-store-uri /mnt/persistent-disk --default-artifact-root s3://my-mlflow-bucket/ --host 0.0.0.0
Note
When started in --artifacts-only mode, the tracking server will not permit any operation other than saving, loading, and listing artifacts.
Storage
An MLflow tracking server has two components for storage: a backend store and an artifact store.
Backend Stores
The backend store is where MLflow Tracking Server stores experiment and run metadata as well as
params, metrics, and tags for runs. MLflow supports two types of backend stores: file store and
database-backed store.
Note
In order to use model registry functionality, you must run your server using a database-backed store.
Use --backend-store-uri to configure the type of backend store. You specify:
A file store backend as ./path_to_store or file:/path_to_store
A database-backed store as SQLAlchemy database URI.
The database URI typically takes the format <dialect>+<driver>://<username>:<password>@<host>:<port>/<database>.
MLflow supports the database dialects mysql, mssql, sqlite, and postgresql.
Drivers are optional. If you do not specify a driver, SQLAlchemy uses a dialect’s default driver.
For example, --backend-store-uri sqlite:///mlflow.db would use a local SQLite database.
Important
mlflow server will fail against a database-backed store with an out-of-date database schema.
To prevent this, upgrade your database schema to the latest supported version using
mlflow db upgrade [db_uri]. Schema migrations can result in database downtime, may
take longer on larger databases, and are not guaranteed to be transactional. You should always
take a backup of your database prior to running mlflow db upgrade - consult your database’s
documentation for instructions on taking a backup.
By default --backend-store-uri is set to the local ./mlruns directory (the same as when
running mlflow run locally), but when running a server, make sure that this points to a
persistent (that is, non-ephemeral) file system location.
Artifact Stores
In this section:
Amazon S3 and S3-compatible storage
Azure Blob Storage
Google Cloud Storage
FTP server
SFTP Server
NFS
HDFS
The artifact store is a location suitable for large data (such as an S3 bucket or shared NFS
file system) and is where clients log their artifact output (for example, models).
artifact_location is a property recorded on mlflow.entities.Experiment for
default location to store artifacts for all runs in this experiment. Additionally, artifact_uri
is a property on mlflow.entities.RunInfo to indicate location where all artifacts for
this run are stored.
The MLflow client caches artifact location information on a per-run basis.
It is therefore not recommended to alter a run’s artifact location before it has terminated.
In addition to local file paths, MLflow supports the following storage systems as artifact
stores: Amazon S3, Azure Blob Storage, Google Cloud Storage, SFTP server, and NFS.
Use --default-artifact-root (defaults to local ./mlruns directory) to configure default
location to server’s artifact store. This will be used as artifact location for newly-created
experiments that do not specify one. Once you create an experiment, --default-artifact-root
is no longer relevant to that experiment.
By default, a server is launched with the --serve-artifacts flag to enable proxied access for artifacts.
The uri mlflow-artifacts:/ replaces an otherwise explicit object store destination (e.g., “s3:/my_bucket/mlartifacts”)
for interfacing with artifacts. The client can access artifacts via HTTP requests to the MLflow Tracking Server.
This simplifies access requirements for users of the MLflow client, eliminating the need to
configure access tokens or username and password environment variables for the underlying object store when writing or retrieving artifacts.
To disable proxied access for artifacts, specify --no-serve-artifacts.
Provided an MLflow server configuration where the --default-artifact-root is s3://my-root-bucket,
the following patterns will all resolve to the configured proxied object store location of s3://my-root-bucket/mlartifacts:
https://<host>:<port>/mlartifacts
http://<host>/mlartifacts
mlflow-artifacts://<host>/mlartifacts
mlflow-artifacts://<host>:<port>/mlartifacts
mlflow-artifacts:/mlartifacts
If the host or host:port declaration is absent in client artifact requests to the MLflow server, the client API
will assume that the host is the same as the MLflow Tracking uri.
Note
If an MLflow server is running with the --artifacts-only flag, the client should interact with this server explicitly by
including either a host or host:port definition for uri location references for artifacts.
Otherwise, all artifact requests will route to the MLflow Tracking server, defeating the purpose of running a distinct artifact server.
Important
Access credentials and configuration for the artifact storage location are configured once during server initialization in the place
of having users handle access credentials for artifact-based operations. Note that all users who have access to the
Tracking Server in this mode will have access to artifacts served through this assumed role.
To allow the server and clients to access the artifact location, you should configure your cloud
provider credentials as normal. For example, for S3, you can set the AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY environment variables, use an IAM role, or configure a default
profile in ~/.aws/credentials.
See Set up AWS Credentials and Region for Development for more info.
Important
If you do not specify a --default-artifact-root or an artifact URI when creating the experiment
(for example, mlflow experiments create --artifact-location s3://<my-bucket>), the artifact root
is a path inside the file store. Typically this is not an appropriate location, as the client and
server probably refer to different physical locations (that is, the same path on different disks).
You may set an MLflow environment variable to configure the timeout for artifact uploads and downloads:
MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the timeout for artifact upload/download in seconds (Default set by individual artifact stores).
Amazon S3 and S3-compatible storage
To store artifacts in S3 (whether on Amazon S3 or on an S3-compatible alternative such as MinIO or
Digital Ocean Spaces), specify a URI of the form s3://<bucket>/<path>. MLflow obtains credentials
from your machine’s IAM role, a profile in ~/.aws/credentials, or the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY environment variables, depending on which of these are available. For more
information on how to set credentials, see Set up AWS Credentials and Region for Development.
To add S3 file upload extra arguments, set MLFLOW_S3_UPLOAD_EXTRA_ARGS to a JSON object of key/value pairs.
For example, if you want to upload to a KMS Encrypted bucket using the KMS Key 1234:
export MLFLOW_S3_UPLOAD_EXTRA_ARGS='{"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": "1234"}'
For a list of available extra args see Boto3 ExtraArgs Documentation.
To store artifacts in a custom endpoint, set the MLFLOW_S3_ENDPOINT_URL to your endpoint’s URL. For example, if you are using Digital Ocean Spaces:
export MLFLOW_S3_ENDPOINT_URL=https://<region>.digitaloceanspaces.com
If you have a MinIO server at 1.2.3.4 on port 9000:
export MLFLOW_S3_ENDPOINT_URL=http://1.2.3.4:9000
If the MinIO server is configured with a self-signed SSL certificate, or one signed by an internal-only CA certificate, you can set the MLFLOW_S3_IGNORE_TLS or AWS_CA_BUNDLE variable (not both at the same time!) to disable the certificate signature check, or to add a custom CA bundle to perform this check, respectively:

export MLFLOW_S3_IGNORE_TLS=true
#or
export AWS_CA_BUNDLE=/some/ca/bundle.pem
Additionally, if the MinIO server is configured with a non-default region, you should set the AWS_DEFAULT_REGION variable:

export AWS_DEFAULT_REGION=my_region
Warning

Do not set --default-artifact-root to $MLFLOW_S3_ENDPOINT_URL. Because MLFLOW_S3_ENDPOINT_URL is also read by the client, combining the two produces doubled artifact paths such as https://<bucketname>.s3.<region>.amazonaws.com/<key>/<bucketname>/<key> or s3://<bucketname>/<key>/<bucketname>/<key>. If this happens, unset MLFLOW_S3_ENDPOINT_URL or point --default-artifact-root at a plain s3://<bucketname>/<key> URI.
A complete list of configurable values for an S3 client is available in the boto3 documentation.
Azure Blob Storage
To store artifacts in Azure Blob Storage, specify a URI of the form
wasbs://<container>@<storage-account>.blob.core.windows.net/<path>.
MLflow expects Azure Storage access credentials in the
AZURE_STORAGE_CONNECTION_STRING, AZURE_STORAGE_ACCESS_KEY environment variables
or having your credentials configured such that the DefaultAzureCredential() class can pick them up.
The order of precedence is:
AZURE_STORAGE_CONNECTION_STRING
AZURE_STORAGE_ACCESS_KEY
DefaultAzureCredential()
You must set one of these options on both your client application and your MLflow tracking server.
Also, you must run pip install azure-storage-blob separately (on both your client and the server) to access Azure Blob Storage.
Finally, if you want to use DefaultAzureCredential, you must pip install azure-identity;
MLflow does not declare a dependency on these packages by default.
You may set an MLflow environment variable to configure the timeout for artifact uploads and downloads:
MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the timeout for artifact upload/download in seconds (Default: 600 for Azure blob).
Google Cloud Storage
To store artifacts in Google Cloud Storage, specify a URI of the form gs://<bucket>/<path>.
You should configure credentials for accessing the GCS container on the client and server as described
in the GCS documentation.
Finally, you must run pip install google-cloud-storage (on both your client and the server)
to access Google Cloud Storage; MLflow does not declare a dependency on this package by default.
You may set the following MLflow environment variables to troubleshoot GCS read-timeouts (e.g., due to slow transfer speeds):
MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the standard timeout for transfer operations in seconds (Default: 60 for GCS). Use -1 for indefinite timeout.
MLFLOW_GCS_DEFAULT_TIMEOUT - (Deprecated, please use MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT) Sets the standard timeout for transfer operations in seconds (Default: 60). Use -1 for indefinite timeout.
MLFLOW_GCS_UPLOAD_CHUNK_SIZE - Sets the standard upload chunk size for bigger files in bytes (Default: 104857600 ≙ 100MiB), must be multiple of 256 KB.
MLFLOW_GCS_DOWNLOAD_CHUNK_SIZE - Sets the standard download chunk size for bigger files in bytes (Default: 104857600 ≙ 100MiB), must be multiple of 256 KB
FTP server
To store artifacts in an FTP server, specify a URI of the form ftp://user@host/path/to/directory.
The URI may optionally include a password for logging into the server, e.g. ftp://user:pass@host/path/to/directory
SFTP Server
To store artifacts in an SFTP server, specify a URI of the form sftp://user@host/path/to/directory.
You should configure the client to be able to log in to the SFTP server without a password over SSH (e.g. public key, identity file in ssh_config, etc.).
The format sftp://user:pass@host/ is supported for logging in. However, for safety reasons this is not recommended.
When using this store, pysftp must be installed on both the server and the client. Run pip install pysftp to install the required package.
NFS
To store artifacts in an NFS mount, specify a URI as a normal file system path, e.g., /mnt/nfs.
This path must be the same on both the server and the client – you may need to use symlinks or remount
the client in order to enforce this property.
HDFS
To store artifacts in HDFS, specify a hdfs: URI. It can contain host and port: hdfs://<host>:<port>/<path> or just the path: hdfs://<path>.
There are also two ways to authenticate to HDFS:
Use current UNIX account authorization
Kerberos credentials using following environment variables:
export MLFLOW_KERBEROS_TICKET_CACHE=/tmp/krb5cc_22222222
export MLFLOW_KERBEROS_USER=user_name_to_use
Most of the cluster configuration settings are read from hdfs-site.xml, which the native HDFS driver accesses using the CLASSPATH environment variable.
The HDFS driver used is libhdfs.
File store performance
MLflow will automatically try to use LibYAML bindings if they are already installed.
However, if you notice performance issues when using the file store backend, it could mean that LibYAML is not installed on your system.
On Linux or macOS you can install it using your system package manager:
# On Ubuntu/Debian
apt-get install libyaml-cpp-dev libyaml-dev

# On macOS using Homebrew
brew install yaml-cpp libyaml

After installing LibYAML, you need to reinstall PyYAML:

# Reinstall PyYAML
pip --no-cache-dir install --force-reinstall -I pyyaml
Deletion Behavior
In order to allow MLflow Runs to be restored, Run metadata and artifacts are not automatically removed
from the backend store or artifact store when a Run is deleted. The mlflow gc CLI is provided
for permanently removing Run metadata and artifacts for deleted runs.
SQLAlchemy Options
You can inject some SQLAlchemy connection pooling options using environment variables.
MLflow Environment Variable - SQLAlchemy QueuePool Option
MLFLOW_SQLALCHEMYSTORE_POOL_SIZE - pool_size
MLFLOW_SQLALCHEMYSTORE_POOL_RECYCLE - pool_recycle
MLFLOW_SQLALCHEMYSTORE_MAX_OVERFLOW - max_overflow
Networking
The --host option exposes the service on all interfaces. If running a server in production, we
would recommend not exposing the built-in server broadly (as it is unauthenticated and unencrypted),
and instead putting it behind a reverse proxy like NGINX or Apache httpd, or connecting over VPN.
You can then pass authentication headers to MLflow using these environment variables.
Additionally, you should ensure that the --backend-store-uri (which defaults to the
./mlruns directory) points to a persistent (non-ephemeral) disk or database connection.
Using the Tracking Server for proxied artifact access
To use an instance of the MLflow Tracking server for artifact operations (Scenario 5: MLflow Tracking Server enabled with proxied artifact storage access),
start a server with the optional parameter --serve-artifacts to enable proxied artifact access, and set the path to record artifacts to by providing a value
for the --artifacts-destination argument. In this mode, the tracking server streams any artifacts that a client logs through an assumed (server-side) identity,
eliminating the need for access credentials to be handled by end users.
Note
Authentication access to the value set by --artifacts-destination must be configured when starting the tracking
server, if required.
To start the MLflow server with proxy artifact access enabled to an HDFS location (as an example):
export
HADOOP_USER_NAME
=mlflowserverauth
mlflow
server
-host
0.0.0.0
-port
8885
-artifacts-destination
hdfs://myhost:8887/mlprojects/models
Optionally using a Tracking Server instance exclusively for artifact handling
If the volume of tracking server requests is sufficiently large and performance issues are noticed, a tracking server
can be configured to serve in --artifacts-only mode ( Scenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access ), operating in tandem with an instance that
operates with --no-serve-artifacts specified. This configuration ensures that the processing of artifacts is isolated
from all other tracking server event handling.
When a tracking server is configured in --artifacts-only mode, any tasks apart from those concerned with artifact
handling (i.e., model logging, loading models, logging artifacts, listing artifacts, etc.) will return an HTTPError.
See the following example of a client REST call in Python attempting to list experiments from a server that is configured in
--artifacts-only mode:
import requests

response = requests.get("http://0.0.0.0:8885/api/2.0/mlflow/experiments/list")
Output
>> HTTPError: Endpoint: /api/2.0/mlflow/experiments/list disabled due to the mlflow server running in `--artifacts-only` mode.
Using an additional MLflow server to handle artifacts exclusively can be useful for large-scale MLOps infrastructure.
Decoupling the longer running and more compute-intensive tasks of artifact handling from the faster and higher-volume
metadata functionality of the other Tracking API requests can help minimize the burden of an otherwise single MLflow
server handling both types of payloads.
Logging to a Tracking Server
To log to a tracking server, set the MLFLOW_TRACKING_URI environment variable to the server’s URI,
along with its scheme and port (for example, http://10.0.0.1:5000) or call mlflow.set_tracking_uri().
The mlflow.start_run(), mlflow.log_param(), and mlflow.log_metric() calls
then make API requests to your remote tracking server.
import mlflow

remote_server_uri = "..."  # set to your server URI
mlflow.set_tracking_uri(remote_server_uri)
# Note: on Databricks, the experiment name passed to set_experiment must be a
# valid path in the workspace
mlflow.set_experiment("/my-experiment")
with mlflow.start_run():
    mlflow.log_param("a", 1)
    mlflow.log_metric("b", 2)

library(mlflow)
install_mlflow()
remote_server_uri = "..."  # set to your server URI
mlflow_set_tracking_uri(remote_server_uri)
# Note: on Databricks, the experiment name passed to mlflow_set_experiment must be a
# valid path in the workspace
mlflow_set_experiment("/my-experiment")
mlflow_log_param("a", "1")
In addition to the MLFLOW_TRACKING_URI environment variable, the following environment variables
allow passing HTTP authentication to the tracking server:
MLFLOW_TRACKING_USERNAME and MLFLOW_TRACKING_PASSWORD - username and password to use with HTTP
Basic authentication. To use Basic authentication, you must set both environment variables.
MLFLOW_TRACKING_TOKEN - token to use with HTTP Bearer authentication. Basic authentication takes precedence if set.
MLFLOW_TRACKING_INSECURE_TLS - If set to the literal true, MLflow does not verify the TLS connection,
meaning it does not validate certificates or hostnames for https:// tracking URIs. This flag is not recommended for
production environments. If this is set to true then MLFLOW_TRACKING_SERVER_CERT_PATH must not be set.
MLFLOW_TRACKING_SERVER_CERT_PATH - Path to a CA bundle to use. Sets the verify param of the
requests.request function
(see requests main interface).
When you use a self-signed server certificate, you can use this to verify it on the client side.
If this is set, MLFLOW_TRACKING_INSECURE_TLS must not be set (i.e., it must be false).
MLFLOW_TRACKING_CLIENT_CERT_PATH - Path to ssl client cert file (.pem). Sets the cert param
of the requests.request function
(see requests main interface).
This can be used to use a (self-signed) client certificate.
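For example, a client can be pointed at a remote tracking server that requires token authentication entirely from Python. This is a minimal sketch; the server URI, token value, and metric name are placeholders, and MLFLOW_TRACKING_TOKEN is the Bearer-token variable documented above.

import os
import mlflow

os.environ["MLFLOW_TRACKING_TOKEN"] = "my-api-token"  # placeholder token
mlflow.set_tracking_uri("https://mlflow.example.com:5000")  # placeholder server URI

with mlflow.start_run():
    mlflow.log_metric("accuracy", 0.91)  # requests to the server carry the token for HTTP Bearer authentication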
Note
If the MLflow server is not configured with the --serve-artifacts option, the client directly pushes artifacts
to the artifact store. It does not proxy these through the tracking server by default.
For this reason, the client needs direct access to the artifact store. For instructions on setting up these credentials,
see Artifact Stores.
Tracking Server versioning
The version of MLflow running on the server can be found by querying the /version endpoint.
This can be used to check that the client-side version of MLflow is up-to-date with a remote tracking server prior to running experiments.
For example:
import requests
import mlflow

response = requests.get("http://<mlflow-host>:<mlflow-port>/version")
assert response.text == mlflow.__version__  # Checking for a strict version match
System Tags
You can annotate runs with arbitrary tags. Tag keys that start with mlflow. are reserved for
internal use. The following tags are set automatically by MLflow, when appropriate:
Key - Description
mlflow.note.content - A descriptive note about this run. This reserved tag is not set automatically and can be overridden by the user to include additional information about the run. The content is displayed on the run’s page under the Notes section.
mlflow.parentRunId - The ID of the parent run, if this is a nested run.
mlflow.user - Identifier of the user who created the run.
mlflow.source.type - Source type. Possible values: "NOTEBOOK", "JOB", "PROJECT", "LOCAL", and "UNKNOWN"
mlflow.source.name - Source identifier (e.g., GitHub URL, local Python filename, name of notebook)
mlflow.source.git.commit - Commit hash of the executed code, if in a git repository.
mlflow.source.git.branch - Name of the branch of the executed code, if in a git repository.
mlflow.source.git.repoURL - URL that the executed code was cloned from.
mlflow.project.env - The runtime context used by the MLflow project. Possible values: "docker" and "conda".
mlflow.project.entryPoint - Name of the project entry point associated with the current run, if any.
mlflow.docker.image.name - Name of the Docker image used to execute this run.
mlflow.docker.image.id - ID of the Docker image used to execute this run.
mlflow.log-model.history - Model metadata collected by log-model calls. Includes the serialized form of the MLModel model files logged to a run, although the exact format and information captured is subject to change.
"url": "https://mlflow.org/docs/latest/tracking.html#automatic-logging"
} |
72da79a93de6-0 | MLflow Tracking
The MLflow Tracking component is an API and UI for logging parameters, code versions, metrics, and output files
when running your machine learning code and for later visualizing the results.
MLflow Tracking lets you log and query experiments using the Python, REST, R, and Java APIs.
Table of Contents
Concepts
Where Runs Are Recorded
How Runs and Artifacts are Recorded
Scenario 1: MLflow on localhost
Scenario 2: MLflow on localhost with SQLite
Scenario 3: MLflow on localhost with Tracking Server
Scenario 4: MLflow with remote Tracking Server, backend and artifact stores
Scenario 5: MLflow Tracking Server enabled with proxied artifact storage access
Scenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access
Logging Data to Runs
Logging Functions
Launching Multiple Runs in One Program
Performance Tracking with Metrics
Visualizing Metrics
Automatic Logging
Scikit-learn
Keras
Gluon
XGBoost
LightGBM
Statsmodels
Spark
Fastai
Pytorch
Organizing Runs in Experiments
Managing Experiments and Runs with the Tracking Service API
Tracking UI
Querying Runs Programmatically
MLflow Tracking Servers
Storage
Networking
Using the Tracking Server for proxied artifact access
Logging to a Tracking Server
System Tags
Concepts
MLflow Tracking is organized around the concept of runs, which are executions of some piece of
data science code. Each run records the following information:
Git commit hash used for the run, if it was run from an MLflow Project.
Start and end time of the run
Name of the file to launch the run, or the project name and entry point for the run
if run from an MLflow Project.
Key-value input parameters of your choice. Both keys and values are strings.
Key-value metrics, where the value is numeric. Each metric can be updated throughout the
course of the run (for example, to track how your model’s loss function is converging), and
MLflow records and lets you visualize the metric’s full history.
Output files in any format. For example, you can record images (for example, PNGs), models
(for example, a pickled scikit-learn model), and data files (for example, a
Parquet file) as artifacts.
You can record runs using MLflow Python, R, Java, and REST APIs from anywhere you run your code. For
example, you can record them in a standalone program, on a remote cloud machine, or in an
interactive notebook. If you record runs in an MLflow Project, MLflow
remembers the project URI and source version.
You can optionally organize runs into experiments, which group together runs for a
specific task. You can create an experiment using the mlflow experiments CLI, with
mlflow.create_experiment(), or using the corresponding REST parameters. The MLflow API and
UI let you create and search for experiments.
Once your runs have been recorded, you can query them using the Tracking UI or the MLflow
API.
Where Runs Are Recorded
MLflow runs can be recorded to local files, to a SQLAlchemy compatible database, or remotely
to a tracking server. By default, the MLflow Python API logs runs locally to files in an mlruns directory wherever you
ran your program. You can then run mlflow ui to see the logged runs.
To log runs remotely, set the MLFLOW_TRACKING_URI environment variable to a tracking server’s URI or
call mlflow.set_tracking_uri().
There are different kinds of remote tracking URIs:
Local file path (specified as file:/my/local/dir), where data is just directly stored locally.
Database encoded as <dialect>+<driver>://<username>:<password>@<host>:<port>/<database>. MLflow supports the dialects mysql, mssql, sqlite, and postgresql. For more details, see SQLAlchemy database uri.
HTTP server (specified as https://my-server:5000), which is a server hosting an MLflow tracking server.
Databricks workspace (specified as databricks or as databricks://<profileName>, a Databricks CLI profile).
Refer to Access the MLflow tracking server from outside Databricks [AWS]
[Azure], or the quickstart to
easily get started with hosted MLflow on Databricks Community Edition.
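As a quick illustration of these URI forms, the sketch below points the Python client at each kind of store in turn; the paths, database file, server host, and profile name are placeholders.

import mlflow

mlflow.set_tracking_uri("file:/my/local/dir")        # local file path
mlflow.set_tracking_uri("sqlite:///my.db")           # SQLAlchemy-compatible database
mlflow.set_tracking_uri("https://my-server:5000")    # remote tracking server
mlflow.set_tracking_uri("databricks://my-profile")   # Databricks workspace via a CLI profile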
How Runs and Artifacts are Recorded
As mentioned above, MLflow runs can be recorded to local files, to a SQLAlchemy compatible database, or remotely to a tracking server. MLflow artifacts can be persisted to local files
and a variety of remote file storage solutions. For storing runs and artifacts, MLflow uses two components for storage: backend store and artifact store. While the backend store persists
MLflow entities (runs, parameters, metrics, tags, notes, metadata, etc), the artifact store persists artifacts
(files, models, images, in-memory objects, or model summary, etc).
The MLflow server can be configured with an artifacts HTTP proxy, passing artifact requests through the tracking server
to store and retrieve artifacts without having to interact with underlying object store services.
Usage of the proxied artifact access feature is described in Scenarios 5 and 6 below.
The MLflow client can interface with a variety of backend and artifact storage configurations.
Here are six common configuration scenarios:
Scenario 1: MLflow on localhost
Many developers run MLflow on their local machine, where both the backend and artifact store share a directory
on the local filesystem—./mlruns—as shown in the diagram. The MLflow client directly interfaces with an
instance of a FileStore and LocalArtifactRepository.
In this simple scenario, the MLflow client uses the following interfaces to record MLflow entities and artifacts:
An instance of a LocalArtifactRepository (to store artifacts)
An instance of a FileStore (to save MLflow entities)
Scenario 2: MLflow on localhost with SQLite
Many users also run MLflow on their local machines with a SQLAlchemy-compatible database: SQLite. In this case, artifacts
are stored under the local ./mlruns directory, and MLflow entities are inserted in a SQLite database file mlruns.db.
In this scenario, the MLflow client uses the following interfaces to record MLflow entities and artifacts:
An instance of a LocalArtifactRepository (to save artifacts)
An instance of an SQLAlchemyStore (to store MLflow entities to a SQLite file mlruns.db)
Scenario 3: MLflow on localhost with Tracking Server
Similar to scenario 1 but a tracking server is launched, listening for REST request calls at the default port 5000.
The arguments supplied to the mlflow server <args> dictate what backend and artifact stores are used.
The default is local FileStore. For example, if a user launched a tracking server as
mlflow server --backend-store-uri sqlite:///mydb.sqlite, then SQLite would be used for backend storage instead.
As in scenario 1, MLflow uses a local mlruns filesystem directory as a backend store and artifact store. With a tracking
server running, the MLflow client interacts with the tracking server via REST requests, as shown in the diagram.
Command to run the tracking server in this configuration
mlflow server --backend-store-uri file:///path/to/mlruns --no-serve-artifacts
To store all runs’ MLflow entities, the MLflow client interacts with the tracking server via a series of REST requests:
Part 1a and b:
The MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities
The Tracking Server creates an instance of a FileStore to save MLflow entities and writes directly to the local mlruns directory
For the artifacts, the MLflow client interacts with the tracking server via a REST request:
Part 2a, b, and c:
The MLflow client uses RestStore to send a REST request to fetch the artifact store URI location
The Tracking Server responds with an artifact store URI location
The MLflow client creates an instance of a LocalArtifactRepository and saves artifacts to the local filesystem location specified by the artifact store URI (a subdirectory of mlruns)
Scenario 4: MLflow with remote Tracking Server, backend and artifact stores
MLflow also supports distributed architectures, where the tracking server, backend store, and artifact store
reside on remote hosts. This example scenario depicts an architecture with a remote MLflow Tracking Server,
a Postgres database for backend entity storage, and an S3 bucket for artifact storage.
Command to run the tracking server in this configuration
mlflow server --backend-store-uri postgresql://user:password@postgres:5432/mlflowdb --default-artifact-root s3://bucket_name --host remote_host --no-serve-artifacts
To record all runs’ MLflow entities, the MLflow client interacts with the tracking server via a series of REST requests:
Part 1a and b:
The MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities
The Tracking Server creates an instance of an SQLAlchemyStore and connects to the remote host to
insert MLflow entities in the database
For artifact logging, the MLflow client interacts with the remote Tracking Server and artifact storage host:
Part 2a, b, and c:
The MLflow client uses RestStore to send a REST request to fetch the artifact store URI location from the Tracking Server
The Tracking Server responds with an artifact store URI location (an S3 storage URI in this case)
The MLflow client creates an instance of an S3ArtifactRepository, connects to the remote AWS host using the
boto client libraries, and uploads the artifacts to the S3 bucket URI location
FileStore, RestStore, and SQLAlchemyStore are concrete implementations of the abstract class AbstractStore, and LocalArtifactRepository and S3ArtifactRepository are concrete implementations of the abstract class ArtifactRepository.
Scenario 5: MLflow Tracking Server enabled with proxied artifact storage access
MLflow’s Tracking Server supports utilizing the host as a proxy server for operations involving artifacts.
Once configured with the appropriate access requirements, an administrator can start the tracking server to enable
assumed-role operations involving the saving, loading, or listing of model artifacts, images, documents, and files.
This eliminates the need to allow end users to have direct path access to a remote object store (e.g., s3, adls, gcs, hdfs) for artifact handling and eliminates the
need for an end-user to provide access credentials to interact with an underlying object store.
Command to run the tracking server in this configuration
# Artifact access is enabled through the proxy URI 'mlflow-artifacts:/',
# giving users access to this location without having to manage credentials
# or permissions.
mlflow server --backend-store-uri postgresql://user:password@postgres:5432/mlflowdb --artifacts-destination s3://bucket_name --host remote_host
Enabling the Tracking Server to perform proxied artifact access in order to route client artifact requests to an object store location:
Part 1a and b:
The MLflow client creates an instance of a RestStore and sends REST API requests to log MLflow entities
The Tracking Server creates an instance of an SQLAlchemyStore and connects to the remote host for inserting
tracking information in the database (i.e., metrics, parameters, tags, etc.)
Part 1c and d:
Retrieval requests by the client return information from the configured SQLAlchemyStore table
Part 2a and b:
Logging events for artifacts are made by the client using the HttpArtifactRepository to write files to MLflow Tracking Server
The Tracking Server then writes these files to the configured object store location with assumed role authentication
Part 2c and d:
Retrieving artifacts from the configured backend store for a user request is done with the same authorized authentication that was configured at server start
Artifacts are passed to the end user through the Tracking Server through the interface of the HttpArtifactRepository
Note
When an experiment is created, the artifact storage location from the configuration of the tracking server is logged in the experiment’s metadata.
When enabling proxied artifact storage, any existing experiments that were created while operating a tracking server in
non-proxied mode will continue to use a non-proxied artifact location. In order to use proxied artifact logging, a new experiment must be created.
If the intention of enabling a tracking server in --serve-artifacts mode is to eliminate the need for a client to have authentication to
the underlying storage, new experiments should be created for use by clients so that the tracking server can handle authentication after this migration.
Warning
The MLflow artifact proxied access service enables users to have an assumed role of access to all artifacts that are accessible to the Tracking Server.
Administrators who are enabling this feature should ensure that the access level granted to the Tracking Server for artifact
operations meets all security requirements prior to enabling the Tracking Server to operate in a proxied file handling role.
Scenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access
MLflow’s Tracking Server can be used in an exclusive artifact proxied artifact handling role. Specifying the
--artifacts-only flag restricts an MLflow server instance to only serve artifact-related API requests by proxying to an underlying object store.
Note
Starting a Tracking Server with the --artifacts-only parameter will disable all Tracking Server functionality apart from API calls related to saving, loading, or listing artifacts.
Creating runs, logging metrics or parameters, and accessing other attributes about experiments are all not permitted in this mode.
Command to run the tracking server in this configuration
mlflow server --artifacts-destination s3://bucket_name --artifacts-only --host remote_host
Running an MLflow server in --artifacts-only mode:
Part 1a and b:
The MLflow client will interact with the Tracking Server using the HttpArtifactRepository interface.
Listing artifacts associated with a run will be conducted from the Tracking Server using the access credentials set at server startup
Saving of artifacts will transmit the files to the Tracking Server which will then write the files to the file store using credentials set at server start.
Part 1c and d:
Listing of artifact responses will pass from the file store through the Tracking Server to the client
Loading of artifacts will utilize the access credentials of the MLflow Tracking Server to acquire the files which are then passed on to the client
Note
If migrating from Scenario 5 to Scenario 6 due to request volumes, it is important to perform two validations:
Ensure that the new tracking server that is operating in --artifacts-only mode has access permissions to the
location set by --artifacts-destination that the former multi-role tracking server had.
The former multi-role tracking server that was serving artifacts must have the --serve-artifacts argument disabled.
Warning
Operating the Tracking Server in proxied artifact access mode by setting the parameter --serve-artifacts during server start, even in --artifacts-only mode,
will give access to artifacts residing on the object store to any user that has authentication to access the Tracking Server. Ensure that any per-user
security posture that you are required to maintain is applied accordingly to the proxied access that the Tracking Server will have in this mode
of operation.
Logging Data to Runs
You can log data to runs using the MLflow Python, R, Java, or REST API. This section
shows the Python API.
In this section:
Logging Functions
Launching Multiple Runs in One Program
Performance Tracking with Metrics
Visualizing Metrics
Logging Functions
mlflow.set_tracking_uri() connects to a tracking URI. You can also set the
MLFLOW_TRACKING_URI environment variable to have MLflow find a URI from there. In both cases,
the URI can either be an HTTP/HTTPS URI for a remote server, a database connection string, or a
local path to log data to a directory. The URI defaults to mlruns.
mlflow.get_tracking_uri() returns the current tracking URI.
mlflow.create_experiment() creates a new experiment and returns its ID. Runs can be
launched under the experiment by passing the experiment ID to mlflow.start_run.
mlflow.set_experiment() sets an experiment as active. If the experiment does not exist,
creates a new experiment. If you do not specify an experiment in mlflow.start_run(), new
runs are launched under this experiment.
mlflow.start_run() returns the currently active run (if one exists), or starts a new run
and returns a mlflow.ActiveRun object usable as a context manager for the
current run. You do not need to call start_run explicitly: calling one of the logging functions
with no active run automatically starts a new one.
Note
If the argument run_name is not set within mlflow.start_run(), a unique run name will be generated for each run.
mlflow.end_run() ends the currently active run, if any, taking an optional run status.
mlflow.active_run() returns a mlflow.entities.Run object corresponding to the
currently active run, if any.
Note: You cannot access currently-active run attributes
(parameters, metrics, etc.) through the run returned by mlflow.active_run. In order to access
such attributes, use the MlflowClient as follows:
client = mlflow.MlflowClient()
data = client.get_run(mlflow.active_run().info.run_id).data
mlflow.last_active_run() returns a mlflow.entities.Run object corresponding to the
currently active run, if any. Otherwise, it returns a mlflow.entities.Run object corresponding
the last run started from the current Python process that reached a terminal status (i.e. FINISHED, FAILED, or KILLED).
mlflow.log_param() logs a single key-value param in the currently active run. The key and
value are both strings. Use mlflow.log_params() to log multiple params at once.
mlflow.log_metric() logs a single key-value metric. The value must always be a number.
MLflow remembers the history of values for each metric. Use mlflow.log_metrics() to log
multiple metrics at once.
mlflow.set_tag() sets a single key-value tag in the currently active run. The key and
value are both strings. Use mlflow.set_tags() to set multiple tags at once.
mlflow.log_artifact() logs a local file or directory as an artifact, optionally taking an
artifact_path to place it in within the run’s artifact URI. Run artifacts can be organized into
directories, so you can place the artifact in a directory this way.
mlflow.log_artifacts() logs all the files in a given directory as artifacts, again taking
an optional artifact_path.
mlflow.get_artifact_uri() returns the URI that artifacts from the current run should be
logged to.
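A small sketch tying these artifact functions together; the file and directory names are illustrative.

import os
import mlflow

with mlflow.start_run():
    os.makedirs("outputs", exist_ok=True)
    with open("outputs/summary.txt", "w") as f:
        f.write("training summary")
    # Place a single file under the run's artifact URI, inside a "reports" directory
    mlflow.log_artifact("outputs/summary.txt", artifact_path="reports")
    # Log every file in the directory
    mlflow.log_artifacts("outputs")
    print(mlflow.get_artifact_uri())  # where artifacts for this run are stored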
Launching Multiple Runs in One Program
Sometimes you want to launch multiple MLflow runs in the same program: for example, maybe you are
performing a hyperparameter search locally or your experiments are just very fast to run. This is
easy to do because the ActiveRun object returned by mlflow.start_run() is a Python
context manager. You can “scope” each run to
just one block of code as follows:
with mlflow.start_run():
    mlflow.log_param("x", 1)
    mlflow.log_metric("y", 2)
    ...
The run remains open throughout the with statement, and is automatically closed when the
statement exits, even if it exits due to an exception.
Performance Tracking with Metrics
You log MLflow metrics with log methods in the Tracking API. The log methods support two alternative methods for distinguishing metric values on the x-axis: timestamp and step.
timestamp is an optional long value that represents the time that the metric was logged. timestamp defaults to the current time. step is an optional integer that represents any measurement of training progress (number of training iterations, number of epochs, and so on). step defaults to 0 and has the following requirements and properties:
Must be a valid 64-bit integer value.
Can be negative.
Can be out of order in successive write calls. For example, (1, 3, 2) is a valid sequence.
Can have “gaps” in the sequence of values specified in successive write calls. For example, (1, 5, 75, -20) is a valid sequence.
If you specify both a timestamp and a step, metrics are recorded against both axes independently.
Examples
with mlflow.start_run():
    for epoch in range(0, 3):
        mlflow.log_metric(key="quality", value=2 * epoch, step=epoch)
MlflowClient client = new MlflowClient();
RunInfo run = client.createRun();
for (int epoch = 0; epoch < 3; epoch ++) {
client.logMetric(run.getRunId(), "quality", 2 * epoch, System.currentTimeMillis(), epoch);
}
Visualizing Metrics
Here is an example plot of the quick start tutorial with the step x-axis and two timestamp axes:
X-axis step
X-axis wall time - graphs the absolute time each metric was logged
X-axis relative time - graphs the time relative to the first metric logged, for each run
Automatic Logging
Automatic logging allows you to log metrics, parameters, and models without the need for explicit log statements.
There are two ways to use autologging:
Call mlflow.autolog() before your training code. This will enable autologging for each supported library you have installed as soon as you import it.
Use library-specific autolog calls for each library you use in your code. See below for examples.
The following libraries support autologging:
Scikit-learn
Keras
Gluon
XGBoost
LightGBM
Statsmodels
Spark
Fastai
Pytorch
For flavors that automatically save models as an artifact, additional files for dependency management are logged.
You can access the most recent autolog run through the mlflow.last_active_run() function. Here’s a short sklearn autolog example that makes use of this function:
import mlflow
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

mlflow.autolog()

db = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(db.data, db.target)

# Create and train models.
rf = RandomForestRegressor(n_estimators=100, max_depth=6, max_features=3)
rf.fit(X_train, y_train)

# Use the model to make predictions on the test dataset.
predictions = rf.predict(X_test)

autolog_run = mlflow.last_active_run()
Scikit-learn
Call mlflow.sklearn.autolog() before your training code to enable automatic logging of sklearn metrics, params, and models.
See example usage here.
Autologging for estimators (e.g. LinearRegression) and meta estimators (e.g. Pipeline) creates a single run and logs:
Metrics: Training score obtained by estimator.score
Parameters: Parameters obtained by estimator.get_params
Tags: Class name; Fully qualified class name
Artifacts: Fitted estimator
Autologging for parameter search estimators (e.g. GridSearchCV) creates a single parent run and nested child runs containing the following data:
Parent run:
Metrics: Training score
Parameters: Parameter search estimator’s parameters; Best parameter combination
Tags: Class name; Fully qualified class name
Artifacts: Fitted parameter search estimator; Fitted best estimator; Search results csv file
Child runs (one per parameter combination):
Metrics: CV test score for each parameter combination
Parameters: Each parameter combination
Tags: Class name; Fully qualified class name
Keras
Call mlflow.tensorflow.autolog() before your training code to enable automatic logging of metrics and parameters. See example usages with Keras and
TensorFlow.
Note that only tensorflow>=2.3 is supported.
The respective metrics associated with tf.estimator and EarlyStopping are automatically logged.
As an example, try running the MLflow TensorFlow examples.
Autologging captures the following information:
Framework/module: tf.keras
Metrics: Training loss; validation loss; user-specified metrics
Parameters: fit() parameters; optimizer name; learning rate; epsilon
Artifacts: Model summary on training start; MLflow Model (Keras model); TensorBoard logs on training end
Framework/module: tf.keras.callbacks.EarlyStopping
Metrics: Metrics from the EarlyStopping callbacks. For example, stopped_epoch, restored_epoch, restore_best_weight, etc
Parameters: fit() parameters from EarlyStopping. For example, min_delta, patience, baseline, restore_best_weights, etc
If no active run exists when autolog() captures data, MLflow will automatically create a run to log information to.
Also, MLflow will then automatically end the run once training ends via calls to tf.keras.fit().
If a run already exists when autolog() captures data, MLflow will log to that run but not automatically end that run after training.
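A minimal sketch of the tf.keras flow described above, assuming tensorflow>=2.3 is installed; the toy data and model are placeholders.

import mlflow
import numpy as np
import tensorflow as tf

mlflow.tensorflow.autolog()

X = np.random.rand(100, 4)
y = np.random.rand(100, 1)

model = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"), tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

# fit() parameters, loss curves, and the resulting MLflow Model are captured automatically
model.fit(X, y, epochs=3, batch_size=16, validation_split=0.2)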
Gluon
Call mlflow.gluon.autolog() before your training code to enable automatic logging of metrics and parameters.
See example usages with Gluon .
Autologging captures the following information:
Framework: Gluon
Metrics: Training loss; validation loss; user-specified metrics
Parameters: Number of layers; optimizer name; learning rate; epsilon
Artifacts: MLflow Model (Gluon model) on training end
XGBoost
Call mlflow.xgboost.autolog() before your training code to enable automatic logging of metrics and parameters.
Autologging captures the following information:
Framework: XGBoost
Metrics: user-specified metrics
Parameters: xgboost.train parameters
Artifacts: MLflow Model (XGBoost model) with model signature on training end; feature importance; input example
If early stopping is activated, metrics at the best iteration will be logged as an extra step/iteration.
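A minimal sketch of this flow; the synthetic data and parameter values are placeholders.

import mlflow
import numpy as np
import xgboost as xgb

mlflow.xgboost.autolog()

X = np.random.rand(200, 5)
y = np.random.randint(0, 2, size=200)
dtrain = xgb.DMatrix(X, label=y)

with mlflow.start_run():
    # xgboost.train parameters, metrics, and the fitted model are logged automatically
    xgb.train({"objective": "binary:logistic", "max_depth": 3}, dtrain, num_boost_round=10)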
LightGBM
Call mlflow.lightgbm.autolog() before your training code to enable automatic logging of metrics and parameters.
Autologging captures the following information:
Framework: LightGBM
Metrics: user-specified metrics
Parameters: lightgbm.train parameters
Artifacts: MLflow Model (LightGBM model) with model signature on training end; feature importance; input example
If early stopping is activated, metrics at the best iteration will be logged as an extra step/iteration.
Statsmodels
Call mlflow.statsmodels.autolog() before your training code to enable automatic logging of metrics and parameters.
Autologging captures the following information:
Framework: Statsmodels
Metrics: user-specified metrics
Parameters: statsmodels.base.model.Model.fit parameters
Artifacts: MLflow Model (statsmodels.base.wrapper.ResultsWrapper) on training end
Note
Each model subclass that overrides fit expects and logs its own parameters.
Spark
Initialize a SparkSession with the mlflow-spark JAR attached (e.g.
SparkSession.builder.config("spark.jars.packages", "org.mlflow.mlflow-spark")) and then
call mlflow.spark.autolog() to enable automatic logging of Spark datasource
information at read-time, without the need for explicit
log statements. Note that autologging of Spark ML (MLlib) models is not yet supported.
Autologging captures the following information:
Framework: Spark
Tags: Single tag containing source path, version, format. The tag contains one line per datasource
Note
Moreover, Spark datasource autologging occurs asynchronously - as such, it’s possible (though unlikely) to see race conditions when launching short-lived MLflow runs that result in datasource information not being logged.
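A minimal sketch of the setup described above; the Maven coordinates and file path are placeholders, so check the mlflow-spark documentation for the artifact version matching your MLflow installation.

import mlflow
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.jars.packages", "org.mlflow:mlflow-spark:2.3.0")  # hypothetical version
    .getOrCreate()
)
mlflow.spark.autolog()

with mlflow.start_run():
    # Reading a datasource records its path, format, and version as a tag on the run
    df = spark.read.format("csv").load("/tmp/example.csv")  # hypothetical path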
Fastai
Call mlflow.fastai.autolog() before your training code to enable automatic logging of metrics and parameters.
See an example usage with Fastai.
Autologging captures the following information:
Framework: fastai
Metrics: user-specified metrics
Parameters: Logs optimizer data as parameters. For example, epochs, lr, opt_func, etc; logs the parameters of the EarlyStoppingCallback and OneCycleScheduler callbacks
Artifacts: Model checkpoints are logged to a ‘models’ directory; MLflow Model (fastai Learner model) on training end; model summary text is logged
Pytorch
Call mlflow.pytorch.autolog() before your Pytorch Lightning training code to enable automatic logging of metrics, parameters, and models. See example usages here. Note
that currently, Pytorch autologging supports only models trained using Pytorch Lightning.
Autologging is triggered on calls to pytorch_lightning.trainer.Trainer.fit and captures the following information:
Framework/module: pytorch_lightning.trainer.Trainer
Metrics: Training loss; validation loss; average_test_accuracy; user-defined metrics
Parameters: fit() parameters; optimizer name; learning rate; epsilon
Artifacts: Model summary on training start; MLflow Model (Pytorch model) on training end
Framework/module: pytorch_lightning.callbacks.earlystopping
Metrics: Training loss; validation loss; average_test_accuracy; user-defined metrics; metrics from the EarlyStopping callbacks (for example, stopped_epoch, restored_epoch, restore_best_weight, etc.)
Parameters: fit() parameters; optimizer name; learning rate; epsilon; parameters from the EarlyStopping callbacks (for example, min_delta, patience, baseline, restore_best_weights, etc.)
Artifacts: Model summary on training start; MLflow Model (Pytorch model) on training end; best Pytorch model checkpoint, if training stops due to early stopping callback
If no active run exists when autolog() captures data, MLflow will automatically create a run to log information, ending the run once
the call to pytorch_lightning.trainer.Trainer.fit() completes.
If a run already exists when autolog() captures data, MLflow will log to that run but not automatically end that run after training.
Note
Parameters not explicitly passed by users (parameters that use default values) while using pytorch_lightning.trainer.Trainer.fit() are not currently automatically logged
In case of a multi-optimizer scenario (such as usage of autoencoder), only the parameters for the first optimizer are logged
Organizing Runs in Experiments
You can create experiments using the Command-Line Interface (mlflow experiments) or the mlflow.create_experiment() Python API. You can pass the experiment name for an individual run
using the CLI (for example, mlflow run ... --experiment-name [name]) or the MLFLOW_EXPERIMENT_NAME environment variable. Alternatively, you can use the experiment ID instead, via the
--experiment-id CLI flag or the MLFLOW_EXPERIMENT_ID environment variable.

# Set the experiment via environment variables
export MLFLOW_EXPERIMENT_NAME=fraud-detection

mlflow experiments create --experiment-name fraud-detection

# Launch a run. The experiment is inferred from the MLFLOW_EXPERIMENT_NAME environment
# variable, or from the --experiment-name parameter passed to the MLflow CLI (the latter
# taking precedence)
with mlflow.start_run():
    mlflow.log_param("a", 1)
    mlflow.log_metric("b", 2)
Managing Experiments and Runs with the Tracking Service API
MLflow provides a more detailed Tracking Service API for managing experiments and runs directly,
which is available through client SDK in the mlflow.client module.
This makes it possible to query data about past runs, log additional information about them, create experiments,
add tags to a run, and more.
Example
from mlflow.tracking import MlflowClient

client = MlflowClient()
experiments = client.search_experiments()  # returns a list of mlflow.entities.Experiment
run = client.create_run(experiments[0].experiment_id)  # returns mlflow.entities.Run
client.log_param(run.info.run_id, "hello", "world")
client.set_terminated(run.info.run_id)
Adding Tags to Runs
The MlflowClient.set_tag() function lets you add custom tags to runs. A tag can only have a single unique value mapped to it at a time. For example:
client.set_tag(run.info.run_id, "tag_key", "tag_value")
Important
Do not use the prefix mlflow. (e.g. mlflow.note) for a tag. This prefix is reserved for use by MLflow. See System Tags for a list of reserved tag keys.
Tracking UI
The Tracking UI lets you visualize, search and compare runs, as well as download run artifacts or
metadata for analysis in other tools. If you log runs to a local mlruns directory,
run mlflow ui in the directory above it, and it loads the corresponding runs.
Alternatively, the MLflow tracking server serves the same UI and enables remote storage of run artifacts.
In that case, you can view the UI using URL http://<ip address of your MLflow tracking server>:5000 in your browser from any
machine, including any remote machine that can connect to your tracking server.
The UI contains the following key features:
Experiment-based run listing and comparison (including run comparison across multiple experiments)
Searching for runs by parameter or metric value
Visualizing run metrics
Downloading run results
Querying Runs Programmatically
You can access all of the functions in the Tracking UI programmatically. This makes it easy to do several common tasks:
Query and compare runs using any data analysis tool of your choice, for example, pandas.
Determine the artifact URI for a run to feed some of its artifacts into a new run when executing a workflow. For an example of querying runs and constructing a multistep workflow, see the MLflow Multistep Workflow Example project.
Load artifacts from past runs as MLflow Models. For an example of training, exporting, and loading a model, and predicting using the model, see the MLflow TensorFlow example.
Run automated parameter search algorithms, where you query the metrics from various runs to submit new ones. For an example of running automated parameter search algorithms, see the MLflow Hyperparameter Tuning Example project.
MLflow Tracking Servers
In this section:
Storage
Backend Stores
Artifact Stores
File store performance
Deletion Behavior
SQLAlchemy Options
Networking
Using the Tracking Server for proxied artifact access
Optionally using a Tracking Server instance exclusively for artifact handling
Logging to a Tracking Server
Tracking Server versioning
You run an MLflow tracking server using mlflow server. An example configuration for a server is:
Command to run the tracking server in this configuration
mlflow server --backend-store-uri /mnt/persistent-disk --default-artifact-root s3://my-mlflow-bucket/ --host 0.0.0.0
Note
When started in --artifacts-only mode, the tracking server will not permit any operation other than saving, loading, and listing artifacts.
Storage
An MLflow tracking server has two components for storage: a backend store and an artifact store.
Backend Stores
The backend store is where MLflow Tracking Server stores experiment and run metadata as well as
params, metrics, and tags for runs. MLflow supports two types of backend stores: file store and
database-backed store.
Note
In order to use model registry functionality, you must run your server using a database-backed store.
Use --backend-store-uri to configure the type of backend store. You specify:
A file store backend as ./path_to_store or file:/path_to_store
A database-backed store as SQLAlchemy database URI.
The database URI typically takes the format <dialect>+<driver>://<username>:<password>@<host>:<port>/<database>.
MLflow supports the database dialects mysql, mssql, sqlite, and postgresql.
Drivers are optional. If you do not specify a driver, SQLAlchemy uses a dialect’s default driver.
For example, --backend-store-uri sqlite:///mlflow.db would use a local SQLite database.
Important
mlflow server will fail against a database-backed store with an out-of-date database schema.
To prevent this, upgrade your database schema to the latest supported version using
mlflow db upgrade [db_uri]. Schema migrations can result in database downtime, may
take longer on larger databases, and are not guaranteed to be transactional. You should always
take a backup of your database prior to running mlflow db upgrade - consult your database’s
documentation for instructions on taking a backup.
By default --backend-store-uri is set to the local ./mlruns directory (the same as when
running mlflow run locally), but when running a server, make sure that this points to a
persistent (that is, non-ephemeral) file system location.
Artifact Stores
In this section:
Amazon S3 and S3-compatible storage
Azure Blob Storage
Google Cloud Storage
FTP server
SFTP Server
NFS
HDFS
The artifact store is a location suitable for large data (such as an S3 bucket or shared NFS
file system) and is where clients log their artifact output (for example, models).
artifact_location is a property recorded on mlflow.entities.Experiment for
default location to store artifacts for all runs in this experiment. Additionally, artifact_uri
is a property on mlflow.entities.RunInfo to indicate location where all artifacts for
this run are stored.
The MLflow client caches artifact location information on a per-run basis.
It is therefore not recommended to alter a run’s artifact location before it has terminated.
In addition to local file paths, MLflow supports the following storage systems as artifact
stores: Amazon S3, Azure Blob Storage, Google Cloud Storage, SFTP server, and NFS.
Use --default-artifact-root (defaults to local ./mlruns directory) to configure default
location to server’s artifact store. This will be used as artifact location for newly-created
experiments that do not specify one. Once you create an experiment, --default-artifact-root
is no longer relevant to that experiment.
By default, a server is launched with the --serve-artifacts flag to enable proxied access for artifacts.
The uri mlflow-artifacts:/ replaces an otherwise explicit object store destination (e.g., “s3:/my_bucket/mlartifacts”)
for interfacing with artifacts. The client can access artifacts via HTTP requests to the MLflow Tracking Server.
This simplifies access requirements for users of the MLflow client, eliminating the need to
configure access tokens or username and password environment variables for the underlying object store when writing or retrieving artifacts.
To disable proxied access for artifacts, specify --no-serve-artifacts.
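To illustrate, assuming a tracking server started with --serve-artifacts as described above, a client only needs the tracking URI; no object-store credentials are required on the client side. The host and file name below are placeholders.

import mlflow

mlflow.set_tracking_uri("http://my-tracking-server:5000")  # placeholder host

with mlflow.start_run():
    # The upload is proxied through the tracking server, which writes to its
    # configured --artifacts-destination
    mlflow.log_artifact("model_summary.txt")  # placeholder local file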
Provided an MLflow server configuration where the --default-artifact-root is s3://my-root-bucket,
the following patterns will all resolve to the configured proxied object store location of s3://my-root-bucket/mlartifacts:
https://<host>:<port>/mlartifacts
http://<host>/mlartifacts
mlflow-artifacts://<host>/mlartifacts
mlflow-artifacts://<host>:<port>/mlartifacts
mlflow-artifacts:/mlartifacts | {
"url": "https://mlflow.org/docs/latest/tracking.html"
} |
72da79a93de6-22 | mlflow-artifacts:/mlartifacts
If the host or host:port declaration is absent in client artifact requests to the MLflow server, the client API
will assume that the host is the same as the MLflow Tracking uri.
Note
If an MLflow server is running with the --artifacts-only flag, the client should interact with this server explicitly by
including either a host or host:port definition for uri location references for artifacts.
Otherwise, all artifact requests will route to the MLflow Tracking server, defeating the purpose of running a distinct artifact server.
Important
Access credentials and configuration for the artifact storage location are configured once during server initialization in the place
of having users handle access credentials for artifact-based operations. Note that all users who have access to the
Tracking Server in this mode will have access to artifacts served through this assumed role.
To allow the server and clients to access the artifact location, you should configure your cloud
provider credentials as normal. For example, for S3, you can set the AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY environment variables, use an IAM role, or configure a default
profile in ~/.aws/credentials.
See Set up AWS Credentials and Region for Development for more info.
Important
If you do not specify a --default-artifact-root or an artifact URI when creating the experiment
(for example, mlflow experiments create --artifact-location s3://<my-bucket>), the artifact root
is a path inside the file store. Typically this is not an appropriate location, as the client and
server probably refer to different physical locations (that is, the same path on different disks).
You may set an MLflow environment variable to configure the timeout for artifact uploads and downloads:
MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the timeout for artifact upload/download in seconds (Default set by individual artifact stores).
Amazon S3 and S3-compatible storage
To store artifacts in an S3 bucket or on an S3-compatible server (such as MinIO or Digital Ocean Spaces), specify a URI of the form s3://<bucket>/<path>. MLflow obtains credentials
to access S3 from your machine’s IAM role, a profile in ~/.aws/credentials, or the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, depending on which of these is
available. For more information, see Set up AWS Credentials and Region for Development.
To add S3 file upload extra arguments, set MLFLOW_S3_UPLOAD_EXTRA_ARGS to a JSON object of key/value pairs.
For example, if you want to upload to a KMS Encrypted bucket using the KMS Key 1234:
export MLFLOW_S3_UPLOAD_EXTRA_ARGS='{"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": "1234"}'
For a list of available extra args see Boto3 ExtraArgs Documentation.
To store artifacts in a custom endpoint, set the MLFLOW_S3_ENDPOINT_URL to your endpoint’s URL. For example, if you are using Digital Ocean Spaces:
export MLFLOW_S3_ENDPOINT_URL=https://<region>.digitaloceanspaces.com
If you have a MinIO server at 1.2.3.4 on port 9000:
export MLFLOW_S3_ENDPOINT_URL=http://1.2.3.4:9000
If the MinIO server is configured with using SSL self-signed or signed using some internal-only CA certificate, you could set MLFLOW_S3_IGNORE_TLS or AWS_CA_BUNDLE variables (not both at the same time!) to disable certificate signature check, or add a custom CA bundle to perform this check, respectively:
export MLFLOW_S3_IGNORE_TLS=true
# or
export AWS_CA_BUNDLE=/some/ca/bundle.pem
Additionally, if MinIO server is configured with non-default region, you should set AWS_DEFAULT_REGION variable:
export AWS_DEFAULT_REGION=my_region
Warning
Take care when combining --default-artifact-root with $MLFLOW_S3_ENDPOINT_URL. MLFLOW_S3_ENDPOINT_URL is meant to identify the S3 API endpoint, not your bucket: if it is set to a
bucket URL and --default-artifact-root repeats that bucket and key, artifacts end up under a doubled location such as
https://<bucketname>.s3.<region>.amazonaws.com/<key>/<bucketname>/<key> (equivalently s3://<bucketname>/<key>/<bucketname>/<key>). In that case, unset MLFLOW_S3_ENDPOINT_URL and
pass the bucket location to --default-artifact-root directly.
A complete list of configurable values for an S3 client is available in the boto3 documentation.
Azure Blob Storage
To store artifacts in Azure Blob Storage, specify a URI of the form
wasbs://<container>@<storage-account>.blob.core.windows.net/<path>.
MLflow expects Azure Storage access credentials in the
AZURE_STORAGE_CONNECTION_STRING, AZURE_STORAGE_ACCESS_KEY environment variables
or having your credentials configured such that the DefaultAzureCredential() class can pick them up.
The order of precedence is:
AZURE_STORAGE_CONNECTION_STRING
AZURE_STORAGE_ACCESS_KEY
DefaultAzureCredential()
You must set one of these options on both your client application and your MLflow tracking server.
Also, you must run pip install azure-storage-blob separately (on both your client and the server) to access Azure Blob Storage.
Finally, if you want to use DefaultAzureCredential, you must pip install azure-identity;
MLflow does not declare a dependency on these packages by default.
You may set an MLflow environment variable to configure the timeout for artifact uploads and downloads:
MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the timeout for artifact upload/download in seconds (Default: 600 for Azure blob).
Google Cloud Storage
To store artifacts in Google Cloud Storage, specify a URI of the form gs://<bucket>/<path>.
You should configure credentials for accessing the GCS container on the client and server as described
in the GCS documentation.
Finally, you must run pip install google-cloud-storage (on both your client and the server)
to access Google Cloud Storage; MLflow does not declare a dependency on this package by default.
You may set the following MLflow environment variables to troubleshoot GCS read timeouts (e.g., due to slow transfer speeds):
MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT - (Experimental, may be changed or removed) Sets the standard timeout for transfer operations in seconds (Default: 60 for GCS). Use -1 for indefinite timeout.
MLFLOW_GCS_DEFAULT_TIMEOUT - (Deprecated, please use MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT) Sets the standard timeout for transfer operations in seconds (Default: 60). Use -1 for indefinite timeout.
MLFLOW_GCS_UPLOAD_CHUNK_SIZE - Sets the standard upload chunk size for bigger files in bytes (Default: 104857600, i.e. 100 MiB); must be a multiple of 256 KB.
MLFLOW_GCS_DOWNLOAD_CHUNK_SIZE - Sets the standard download chunk size for bigger files in bytes (Default: 104857600, i.e. 100 MiB); must be a multiple of 256 KB.
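For example, a client could raise the transfer timeout and shrink the chunk sizes before logging a large artifact; the values and file name below are illustrative only:

import os
import mlflow

os.environ["MLFLOW_ARTIFACT_UPLOAD_DOWNLOAD_TIMEOUT"] = "300"  # seconds; illustrative value
os.environ["MLFLOW_GCS_UPLOAD_CHUNK_SIZE"] = str(50 * 1024 * 1024)    # 50 MiB, a multiple of 256 KB
os.environ["MLFLOW_GCS_DOWNLOAD_CHUNK_SIZE"] = str(50 * 1024 * 1024)

with mlflow.start_run():
    mlflow.log_artifact("checkpoint.bin")  # assumes a gs://<bucket>/<path> artifact root and an existing file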
FTP server
To store artifacts in an FTP server, specify a URI of the form ftp://user@host/path/to/directory.
The URI may optionally include a password for logging into the server, e.g. ftp://user:pass@host/path/to/directory
SFTP Server
To store artifacts in an SFTP server, specify a URI of the form sftp://user@host/path/to/directory.
You should configure the client to be able to log in to the SFTP server without a password over SSH (e.g. public key, identity file in ssh_config, etc.).
The format sftp://user:pass@host/ is supported for logging in. However, for safety reasons this is not recommended.
When using this store, pysftp must be installed on both the server and the client. Run pip install pysftp to install the required package.
NFS
To store artifacts in an NFS mount, specify a URI as a normal file system path, e.g., /mnt/nfs.
This path must be the same on both the server and the client – you may need to use symlinks or remount
the client in order to enforce this property.
HDFS
To store artifacts in HDFS, specify a hdfs: URI. It can contain host and port: hdfs://<host>:<port>/<path> or just the path: hdfs://<path>.
There are also two ways to authenticate to HDFS:
Use current UNIX account authorization
Kerberos credentials using following environment variables:
export MLFLOW_KERBEROS_TICKET_CACHE=/tmp/krb5cc_22222222
export MLFLOW_KERBEROS_USER=user_name_to_use
Most of the cluster configuration settings are read from hdfs-site.xml accessed by the HDFS native
driver using the CLASSPATH environment variable.
The used HDFS driver is libhdfs.
File store performance
MLflow will automatically try to use LibYAML bindings if they are already installed.
However if you notice any performance issues when using file store backend, it could mean LibYAML is not installed on your system.
On Linux or Mac you can easily install it using your system package manager:
# On Ubuntu/Debian
apt-get install libyaml-cpp-dev libyaml-dev
# On macOS using Homebrew
brew install yaml-cpp libyaml
After installing LibYAML, you need to reinstall PyYAML:
# Reinstall PyYAML
pip --no-cache-dir install --force-reinstall -I pyyaml
Deletion Behavior
In order to allow MLflow Runs to be restored, Run metadata and artifacts are not automatically removed
from the backend store or artifact store when a Run is deleted. The mlflow gc CLI is provided
for permanently removing Run metadata and artifacts for deleted runs.
SQLAlchemy Options
You can inject some SQLAlchemy connection pooling options using environment variables.
MLFLOW_SQLALCHEMYSTORE_POOL_SIZE - SQLAlchemy QueuePool option pool_size
MLFLOW_SQLALCHEMYSTORE_POOL_RECYCLE - SQLAlchemy QueuePool option pool_recycle
MLFLOW_SQLALCHEMYSTORE_MAX_OVERFLOW - SQLAlchemy QueuePool option max_overflow
Networking
The --host option exposes the service on all interfaces. If running a server in production, we
would recommend not exposing the built-in server broadly (as it is unauthenticated and unencrypted),
and instead putting it behind a reverse proxy like NGINX or Apache httpd, or connecting over VPN.
You can then pass authentication headers to MLflow using these environment variables.
Additionally, you should ensure that the --backend-store-uri (which defaults to the
./mlruns directory) points to a persistent (non-ephemeral) disk or database connection.
Using the Tracking Server for proxied artifact access
To use an instance of the MLflow Tracking server for artifact operations ( Scenario 5: MLflow Tracking Server enabled with proxied artifact storage access ),
start a server with the optional parameters --serve-artifacts to enable proxied artifact access and set a
path to record artifacts to by providing a value for the argument --artifacts-destination. The tracking server will,
in this mode, stream any artifacts that a client is logging directly through an assumed (server-side) identity,
eliminating the need for access credentials to be handled by end-users.
Note
Authentication access to the value set by --artifacts-destination must be configured when starting the tracking
server, if required.
To start the MLflow server with proxy artifact access enabled to an HDFS location (as an example):
export HADOOP_USER_NAME=mlflowserverauth

mlflow server \
    --host 0.0.0.0 \
    --port 8885 \
    --artifacts-destination hdfs://myhost:8887/mlprojects/models
Optionally using a Tracking Server instance exclusively for artifact handling
If the volume of tracking server requests is sufficiently large and performance issues are noticed, a tracking server
can be configured to serve in --artifacts-only mode ( Scenario 6: MLflow Tracking Server used exclusively as proxied access host for artifact storage access ), operating in tandem with an instance that
operates with --no-serve-artifacts specified. This configuration ensures that the processing of artifacts is isolated
from all other tracking server event handling.
When a tracking server is configured in --artifacts-only mode, any tasks apart from those concerned with artifact
handling (i.e., model logging, loading models, logging artifacts, listing artifacts, etc.) will return an HTTPError.
See the following example of a client REST call in Python attempting to list experiments from a server that is configured in
--artifacts-only mode:
import requests

response = requests.get("http://0.0.0.0:8885/api/2.0/mlflow/experiments/list")
Output
>> HTTPError: Endpoint: /api/2.0/mlflow/experiments/list disabled due to the mlflow server running in `--artifacts-only` mode.
Using an additional MLflow server to handle artifacts exclusively can be useful for large-scale MLOps infrastructure.
Decoupling the longer running and more compute-intensive tasks of artifact handling from the faster and higher-volume
metadata functionality of the other Tracking API requests can help minimize the burden of an otherwise single MLflow
server handling both types of payloads.
Logging to a Tracking Server
To log to a tracking server, set the MLFLOW_TRACKING_URI environment variable to the server’s URI,
along with its scheme and port (for example, http://10.0.0.1:5000) or call mlflow.set_tracking_uri().
The mlflow.start_run(), mlflow.log_param(), and mlflow.log_metric() calls
then make API requests to your remote tracking server.
import mlflow

remote_server_uri = "..."  # set to your server URI
mlflow.set_tracking_uri(remote_server_uri)
# Note: on Databricks, the experiment name passed to mlflow_set_experiment must be a
# valid path in the workspace
mlflow.set_experiment("/my-experiment")
with mlflow.start_run():
    mlflow.log_param("a", 1)
    mlflow.log_metric("b", 2)
library(mlflow)
install_mlflow()
remote_server_uri = "..." # set to your server URI
mlflow_set_tracking_uri(remote_server_uri)
# Note: on Databricks, the experiment name passed to mlflow_set_experiment must be a
# valid path in the workspace
mlflow_set_experiment("/my-experiment")
mlflow_log_param("a", "1")
In addition to the MLFLOW_TRACKING_URI environment variable, the following environment variables
allow passing HTTP authentication to the tracking server:
MLFLOW_TRACKING_USERNAME and MLFLOW_TRACKING_PASSWORD - username and password to use with HTTP
Basic authentication. To use Basic authentication, you must set both environment variables.
MLFLOW_TRACKING_TOKEN - token to use with HTTP Bearer authentication. Basic authentication takes precedence if set.
MLFLOW_TRACKING_INSECURE_TLS - If set to the literal true, MLflow does not verify the TLS connection,
meaning it does not validate certificates or hostnames for https:// tracking URIs. This flag is not recommended for
production environments. If this is set to true then MLFLOW_TRACKING_SERVER_CERT_PATH must not be set.
MLFLOW_TRACKING_SERVER_CERT_PATH - Path to a CA bundle to use. Sets the verify param of the
requests.request function
(see requests main interface).
When you use a self-signed server certificate you can use this to verify it on client side.
If this is set MLFLOW_TRACKING_INSECURE_TLS must not be set (false).
MLFLOW_TRACKING_CLIENT_CERT_PATH - Path to ssl client cert file (.pem). Sets the cert param
of the requests.request function
(see requests main interface).
This can be used to present a (self-signed) client certificate.
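A minimal client-side sketch, assuming a server that accepts HTTP Basic authentication; the credentials, URI, and metric are placeholders:

import os
import mlflow

os.environ["MLFLOW_TRACKING_USERNAME"] = "alice"     # placeholder
os.environ["MLFLOW_TRACKING_PASSWORD"] = "secret"    # placeholder; both must be set for Basic auth
# os.environ["MLFLOW_TRACKING_TOKEN"] = "<token>"    # alternative: Bearer authentication
# os.environ["MLFLOW_TRACKING_SERVER_CERT_PATH"] = "/path/to/ca-bundle.pem"  # self-signed server cert

mlflow.set_tracking_uri("https://tracking.example.com")
with mlflow.start_run():
    mlflow.log_metric("accuracy", 0.91)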
Note
If the MLflow server is not configured with the --serve-artifacts option, the client directly pushes artifacts
to the artifact store. It does not proxy these through the tracking server by default.
For this reason, the client needs direct access to the artifact store. For instructions on setting up these credentials,
see Artifact Stores.
Tracking Server versioning
The version of MLflow running on the server can be found by querying the /version endpoint.
This can be used to check that the client-side version of MLflow is up-to-date with a remote tracking server prior to running experiments.
For example:
import requests
import mlflow

response = requests.get("http://<mlflow-host>:<mlflow-port>/version")
assert response.text == mlflow.__version__  # Checking for a strict version match
System Tags
You can annotate runs with arbitrary tags. Tag keys that start with mlflow. are reserved for
internal use. The following tags are set automatically by MLflow, when appropriate:
mlflow.note.content
A descriptive note about this run. This reserved tag is not set automatically and can
be overridden by the user to include additional information about the run. The content
is displayed on the run’s page under the Notes section.
mlflow.parentRunId
The ID of the parent run, if this is a nested run.
mlflow.user
Identifier of the user who created the run.
mlflow.source.type
Source type. Possible values: "NOTEBOOK", "JOB", "PROJECT",
"LOCAL", and "UNKNOWN"
mlflow.source.name
Source identifier (e.g., GitHub URL, local Python filename, name of notebook)
mlflow.source.git.commit
Commit hash of the executed code, if in a git repository.
mlflow.source.git.branch
Name of the branch of the executed code, if in a git repository.
mlflow.source.git.repoURL
URL that the executed code was cloned from.
mlflow.project.env
The runtime context used by the MLflow project.
Possible values: "docker" and "conda".
mlflow.project.entryPoint
Name of the project entry point associated with the current run, if any.
mlflow.docker.image.name
Name of the Docker image used to execute this run.
mlflow.docker.image.id
ID of the Docker image used to execute this run.
mlflow.log-model.history
Model metadata collected by log-model calls. Includes the serialized
form of the MLModel model files logged to a run, although the exact format and
information captured is subject to change.
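For example, the run description can be set or overridden by writing the reserved mlflow.note.content tag yourself; the note text here is just an illustration:

import mlflow

with mlflow.start_run():
    mlflow.set_tag("mlflow.note.content", "Baseline model trained on the full dataset.")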
MLflow Projects
An MLflow Project is a format for packaging data science code in a reusable and reproducible way,
based primarily on conventions. In addition, the Projects component includes an API and command-line
tools for running projects, making it possible to chain together projects into workflows.
Table of Contents
Overview
Specifying Projects
Running Projects
Iterating Quickly
Building Multistep Workflows
Overview
At the core, MLflow Projects are just a convention for organizing and describing your code to let
other data scientists (or automated tools) run it. Each project is simply a directory of files, or
a Git repository, containing your code. MLflow can run some projects based on a convention for
placing files in this directory (for example, a conda.yaml file is treated as a
Conda environment), but you can describe your project in more detail by
adding an MLproject file, which is a YAML formatted
text file. Each project can specify several properties:
A human-readable name for the project.
Commands that can be run within the project, and information about their
parameters. Most projects contain at least one entry point that you want other users to
call. Some projects can also contain more than one entry point: for example, you might have a
single Git repository containing multiple featurization algorithms. You can also call
any .py or .sh file in the project as an entry point. If you list your entry points in
a MLproject file, however, you can also specify parameters for them, including data
types and default values.
The software environment that should be used to execute project entry points. This includes all
library dependencies required by the project code. See Project Environments for more
information about the software environments supported by MLflow Projects, including
Conda environments,
Virtualenv environments, and
Docker containers.
You can run any project from a Git URI or from a local directory using the mlflow run
command-line tool, or the mlflow.projects.run() Python API. These APIs also allow submitting the
project for remote execution on Databricks and
Kubernetes.
Important
By default, MLflow uses a new, temporary working directory for Git projects.
This means that you should generally pass any file arguments to an MLflow
project using absolute, not relative, paths. If your project declares its parameters, MLflow
automatically makes paths absolute for parameters of type path.
Specifying Projects
By default, any Git repository or local directory can be treated as an MLflow project; you can
invoke any bash or Python script contained in the directory as a project entry point. The
Project Directories section describes how MLflow interprets directories as projects.
To provide additional control over a project’s attributes, you can also include an MLproject
file in your project’s repository or directory.
Finally, MLflow projects allow you to specify the software environment
that is used to execute project entry points.
Project Environments
MLflow currently supports the following project environments: Virtualenv environment, conda environment, Docker container environment, and system environment.
Virtualenv environments support Python packages available on PyPI. When an MLflow Project
specifies a Virtualenv environment, MLflow will download the specified version of Python by using
pyenv and create an isolated environment that contains the project dependencies using virtualenv,
activating it as the execution environment prior to running the project code.
You can specify a Virtualenv environment for your MLflow Project by including a python_env entry in your
MLproject file. For details, see the Project Directories and Specifying an Environment sections.
Docker containers allow you to capture
non-Python dependencies such as Java libraries.
When you run an MLflow project that specifies a Docker image, MLflow adds a new Docker layer
that copies the project’s contents into the /mlflow/projects/code directory. This step produces
a new image. MLflow then runs the new image and invokes the project entrypoint in the resulting
container.
Environment variables, such as MLFLOW_TRACKING_URI, are propagated inside the Docker container
during project execution. Additionally, runs and
experiments created by the project are saved to the
tracking server specified by your tracking URI. When running
against a local tracking URI, MLflow mounts the host system’s tracking directory
(e.g., a local mlruns directory) inside the container so that metrics, parameters, and
artifacts logged during project execution are accessible afterwards.
See Dockerized Model Training with MLflow for an example of an MLflow
project with a Docker environment.
To specify a Docker container environment, you must add an
MLproject file to your project. For information about specifying
a Docker container environment in an MLproject file, see
Specifying an Environment.
Conda environments support
both Python packages and native libraries (e.g., cuDNN or Intel MKL). When an MLflow Project
specifies a Conda environment, it is activated before project code is run.
Warning
By using conda, you’re responsible for adhering to Anaconda’s terms of service.
By default, MLflow uses the system path to find and run the conda binary. You can use a
different Conda installation by setting the MLFLOW_CONDA_HOME environment variable; in this
case, MLflow attempts to run the binary at $MLFLOW_CONDA_HOME/bin/conda.
You can specify a Conda environment for your MLflow project by including a conda.yaml
file in the root of the project directory or by including a conda_env entry in your
MLproject file. For details, see the Project Directories and Specifying an Environment sections.
The mlflow run command supports running a conda environment project as a virtualenv environment project.
To do this, run mlflow run with --env-manager virtualenv:
mlflow run /path/to/conda/project --env-manager virtualenv
Warning
When a conda environment project is executed as a virtualenv environment project,
conda dependencies will be ignored and only pip dependencies will be installed.
You can also run MLflow Projects directly in your current system environment. All of the
project’s dependencies must be installed on your system prior to project execution. The system
environment is supplied at runtime. It is not part of the MLflow Project’s directory contents
or MLproject file. For information about using the system environment when running
a project, see the Environment parameter description in the Running Projects section.
Project Directories
When running an MLflow Project directory or repository that does not contain an MLproject
file, MLflow uses the following conventions to determine the project’s attributes:
The project’s name is the name of the directory.
The Conda environment
is specified in conda.yaml, if present. If no conda.yaml file is present, MLflow
uses a Conda environment containing only Python (specifically, the latest Python available to
Conda) when running the project.
Any .py and .sh file in the project can be an entry point. MLflow uses Python
to execute entry points with the .py extension, and it uses bash to execute entry points with
the .sh extension. For more information about specifying project entrypoints at runtime,
see Running Projects.
By default, entry points do not have any parameters when an MLproject file is not included.
Parameters can be supplied at runtime via the mlflow run CLI or the
mlflow.projects.run() Python API. Runtime parameters are passed to the entry point on the
command line using --key value syntax. For more information about running projects and
with runtime parameters, see Running Projects.
MLproject File
You can get more control over an MLflow Project by adding an MLproject file, which is a text
file in YAML syntax, to the project’s root directory. The following is an example of an
MLproject file:
name: My Project

python_env: python_env.yaml
# or
# conda_env: my_env.yaml
# or
# docker_env:
#    image:  mlflow-docker-example

entry_points:
  main:
    parameters:
      data_file: path
      regularization: {type: float, default: 0.1}
    command: "python train.py -r {regularization} {data_file}"
  validate:
    parameters:
      data_file: path
    command: "python validate.py {data_file}"
The file can specify a name and a Conda or Docker environment, as well as more detailed information about each entry point.
Specifically, each entry point defines a command to run and
parameters to pass to the command (including data types).
Specifying an Environment
This section describes how to specify Conda and Docker container environments in an MLproject file.
MLproject files cannot specify both a Conda environment and a Docker environment.
Include a top-level python_env entry in the MLproject file.
The value of this entry must be a relative path to a python_env YAML file
within the MLflow project’s directory. The following is an example MLProject
file with a python_env definition:
python_env: files/config/python_env.yaml
python_env refers to an environment file located at
<MLFLOW_PROJECT_DIRECTORY>/files/config/python_env.yaml, where
<MLFLOW_PROJECT_DIRECTORY> is the path to the MLflow project’s root directory.
The following is an example of a python_env.yaml file:
# Python version required to run the project.
python: "3.8.15"
# Dependencies required to build packages. This field is optional.
build_dependencies:
- pip
- setuptools
- wheel==0.37.1
# Dependencies required to run the project.
dependencies:
- mlflow
- scikit-learn==1.0.2
Include a top-level conda_env entry in the MLproject file.
The value of this entry must be a relative path to a Conda environment YAML file
within the MLflow project’s directory. In the following example:
conda_env: files/config/conda_environment.yaml
conda_env refers to an environment file located at
<MLFLOW_PROJECT_DIRECTORY>/files/config/conda_environment.yaml, where
<MLFLOW_PROJECT_DIRECTORY> is the path to the MLflow project’s root directory.
Include a top-level docker_env entry in the MLproject file. The value of this entry must be the name
of a Docker image that is accessible on the system executing the project; this image name
may include a registry path and tags. Here are a couple of examples.
Example 1: Image without a registry path
docker_env:
image: mlflow-docker-example-environment
In this example, docker_env refers to the Docker image with name
mlflow-docker-example-environment and default tag latest. Because no registry path is
specified, Docker searches for this image on the system that runs the MLflow project. If the
image is not found, Docker attempts to pull it from DockerHub.
Example 2: Mounting volumes and specifying environment variables
You can also specify local volumes to mount in the docker image (as you normally would with Docker’s -v option), and additional environment variables (as per Docker’s -e option). Environment variables can either be copied from the host system’s environment variables, or specified as new variables for the Docker environment. The environment field should be a list. Elements in this list can either be lists of two strings (for defining a new variable) or single strings (for copying variables from the host system). For example:
docker_env:
image: mlflow-docker-example-environment
volumes: ["/local/path:/container/mount/path"]
environment: [["NEW_ENV_VAR", "new_var_value"], "VAR_TO_COPY_FROM_HOST_ENVIRONMENT"]
In this example our docker container will have one additional local volume mounted, and two additional environment variables: one newly-defined, and one copied from the host system.
Example 3: Image in a remote registry
docker_env:
image: 012345678910.dkr.ecr.us-west-2.amazonaws.com/mlflow-docker-example-environment:7.0
In this example, docker_env refers to the Docker image with name
mlflow-docker-example-environment and tag 7.0 in the Docker registry with path
012345678910.dkr.ecr.us-west-2.amazonaws.com, which corresponds to an
Amazon ECR registry.
When the MLflow project is run, Docker attempts to pull the image from the specified registry.
The system executing the MLflow project must have credentials to pull this image from the specified registry.
Example 4: Build a new image
docker_env:
image: python:3.8
mlflow run ... --build-image
To build a new image that’s based on the specified image and files contained in
the project directory, use the --build-image argument. In the above example, the image
python:3.8 is pulled from Docker Hub if it’s not present locally, and a new image is built
based on it. The project is executed in a container created from this image.
Command Syntax
When specifying an entry point in an MLproject file, the command can be any string in Python
format string syntax. All of the parameters declared in the entry point's
parameters field are passed into this string for substitution. If you call the project with
additional parameters not listed in the parameters field, MLflow passes them using
--key value syntax, so you can use the MLproject file to declare types and defaults for just a
subset of your parameters.
Before substituting parameters in the command, MLflow escapes them using the Python
shlex.quote function, so you don’t
need to worry about adding quotes inside your command field.
Specifying Parameters
MLflow allows specifying a data type and default value for each parameter. You can specify just the
data type by writing:
parameter_name: data_type
in your YAML file, or add a default value as well using one of the following syntaxes (which are
equivalent in YAML):
parameter_name: {type: data_type, default: value}  # Short syntax

parameter_name:     # Long syntax
  type: data_type
  default: value
MLflow supports four parameter types, some of which it treats specially (for example, downloading
data to local files). Any undeclared parameters are treated as string. The parameter types are:
string - A text string.
float - A real number. MLflow validates that the parameter is a number.
path - A path on the local file system. MLflow converts any relative path parameters to absolute
paths. MLflow also downloads any paths passed as distributed storage URIs
(s3://, dbfs://, gs://, etc.) to local files. Use this type for programs that can only read local
files.
uri - A URI for data either in a local or distributed storage system. MLflow converts
relative paths to absolute paths, as in the path type. Use this type for programs
that know how to read from distributed storage (e.g., programs that use Spark).
Running Projects
MLflow provides two ways to run projects: the mlflow run command-line tool, or
the mlflow.projects.run() Python API. Both tools take the following parameters:
A directory on the local file system or a Git repository path,
specified as a URI of the form https://<repo> (to use HTTPS) or user@host:path
(to use Git over SSH). To run against an MLproject file located in a subdirectory of the project,
add a ‘#’ to the end of the URI argument, followed by the relative path from the project’s root directory
to the subdirectory containing the desired project.
For Git-based projects, the commit hash or branch name in the Git repository.
The name of the entry point, which defaults to main. You can use any
entry point named in the MLproject file, or any .py or .sh file in the project,
given as a path from the project root (for example, src/test.py).
Key-value parameters. Any parameters with
declared types are validated and transformed if needed.
Both the command-line and API let you launch projects remotely
in a Databricks environment. This includes setting cluster
parameters such as a VM type. Of course, you can also run projects on any other computing
infrastructure of your choice using the local version of the mlflow run command (for
example, submit a script that does mlflow run to a standard job queueing system).
You can also launch projects remotely on Kubernetes clusters
using the mlflow run CLI (see Run an MLflow Project on Kubernetes).
By default, MLflow Projects are run in the environment specified by the project directory
or the MLproject file (see Specifying Project Environments).
You can ignore a project’s specified environment and run the project in the current
system environment by supplying the --env-manager=local flag, but this can lead to
unexpected results if there are dependency mismatches between the project environment and
the current system environment.
For example, the tutorial creates and publishes an MLflow Project that trains a linear model. The
project is also published on GitHub at https://github.com/mlflow/mlflow-example. To run
this project:
mlflow run git@github.com:mlflow/mlflow-example.git -P alpha=0.5
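A rough Python equivalent of the command above, shown as a sketch (the alpha value is the same illustrative parameter):

import mlflow.projects

submitted_run = mlflow.projects.run(
    uri="https://github.com/mlflow/mlflow-example",
    parameters={"alpha": 0.5},
)
print(submitted_run.run_id, submitted_run.get_status())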
There are also additional options for disabling the creation of a Conda environment, which can be
useful if you quickly want to test a project in your existing shell environment.
Run an MLflow Project on Databricks
You can run MLflow Projects remotely on Databricks. To use this feature, you must have an enterprise
Databricks account (Community Edition is not supported) and you must have set up the
Databricks CLI. Find detailed instructions
in the Databricks docs (Azure Databricks,
Databricks on AWS).
Run an MLflow Project on Kubernetes
You can run MLflow Projects with Docker environments
on Kubernetes. The following sections provide an overview of the feature, including a simple
Project execution guide with examples.
To see this feature in action, you can also refer to the
Docker example, which includes
the required Kubernetes backend configuration (kubernetes_backend.json) and Kubernetes Job Spec
(kubernetes_job_template.yaml) files.
How it works
When you run an MLflow Project on Kubernetes, MLflow constructs a new Docker image
containing the Project’s contents; this image inherits from the Project’s
Docker environment. MLflow then pushes the new
Project image to your specified Docker registry and starts a
Kubernetes Job
on your specified Kubernetes cluster. This Kubernetes Job downloads the Project image and starts
a corresponding Docker container. Finally, the container invokes your Project’s
entry point, logging parameters, tags, metrics, and artifacts to your
MLflow tracking server.
Execution guide
You can run your MLflow Project on Kubernetes by following these steps:
Add a Docker environment to your MLflow Project, if one does not already exist. For
reference, see Specifying an Environment.
Create a backend configuration JSON file with the following entries:
kube-context
The Kubernetes context
where MLflow will run the job. If not provided, MLflow will use the current context.
If no context is available, MLflow will assume it is running in a Kubernetes cluster
and it will use the Kubernetes service account running the current pod (‘in-cluster’ configuration).
repository-uri
The URI of the docker repository where the Project execution Docker image will be uploaded
(pushed). Your Kubernetes cluster must have access to this repository in order to run your
MLflow Project.
kube-job-template-path
The path to a YAML configuration file for your Kubernetes Job - a Kubernetes Job Spec.
MLflow reads the Job Spec and replaces certain fields to facilitate job execution and
monitoring; MLflow does not modify the original template file. For more information about
writing Kubernetes Job Spec templates for use with MLflow, see the
Job Templates section.
Example Kubernetes backend configuration
"kube-context"
"docker-for-desktop"
"repository-uri"
"username/mlflow-kubernetes-example"
"kube-job-template-path"
"/Users/username/path/to/kubernetes_job_template.yaml"
If necessary, obtain credentials to access your Project’s Docker and Kubernetes resources, including:
The Docker environment image specified in the MLproject
file.
The Docker repository referenced by repository-uri in your backend configuration file.
The Kubernetes context
referenced by kube-context in your backend configuration file.
MLflow expects these resources to be accessible via the
docker and
kubectl CLIs before running the
Project.
Run the Project using the MLflow Projects CLI or Python API,
specifying your Project URI and the path to your backend configuration file. For example:
mlflow run <project_uri> --backend kubernetes --backend-config examples/docker/kubernetes_config.json
where <project_uri> is a Git repository URI or a folder.
Job Templates
MLflow executes Projects on Kubernetes by creating Kubernetes Job resources.
MLflow creates a Kubernetes Job for an MLflow Project by reading a user-specified
Job Spec.
When MLflow reads a Job Spec, it formats the following fields:
metadata.name Replaced with a string containing the name of the MLflow Project and the time
of Project execution
spec.template.spec.container[0].name Replaced with the name of the MLflow Project
spec.template.spec.container[0].image Replaced with the URI of the Docker image created during
Project execution. This URI includes the Docker image’s digest hash.
spec.template.spec.container[0].command Replaced with the Project entry point command
specified when executing the MLflow Project.
The following example shows a simple Kubernetes Job Spec that is compatible with MLflow Project
execution. Replaced fields are indicated using bracketed text.
Example Kubernetes Job Spec
apiVersion: batch/v1
kind: Job
metadata:
  name: "{replaced with MLflow Project name}"
  namespace: mlflow
spec:
  ttlSecondsAfterFinished: 100
  backoffLimit: 0
  template:
    spec:
      containers:
      - name: "{replaced with MLflow Project name}"
        image: "{replaced with URI of Docker image created during Project execution}"
        command: ["{replaced with MLflow Project entry point command}"]
        env: ["{appended with MLFLOW_TRACKING_URI, MLFLOW_RUN_ID and MLFLOW_EXPERIMENT_ID}"]
        resources:
          limits:
            memory: 512Mi
          requests:
            memory: 256Mi
      restartPolicy: Never
The container.name, container.image, and container.command fields are only replaced for the first container
defined in the Job Spec. The MLFLOW_TRACKING_URI, MLFLOW_RUN_ID, and MLFLOW_EXPERIMENT_ID environment
variables are appended to container.env. KUBE_MLFLOW_TRACKING_URI can be used to pass a different tracking
URI to the job container than the standard MLFLOW_TRACKING_URI.
Iterating Quickly
If you want to rapidly develop a project, we recommend creating an MLproject file with your
main program specified as the main entry point, and running it with mlflow run . from the project's root directory.
To avoid having to write parameters repeatedly, you can add default parameters in your MLproject file.
Building Multistep Workflows
The mlflow.projects.run() API, combined with
mlflow.client, makes it possible to build
multi-step workflows with separate projects (or entry points in the same project) as the individual
steps. Each call to
mlflow.projects.run() returns a run object that you can use with
mlflow.client to determine when the run has ended and to get its output artifacts. These artifacts
can then be passed into another step that takes path or uri parameters.
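A minimal sketch of such a chain, assuming a local project with featurize and train entry points and a features.parquet output artifact (the entry point names, parameter name, and artifact file are assumptions, not part of any example project):

import mlflow.projects
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Step 1: run a featurization entry point and wait for it to finish.
featurize = mlflow.projects.run(uri=".", entry_point="featurize")
if featurize.wait():
    run = client.get_run(featurize.run_id)
    features_uri = run.info.artifact_uri + "/features.parquet"  # assumed artifact name

    # Step 2: pass the artifact URI into a training entry point that declares a uri parameter.
    mlflow.projects.run(uri=".", entry_point="train", parameters={"data": features_uri})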
Different users can publish reusable steps for data featurization, training, validation, and so on, that other users or team can run in their workflows. Because MLflow supports Git versioning, another team can lock their workflow to a specific version of a project, or upgrade to a new one on their own schedule.
Using mlflow.projects.run() you can launch multiple runs in parallel either on the local machine or on a cloud platform like Databricks. Your driver program can then inspect the metrics from each run in real time to cancel runs, launch new ones, or select the best performing run on a target metric.
Sometimes you want to run the same training code on different random splits of training and validation data. With MLflow Projects, you can package the project in a way that allows this, for example, by taking a random seed for the train/validation split as a parameter, or by calling another project first that can split the input data.
For an example of how to construct such a multistep workflow, see the MLflow Multistep Workflow Example project.
MLflow Models
An MLflow Model is a standard format for packaging machine learning models that can be used in a
variety of downstream tools—for example, real-time serving through a REST API or batch inference
on Apache Spark. The format defines a convention that lets you save a model in different “flavors”
that can be understood by different downstream tools.
Table of Contents
Storage Format
Model Signature And Input Example
Model API
Built-In Model Flavors
Model Evaluation
Model Customization
Built-In Deployment Tools
Deployment to Custom Targets
Community Model Flavors
Storage Format
Each MLflow Model is a directory containing arbitrary files, together with an MLmodel
file in the root of the directory that can define multiple flavors that the model can be viewed
in.
Flavors are the key concept that makes MLflow Models powerful: they are a convention that deployment
tools can use to understand the model, which makes it possible to write tools that work with models
from any ML library without having to integrate each tool with each library. MLflow defines
several “standard” flavors that all of its built-in deployment tools support, such as a “Python
function” flavor that describes how to run the model as a Python function. However, libraries can
also define and use other flavors. For example, MLflow’s mlflow.sklearn library allows
loading models back as a scikit-learn Pipeline object for use in code that is aware of
scikit-learn, or as a generic Python function for use in tools that just need to apply the model
(for example, the mlflow deployments tool with the option -t sagemaker for deploying models
to Amazon SageMaker).
All of the flavors that a particular model supports are defined in its MLmodel file in YAML
format. For example, mlflow.sklearn outputs models as follows:
# Directory written by mlflow.sklearn.save_model(model, "my_model")
my_model/
├── MLmodel
├── model.pkl
├── conda.yaml
├── python_env.yaml
└── requirements.txt
And its MLmodel file describes two flavors:
time_created: 2018-05-25T17:28:53.35

flavors:
  sklearn:
    sklearn_version: 0.19.1
    pickled_model: model.pkl
  python_function:
    loader_module: mlflow.sklearn
This model can then be used with any tool that supports either the sklearn or
python_function model flavor. For example, the mlflow models serve command
can serve a model with the python_function or the crate (R Function) flavor:
mlflow models serve -m my_model
In addition, the mlflow deployments command-line tool can package and deploy models to AWS
SageMaker as long as they support the python_function flavor:
mlflow deployments create -t sagemaker -m my_model [other options]
Fields in the MLmodel Format
Apart from a flavors field listing the model flavors, the MLmodel YAML format can contain
the following fields:
time_created - Date and time when the model was created, in UTC ISO 8601 format.
run_id - ID of the run that created the model, if the model was saved using MLflow Tracking.
signature - model signature in JSON format.
saved_input_example_info - reference to an artifact with input example.
databricks_runtime - Databricks runtime version and type, if the model was trained in a Databricks notebook or job.
mlflow_version - The version of MLflow that was used to log the model.
Additional Logged Files
For environment recreation, MLflow automatically logs conda.yaml, python_env.yaml, and requirements.txt files whenever a model is logged. These files can then be used to reinstall dependencies using either conda or virtualenv with pip.
Note
Anaconda Inc. updated their terms of service for anaconda.org channels. Based on the new terms of service you may require a commercial license if you rely on Anaconda’s packaging and distribution. See Anaconda Commercial Edition FAQ for more information. Your use of any Anaconda channels is governed by their terms of service.
Models logged before MLflow v1.18 were by default logged with the conda defaults channel
(https://repo.anaconda.com/pkgs/) as a dependency. Because of this license change, MLflow has stopped the use of the
defaults channel for models logged using MLflow v1.18 and above. The default channel logged is now
conda-forge, which points at the community managed https://conda-forge.org/.
If you logged a model before MLflow v1.18 without excluding the defaults channel from the conda environment
for the model, that model may have a dependency on the defaults channel that you may not have intended.
To manually confirm whether a model has this dependency, you can examine the channel value in the
conda.yaml file that is packaged with the logged model. For example, a model's
conda.yaml with a defaults channel dependency may look like this:
name: mlflow-env
channels:
  - defaults
dependencies:
  - python=3.8.8
  - pip
  - pip:
    - mlflow
    - scikit-learn==0.23.2
    - cloudpickle==1.6.0
If you would like to change the channel used in a model’s environment, you can re-register the model to the model registry with a new conda.yaml. You can do this by specifying the channel in the conda_env parameter of log_model().
For more information on the log_model() API, see the MLflow documentation for the model flavor you are working with, for example, mlflow.sklearn.log_model().
When saving a model, MLflow provides the option to pass in a conda environment parameter that can contain dependencies used by the model. If no conda environment is provided, a default environment is created based on the flavor of the model. This conda environment is then saved in conda.yaml.
This file contains the following information that’s required to restore a model environment using virtualenv:
Python version
Version specifiers for pip, setuptools, and wheel
Pip requirements of the model (reference to requirements.txt)
The requirements file is created from the pip portion of the conda.yaml environment specification. Additional pip dependencies can be added to requirements.txt by including them as a pip dependency in a conda environment and logging the model with the environment or using the pip_requirements argument of the mlflow.<flavor>.log_model API.
The following shows an example of saving a model with a manually specified conda environment and the corresponding content of the generated conda.yaml and requirements.txt files.
conda_env = {
    "channels": ["conda-forge"],
    "dependencies": ["python=3.8.8", "pip"],
    "pip": ["mlflow", "scikit-learn==0.23.2", "cloudpickle==1.6.0"],
    "name": "mlflow-env",
}
mlflow.sklearn.log_model(model, "my_model", conda_env=conda_env)
The written conda.yaml file:
name: mlflow-env
channels:
  - conda-forge
dependencies:
  - python=3.8.8
  - pip
  - pip:
    - mlflow
    - scikit-learn==0.23.2
    - cloudpickle==1.6.0
The written python_env.yaml file:
python: 3.8.8
build_dependencies:
  - pip==21.1.3
  - setuptools==57.4.0
  - wheel==0.37.0
dependencies:
  - -r requirements.txt
The written requirements.txt file:
mlflow
scikit-learn==0.23.2
cloudpickle==1.6.0
Model Signature And Input Example
When working with ML models you often need to know some basic functional properties of the model
at hand, such as “What inputs does it expect?” and “What output does it produce?”. MLflow models can
include the following additional metadata about model inputs and outputs that can be used by
downstream tooling:
Model Signature - description of a model’s inputs and outputs.
Model Input Example - example of a valid model input.
Model Signature
The Model signature defines the schema of a model’s inputs and outputs. Model inputs and outputs can
be either column-based or tensor-based. Column-based inputs and outputs can be described as a
sequence of (optionally) named columns with type specified as one of the
MLflow data types. Tensor-based inputs and outputs can be
described as a sequence of (optionally) named tensors with type specified as one of the
numpy data types.
To include a signature with your model, pass a signature object as an argument to the appropriate log_model call, e.g.
sklearn.log_model(). More details are in the How to log models with signatures section. The signature is stored in
JSON format in the MLmodel file, together with other model metadata.
Model signatures are recognized and enforced by standard MLflow model deployment tools. For example, the mlflow models serve tool,
which deploys a model as a REST API, validates inputs based on the model’s signature.
Column-based Signature Example
All flavors support column-based signatures.
Each column-based input and output is represented by a type corresponding to one of
MLflow data types and an optional name. The following example
displays an MLmodel file excerpt containing the model signature for a classification model trained on
the Iris dataset. The input has 4 named, numeric columns.
The output is an unnamed integer specifying the predicted class.
signature:
    inputs: '[{"name": "sepal length (cm)", "type": "double"}, {"name": "sepal width (cm)", "type": "double"}, {"name": "petal length (cm)", "type": "double"}, {"name": "petal width (cm)", "type": "double"}]'
    outputs: '[{"type": "integer"}]'
Tensor-based Signature Example
Only DL flavors support tensor-based signatures (i.e., TensorFlow, Keras, PyTorch, ONNX, and Gluon).
Each tensor-based input and output is represented by a dtype corresponding to one of
numpy data types, shape and an optional name.
When specifying the shape, -1 is used for axes that may be variable in size.
The following example displays an MLmodel file excerpt containing the model signature for a
classification model trained on the MNIST dataset.
The input has one named tensor where input sample is an image represented by a 28 × 28 × 1 array
of float32 numbers. The output is an unnamed tensor that has 10 units specifying the
likelihood corresponding to each of the 10 classes. Note that the first dimension of the input
and the output is the batch size and is thus set to -1 to allow for variable batch sizes.
signature:
    inputs: '[{"name": "images", "dtype": "uint8", "shape": [-1, 28, 28, 1]}]'
    outputs: '[{"shape": [-1, 10], "dtype": "float32"}]'
Signature Enforcement
Schema enforcement checks the provided input against the model’s signature
and raises an exception if the input is not compatible. This enforcement is applied in MLflow before
calling the underlying model implementation. Note that this enforcement only applies when using MLflow
model deployment tools or when loading models as python_function. In
particular, it is not applied to models that are loaded in their native format (e.g. by calling
mlflow.sklearn.load_model()).
Name Ordering Enforcement
The input names are checked against the model signature. If there are any missing inputs,
MLflow will raise an exception. Extra inputs that were not declared in the signature will be
ignored. If the input schema in the signature defines input names, input matching is done by name
and the inputs are reordered to match the signature. If the input schema does not have input
names, matching is done by position (i.e. MLflow will only check the number of inputs).
Input Type Enforcement
The input types are checked against the signature.
For models with column-based signatures (i.e DataFrame inputs), MLflow will perform safe type conversions
if necessary. Generally, only conversions that are guaranteed to be lossless are allowed. For
example, int -> long or int -> double conversions are ok, long -> double is not. If the types cannot
be made compatible, MLflow will raise an error.
For models with tensor-based signatures, type checking is strict (i.e an exception will be thrown if
the input type does not match the type specified by the schema).
Handling Integers With Missing Values
Integer data with missing values is typically represented as floats in Python. Therefore, data
types of integer columns in Python can vary depending on the data sample. This type variance can
cause schema enforcement errors at runtime since integer and float are not compatible types. For
example, if your training data did not have any missing values for integer column c, its type will
be integer. However, when you attempt to score a sample of the data that does include a missing
value in column c, its type will be float. If your model signature specified c to have integer type,
MLflow will raise an error since it can not convert float to int. Note that MLflow uses python to
serve models and to deploy models to Spark, so this can affect most model deployments. The best way
to avoid this problem is to declare integer columns as doubles (float64) whenever there can be
missing values.
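A small sketch of the behavior; the column name and values are illustrative:

import pandas as pd
from mlflow.models.signature import ModelSignature
from mlflow.types.schema import Schema, ColSpec

train = pd.DataFrame({"c": [1, 2, 3]})        # no missing values: pandas infers int64
score = pd.DataFrame({"c": [1, None, 3]})     # one missing value: pandas falls back to float64
print(train["c"].dtype, score["c"].dtype)

# Declaring the column as double in the signature avoids the int/float enforcement error.
signature = ModelSignature(inputs=Schema([ColSpec("double", "c")]))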
Handling Date and Timestamp
For datetime values, Python has precision built into the type. For example, datetime values with
day precision have numpy type datetime64[D], while values with nanosecond precision have
type datetime64[ns]. Datetime precision is ignored for column-based model signature but is
enforced for tensor-based signatures.
Handling Ragged Arrays
Ragged arrays can be created in numpy and are produced with a shape of (-1,) and a dytpe of
object. This will be handled by default when using infer_signature, resulting in a
signature containing Tensor('object', (-1,)). A similar signature can be manually created
containing a more detailed representation of a ragged array, for a more expressive signature,
such as Tensor('float64', (-1, -1, -1, 3)). Enforcement will then be done on as much detail
as possible given the signature provided, and will support ragged input arrays as well.
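A short sketch of inferring a signature from a ragged batch; the array contents are arbitrary:

import numpy as np
from mlflow.models.signature import infer_signature

ragged = np.array([np.arange(3), np.arange(5)], dtype=object)  # rows of different lengths
signature = infer_signature(ragged)
print(signature.inputs)  # expected to contain Tensor('object', (-1,)) per the default handling described above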
How To Log Models With Signatures
To include a signature with your model, pass a signature object as an argument to the appropriate log_model call, e.g.
sklearn.log_model(). The model signature object can be created
by hand or inferred from datasets with valid model inputs
(e.g. the training dataset with target column omitted) and valid model outputs (e.g. model
predictions generated on the training dataset).
Column-based Signature Example
The following example demonstrates how to store a model signature for a simple classifier trained
on the Iris dataset:
import pandas as pd
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
import mlflow
import mlflow.sklearn
from mlflow.models.signature import infer_signature

iris = datasets.load_iris()
iris_train = pd.DataFrame(iris.data, columns=iris.feature_names)
clf = RandomForestClassifier(max_depth=7, random_state=0)
clf.fit(iris_train, iris.target)
signature = infer_signature(iris_train, clf.predict(iris_train))
mlflow.sklearn.log_model(clf, "iris_rf", signature=signature)
The same signature can be created explicitly as follows:
from mlflow.models.signature import ModelSignature
from mlflow.types.schema import Schema, ColSpec

input_schema = Schema(
    [
        ColSpec("double", "sepal length (cm)"),
        ColSpec("double", "sepal width (cm)"),
        ColSpec("double", "petal length (cm)"),
        ColSpec("double", "petal width (cm)"),
    ]
)
output_schema = Schema([ColSpec("long")])
signature = ModelSignature(inputs=input_schema, outputs=output_schema)
Tensor-based Signature Example
The following example demonstrates how to store a model signature for a simple classifier trained
on the MNIST dataset:
from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten
from keras.optimizers import SGD
import mlflow
from mlflow.models.signature import infer_signature

(train_X, train_Y), (test_X, test_Y) = mnist.load_data()
trainX = train_X.reshape((train_X.shape[0], 28, 28, 1))
testX = test_X.reshape((test_X.shape[0], 28, 28, 1))
trainY = to_categorical(train_Y)
testY = to_categorical(test_Y)

model = Sequential()
model.add(
    Conv2D(
        32, (3, 3), activation="relu", kernel_initializer="he_uniform", input_shape=(28, 28, 1)
    )
)
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(100, activation="relu", kernel_initializer="he_uniform"))
model.add(Dense(10, activation="softmax"))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(optimizer=opt, loss="categorical_crossentropy", metrics=["accuracy"])

model.fit(trainX, trainY, epochs=10, batch_size=32, validation_data=(testX, testY))

signature = infer_signature(testX, model.predict(testX))
mlflow.tensorflow.log_model(model, "mnist_cnn", signature=signature)
The same signature can be created explicitly as follows:
import numpy as np
from mlflow.models.signature import ModelSignature
from mlflow.types.schema import Schema, TensorSpec

input_schema = Schema([TensorSpec(np.dtype(np.uint8), (-1, 28, 28, 1))])
output_schema = Schema([TensorSpec(np.dtype(np.float32), (-1, 10))])
signature = ModelSignature(inputs=input_schema, outputs=output_schema)
Model Input Example
Similar to model signatures, model inputs can be column-based (i.e DataFrames) or tensor-based
(i.e numpy.ndarrays). A model input example provides an instance of a valid model input.
Input examples are stored with the model as separate artifacts and are referenced in the
MLmodel file.
To include an input example with your model, add it to the appropriate log_model call, e.g.
sklearn.log_model().
How To Log Model With Column-based Example
For models accepting column-based inputs, an example can be a single record or a batch of records. The
sample input can be passed in as a Pandas DataFrame, list or dictionary. The given
example will be converted to a Pandas DataFrame and then serialized to json using the Pandas split-oriented
format. Bytes are base64-encoded. The following example demonstrates how you can log a column-based
input example with your model:
input_example = {
    "sepal length (cm)": 5.1,
    "sepal width (cm)": 3.5,
    "petal length (cm)": 1.4,
    "petal width (cm)": 0.2,
}
mlflow.sklearn.log_model(..., input_example=input_example)
How To Log Model With Tensor-based Example
For models accepting tensor-based inputs, an example must be a batch of inputs. By default, the axis 0
is the batch axis unless specified otherwise in the model signature. The sample input can be passed in as
a numpy ndarray or a dictionary mapping a string to a numpy array. The following example demonstrates how
you can log a tensor-based input example with your model:
# each input has shape (4, 4)
input_example = np.array(
    [
        [[0, 0, 0, 0], [0, 134, 25, 56], [253, 242, 195, 0], [0, 93, 82, 82]],
        [[0, 23, 46, 0], [33, 13, 36, 166], [76, 75, 0, 255], [33, 44, 11, 82]],
    ],
    dtype=np.uint8,
)
mlflow.tensorflow.log_model(..., input_example=input_example)
Model API
You can save and load MLflow Models in multiple ways. First, MLflow includes integrations with
several common libraries. For example, mlflow.sklearn contains
save_model, log_model,
and load_model functions for scikit-learn models. Second,
you can use the mlflow.models.Model class to create and write models. This
class has four key functions:
add_flavor to add a flavor to the model. Each flavor
has a string name and a dictionary of key-value attributes, where the values can be any object
that can be serialized to YAML.
save to save the model to a local directory.
log to log the model as an artifact in the
current run using MLflow Tracking.
load to load a model from a local directory or
from an artifact in a previous run.
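A minimal sketch of add_flavor, save, and load; the flavor name and its attributes are made up for illustration and are not a real MLflow flavor:

import os
from mlflow.models import Model

os.makedirs("my_model", exist_ok=True)
model = Model(artifact_path="my_model")
model.add_flavor("my_flavor", pickled_model="model.pkl", library_version="1.0.0")  # illustrative attributes
model.save("my_model/MLmodel")             # writes the MLmodel YAML file locally
reloaded = Model.load("my_model/MLmodel")
print(reloaded.flavors)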
Built-In Model Flavors
MLflow provides several standard flavors that might be useful in your applications. Specifically,
many of its deployment tools support these flavors, so you can export your own model in one of these
flavors to benefit from all these tools:
Python Function (python_function)
R Function (crate)
H2O (h2o)
Keras (keras)
MLeap (mleap)
PyTorch (pytorch)
Scikit-learn (sklearn)
Spark MLlib (spark)
TensorFlow (tensorflow)
ONNX (onnx)
MXNet Gluon (gluon)
XGBoost (xgboost)
LightGBM (lightgbm)
CatBoost (catboost)
spaCy (spacy)
fastai (fastai)
Statsmodels (statsmodels)
Prophet (prophet)
Pmdarima (pmdarima)
Diviner (diviner)
Python Function (python_function)
The python_function model flavor serves as a default model interface for MLflow Python models.
Any MLflow Python model is expected to be loadable as a python_function model. This enables
other MLflow tools to work with any python model regardless of which persistence module or
framework was used to produce the model. This interoperability is very powerful because it allows
any Python model to be productionized in a variety of environments.
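As a sketch, any such model can be loaded and applied through the pyfunc interface; the run ID, artifact path, and input columns below are placeholders for your own run:

import pandas as pd
import mlflow.pyfunc

model = mlflow.pyfunc.load_model("runs:/<run_id>/my_model")   # placeholder model URI
predictions = model.predict(pd.DataFrame({"feature_1": [1.0], "feature_2": [2.0]}))
print(predictions)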