In addition, the python_function model flavor defines a generic filesystem model format for Python models and provides utilities for saving and loading models
to and from this format. The format is self-contained in the sense that it includes all the
information necessary to load and use a model. Dependencies are stored either directly with the
model or referenced via conda environment. This model format allows other tools to integrate
their models with MLflow.
How To Save Model As Python Function
Most python_function models are saved as part of other model flavors - for example, all mlflow
built-in flavors include the python_function flavor in the exported models. In addition, the
mlflow.pyfunc module defines functions for creating python_function models explicitly.
This module also includes utilities for creating custom Python models, which is a convenient way of
adding custom python code to ML models. For more information, see the custom Python models
documentation.
How To Load And Score Python Function Models
You can load python_function models in Python by calling the mlflow.pyfunc.load_model()
function. Note that the load_model function assumes that all dependencies are already available
and will not check nor install any dependencies (see the model deployment section for tools to
deploy models with automatic dependency management).
Once loaded, you can score the model by calling the predict
method, which has the following signature:
predict(model_input: [pandas.DataFrame, numpy.ndarray, Dict[str, np.ndarray]]) -> [numpy.ndarray | pandas.(Series | DataFrame)]
All PyFunc models will support pandas.DataFrame as an input. In addition to pandas.DataFrame,
DL PyFunc models will also support tensor inputs in the form of numpy.ndarrays. To verify
whether a model flavor supports tensor inputs, please check the flavor’s documentation.
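As a minimal sketch of loading and scoring (the model URI placeholder is hypothetical; substitute the URI of a model you have logged):

import mlflow.pyfunc
import pandas as pd

# Hypothetical URI of a previously logged model
model_uri = "runs:/<run_id>/model"

model = mlflow.pyfunc.load_model(model_uri)

# All PyFunc models accept a pandas DataFrame; a single-column frame is used here
predictions = model.predict(pd.DataFrame({"x": [1.0, 2.0, 3.0]}))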
For models with a column-based schema, inputs are typically provided in the form of a pandas.DataFrame.
If a dictionary mapping column name to values is provided as input for schemas with named columns or if a
python List or a numpy.ndarray is provided as input for schemas with unnamed columns, MLflow will cast the
input to a DataFrame. Schema enforcement and casting with respect to the expected data types is performed against
the DataFrame.
For models with a tensor-based schema, inputs are typically provided in the form of a numpy.ndarray or a
dictionary mapping the tensor name to its np.ndarray value. Schema enforcement will check the provided input’s
shape and type against the shape and type specified in the model’s schema and throw an error if they do not match.
For models where no schema is defined, no changes to the model inputs and outputs are made. MLflow will
propagate any errors raised by the model if the model does not accept the provided input type.
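To illustrate the casting behavior, a brief sketch (assuming a hypothetical model that was logged with a column-based signature naming columns "a" and "b"):

import mlflow.pyfunc

# Hypothetical model logged with a column-based signature over columns "a" and "b"
model = mlflow.pyfunc.load_model("runs:/<run_id>/model")

# A dict mapping column names to values is cast to a pandas.DataFrame before
# schema enforcement checks the column names and data types.
predictions = model.predict({"a": [1.0, 2.0], "b": [3.0, 4.0]})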
The python environment that a PyFunc model is loaded into for prediction or inference may differ from the environment
in which it was trained. In the case of an environment mismatch, a warning message will be printed when calling
mlflow.pyfunc.load_model(). This warning statement will identify the packages that have a version mismatch
between those used during training and the current environment. In order to get the full dependencies of the
environment in which the model was trained, you can call mlflow.pyfunc.get_model_dependencies().
Furthermore, if you want to run model inference in the same environment used in model training, you can call
mlflow.pyfunc.spark_udf() with the env_manager argument set as “conda”. This will generate the environment
from the conda.yaml file, ensuring that the python UDF will execute with the exact package versions that were used
during training.
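For example, a small sketch of both calls (model_uri is a hypothetical placeholder, and spark is assumed to be an active SparkSession):

import mlflow.pyfunc

model_uri = "runs:/<run_id>/model"  # hypothetical model URI

# Retrieve the full set of dependencies captured at training time
deps = mlflow.pyfunc.get_model_dependencies(model_uri)

# Build a Spark UDF that recreates the training environment from conda.yaml
udf = mlflow.pyfunc.spark_udf(spark, model_uri, env_manager="conda")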
R Function (crate)
The crate model flavor defines a generic model format for representing an arbitrary R prediction
function as an MLflow model using the crate function from the
carrier package. The prediction function is expected to take a dataframe as input and
produce a dataframe, a vector or a list with the predictions as output.
This flavor requires R to be installed in order to be used.
crate usage
For a minimal crate model, an example configuration for the predict function is:
library(mlflow)
library(carrier)
# Load iris dataset
data("iris")

# Learn simple linear regression model
model <- lm(Sepal.Width ~ Sepal.Length, data = iris)

# Define a crate model
# call package functions with an explicit :: namespace.
crate_model <- crate(
  function(new_obs) stats::predict(model, data.frame("Sepal.Length" = new_obs)),
  model = model
)

# log the model
model_path <- mlflow_log_model(model = crate_model, artifact_path = "iris_prediction")

# load the logged model and make a prediction
model_uri <- paste0(mlflow_get_run()$artifact_uri, "/iris_prediction")
mlflow_model <- mlflow_load_model(model_uri = model_uri,
                                  flavor = NULL,
                                  client = mlflow_client())
prediction <- mlflow_predict(model = mlflow_model, data = 5)
print(prediction)
H2O (h2o)
The h2o model flavor enables logging and loading H2O models.
The mlflow.h2o module defines save_model() and log_model() methods in Python, and
mlflow_save_model and mlflow_log_model in R, for saving H2O models in MLflow Model
format. These methods produce MLflow Models with the python_function flavor, allowing you to
load them as generic Python functions for inference via mlflow.pyfunc.load_model().
This loaded PyFunc model can be scored with only DataFrame input. When you load
MLflow Models with the h2o flavor using mlflow.pyfunc.load_model(), the
h2o.init() method is called. Therefore, the correct version of h2o(-py) must be installed in the
loader’s environment. You can customize the way the model is initialized with
h2o.init() by modifying the init entry of the persisted H2O model’s YAML configuration file:
model.h2o/h2o.yaml
Finally, you can use the mlflow.h2o.load_model() method to load MLflow Models with the
h2o flavor as H2O model objects.
For more information, see mlflow.h2o.
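As an illustrative sketch (the training file and response column name here are hypothetical; any H2O estimator works the same way):

import h2o
import mlflow
import mlflow.h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

# Hypothetical training frame with a "label" response column
train = h2o.import_file("train.csv")
gbm = H2OGradientBoostingEstimator(ntrees=10)
gbm.train(y="label", training_frame=train)

with mlflow.start_run():
    model_info = mlflow.h2o.log_model(gbm, "model")

# Score through the pyfunc flavor with a pandas DataFrame
loaded = mlflow.pyfunc.load_model(model_info.model_uri)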
Keras (keras)
The keras model flavor enables logging and loading Keras models. In R, you can save or log the
model using mlflow_save_model and mlflow_log_model.
These functions serialize Keras models as HDF5 files using the Keras library’s built-in
model persistence functions. You can use the mlflow_load_model function in R to load MLflow Models
with the keras flavor as Keras Model objects.
Keras pyfunc usage
For a minimal Sequential model, an example configuration for the pyfunc predict() method is:
import mlflow
import numpy as np
import pathlib
import shutil
from tensorflow import keras

mlflow.tensorflow.autolog()

with mlflow.start_run():
    X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
    y = np.array([0, 0, 1, 1, 1, 0])
    model = keras.Sequential(
        [
            keras.Input(shape=(1,)),
            keras.layers.Dense(1, activation="sigmoid"),
        ]
    )
    model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
    model.fit(X, y, batch_size=3, epochs=5, validation_split=0.2)
    model_info = mlflow.tensorflow.log_model(model=model, artifact_path="model")

local_artifact_dir = "/tmp/mlflow/keras_model"
pathlib.Path(local_artifact_dir).mkdir(parents=True, exist_ok=True)

keras_pyfunc = mlflow.pyfunc.load_model(
    model_uri=model_info.model_uri, dst_path=local_artifact_dir
)

data = np.array([-4, 1, 0, 10, -2, 1]).reshape(-1, 1)

predictions = keras_pyfunc.predict(data)

shutil.rmtree(local_artifact_dir)
MLeap (mleap)
The mleap model flavor supports saving Spark models in MLflow format using the
MLeap persistence mechanism. MLeap is an inference-optimized
format and execution engine for Spark models that does not depend on
SparkContext
to evaluate inputs.
Note
You can save Spark models in MLflow format with the mleap flavor by specifying the
sample_input argument of the mlflow.spark.save_model() or
mlflow.spark.log_model() method (recommended). For more details see Spark MLlib.
The mlflow.mleap module also defines save_model() and
log_model() methods for saving MLeap models in MLflow format,
but these methods do not include the python_function flavor in the models they produce.
Similarly, mleap models can be saved in R with mlflow_save_model
and loaded with mlflow_load_model, with mlflow_save_model requiring
sample_input to be specified as a sample Spark DataFrame, which MLeap needs for data schema inference.
A companion module for loading MLflow Models with the MLeap flavor is available in the
mlflow/java package.
For more information, see mlflow.spark, mlflow.mleap, and the
MLeap documentation.
PyTorch (pytorch)
The pytorch model flavor enables logging and loading PyTorch models.
The mlflow.pytorch module defines utilities for saving and loading MLflow Models with the
pytorch flavor. You can use the mlflow.pytorch.save_model() and
mlflow.pytorch.log_model() methods to save PyTorch models in MLflow format; both of these
functions use the torch.save() method to
serialize PyTorch models. Additionally, you can use the mlflow.pytorch.load_model()
method to load MLflow Models with the pytorch flavor as PyTorch model objects.
Models produced by mlflow.pytorch.save_model() and mlflow.pytorch.log_model() contain
the python_function flavor, allowing you to load them as generic Python functions for inference
via mlflow.pyfunc.load_model().
Note
When using the PyTorch flavor, if a GPU is available at prediction time, the default GPU will be used to run
inference. To disable this behavior, users can use the MLFLOW_DEFAULT_PREDICTION_DEVICE environment variable
or pass in a device with the device parameter for the predict function.
Note
In the case of multi-GPU training, make sure to save the model only from the process with global rank 0.
This avoids logging multiple copies of the same model.
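A minimal sketch of that guard with torch.distributed (assuming the process group has already been initialized and net is the trained module):

import torch.distributed as dist
import mlflow

# Log from the global rank 0 process only, so the model is stored exactly once
if dist.get_rank() == 0:
    mlflow.pytorch.log_model(net, "model")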
PyTorch pyfunc usage
For a minimal PyTorch model, an example configuration for the pyfunc predict() method is:
import numpy as np
import mlflow
import torch
from torch import nn

net = nn.Linear(6, 1)
loss_function = nn.L1Loss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

X = torch.randn(6)
y = torch.randn(1)

epochs = 5
for epoch in range(epochs):
    optimizer.zero_grad()
    outputs = net(X)

    loss = loss_function(outputs, y)
    loss.backward()

    optimizer.step()

with mlflow.start_run() as run:
    model_info = mlflow.pytorch.log_model(net, "model")

pytorch_pyfunc = mlflow.pyfunc.load_model(model_uri=model_info.model_uri)

predictions = pytorch_pyfunc.predict(torch.randn(6).numpy())
print(predictions)
For more information, see mlflow.pytorch.
Scikit-learn (sklearn)
The sklearn model flavor provides an easy-to-use interface for saving and loading scikit-learn models.
The mlflow.sklearn module defines save_model() and
log_model() functions that save scikit-learn models in
MLflow format, using either Python’s pickle module (Pickle) or CloudPickle for model serialization.
These functions produce MLflow Models with the python_function flavor, allowing them to be loaded
as generic Python functions for inference via mlflow.pyfunc.load_model().
This loaded PyFunc model can only be scored with DataFrame input. Finally, you can use the
mlflow.sklearn.load_model() method to load MLflow Models with the sklearn flavor as
scikit-learn model objects.
Scikit-learn pyfunc usage
For a Scikit-learn LogisticRegression model, an example configuration for the pyfunc predict() method is:
import mlflow
import numpy as np
from sklearn.linear_model import LogisticRegression

with mlflow.start_run():
    X = np.array([-2, -1, 0, 1, 2, 1]).reshape(-1, 1)
    y = np.array([0, 0, 1, 1, 1, 0])
    lr = LogisticRegression()
    lr.fit(X, y)
    model_info = mlflow.sklearn.log_model(sk_model=lr, artifact_path="model")

sklearn_pyfunc = mlflow.pyfunc.load_model(model_uri=model_info.model_uri)

data = np.array([-4, 1, 0, 10, -2, 1]).reshape(-1, 1)

predictions = sklearn_pyfunc.predict(data)
For more information, see mlflow.sklearn.
Spark MLlib (spark)
The spark model flavor enables exporting Spark MLlib models as MLflow Models.
The mlflow.spark module defines
save_model() to save a Spark MLlib model to a DBFS path.
log_model() to upload a Spark MLlib model to the tracking server.
mlflow.spark.load_model() to load MLflow Models with the spark flavor as Spark MLlib pipelines.
MLflow Models produced by these functions contain the python_function flavor, allowing you to load them
as generic Python functions via mlflow.pyfunc.load_model().
This loaded PyFunc model can only be scored with DataFrame input.
When a model with the spark flavor is loaded as a Python function via
mlflow.pyfunc.load_model(), a new SparkContext
is created for model inference; additionally, the function converts all Pandas DataFrame inputs to
Spark DataFrames before scoring. While this initialization overhead and format translation latency
is not ideal for high-performance use cases, it enables you to easily deploy any
MLlib PipelineModel to any production environment supported by MLflow
(SageMaker, AzureML, etc.).
Spark MLlib pyfunc usage
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors
from pyspark.sql import SparkSession
import mlflow

# Prepare training data from a list of (label, features) tuples.
spark = SparkSession.builder.appName("LogisticRegressionExample").getOrCreate()
training = spark.createDataFrame(
    [
        (1.0, Vectors.dense([0.0, 1.1, 0.1])),
        (0.0, Vectors.dense([2.0, 1.0, -1.0])),
        (0.0, Vectors.dense([2.0, 1.3, 1.0])),
        (1.0, Vectors.dense([0.0, 1.2, -0.5])),
    ],
    ["label", "features"],
)

# Create and fit a LogisticRegression instance
lr = LogisticRegression(maxIter=10, regParam=0.01)
lr_model = lr.fit(training)

# Serialize the Model
with mlflow.start_run():
    model_info = mlflow.spark.log_model(lr_model, "spark-model")

# Load saved model
lr_model_saved = mlflow.pyfunc.load_model(model_info.model_uri)

# Make predictions on test data.
# The DataFrame used in the predict method must be a Pandas DataFrame
test = spark.createDataFrame(
    [
        (1.0, Vectors.dense([-1.0, 1.5, 1.3])),
        (0.0, Vectors.dense([3.0, 2.0, -0.1])),
        (1.0, Vectors.dense([0.0, 2.2, -1.5])),
    ],
    ["label", "features"],
).toPandas()

prediction = lr_model_saved.predict(test)
Note
Note that when the sample_input parameter is provided to log_model() or
save_model(), the Spark model is automatically saved as an mleap flavor
by invoking mlflow.mleap.add_to_model().
For example, the following code block:
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer

training_df = spark.createDataFrame(
    [
        (0, "a b c d e spark", 1.0),
        (1, "b d", 0.0),
        (2, "spark f g h", 1.0),
        (3, "hadoop mapreduce", 0.0),
    ],
    ["id", "text", "label"],
)
tokenizer = Tokenizer(inputCol="text", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.001)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])
model = pipeline.fit(training_df)

mlflow.spark.log_model(model, "spark-model", sample_input=training_df)
results in the following directory structure logged to the MLflow Experiment:
# Directory written with the addition of mlflow.mleap.add_to_model(model, "spark-model", training_df)
# Note the addition of the mleap directory
spark-model/
├── mleap
├── sparkml
├── MLmodel
├── conda.yaml
├── python_env.yaml
└── requirements.txt
For more information, see mlflow.mleap.
For more information, see mlflow.spark.
TensorFlow (tensorflow)
The tensorflow model flavor enables logging TensorFlow models in MLflow format via the
mlflow.tensorflow.save_model() and
mlflow.tensorflow.log_model() methods. These methods also add the
python_function flavor to the MLflow Models that they produce, allowing the models to be
loaded as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model
can be scored with both DataFrame input and numpy array input. Finally, you can use the
mlflow.tensorflow.load_model() method to load MLflow Models with the
tensorflow flavor as TensorFlow objects.
For more information, see mlflow.tensorflow.
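A brief illustrative sketch (a tiny Keras regression model is used here purely as a stand-in for any TensorFlow model):

import mlflow
import numpy as np
import tensorflow as tf

# Build and fit a minimal Keras model
model = tf.keras.Sequential([tf.keras.Input(shape=(1,)), tf.keras.layers.Dense(1)])
model.compile(loss="mse", optimizer="adam")
model.fit(np.array([[1.0], [2.0]]), np.array([[2.0], [4.0]]), epochs=1, verbose=0)

with mlflow.start_run():
    model_info = mlflow.tensorflow.log_model(model=model, artifact_path="model")

# Score through the generic pyfunc interface with a numpy array
loaded = mlflow.pyfunc.load_model(model_info.model_uri)
predictions = loaded.predict(np.array([[3.0]]))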
ONNX (onnx)
The onnx model flavor enables logging of ONNX models in MLflow format via
the mlflow.onnx.save_model() and mlflow.onnx.log_model() methods. These
methods also add the python_function flavor to the MLflow Models that they produce, allowing the
models to be interpreted as generic Python functions for inference via
mlflow.pyfunc.load_model(). This loaded PyFunc model can be scored with
both DataFrame input and numpy array input. The python_function representation of an MLflow
ONNX model uses the ONNX Runtime execution engine for
evaluation. Finally, you can use the mlflow.onnx.load_model() method to load MLflow
Models with the onnx flavor in native ONNX format.
For more information, see mlflow.onnx and http://onnx.ai/.
ONNX pyfunc usage example
For an ONNX model, an example configuration that uses pytorch to train a dummy model,
converts it to ONNX, logs it to mlflow, and makes a prediction using the pyfunc predict() method is:
import numpy as np
import mlflow
import onnx
import torch
from torch import nn

# define a torch model
net = nn.Linear(6, 1)
loss_function = nn.L1Loss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

X = torch.randn(6)
y = torch.randn(1)

# run model training
epochs = 5
for epoch in range(epochs):
    optimizer.zero_grad()
    outputs = net(X)

    loss = loss_function(outputs, y)
    loss.backward()

    optimizer.step()

# convert model to ONNX and load it
torch.onnx.export(net, X, "model.onnx")
onnx_model = onnx.load_model("model.onnx")

# log the model into a mlflow run
with mlflow.start_run():
    model_info = mlflow.onnx.log_model(onnx_model, "model")

# load the logged model and make a prediction
onnx_pyfunc = mlflow.pyfunc.load_model(model_info.model_uri)

predictions = onnx_pyfunc.predict(X.numpy())
print(predictions)
MXNet Gluon (gluon)
The gluon model flavor enables logging of Gluon models in MLflow format via
the mlflow.gluon.save_model() and mlflow.gluon.log_model() methods. These
methods also add the python_function flavor to the MLflow Models that they produce, allowing the
models to be interpreted as generic Python functions for inference via
mlflow.pyfunc.load_model(). This loaded PyFunc model can be scored with
both DataFrame input and numpy array input. You can also use the mlflow.gluon.load_model()
method to load MLflow Models with the gluon flavor in native Gluon format.
For more information, see mlflow.gluon.
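A brief illustrative sketch (assuming an MXNet environment; the gluon flavor expects a HybridBlock that has been hybridized and run forward at least once so its graph can be exported):

import mlflow
import numpy as np
import mxnet as mx
from mxnet.gluon import nn

net = nn.Dense(1)
net.initialize()
net.hybridize()
net(mx.nd.zeros((1, 4)))  # forward pass to shape parameters and cache the graph

with mlflow.start_run():
    model_info = mlflow.gluon.log_model(net, artifact_path="model")

# Score through the pyfunc flavor with a numpy array
loaded = mlflow.pyfunc.load_model(model_info.model_uri)
predictions = loaded.predict(np.zeros((1, 4), dtype=np.float32))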
XGBoost (xgboost)
The xgboost model flavor enables logging of XGBoost models
in MLflow format via the mlflow.xgboost.save_model() and
mlflow.xgboost.log_model() methods in Python, and
mlflow_save_model and mlflow_log_model in R, respectively.
These methods also add the python_function flavor to the MLflow Models that they produce,
allowing the models to be interpreted as generic Python functions for inference via
mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame input.
You can also use the mlflow.xgboost.load_model()
method to load MLflow Models with the xgboost flavor in native XGBoost format.
Note that the xgboost model flavor only supports an instance of xgboost.Booster,
not models that implement the scikit-learn API.
XGBoost pyfunc usage
The example below
Loads the IRIS dataset from scikit-learn
Trains an XGBoost Classifier
Logs the model and params using mlflow
Loads the logged model and makes predictions
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
import mlflow

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data["data"], data["target"], test_size=0.2
)

xgb_classifier = XGBClassifier(
    n_estimators=10,
    max_depth=3,
    learning_rate=1,
    objective="binary:logistic",
    random_state=123,
)

# log fitted model and XGBClassifier parameters
with mlflow.start_run():
    xgb_classifier.fit(X_train, y_train)
    clf_params = xgb_classifier.get_xgb_params()
    mlflow.log_params(clf_params)
    model_info = mlflow.xgboost.log_model(xgb_classifier, "iris-classifier")

# Load saved model and make predictions
xgb_classifier_saved = mlflow.pyfunc.load_model(model_info.model_uri)
y_pred = xgb_classifier_saved.predict(X_test)
For more information, see mlflow.xgboost.
LightGBM (lightgbm)
The lightgbm model flavor enables logging of LightGBM models
in MLflow format via the mlflow.lightgbm.save_model() and
mlflow.lightgbm.log_model() methods.
These methods also add the python_function flavor to the MLflow Models that they produce,
allowing the models to be interpreted as generic Python functions for inference via
mlflow.pyfunc.load_model(). You can also use the mlflow.lightgbm.load_model()
method to load MLflow Models with the lightgbm flavor in native LightGBM format.
Note that the scikit-learn API for LightGBM is now supported. For more information, see mlflow.lightgbm.
LightGBM pyfunc usage
The example below
Loads the IRIS dataset from scikit-learn
Trains a LightGBM LGBMClassifier
Logs the model and feature importances using mlflow
Loads the logged model and makes predictions
from lightgbm import LGBMClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import mlflow

data = load_iris()

# Remove special characters from feature names to be able to use them as keys for mlflow metrics
feature_names = [
    name.replace(" ", "_").replace("(", "").replace(")", "")
    for name in data["feature_names"]
]
X_train, X_test, y_train, y_test = train_test_split(
    data["data"], data["target"], test_size=0.2
)

# create model instance
lgb_classifier = LGBMClassifier(
    n_estimators=10,
    max_depth=3,
    learning_rate=1,
    objective="binary:logistic",
    random_state=123,
)

# Fit and save model and LGBMClassifier feature importances as mlflow metrics
with mlflow.start_run():
    lgb_classifier.fit(X_train, y_train)
    feature_importances = dict(zip(feature_names, lgb_classifier.feature_importances_))
    feature_importance_metrics = {
        f"feature_importance_{feature_name}": imp_value
        for feature_name, imp_value in feature_importances.items()
    }
    mlflow.log_metrics(feature_importance_metrics)
    model_info = mlflow.lightgbm.log_model(lgb_classifier, "iris-classifier")

# Load saved model and make predictions
lgb_classifier_saved = mlflow.pyfunc.load_model(model_info.model_uri)
y_pred = lgb_classifier_saved.predict(X_test)
print(y_pred)
CatBoost (catboost)
The catboost model flavor enables logging of CatBoost models
in MLflow format via the mlflow.catboost.save_model() and
mlflow.catboost.log_model() methods.
These methods also add the python_function flavor to the MLflow Models that they produce,
allowing the models to be interpreted as generic Python functions for inference via
mlflow.pyfunc.load_model(). You can also use the mlflow.catboost.load_model()
method to load MLflow Models with the catboost flavor in native CatBoost format.
For more information, see mlflow.catboost.
CatBoost pyfunc usage
For a CatBoost Classifier model, an example configuration for the pyfunc predict() method is:
import mlflow
from catboost import CatBoostClassifier
from sklearn import datasets

# prepare data
X, y = datasets.load_wine(as_frame=False, return_X_y=True)

# train the model
model = CatBoostClassifier(
    iterations=5,
    loss_function="MultiClass",
    allow_writing_files=False,
)
model.fit(X, y)

# log the model into a mlflow run
with mlflow.start_run():
    model_info = mlflow.catboost.log_model(model, "model")

# load the logged model and make a prediction
catboost_pyfunc = mlflow.pyfunc.load_model(model_uri=model_info.model_uri)
print(catboost_pyfunc.predict(X[:5]))
Spacy (spacy)
The spacy model flavor enables logging of spaCy models in MLflow format via
the mlflow.spacy.save_model() and mlflow.spacy.log_model() methods. Additionally, these
methods add the python_function flavor to the MLflow Models that they produce, allowing the
models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model().
This loaded PyFunc model can only be scored with DataFrame input. You can
also use the mlflow.spacy.load_model() method to load MLflow Models with the
spacy flavor in native spaCy format.
For more information, see mlflow.spacy.
Spacy pyfunc usage
The example below shows how to train a Spacy TextCategorizer model, log the model artifact and metrics to the
mlflow tracking server and then load the saved model to make predictions. For this example, we will be using the
Polarity 2.0 dataset available in the nltk package. This dataset consists of 1000 positive and 1000 negative
short movie reviews.
First we convert the texts and sentiment labels (“pos” or “neg”) from NLTK native format to Spacy’s DocBin format:
import nltk
import pandas as pd
import spacy
from nltk.corpus import movie_reviews
from spacy import Language
from spacy.tokens import DocBin

nltk.download("movie_reviews")


def get_sentences(sentiment_type: str) -> pd.DataFrame:
    """Reconstruct the sentences from the word lists for each review record for a specific ``sentiment_type``
    as a pandas DataFrame with two columns: 'sentence' and 'sentiment'.
    """
    file_ids = movie_reviews.fileids(sentiment_type)
    sent_df = []
    for file_id in file_ids:
        sentence = " ".join(movie_reviews.words(file_id))
        sent_df.append({"sentence": sentence, "sentiment": sentiment_type})
    return pd.DataFrame(sent_df)


def convert(data_df: pd.DataFrame, target_file: str):
    """Convert a DataFrame with 'sentence' and 'sentiment' columns to a
    spacy DocBin object and save it to 'target_file'.
    """
    nlp = spacy.blank("en")
    sentiment_labels = data_df.sentiment.unique()
    spacy_doc = DocBin()

    for _, row in data_df.iterrows():
        sent_tokens = nlp.make_doc(row["sentence"])
        # To train a Spacy TextCategorizer model, the label must be attached to the "cats" dictionary of the "Doc"
        # object, e.g. {"pos": 1.0, "neg": 0.0} for a "pos" label.
        for label in sentiment_labels:
            sent_tokens.cats[label] = 1.0 if label == row["sentiment"] else 0.0
        spacy_doc.add(sent_tokens)

    spacy_doc.to_disk(target_file)


# Build a single DataFrame with both positive and negative reviews, one row per review
review_data = [get_sentences(sentiment_type) for sentiment_type in ("pos", "neg")]
review_data = pd.concat(review_data, axis=0)

# Split the DataFrame into a train and a dev set
train_df = review_data.groupby("sentiment", group_keys=False).apply(
    lambda x: x.sample(frac=0.7, random_state=100)
)
dev_df = review_data.loc[review_data.index.difference(train_df.index), :]

# Save the train and dev data files to the current directory as "corpora.train" and "corpora.dev", respectively
convert(train_df, "corpora.train")
convert(dev_df, "corpora.dev")
To set up the training job, we first need to generate a configuration file as described in the Spacy Documentation
For simplicity, we will only use a TextCategorizer in the pipeline.
python -m spacy init config --pipeline textcat --lang en mlflow-textcat.cfg
Change the default train and dev paths in the config file to the current directory:
[paths]
- train = null
- dev = null
+ train = "."
+ dev = "."
In Spacy, the training loop is defined internally in Spacy’s code. Spacy provides a “logging” extension point where
we can use mlflow. To do this,
We have to define a function to write metrics / model input to mlflow
Register it as a logger in Spacy’s component registry
Change the default console logger in Spacy’s configuration file (mlflow-textcat.cfg)
import sys
from typing import IO, Callable, Tuple, Dict, Any, Optional
import spacy
from spacy import Language
import mlflow


@spacy.registry.loggers("mlflow_logger.v1")
def mlflow_logger():
    """Returns a function, ``setup_logger`` that returns two functions:

    ``log_step`` is called internally by Spacy for every evaluation step. We can log the intermediate train and
    validation scores to the mlflow tracking server here.
    ``finalize`` is called internally by Spacy after training is complete. We can log the model artifact to the
    mlflow tracking server here.
    """

    def setup_logger(
        nlp: Language,
        stdout: IO = sys.stdout,
        stderr: IO = sys.stderr,
    ) -> Tuple[Callable, Callable]:
        def log_step(info: Optional[Dict[str, Any]]):
            if info:
                step = info["step"]
                score = info["score"]
                metrics = {}

                for pipe_name in nlp.pipe_names:
                    loss = info["losses"][pipe_name]
                    metrics[f"{pipe_name}_loss"] = loss
                    metrics[f"{pipe_name}_score"] = score
                mlflow.log_metrics(metrics, step=step)

        def finalize():
            uri = mlflow.spacy.log_model(nlp, "mlflow_textcat_example")
            mlflow.end_run()

        return log_step, finalize

    return setup_logger
Check the spacy-loggers library (https://pypi.org/project/spacy-loggers/) for a more complete implementation.
Point to our mlflow logger in the Spacy configuration file. For this example, we will lower the number of training steps
and eval frequency:
[training.logger]
- @loggers = "spacy.ConsoleLogger.v1"
+ @loggers = "mlflow_logger.v1"

[training]
- max_steps = 20000
- eval_frequency = 100
+ max_steps = 100
+ eval_frequency = 10
Train our model:
from spacy.cli.train import train as spacy_train

spacy_train("mlflow-textcat.cfg")
To make predictions, we load the saved model from the last run:
from mlflow import MlflowClient

# look up the last run info from mlflow
client = MlflowClient()
last_run = client.search_runs(experiment_ids=["0"], max_results=1)[0]

# We need to append the spacy model directory name to the artifact uri
spacy_model = mlflow.pyfunc.load_model(
    f"{last_run.info.artifact_uri}/mlflow_textcat_example"
)
predictions_in = dev_df.loc[:, ["sentence"]]
predictions_out = spacy_model.predict(predictions_in).squeeze().tolist()
predicted_labels = [
    "pos" if row["pos"] > row["neg"] else "neg" for row in predictions_out
]
print(dev_df.assign(predicted_sentiment=predicted_labels))
Fastai (fastai)
The fastai model flavor enables logging of fastai Learner models in MLflow format via
the mlflow.fastai.save_model() and mlflow.fastai.log_model() methods. Additionally, these
methods add the python_function flavor to the MLflow Models that they produce, allowing the
models to be interpreted as generic Python functions for inference via mlflow.pyfunc.load_model(). This loaded PyFunc model can
only be scored with DataFrame input. You can also use the mlflow.fastai.load_model() method to
load MLflow Models with the fastai flavor as fastai Learner models.
The interface for utilizing a fastai model loaded as a pyfunc type for generating predictions uses a
Pandas DataFrame argument.
This example runs the fastai tabular tutorial,
logs the experiments, saves the model in fastai format and loads the model to get predictions
using a fastai data loader:
from fastai.data.external import URLs, untar_data
from fastai.tabular.core import Categorify, FillMissing, Normalize, TabularPandas
from fastai.tabular.data import TabularDataLoaders
from fastai.tabular.learner import tabular_learner
from fastai.data.transforms import RandomSplitter
from fastai.metrics import accuracy
from fastcore.basics import range_of
import pandas as pd
import mlflow
import mlflow.fastai


def print_auto_logged_info(r):
    tags = {k: v for k, v in r.data.tags.items() if not k.startswith("mlflow.")}
    artifacts = [
        f.path for f in mlflow.MlflowClient().list_artifacts(r.info.run_id, "model")
    ]
    print("run_id: {}".format(r.info.run_id))
    print("artifacts: {}".format(artifacts))
    print("params: {}".format(r.data.params))
    print("metrics: {}".format(r.data.metrics))
    print("tags: {}".format(tags))


def main(epochs=5, learning_rate=0.01):
    path = untar_data(URLs.ADULT_SAMPLE)
    path.ls()

    df = pd.read_csv(path / "adult.csv")

    dls = TabularDataLoaders.from_csv(
        path / "adult.csv",
        path=path,
        y_names="salary",
        cat_names=[
            "workclass",
            "education",
            "marital-status",
            "occupation",
            "relationship",
            "race",
        ],
        cont_names=["age", "fnlwgt", "education-num"],
        procs=[Categorify, FillMissing, Normalize],
    )

    splits = RandomSplitter(valid_pct=0.2)(range_of(df))

    to = TabularPandas(
        df,
        procs=[Categorify, FillMissing, Normalize],
        cat_names=[
            "workclass",
            "education",
            "marital-status",
            "occupation",
            "relationship",
            "race",
        ],
        cont_names=["age", "fnlwgt", "education-num"],
        y_names="salary",
        splits=splits,
    )

    dls = to.dataloaders(bs=64)

    model = tabular_learner(dls, metrics=accuracy)

    mlflow.fastai.autolog()

    with mlflow.start_run() as run:
        model.fit(epochs, learning_rate)
        mlflow.fastai.log_model(model, "model")

    print_auto_logged_info(mlflow.get_run(run_id=run.info.run_id))

    model_uri = "runs:/{}/model".format(run.info.run_id)
    loaded_model = mlflow.fastai.load_model(model_uri)

    test_df = df.copy()
    test_df.drop(["salary"], axis=1, inplace=True)
    dl = loaded_model.dls.test_dl(test_df)

    predictions, _ = loaded_model.get_preds(dl=dl)
    px = pd.DataFrame(predictions).astype("float")
    px.head(5)


main()
Output (Pandas DataFrame):

Index | Probability of first class | Probability of second class
0 | 0.545088 | 0.454912
1 | 0.503172 | 0.496828
2 | 0.962663 | 0.037337
3 | 0.206107 | 0.793893
4 | 0.807599 | 0.192401
Alternatively, when using the python_function flavor, get predictions from a DataFrame.
from fastai.data.external import URLs, untar_data
from fastai.tabular.core import Categorify, FillMissing, Normalize, TabularPandas
from fastai.tabular.data import TabularDataLoaders
from fastai.tabular.learner import tabular_learner
from fastai.data.transforms import RandomSplitter
from fastai.metrics import accuracy
from fastcore.basics import range_of
import pandas as pd
import mlflow
import mlflow.fastai

model_uri = ...

path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path / "adult.csv")
test_df = df.copy()
test_df.drop(["salary"], axis=1, inplace=True)

loaded_model = mlflow.pyfunc.load_model(model_uri)

loaded_model.predict(test_df)
Output (Pandas DataFrame):

Index | Probability of first class, Probability of second class
0 | [0.5450878, 0.45491222]
1 | [0.50317234, 0.49682766]
2 | [0.9626626, 0.037337445]
3 | [0.20610662, 0.7938934]
4 | [0.8075987, 0.19240129]
For more information, see mlflow.fastai.
Statsmodels (statsmodels)
The statsmodels model flavor enables logging of Statsmodels models in MLflow format via the
mlflow.statsmodels.save_model() and mlflow.statsmodels.log_model() methods.
These methods also add the python_function flavor to the MLflow Models that they produce,
allowing the models to be interpreted as generic Python functions for inference via
mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame input.
You can also use the mlflow.statsmodels.load_model()
method to load MLflow Models with the statsmodels flavor in native statsmodels format.
As of now, automatic logging is restricted to parameters, metrics and models generated by a call to fit
on a statsmodels model.
For more information, see mlflow.statsmodels.
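A brief illustrative sketch (a toy OLS fit is used here; any fitted statsmodels results object can be logged the same way):

import mlflow
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Fit a simple ordinary least squares model on synthetic data
X = sm.add_constant(np.arange(10, dtype=float))
y = 2.0 * X[:, 1] + 1.0
ols = sm.OLS(y, X).fit()

with mlflow.start_run():
    model_info = mlflow.statsmodels.log_model(ols, "model")

# Score through the pyfunc flavor; the input must be a DataFrame of exogenous values
loaded = mlflow.pyfunc.load_model(model_info.model_uri)
predictions = loaded.predict(pd.DataFrame(X))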
Prophet (prophet)
The prophet model flavor enables logging of Prophet models in MLflow format via the
mlflow.prophet.save_model() and mlflow.prophet.log_model() methods.
These methods also add the python_function flavor to the MLflow Models that they produce,
allowing the models to be interpreted as generic Python functions for inference via
mlflow.pyfunc.load_model(). This loaded PyFunc model can only be scored with DataFrame input.
You can also use the mlflow.prophet.load_model()
method to load MLflow Models with the prophet flavor in native prophet format.
Prophet pyfunc usage
This example uses a time series dataset from Prophet’s GitHub repository, containing log number of daily views to
Peyton Manning’s Wikipedia page for several years. A sample of the dataset is as follows:
ds | y
2007-12-10 | 9.59076113897809
2007-12-11 | 8.51959031601596
2007-12-12 | 8.18367658262066
2007-12-13 | 8.07246736935477
import numpy as np
import pandas as pd
from prophet import Prophet
from prophet.diagnostics import cross_validation, performance_metrics

import mlflow

# starts on 2007-12-10, ends on 2016-01-20
train_df = pd.read_csv(
    "https://raw.githubusercontent.com/facebook/prophet/main/examples/example_wp_log_peyton_manning.csv"
)

# Create a "test" DataFrame with the "ds" column containing 10 days after the end date in train_df
test_dates = pd.date_range(start="2016-01-21", end="2016-01-31", freq="D")
test_df = pd.Series(data=test_dates.values, name="ds").to_frame()

prophet_model = Prophet(changepoint_prior_scale=0.5, uncertainty_samples=7)

with mlflow.start_run():
    prophet_model.fit(train_df)

    # extract and log parameters such as changepoint_prior_scale in the mlflow run
    model_params = {
        name: value for name, value in vars(prophet_model).items() if np.isscalar(value)
    }
    mlflow.log_params(model_params)

    # cross validate with 900 days of data initially, predictions for next 30 days
    # walk forward by 30 days
    cv_results = cross_validation(
        prophet_model, initial="900 days", period="30 days", horizon="30 days"
    )

    # Calculate metrics from cv_results, then average each metric across all backtesting windows and log to mlflow
    cv_metrics = ["mse", "rmse", "mape"]
    metrics_results = performance_metrics(cv_results, metrics=cv_metrics)
    average_metrics = metrics_results.loc[:, cv_metrics].mean(axis=0).to_dict()
    mlflow.log_metrics(average_metrics)

    model_info = mlflow.prophet.log_model(prophet_model, "prophet-model")

# Load saved model
prophet_model_saved = mlflow.pyfunc.load_model(model_info.model_uri)

predictions = prophet_model_saved.predict(test_df)
Output (Pandas DataFrame):

Index | ds | yhat | yhat_upper | yhat_lower
0 | 2016-01-21 | 8.526513 | 8.827397 | 8.328563
1 | 2016-01-22 | 8.541355 | 9.434994 | 8.112758
2 | 2016-01-23 | 8.308332 | 8.633746 | 8.201323
3 | 2016-01-24 | 8.676326 | 9.534593 | 8.020874
4 | 2016-01-25 | 8.983457 | 9.430136 | 8.121798
For more information, see mlflow.prophet.
Pmdarima (pmdarima)
The pmdarima model flavor enables logging of pmdarima models in MLflow
format via the mlflow.pmdarima.save_model() and mlflow.pmdarima.log_model() methods.
These methods also add the python_function flavor to the MLflow Models that they produce,
allowing the model to be interpreted as a generic Python function for inference via
mlflow.pyfunc.load_model().
This loaded PyFunc model can only be scored with a DataFrame input.
You can also use the mlflow.pmdarima.load_model() method to load MLflow Models with the
pmdarima flavor in native pmdarima format.
The interface for utilizing a pmdarima model loaded as a pyfunc type for generating forecast predictions uses
a single-row Pandas DataFrame configuration argument. The following columns in this configuration
Pandas DataFrame are supported:
n_periods (required) - specifies the number of future periods to generate, starting from the last datetime value of the training dataset and utilizing the frequency of the input training series when the model was trained.
(For example, if the training data series elements represent one value per hour, then to forecast 3 days of
future data, set the column n_periods to 72.)
X (optional) - exogenous regressor values (only supported in pmdarima version >= 1.8.0) as a 2D array of values for future time period events. For more information, read the underlying library
explanation.
return_conf_int (optional) - a boolean (Default: False) for whether to return confidence interval values. See the note below.
alpha (optional) - the significance value for calculating confidence intervals. (Default: 0.05)
An example configuration for the pyfunc predict of a pmdarima model is shown below, with a future period
prediction count of 100, confidence interval calculation enabled, no exogenous regressor elements, and a default
alpha of 0.05:
Index | n_periods | return_conf_int
0 | 100 | True
Warning
The Pandas DataFrame passed to a pmdarima pyfunc flavor must only contain 1 row.
Note
When the pmdarima flavor’s pyfunc predict method is supplied a configuration DataFrame with
return_conf_int set to False (or None), the output is a Pandas DataFrame with a single
column: ["yhat"]. When return_conf_int is set to True, the output is a DataFrame with the
columns ["yhat", "yhat_lower", "yhat_upper"], where yhat_lower and yhat_upper are the lower and
upper bounds of the confidence interval around the forecast predictions (yhat).
Example usage of pmdarima artifact loaded as a pyfunc with confidence intervals calculated:
import pmdarima
import mlflow
import pandas as pd

data = pmdarima.datasets.load_airpassengers()

with mlflow.start_run():
    model = pmdarima.auto_arima(data, seasonal=True)
    mlflow.pmdarima.save_model(model, "/tmp/model.pmd")

loaded_pyfunc = mlflow.pyfunc.load_model("/tmp/model.pmd")

prediction_conf = pd.DataFrame(
    [{"n_periods": 4, "return_conf_int": True, "alpha": 0.1}]
)

predictions = loaded_pyfunc.predict(prediction_conf)
Output (Pandas DataFrame):

Index | yhat | yhat_lower | yhat_upper
0 | 467.573731 | 423.30995 | 511.83751
1 | 490.494467 | 416.17449 | 564.81444
2 | 509.138684 | 420.56255 | 597.71117
3 | 492.554714 | 397.30634 | 587.80309
Warning
Signature logging for pmdarima will not function correctly if return_conf_int is set to True from
a non-pyfunc artifact. The output of the native ARIMA.predict() when returning confidence intervals is not
a recognized signature type.
Diviner (diviner)
The diviner model flavor enables logging of diviner models in MLflow format via the
mlflow.diviner.save_model() and mlflow.diviner.log_model() methods. These methods also add the
python_function flavor to the MLflow Models that they produce, allowing the model to be
interpreted as a generic Python function for inference via mlflow.pyfunc.load_model().
This loaded PyFunc model can only be scored with a DataFrame input.
You can also use the mlflow.diviner.load_model() method to load MLflow Models with the
diviner flavor in native diviner format.
Diviner Types
Diviner is a library that provides an orchestration framework for performing time series forecasting on groups of
related series. Forecasting in diviner is accomplished through wrapping popular open source libraries such as
prophet and pmdarima. The diviner
library offers a simplified set of APIs to simultaneously generate distinct time series forecasts for multiple data
groupings using a single input DataFrame and a unified high-level API.
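As a brief illustrative sketch (the df here is hypothetical; diviner's GroupedProphet expects Prophet-style "ds" and "y" column names, so data like the electricity example below would first be renamed):

from diviner import GroupedProphet

# Rename columns to the "ds" (datetime) and "y" (target) names Prophet expects
df = df.rename(columns={"datetime": "ds", "watts": "y"})

# Fit one Prophet model per (country, city) group from a single DataFrame
model = GroupedProphet().fit(df, group_key_columns=("country", "city"))

# Forecast 24 future hourly periods for every group at once
forecasts = model.forecast(horizon=24, frequency="H")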
Metrics and Parameters logging for Diviner
Unlike other flavors that are supported in MLflow, Diviner has the concept of grouped models. As a collection of many
(perhaps thousands) of individual forecasting models, the burden to the tracking server to log individual metrics
and parameters for each of these models is significant. For this reason, metrics and parameters are exposed for
retrieval from Diviner’s APIs as Pandas DataFrames, rather than discrete primitive values.
To illustrate, let us assume we are forecasting hourly electricity consumption from major cities around the world.
A sample of our input data looks like this:
country | city | datetime | watts
US | NewYork | 2022-03-01 00:01:00 | 23568.9
US | NewYork | 2022-03-01 00:02:00 | 22331.7
US | Boston | 2022-03-01 00:01:00 | 14220.1
US | Boston | 2022-03-01 00:02:00 | 14183.4
CA | Toronto | 2022-03-01 00:01:00 | 18562.2
CA | Toronto | 2022-03-01 00:02:00 | 17681.6
MX | MexicoCity | 2022-03-01 00:01:00 | 19946.8
MX | MexicoCity | 2022-03-01 00:02:00 | 19444.0
If we were to fit a model on this data, supplying the grouping keys as:
grouping_keys = ["country", "city"]
We will have a model generated for each of the grouping keys that have been supplied:
[("US", "NewYork"), ("US", "Boston"), ("CA", "Toronto"), ("MX", "MexicoCity")]
With a model constructed for each of these, entering each of their metrics and parameters wouldn’t be an issue for the
MLflow tracking server. What would become a problem, however, is if we modeled each major city on the planet and ran
this forecasting scenario every day. If we were to adhere to the conditions of the World Bank, that would mean just
over 10,000 models as of 2022. After a mere few weeks of running this forecasting every day we would have a very large
metrics table.
To eliminate this issue for large-scale forecasting, the metrics and parameters for diviner are extracted as a
grouping key indexed Pandas DataFrame, as shown below for example (float values truncated for visibility):
grouping_key_columns | country | city | mse | rmse | mae | mape | mdape | smape
('country', 'city') | CA | Toronto | 8276851.6 | 2801.7 | 2417.7 | 0.16 | 0.16 | 0.159
('country', 'city') | MX | MexicoCity | 3548872.4 | 1833.8 | 1584.5 | 0.15 | 0.16 | 0.159
('country', 'city') | US | NewYork | 3167846.4 | 1732.4 | 1498.2 | 0.15 | 0.16 | 0.158
('country', 'city') | US | Boston | 14082666.4 | 3653.2 | 3156.2 | 0.15 | 0.16 | 0.159
There are two recommended means of logging the metrics and parameters from a diviner model:
Writing the DataFrames to local storage and using mlflow.log_artifacts()
import os
import mlflow
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    params = model.extract_model_params()
    metrics = model.cross_validate_and_score(
        horizon="72 hours",
        period="240 hours",
        initial="480 hours",
        parallel="threads",
        rolling_window=0.1,
        monthly=False,
    )
    params.to_csv(f"{tmpdir}/params.csv", index=False, header=True)
    metrics.to_csv(f"{tmpdir}/metrics.csv", index=False, header=True)

    mlflow.log_artifacts(tmpdir, artifact_path="data")
Writing directly as a JSON artifact using mlflow.log_dict()
Note
The parameters extracted from diviner models may require casting (or dropping of columns) if using the
pd.DataFrame.to_dict() approach due to the inability of this method to serialize objects.
import mlflow

params = model.extract_model_params()
metrics = model.cross_validate_and_score(
    horizon="72 hours",
    period="240 hours",
    initial="480 hours",
    parallel="threads",
    rolling_window=0.1,
    monthly=False,
)
params["t_scale"] = params["t_scale"].astype(str)
params["start"] = params["start"].astype(str)
params = params.drop("stan_backend", axis=1)

mlflow.log_dict(params.to_dict(), "params.json")
mlflow.log_dict(metrics.to_dict(), "metrics.json")
Logging of the model artifact is shown in the pyfunc example below.
Diviner pyfunc usage
The MLflow Diviner flavor includes an implementation of the pyfunc interface for Diviner models. To control
prediction behavior, you can specify configuration arguments in the first row of a Pandas DataFrame input.
As this configuration is dependent upon the underlying model type (i.e., the diviner.GroupedProphet.forecast()
method has a different signature than does diviner.GroupedPmdarima.predict()), the Diviner pyfunc implementation
attempts to coerce arguments to the types expected by the underlying model.
Note
Diviner models support both “full group” and “partial group” forecasting. If a column named "groups" is present
in the configuration DataFrame submitted to the pyfunc flavor, the grouping key values in the first row
will be used to generate a subset of forecast predictions. This functionality removes the need to filter a subset
from the full output of all groups forecasts if the results of only a few (or one) groups are needed.
For a GroupedPmdarima model, an example configuration for the pyfunc predict() method is:
import mlflow
import pandas as pd
from pmdarima.arima.auto import AutoARIMA
from diviner import GroupedPmdarima

with mlflow.start_run():
    base_model = AutoARIMA(out_of_sample_size=96, maxiter=200)
    model = GroupedPmdarima(model_template=base_model).fit(
        df=df,
        group_key_columns=["country", "city"],
        y_col="watts",
        datetime_col="datetime",
        silence_warnings=True,
    )
    mlflow.diviner.save_model(diviner_model=model, path="/tmp/diviner_model")

diviner_pyfunc = mlflow.pyfunc.load_model(model_uri="/tmp/diviner_model")

predict_conf = pd.DataFrame(
    {
        "n_periods": 120,
        "groups": [
            ("US", "NewYork"),
            ("CA", "Toronto"),
            ("MX", "MexicoCity"),
        ],  # NB: List of tuples required.
        "predict_col": "wattage_forecast",
        "alpha": 0.1,
        "return_conf_int": True,
        "on_error": "warn",
    },
    index=[0],
)

subset_forecasts = diviner_pyfunc.predict(predict_conf)
Note
There are several instances in which a configuration DataFrame submitted to the pyfunc predict() method
will cause an MlflowException to be raised:
If neither horizon nor n_periods is provided.
The value of n_periods or horizon is not an integer.
If the model is of type GroupedProphet, frequency as a string type must be provided.
If both horizon and n_periods are provided with different values.
Model Evaluation
After building and training your MLflow Model, you can use the mlflow.evaluate() API to
evaluate its performance on one or more datasets of your choosing. mlflow.evaluate()
currently supports evaluation of MLflow Models with the
python_function (pyfunc) model flavor for classification and regression
tasks, computing a variety of task-specific performance metrics, model performance plots, and
model explanations. Evaluation results are logged to MLflow Tracking.
The following example from the MLflow GitHub Repository
uses mlflow.evaluate() to evaluate the performance of a classifier
on the UCI Adult Data Set, logging a
comprehensive collection of MLflow Metrics and Artifacts that provide insight into model performance
and behavior:
import xgboost
import shap
import mlflow
from sklearn.model_selection import train_test_split

# Load the UCI Adult Dataset
X, y = shap.datasets.adult()

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

# Fit an XGBoost binary classifier on the training data split
model = xgboost.XGBClassifier().fit(X_train, y_train)

# Build the Evaluation Dataset from the test set
eval_data = X_test
eval_data["label"] = y_test

with mlflow.start_run() as run:
    # Log the baseline model to MLflow
    mlflow.sklearn.log_model(model, "model")
    model_uri = mlflow.get_artifact_uri("model")

    # Evaluate the logged model
    result = mlflow.evaluate(
        model_uri,
        eval_data,
        targets="label",
        model_type="classifier",
        evaluators=["default"],
    )
Evaluating with Custom Metrics
If the default set of metrics is insufficient, you can supply custom_metrics and custom_artifacts to
mlflow.evaluate() to produce custom metrics and artifacts for the model(s) that you’re evaluating.
The following
short example from the MLflow GitHub Repository
uses
mlflow.evaluate() with a custom metric function to evaluate the performance of a regressor on the
California Housing Dataset.
from sklearn.linear_model import LinearRegression
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
import numpy as np
import mlflow
from mlflow.models import make_metric
import os
import matplotlib.pyplot as plt

# loading the California housing dataset
cali_housing = fetch_california_housing(as_frame=True)

# split the dataset into train and test partitions
X_train, X_test, y_train, y_test = train_test_split(
    cali_housing.data, cali_housing.target, test_size=0.2, random_state=123
)

# train the model
lin_reg = LinearRegression().fit(X_train, y_train)

# creating the evaluation dataframe
eval_data = X_test.copy()
eval_data["target"] = y_test


def squared_diff_plus_one(eval_df, _builtin_metrics):
    """
    This example custom metric function creates a metric based on the ``prediction`` and
    ``target`` columns in ``eval_df``.
    """
    return np.sum(np.abs(eval_df["prediction"] - eval_df["target"] + 1) ** 2)


def sum_on_target_divided_by_two(_eval_df, builtin_metrics):
    """
    This example custom metric function creates a metric derived from existing metrics in
    ``builtin_metrics``.
    """
    return builtin_metrics["sum_on_target"] / 2


def prediction_target_scatter(eval_df, _builtin_metrics, artifacts_dir):
    """
    This example custom artifact generates and saves a scatter plot to ``artifacts_dir`` that
    visualizes the relationship between the predictions and targets for the given model to a
    file as an image artifact.
    """
    plt.scatter(eval_df["prediction"], eval_df["target"])
    plt.xlabel("Targets")
    plt.ylabel("Predictions")
    plt.title("Targets vs. Predictions")
    plot_path = os.path.join(artifacts_dir, "example_scatter_plot.png")
    plt.savefig(plot_path)
    return {"example_scatter_plot_artifact": plot_path}


with mlflow.start_run() as run:
    mlflow.sklearn.log_model(lin_reg, "model")
    model_uri = mlflow.get_artifact_uri("model")
    result = mlflow.evaluate(
        model=model_uri,
        data=eval_data,
        targets="target",
        model_type="regressor",
        evaluators=["default"],
        custom_metrics=[
            make_metric(
                eval_fn=squared_diff_plus_one,
                greater_is_better=False,
            ),
            make_metric(
                eval_fn=sum_on_target_divided_by_two,
                greater_is_better=True,
            ),
        ],
        custom_artifacts=[
            prediction_target_scatter,
        ],
    )

print(f"metrics:\n{result.metrics}")
print(f"artifacts:\n{result.artifacts}")
For a more comprehensive custom metrics usage example, refer to this example from the MLflow GitHub Repository.
Performing Model Validation
You can also use the mlflow.evaluate() API to perform some checks on the metrics
generated during model evaluation to validate the quality of your model. By specifying a
validation_thresholds dictionary mapping metric names to mlflow.models.MetricThreshold
objects, you can specify value thresholds that your model’s evaluation metrics must exceed as well
as absolute and relative gains your model must have in comparison to a specified
baseline_model. If your model fails to clear specified thresholds, mlflow.evaluate()
will throw a ModelValidationFailedException detailing the validation failure.
import xgboost
import shap
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
import mlflow
from mlflow.models import MetricThreshold

# load UCI Adult Data Set; segment it into training and test sets
X, y = shap.datasets.adult()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

# train a candidate XGBoost model
candidate_model = xgboost.XGBClassifier().fit(X_train, y_train)

# train a baseline dummy model
baseline_model = DummyClassifier(strategy="uniform").fit(X_train, y_train)

# construct an evaluation dataset from the test set
eval_data = X_test
eval_data["label"] = y_test

# Define criteria for model to be validated against
thresholds = {
    "accuracy_score": MetricThreshold(
        threshold=0.8,  # accuracy should be >=0.8
        min_absolute_change=0.05,  # accuracy should be at least 0.05 greater than baseline model accuracy
        min_relative_change=0.05,  # accuracy should be at least 5 percent greater than baseline model accuracy
        higher_is_better=True,
    ),
}

with mlflow.start_run() as run:
    candidate_model_uri = mlflow.sklearn.log_model(
        candidate_model, "candidate_model"
    ).model_uri
    baseline_model_uri = mlflow.sklearn.log_model(
        baseline_model, "baseline_model"
    ).model_uri

    mlflow.evaluate(
        candidate_model_uri,
        eval_data,
        targets="label",
        model_type="classifier",
        validation_thresholds=thresholds,
        baseline_model=baseline_model_uri,
    )
Refer to mlflow.models.MetricThreshold to see details on how the thresholds are specified
and checked. For a more comprehensive demonstration on how to use mlflow.evaluate() to perform model validation, refer to
the Model Validation example from the MLflow GitHub Repository.
The logged output within the MLflow UI for the comprehensive example includes the two model artifacts that have
been logged, ‘baseline_model’ and ‘candidate_model’, for comparison purposes in the example.
Note
Limitations (when the default evaluator is used):
Model validation results are not included in the active MLflow run.
No metrics are logged nor artifacts produced for the baseline model in the active MLflow run.
Additional information about model evaluation behaviors and outputs is available in the
mlflow.evaluate() API docs.
Note
Differences in the computation of Area under Curve Precision Recall score (metric name
precision_recall_auc) between multi and binary classifiers:
Multiclass classifier models, when evaluated, utilize the standard scoring metric from sklearn:
sklearn.metrics.roc_auc_score to calculate the area under the precision recall curve. This
algorithm performs a linear interpolation calculation utilizing the trapezoidal rule to estimate
the area under the precision recall curve. It is well-suited for use in evaluating multi-class
classification models to provide a single numeric value of the quality of fit.
Binary classifier models, on the other hand, use the sklearn.metrics.average_precision_score to
avoid the shortcomings of the roc_auc_score implementation when applied to heavily
imbalanced classes in binary classification. Usage of the roc_auc_score for imbalanced
datasets can give a misleading result (optimistically better than the model’s actual ability
to accurately predict the minority class membership).
For additional information on the topic of why different algorithms are employed for this, as
well as links to the papers that informed the implementation of these metrics within the
sklearn.metrics module, refer to
the documentation.
For simplicity purposes, both methodologies’ evaluation metric results (whether for multi-class
or binary classification) are unified in the single metric: precision_recall_auc.
Model Customization
While MLflow’s built-in model persistence utilities are convenient for packaging models from various
popular ML libraries in MLflow Model format, they do not cover every use case. For example, you may
want to use a model from an ML library that is not explicitly supported by MLflow’s built-in
flavors. Alternatively, you may want to package custom inference code and data to create an
MLflow Model. Fortunately, MLflow provides two solutions that can be used to accomplish these
tasks: Custom Python Models and Custom Flavors.
In this section:
Custom Python Models
Example: Creating a custom “add n” model
Example: Saving an XGBoost model in MLflow format
Custom Flavors
Custom Python Models
The mlflow.pyfunc module provides save_model() and
log_model() utilities for creating MLflow Models with the
python_function flavor that contain user-specified code and artifact (file) dependencies.
These artifact dependencies may include serialized models produced by any Python ML library.
Because these custom models contain the python_function flavor, they can be deployed
to any of MLflow’s supported production environments, such as SageMaker, AzureML, or local
REST endpoints.
The following examples demonstrate how you can use the mlflow.pyfunc module to create
custom Python models. For additional information about model customization with MLflow’s
python_function utilities, see the
python_function custom models documentation.
Example: Creating a custom “add n” model
This example defines a class for a custom model that adds a specified numeric value, n, to all
columns of a Pandas DataFrame input. Then, it uses the mlflow.pyfunc APIs to save an
instance of this model with n = 5 in MLflow Model format. Finally, it loads the model in
python_function format and uses it to evaluate a sample input.
import mlflow.pyfunc


# Define the model class
class AddN(mlflow.pyfunc.PythonModel):
    def __init__(self, n):
        self.n = n

    def predict(self, context, model_input):
        return model_input.apply(lambda column: column + self.n)


# Construct and save the model
model_path = "add_n_model"
add5_model = AddN(n=5)
mlflow.pyfunc.save_model(path=model_path, python_model=add5_model)

# Load the model in `python_function` format
loaded_model = mlflow.pyfunc.load_model(model_path)

# Evaluate the model
import pandas as pd

model_input = pd.DataFrame([range(10)])
model_output = loaded_model.predict(model_input)
assert model_output.equals(pd.DataFrame([range(5, 15)]))
Example: Saving an XGBoost model in MLflow format
This example begins by training and saving a gradient boosted tree model using the XGBoost
library. Next, it defines a wrapper class around the XGBoost model that conforms to MLflow’s
python_function inference API. Then, it uses the wrapper class and
the saved XGBoost model to construct an MLflow Model that performs inference using the gradient
boosted tree. Finally, it loads the MLflow Model in python_function format and uses it to
evaluate test data.
# Load training and test datasets
from sys import version_info

import xgboost as xgb
from sklearn import datasets
from sklearn.model_selection import train_test_split

PYTHON_VERSION = "{major}.{minor}.{micro}".format(
    major=version_info.major, minor=version_info.minor, micro=version_info.micro
)
iris = datasets.load_iris()
x = iris.data[:, 2:]
y = iris.target
x_train, x_test, y_train, _ = train_test_split(x, y, test_size=0.2, random_state=42)
dtrain = xgb.DMatrix(x_train, label=y_train)

# Train and save an XGBoost model
xgb_model = xgb.train(params={"max_depth": 10}, dtrain=dtrain, num_boost_round=10)
xgb_model_path = "xgb_model.pth"
xgb_model.save_model(xgb_model_path)

# Create an `artifacts` dictionary that assigns a unique name to the saved XGBoost model file.
# This dictionary will be passed to `mlflow.pyfunc.save_model`, which will copy the model file
# into the new MLflow Model's directory.
artifacts = {"xgb_model": xgb_model_path}

# Define the model class
import mlflow.pyfunc


class XGBWrapper(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        import xgboost as xgb

        self.xgb_model = xgb.Booster()
        self.xgb_model.load_model(context.artifacts["xgb_model"])

    def predict(self, context, model_input):
        input_matrix = xgb.DMatrix(model_input.values)
        return self.xgb_model.predict(input_matrix)


# Create a Conda environment for the new MLflow Model that contains all necessary dependencies.
import cloudpickle

conda_env = {
    "channels": ["defaults"],
    "dependencies": [
        "python={}".format(PYTHON_VERSION),
        "pip",
        {
            "pip": [
                "mlflow",
                "xgboost=={}".format(xgb.__version__),
                "cloudpickle=={}".format(cloudpickle.__version__),
            ],
        },
    ],
    "name": "xgb_env",
}

# Save the MLflow Model
mlflow_pyfunc_model_path = "xgb_mlflow_pyfunc"
mlflow.pyfunc.save_model(
    path=mlflow_pyfunc_model_path,
    python_model=XGBWrapper(),
    artifacts=artifacts,
    conda_env=conda_env,
)

# Load the model in `python_function` format
loaded_model = mlflow.pyfunc.load_model(mlflow_pyfunc_model_path)

# Evaluate the model
import pandas as pd

test_predictions = loaded_model.predict(pd.DataFrame(x_test))
print(test_predictions)
Custom Flavors
You can also create custom MLflow Models by writing a custom flavor.
As discussed in the Model API and Storage Format sections, an MLflow Model
is defined by a directory of files that contains an MLmodel configuration file. This MLmodel
file describes various model attributes, including the flavors in which the model can be
interpreted. The MLmodel file contains an entry for each flavor name; each entry is
a YAML-formatted collection of flavor-specific attributes.
To create a new flavor to support a custom model, you define the set of flavor-specific attributes
to include in the MLmodel configuration file, as well as the code that can interpret the
contents of the model directory and the flavor’s attributes.
For example, consider the mlflow.pytorch module corresponding to MLflow's
pytorch flavor. In the mlflow.pytorch.save_model() method, a PyTorch model is saved
to a specified output directory. Additionally, mlflow.pytorch.save_model() leverages the
mlflow.models.Model.add_flavor() and mlflow.models.Model.save() functions to
produce an MLmodel configuration containing the pytorch flavor. The resulting
configuration has several flavor-specific attributes, such as pytorch_version, which denotes
the version of the torch library that was used to train the model. To interpret model
directories produced by save_model(), the mlflow.pytorch module also
defines a load_model() method. mlflow.pytorch.load_model() reads the MLmodel
configuration from a specified model directory and uses the configuration attributes of the
pytorch flavor to load and return a PyTorch model from its serialized representation.
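As an illustration, here is a minimal sketch of what a custom flavor's save_model() might look like. The library name sillyml, its save() persistence call, and the flavor attributes are hypothetical; only mlflow.models.Model.add_flavor() and mlflow.models.Model.save() are real MLflow APIs.

import os

from mlflow.models import Model

FLAVOR_NAME = "sillyml"  # hypothetical flavor name


def save_model(sillyml_model, path, mlflow_model=None):
    """Save a (hypothetical) sillyml model as an MLflow Model directory."""
    os.makedirs(path, exist_ok=True)
    model_data_subpath = "model.slm"
    # Serialize the model with the library's own persistence mechanism (assumed API)
    sillyml_model.save(os.path.join(path, model_data_subpath))
    # Record flavor-specific attributes in the MLmodel configuration file
    if mlflow_model is None:
        mlflow_model = Model()
    mlflow_model.add_flavor(
        FLAVOR_NAME,
        data=model_data_subpath,
        sillyml_version="0.1.0",  # assumed attribute, mirroring pytorch_version
    )
    mlflow_model.save(os.path.join(path, "MLmodel"))

A corresponding load_model() would read the MLmodel file, look up the flavor's data attribute, and deserialize the model file it points to.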
Built-In Deployment Tools
MLflow provides tools for deploying MLflow models on a local machine and to several production environments.
Not all deployment methods are available for all model flavors.
In this section:
Deploy MLflow models
Deploy a python_function model on Microsoft Azure ML
Deploy a python_function model on Amazon SageMaker
Export a python_function model as an Apache Spark UDF
Deploy MLflow models
MLflow can deploy models locally as local REST API endpoints or to directly score files. In addition,
MLflow can package models as self-contained Docker images with the REST API endpoint. The image can
be used to safely deploy the model to various environments such as Kubernetes.
You can deploy an MLflow model locally or generate a Docker image using the CLI interface to the
mlflow.models module.
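For instance, a quick local-serving sketch (the model path my_model and image name my-model-image are placeholders):

# Serve the model as a local REST API on port 5000
mlflow models serve -m my_model -p 5000

# Package the model and its REST endpoint as a Docker image
mlflow models build-docker -m my_model -n my-model-image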
The REST API defines 4 endpoints:
/ping used for health check
/health (same as /ping)
/version used for getting the mlflow version
/invocations used for scoring
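For example, the non-scoring endpoints can be exercised directly (assuming a local server on the default port 5000):

curl http://127.0.0.1:5000/ping       # health check; returns 200 when the server is up
curl http://127.0.0.1:5000/version    # returns the MLflow version of the server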
The REST API server accepts csv or json input. The input format must be specified in the
Content-Type header. The value of the header must be either application/json or
application/csv.
The csv input must be a valid pandas.DataFrame csv representation. For example,
data = pandas_df.to_csv().
The json input must be a dictionary with exactly one of the following fields that further specify
the type and encoding of the input data:
dataframe_split field with pandas DataFrames in the split orientation. For example,
data = {"dataframe_split": pandas_df.to_dict(orient='split')}.
dataframe_records field with pandas DataFrames in the records orientation. For example,
data = {"dataframe_records": pandas_df.to_dict(orient='records')}. We do not
recommend using this format because it is not guaranteed to preserve column ordering.
instances field with tensor input formatted as described in TF Serving’s API docs where the provided inputs
will be cast to Numpy arrays.
inputs field with tensor input formatted as described in TF Serving’s API docs where the provided inputs
will be cast to Numpy arrays.
Note
Since JSON loses type information, MLflow will cast the JSON input to the input type specified
in the model’s schema if available. If your model is sensitive to input types, it is recommended that
a schema is provided for the model to ensure that type mismatch errors do not occur at inference time.
In particular, DL models are typically strict about input types and will need a model schema in order
to score correctly. For complex data types, see Encoding complex data below.
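A sketch of logging a model with an explicit signature so the scoring server can cast JSON inputs correctly; the model and X_train variables are placeholders:

import mlflow
from mlflow.models.signature import infer_signature

# Infer a signature from training data and model output, then log it with the model
signature = infer_signature(X_train, model.predict(X_train))
mlflow.sklearn.log_model(model, "model", signature=signature)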
Example requests:
# split-oriented DataFrame input
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "dataframe_split": {
        "columns": ["a", "b", "c"],
        "data": [[1, 2, 3], [4, 5, 6]]
    }
}'

# record-oriented DataFrame input (fine for vector rows, loses ordering for JSON records)
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "dataframe_records": [
        {"a": 1, "b": 2, "c": 3},
        {"a": 4, "b": 5, "c": 6}
    ]
}'

# numpy/tensor input using TF serving's "instances" format
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "instances": [
        {"a": "s1", "b": 1, "c": [1, 2, 3]},
        {"a": "s2", "b": 2, "c": [4, 5, 6]},
        {"a": "s3", "b": 3, "c": [7, 8, 9]}
    ]
}'

# numpy/tensor input using TF serving's "inputs" format
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "inputs": {"a": ["s1", "s2", "s3"], "b": [1, 2, 3], "c": [[1, 2, 3], [4, 5, 6], [7, 8, 9]]}
}'
For more information about serializing pandas DataFrames, see
pandas.DataFrame.to_json.
For more information about serializing tensor inputs using the TF serving format, see
TF serving’s request format docs.
Serving with MLServer
Python models can be deployed using Seldon’s MLServer as an alternative inference server.
MLServer is integrated with two leading open source model deployment tools,
Seldon Core
and KServe (formerly known as KFServing), and can
be used to test and deploy models using these frameworks.
This is especially powerful when building docker images since the docker image
built with MLServer can be deployed directly with both of these frameworks.
MLServer exposes the same scoring API through the /invocations endpoint.
In addition, it supports the standard V2 Inference Protocol.
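As a sketch, a V2 Inference Protocol request might look like the following; the port and model name depend on how MLServer was configured and are assumptions here:

curl http://127.0.0.1:8080/v2/models/my-model/infer \
    -H 'Content-Type: application/json' \
    -d '{
        "inputs": [
            {"name": "input-0", "shape": [2, 2], "datatype": "FP32", "data": [1, 2, 3, 4]}
        ]
    }'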
Note
To use MLServer with MLflow, please install mlflow as:
pip install mlflow[extras]
To serve an MLflow model using MLServer, you can use the --enable-mlserver flag,
such as:

mlflow models serve -m my_model --enable-mlserver

Similarly, to build a Docker image with MLServer you can use the
--enable-mlserver flag, such as:

mlflow models build-docker -m my_model --enable-mlserver -n my-model
To read more about the integration between MLflow and MLServer, please check
the end-to-end example in the MLServer documentation or
visit the MLServer docs.
Encoding complex data
Complex data types, such as dates or binary, do not have a native JSON representation. If you include a model
signature, MLflow can automatically decode supported data types from JSON. The following data type conversions
are supported:
binary: data is expected to be base64 encoded; MLflow will automatically base64 decode it.
datetime: data is expected as string according to
ISO 8601 specification.
MLflow will parse this into the appropriate datetime representation on the given platform.
Example requests:
# record-oriented DataFrame input with binary column "b"
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '[
    {"a": 0, "b": "dGVzdCBiaW5hcnkgZGF0YSAw"},
    {"a": 1, "b": "dGVzdCBiaW5hcnkgZGF0YSAx"},
    {"a": 2, "b": "dGVzdCBiaW5hcnkgZGF0YSAy"}
]'

# record-oriented DataFrame input with datetime column "b"
curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '[
    {"a": 0, "b": "2020-01-01T00:00:00Z"},
    {"a": 1, "b": "2020-02-01T12:34:56Z"},
    {"a": 2, "b": "2021-03-01T00:00:00Z"}
]'
Command Line Interface
MLflow also has a CLI that supports the following commands:
serve deploys the model as a local REST API server.
build-docker packages a REST API endpoint serving the
model as a Docker image.
predict uses the model to generate a prediction for a local
CSV or JSON file. Note that this method only supports DataFrame input.
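For example, a minimal sketch of scoring a local CSV file (the model path and file names are placeholders):

mlflow models predict -m my_model -i input.csv -t csv -o predictions.json

Here -i is the input path, -t the content type (csv or json), and -o the file where predictions are written.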
For more info, see:
mlflow models --help
mlflow models serve --help
mlflow models predict --help
mlflow models build-docker --help
Environment Management Tools
MLflow currently supports the following environment management tools to restore model environments:
Use the local environment. No extra tools are required.
Create environments using virtualenv and pyenv (for python version management). Virtualenv and
pyenv (for Linux and macOS) or pyenv-win (for Windows) must be installed for this mode of environment reconstruction.
virtualenv installation instructions
pyenv installation instructions
pyenv-win installation instructions
Create environments using conda. Conda must be installed for this mode of environment reconstruction.
Warning
By using conda, you’re responsible for adhering to Anaconda’s terms of service.
conda installation instructions
The mlflow models CLI commands provide an optional --env-manager argument that selects a specific environment management configuration to be used, as shown below:
# Use virtualenv
mlflow models predict ... --env-manager=virtualenv

# Use conda
mlflow models serve ... --env-manager=conda

Deploy a python_function model on Microsoft Azure ML
The MLflow plugin azureml-mlflow can deploy models to Azure ML, either to Azure Kubernetes Service (AKS) or Azure Container Instances (ACI) for real-time serving.
The resulting deployment accepts the following data formats as input:
JSON-serialized pandas DataFrames in the split orientation. For example, data = pandas_df.to_json(orient='split'). This format is specified using a Content-Type request header value of application/json.
Warning
The TensorSpec input format is not fully supported for deployments on Azure Machine Learning at the moment. Be aware that many autolog() implementations may use TensorSpec for models’ signatures when logging models, and hence those deployments will fail in Azure ML.
Deployments can be generated using either the Python API or the MLflow CLI. In both cases, a JSON configuration file can be provided with the details of the deployment you want to achieve. If none is provided, a default deployment is done using Azure Container Instances (ACI) and a minimal configuration. The full specification of this configuration file can be checked at Deployment configuration schema. You will also need the Azure ML MLflow Tracking URI of the particular Azure ML Workspace where you want to deploy your model. You can obtain this URI in several ways:
Through the Azure ML Studio:
Navigate to Azure ML Studio and select the workspace you are working on.
Click on the name of the workspace at the upper right corner of the page.
Click “View all properties in Azure Portal” on the pane popup.
Copy the MLflow tracking URI value from the properties section.
Programmatically, using Azure ML SDK with the method Workspace.get_mlflow_tracking_uri(). If you are running inside Azure ML Compute, for instance a Compute Instance, you can also get this value from the environment variable os.environ["MLFLOW_TRACKING_URI"].
Manually, for a given Subscription ID, Resource Group and Azure ML Workspace, the URI is as follows: azureml://eastus.api.azureml.ms/mlflow/v1.0/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.MachineLearningServices/workspaces/<WORKSPACE_NAME>
Configuration example for ACI deployment
"computeType"
"aci"
"containerResourceRequirements"
"cpu"
"memoryInGB"
},
"location"
"eastus2"
If containerResourceRequirements is not indicated, a deployment with minimal compute configuration is applied (cpu: 0.1 and memory: 0.5).
If location is not indicated, it defaults to the location of the workspace.
Configuration example for an AKS deployment
"computeType"
"aks"
"computeTargetName"
"aks-mlflow"
In the above example, aks-mlflow is the name of an Azure Kubernetes Cluster registered/created in Azure Machine Learning.
The following examples show how to create a deployment in ACI. Please ensure you have azureml-mlflow installed before continuing.
Example: Workflow using the Python API
import json
from mlflow.deployments import get_deploy_client

# Create the deployment configuration.
# If no deployment configuration is provided, then the deployment happens on ACI.
deploy_config = {"computeType": "aci"}

# Write the deployment configuration into a file.
deployment_config_path = "deployment_config.json"
with open(deployment_config_path, "w") as outfile:
    outfile.write(json.dumps(deploy_config))

# Set the tracking uri in the deployment client.
client = get_deploy_client("<azureml-mlflow-tracking-url>")

# MLflow requires the deployment configuration to be passed as a dictionary.
config = {"deploy-config-file": deployment_config_path}
model_name = "mymodel"
model_version = 1

# define the model path and the name is the service name
# if model is not registered, it gets registered automatically and a name is autogenerated using the "name" parameter below
webservice = client.create_deployment(
    model_uri=f"models:/{model_name}/{model_version}",
    config=config,
    name="mymodel-aci-deployment",
)

# After the model deployment completes, requests can be posted via HTTP to the new ACI
# webservice's scoring URI.
print("Scoring URI is: %s" % webservice.scoring_uri)

# The following example posts a sample input from the wine dataset
# used in the MLflow ElasticNet example:
# https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine
import requests

# `sample_input` is a JSON-serialized pandas DataFrame with the `split` orientation
sample_input = {
    "columns": [
        "alcohol",
        "chlorides",
        "citric acid",
        "density",
        "fixed acidity",
        "free sulfur dioxide",
        "pH",
        "residual sugar",
        "sulphates",
        "total sulfur dioxide",
        "volatile acidity",
    ],
    "data": [
        [8.8, 0.045, 0.36, 1.001, 7, 45, 3, 20.7, 0.45, 170, 0.27]
    ],
}
response = requests.post(
    url=webservice.scoring_uri,
    data=json.dumps(sample_input),
    headers={"Content-type": "application/json"},
)
response_json = json.loads(response.text)
print(response_json)
Example: Workflow using the MLflow CLI
echo "{ computeType: aci }" > deployment_config.json
mlflow deployments create --name <deployment-name> -m models:/<model-name>/<model-version> -t <azureml-mlflow-tracking-url> --deploy-config-file deployment_config.json

# After the deployment completes, requests can be posted via HTTP to the new ACI
# webservice's scoring URI.
scoring_uri=$(az ml service show --name <deployment-name> -v | jq -r ".scoringUri")

# The following example posts a sample input from the wine dataset
# used in the MLflow ElasticNet example:
# https://github.com/mlflow/mlflow/tree/master/examples/sklearn_elasticnet_wine

# `sample_input` is a JSON-serialized pandas DataFrame with the `split` orientation
sample_input='
{
    "columns": [
        "alcohol",
        "chlorides",
        "citric acid",
        "density",
        "fixed acidity",
        "free sulfur dioxide",
        "pH",
        "residual sugar",
        "sulphates",
        "total sulfur dioxide",
        "volatile acidity"
    ],
    "data": [
        [8.8, 0.045, 0.36, 1.001, 7, 45, 3, 20.7, 0.45, 170, 0.27]
    ]
}'
echo $sample_input | curl -s -X POST $scoring_uri \
    -H 'Cache-Control: no-cache' \
    -H 'Content-Type: application/json' \
    -d @-
You can also test your deployments locally first using the option run-local:
mlflow deployments run-local --name <deployment-name> -m models:/<model-name>/<model-version> -t <azureml-mlflow-tracking-url>
For more info, see:
mlflow deployments help -t azureml
Deploy a python_function model on Amazon SageMaker
The mlflow.deployments and mlflow.sagemaker modules can deploy
python_function models locally in a Docker container with a SageMaker-compatible environment and
remotely on SageMaker. To deploy remotely to SageMaker you need to set up your environment and user
accounts. To export a custom model to SageMaker, you need an MLflow-compatible Docker image to be
available on Amazon ECR. MLflow provides a default Docker image definition; however, it is up to you
to build the image and upload it to ECR. MLflow includes the utility function
build_and_push_container to perform this step. Once built and uploaded, you can use the MLflow
container for all MLflow Models. Model webservers deployed using the mlflow.deployments
module accept the following data formats as input, depending on the deployment flavor:
python_function: For this deployment flavor, the endpoint accepts the same formats described
in the local model deployment documentation.
mleap: For this deployment flavor, the endpoint accepts only
JSON-serialized pandas DataFrames in the split orientation. For example,
data = pandas_df.to_json(orient='split'). This format is specified using a Content-Type
request header value of application/json.
Commands
mlflow deployments run-local -t sagemaker deploys the
model locally in a Docker container. The image and the environment should be identical to how the
model would be run remotely and it is therefore useful for testing the model prior to deployment.
mlflow sagemaker build-and-push-container
builds an MLflow Docker image and uploads it to ECR. The caller must have the correct permissions
set up. The image is built locally and requires Docker to be present on the machine that performs
this step.
mlflow deployments create -t sagemaker
deploys the model on Amazon SageMaker. MLflow uploads the Python Function model into S3 and starts
an Amazon SageMaker endpoint serving the model.
Example workflow using the MLflow CLI
mlflow sagemaker build-and-push-container    # build the container (only needs to be called once)
mlflow deployments run-local -t sagemaker --name <deployment-name> -m <path-to-model>    # test the model locally
mlflow deployments create -t sagemaker --name <deployment-name> -m <path-to-model>    # deploy the model remotely
For more info, see:
mlflow sagemaker --help
mlflow sagemaker build-and-push-container --help
mlflow deployments run-local --help
mlflow deployments help -t sagemaker
Export a python_function model as an Apache Spark UDF
You can output a python_function model as an Apache Spark UDF, which can be uploaded to a
Spark cluster and used to score the model.
Example
import mlflow.pyfunc
from pyspark.sql.functions import struct
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
pyfunc_udf = mlflow.pyfunc.spark_udf(spark, "<path-to-model>")
df = spark_df.withColumn("prediction", pyfunc_udf(struct([...])))
If a model contains a signature, the UDF can be called without specifying column name arguments.
In this case, the UDF will be called with column names from signature, so the evaluation
dataframe’s column names must match the model signature’s column names.
Example
import mlflow.pyfunc
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
pyfunc_udf = mlflow.pyfunc.spark_udf(spark, "<path-to-model-with-signature>")
df = spark_df.withColumn("prediction", pyfunc_udf())
If a model contains a signature with tensor spec inputs,
you will need to pass a column of array type as a corresponding UDF argument.
The values in this column must be comprised of one-dimensional arrays. The
UDF will reshape the array values to the required shape with ‘C’ order
(i.e. read / write the elements using C-like index order) and cast the values
as the required tensor spec type. For example, assuming a model
requires input ‘a’ of shape (-1, 2, 3) and input ‘b’ of shape (-1, 4, 5). In order to
perform inference on this data, we need to prepare a Spark DataFrame with column ‘a’
containing arrays of length 6 and column ‘b’ containing arrays of length 20. We can then
invoke the UDF as in the following example code:
Example
import mlflow.pyfunc
from pyspark.sql.functions import struct
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Assuming the model requires input 'a' of shape (-1, 2, 3) and input 'b' of shape (-1, 4, 5)
model_path = "<path-to-model-requiring-multidimensional-inputs>"
pyfunc_udf = mlflow.pyfunc.spark_udf(spark, model_path)

# The `spark_df` has column 'a' containing arrays of length 6 and
# column 'b' containing arrays of length 20
df = spark_df.withColumn("prediction", pyfunc_udf(struct("a", "b")))
The resulting UDF is based on Spark’s Pandas UDF and is currently limited to producing either a single
value, an array of values, or a struct containing multiple field values
of the same type per observation. By default, we return the first
numeric column as a double. You can control what result is returned by supplying result_type
argument. The following values are supported:
'int' or IntegerType: The leftmost integer that can fit in
int32 result is returned or an exception is raised if there are none.
'long' or LongType: The leftmost long integer that can fit in int64
result is returned or an exception is raised if there are none.
ArrayType (IntegerType | LongType): Return all integer columns that can fit
into the requested size.
'float' or FloatType: The leftmost numeric result cast to
float32 is returned or an exception is raised if there are no numeric columns.
'double' or DoubleType: The leftmost numeric result cast to
double is returned or an exception is raised if there are no numeric columns.
ArrayType ( FloatType | DoubleType ): Return all numeric columns cast to the
requested type. An exception is raised if there are no numeric columns.
'string' or StringType: Result is the leftmost column cast as string.
ArrayType ( StringType ): Return all columns cast as string.
'bool' or 'boolean' or BooleanType: The leftmost column cast to bool
is returned or an exception is raised if the values cannot be coerced.
'field1 FIELD1_TYPE, field2 FIELD2_TYPE, ...': A struct type containing
multiple fields separated by comma, each field type must be one of types
listed above.
Example
import mlflow.pyfunc
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Suppose the PyFunc model `predict` method returns a dict like:
# `{'prediction': 1-dim_array, 'probability': 2-dim_array}`
# You can supply result_type to be a struct type containing
# 2 fields 'prediction' and 'probability' like following.
pyfunc_udf = mlflow.pyfunc.spark_udf(
    spark, "<path-to-model>", result_type="prediction float, probability: array<float>"
)
df = spark_df.withColumn("prediction", pyfunc_udf())
Example
import mlflow.pyfunc
from pyspark.sql.types import ArrayType, FloatType
from pyspark.sql.functions import struct
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
pyfunc_udf = mlflow.pyfunc.spark_udf(spark, "path/to/model", result_type=ArrayType(FloatType()))

# The prediction column will contain all the numeric columns returned by the model as floats
df = spark_df.withColumn("prediction", pyfunc_udf(struct("name", "age")))
If you want to use conda to restore the python environment that was used to train the model,
set the env_manager argument when calling mlflow.pyfunc.spark_udf().
Example
import mlflow.pyfunc
from pyspark.sql.types import ArrayType, FloatType
from pyspark.sql.functions import struct
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
pyfunc_udf = mlflow.pyfunc.spark_udf(
    spark,
    "path/to/model",
    result_type=ArrayType(FloatType()),
    env_manager="conda",  # Use conda to restore the environment used in training
)
df = spark_df.withColumn("prediction", pyfunc_udf(struct("name", "age")))
Deployment to Custom Targets
In addition to the built-in deployment tools, MLflow provides a pluggable
mlflow.deployments Python API and
mlflow deployments CLI for deploying
models to custom targets and environments. To deploy to a custom target, you must first install an
appropriate third-party Python plugin. See the list of known community-maintained plugins
here.
Commands
The mlflow deployments CLI contains the following commands, which can also be invoked programmatically
using the mlflow.deployments Python API (a sketch of the programmatic equivalents follows the list):
Create: Deploy an MLflow model to a specified custom target
Delete: Delete a deployment
Update: Update an existing deployment, for example to
deploy a new model version or change the deployment’s configuration (e.g. increase replica count)
List: List IDs of all deployments
Get: Print a detailed description of a particular deployment
Run Local: Deploy the model locally for testing
Help: Show the help string for the specified target
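A minimal sketch of the programmatic equivalents, assuming a plugin for a hypothetical target "mytarget" is installed and a model named mymodel is registered:

from mlflow.deployments import get_deploy_client

client = get_deploy_client("mytarget")

# Create, inspect, update, and delete a deployment
client.create_deployment(name="my-deployment", model_uri="models:/mymodel/1")
print(client.list_deployments())
print(client.get_deployment(name="my-deployment"))
client.update_deployment(name="my-deployment", model_uri="models:/mymodel/2")
client.delete_deployment(name="my-deployment")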
For more info, see:
mlflow deployments --help
mlflow deployments create --help
mlflow deployments delete --help
mlflow deployments update --help
mlflow deployments list --help
mlflow deployments get --help
mlflow deployments run-local --help
mlflow deployments help --help
Community Model Flavors
Other useful MLflow flavors are developed and maintained by the
MLflow community, enabling you to use MLflow Models with an
even broader ecosystem of machine learning libraries. For more information,
check out the description of each community-developed flavor below.
MLflow VizMod
BigML (bigmlflow)
Sktime
MLflow VizMod
The mlflow-vizmod project allows data scientists
to be more productive with their visualizations. We treat visualizations as models - just like ML
models - thus being able to use the same infrastructure as MLflow to track, create projects,
register, and deploy visualizations.
Installation:
pip install mlflow-vizmod
Example:
from sklearn.datasets import load_iris
import altair as alt
import mlflow_vismod

df_iris = load_iris(as_frame=True)

viz_iris = (
    alt.Chart(df_iris)
    .mark_circle(size=60)
    .encode(x="x", y="y", color="z:N")
    .properties(height=375, width=575)
    .interactive()
)

mlflow_vismod.log_model(
    model=viz_iris,
    artifact_path="viz",
    style="vegalite",
    input_example=df_iris.head(5),
)
BigML (bigmlflow)
The bigmlflow library implements the bigml model flavor. It enables using
BigML supervised models and offers the
save_model(), log_model() and load_model() methods.
Installing bigmlflow
BigMLFlow can be installed from PyPI as follows:
pip install bigmlflow
BigMLFlow usage
The bigmlflow module defines the flavor that implements the
save_model() and log_model() methods. They can be used
to save BigML models and their related information in MLflow Model format.
import json

import mlflow
import bigmlflow

MODEL_FILE = "logistic_regression.json"
with mlflow.start_run():
    with open(MODEL_FILE) as handler:
        model = json.load(handler)
        bigmlflow.log_model(
            model, artifact_path="model", registered_model_name="my_model"
        )
These methods also add the python_function flavor to the MLflow Models
that they produce, allowing the models to be interpreted as generic Python
functions for inference via mlflow.pyfunc.load_model().
This loaded PyFunc model can only be scored with DataFrame inputs.
# saving the model
save_model(model, path=model_path)

# retrieving model
pyfunc_model = pyfunc.load_model(model_path)
pyfunc_predictions = pyfunc_model.predict(dataframe)
You can also use the bigmlflow.load_model() method to load MLflow Models
with the bigmlflow model flavor as a BigML
SupervisedModel.
For more information, see the
BigMLFlow documentation
and BigML’s blog.
Sktime
The sktime custom model flavor enables logging of sktime models in MLflow
format via the save_model() and log_model() interfaces. These methods also add the
python_function flavor to the MLflow Models that they produce, allowing the models to be
interpreted as generic Python functions for inference via mlflow.pyfunc.load_model().
This loaded PyFunc model can only be scored with a DataFrame input.
You can also use the load_model() interface to load MLflow Models with the
sktime model flavor in native sktime format.
Installing Sktime
Install sktime with mlflow dependency:
pip install sktime[mlflow]
Usage example
Refer to the sktime mlflow documentation for details on the interface for utilizing sktime models loaded as a pyfunc type and an example notebook for extended code usage examples.
import pandas as pd
from sktime.datasets import load_airline
from sktime.forecasting.arima import AutoARIMA
from sktime.utils import mlflow_sktime

airline = load_airline()
model_path = "model"
auto_arima_model = AutoARIMA(sp=12, max_p=2, max_q=2, suppress_warnings=True).fit(
    airline, fh=[1, 2, 3]
)

mlflow_sktime.save_model(sktime_model=auto_arima_model, path=model_path)
loaded_model = mlflow_sktime.load_model(model_uri=model_path)
loaded_pyfunc = mlflow_sktime.pyfunc.load_model(model_uri=model_path)

print(loaded_model.predict())
print(loaded_pyfunc.predict(pd.DataFrame()))
MLflow Model Registry
The MLflow Model Registry component is a centralized model store, set of APIs, and UI, to
collaboratively manage the full lifecycle of an MLflow Model. It provides model lineage (which
MLflow experiment and run produced the model), model versioning, stage transitions (for example from
staging to production), and annotations.
Table of Contents
Concepts
Model Registry Workflows
UI Workflow
Registering a Model
Using the Model Registry
API Workflow
Adding an MLflow Model to the Model Registry
Fetching an MLflow Model from the Model Registry
Serving an MLflow Model from Model Registry
Adding or Updating MLflow Model Descriptions
Renaming an MLflow Model
Transitioning an MLflow Model’s Stage
Listing and Searching MLflow Models
Archiving an MLflow Model
Deleting MLflow Models
Registering a Saved Model
Registering an Unsupported Machine Learning Model
Concepts
The Model Registry introduces a few concepts that describe and facilitate the full lifecycle of an MLflow Model.
An MLflow Model is created from an experiment or run that is logged with one of the model flavor’s mlflow.<model_flavor>.log_model() methods. Once logged, this model can then be registered with the Model Registry.
An MLflow Model can be registered with the Model Registry. A registered model has a unique name, contains versions, associated transitional stages, model lineage, and other metadata.
Each registered model can have one or many versions. When a new model is added to the Model Registry, it is added as version 1. Each new model registered to the same model name increments the version number.
Each distinct model version can be assigned one stage at any given time. MLflow provides predefined stages for common use-cases such as Staging, Production or Archived. You can transition a model version from one stage to another stage.
You can annotate the top-level model and each version individually using Markdown, including description and any relevant information useful for the team such as algorithm descriptions, dataset employed or methodology.
Model Registry Workflows
If running your own MLflow server, you must use a database-backed backend store in order to access
the model registry via the UI or API. See here for more information.
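For example, a sketch of starting such a server locally (the database path, artifact root, and port are placeholders):

mlflow server \
    --backend-store-uri sqlite:///mlflow.db \
    --default-artifact-root ./mlruns \
    --host 0.0.0.0 \
    --port 5000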
Before you can add a model to the Model Registry, you must log it using the log_model methods
of the corresponding model flavors. Once a model has been logged, you can add, modify, update, transition,
or delete models in the Model Registry through the UI or the API.
UI Workflow
Registering a Model
From the MLflow Runs detail page, select a logged MLflow Model in the Artifacts section.
Click the Register Model button.
In the Model Name field, if you are adding a new model, specify a unique name to identify the model. If you are registering a new version to an existing model, pick the existing model name from the dropdown.
Using the Model Registry
Navigate to the Registered Models page and view the model properties.
Go to the Artifacts section of the run detail page, click the model, and then click the model version at the top right to view the version you just created.
Each model has an overview page that shows the active versions.
Click a version to navigate to the version detail page.
On the version detail page you can see model version details and the current stage of the model
version. Click the Stage drop-down at the top right, to transition the model
version to one of the other valid stages.
API Workflow
An alternative way to interact with Model Registry is using the MLflow model flavor or MLflow Client Tracking API interface.
In particular, you can register a model during an MLflow experiment run or after all your experiment runs.
Adding an MLflow Model to the Model Registry
There are three programmatic ways to add a model to the registry. First, you can use the mlflow.<model_flavor>.log_model() method. For example, in your code:
from random import random, randint
from sklearn.ensemble import RandomForestRegressor

import mlflow
import mlflow.sklearn

with mlflow.start_run(run_name="YOUR_RUN_NAME") as run:
    params = {"n_estimators": 5, "random_state": 42}
    sk_learn_rfr = RandomForestRegressor(**params)

    # Log parameters and metrics using the MLflow APIs
    mlflow.log_params(params)
    mlflow.log_param("param_1", randint(0, 100))
    mlflow.log_metrics({"metric_1": random(), "metric_2": random()})

    # Log the sklearn model and register as version 1
    mlflow.sklearn.log_model(
        sk_model=sk_learn_rfr,
        artifact_path="sklearn-model",
        registered_model_name="sk-learn-random-forest-reg-model",
    )
In the above code snippet, if a registered model with the name doesn’t exist, the method registers a new model and creates Version 1.
If a registered model with the name exists, the method creates a new model version.
The second way is to use the mlflow.register_model() method, after all your experiment runs complete and when you have decided which model is most suitable to add to the registry.
For this method, you will need the run_id as part of the runs:URI argument.
result = mlflow.register_model(
    "runs:/d16076a3ec534311817565e6527539c0/sklearn-model", "sk-learn-random-forest-reg"
)
If a registered model with the name doesn’t exist, the method registers a new model, creates Version 1, and returns a ModelVersion MLflow object.
If a registered model with the name exists, the method creates a new model version and returns the version object.
And finally, you can use the create_registered_model() to create a new registered model. If the model name exists,
this method will throw an MlflowException because creating a new registered model requires a unique name.
from mlflow import MlflowClient

client = MlflowClient()
client.create_registered_model("sk-learn-random-forest-reg-model")
While the method above creates an empty registered model with no version associated, the method below creates a new version of the model.
client = MlflowClient()
result = client.create_model_version(
    name="sk-learn-random-forest-reg-model",
    source="mlruns/0/d16076a3ec534311817565e6527539c0/artifacts/sklearn-model",
    run_id="d16076a3ec534311817565e6527539c0",
)
Fetching an MLflow Model from the Model Registry
After you have registered an MLflow model, you can fetch that model using mlflow.<model_flavor>.load_model(), or more generally, load_model().
Fetch a specific model version
To fetch a specific model version, just supply that version number as part of the model URI.
import mlflow.pyfunc

model_name = "sk-learn-random-forest-reg-model"
model_version = 1

model = mlflow.pyfunc.load_model(model_uri=f"models:/{model_name}/{model_version}")

model.predict(data)

Fetch the latest model version in a specific stage
To fetch a model version by stage, simply provide the model stage as part of the model URI, and it will fetch the most recent version of the model in that stage.
import mlflow.pyfunc

model_name = "sk-learn-random-forest-reg-model"
stage = "Staging"

model = mlflow.pyfunc.load_model(model_uri=f"models:/{model_name}/{stage}")

model.predict(data)
Serving an MLflow Model from Model Registry
After you have registered an MLflow model, you can serve the model as a service on your host.
#!/usr/bin/env sh

# Set environment variable for the tracking URL where the Model Registry resides
export MLFLOW_TRACKING_URI=http://localhost:5000

# Serve the production model from the model registry
mlflow models serve -m "models:/sk-learn-random-forest-reg-model/Production"
Adding or Updating MLflow Model Descriptions
At any point in a model’s lifecycle development, you can update a model version’s description using update_model_version().
client = MlflowClient()
client.update_model_version(
    name="sk-learn-random-forest-reg-model",
    version=1,
    description="This model version is a scikit-learn random forest containing 100 decision trees",
)
Renaming an MLflow Model
As well as adding or updating a description of a specific version of the model, you can rename an existing registered model using rename_registered_model().
client = MlflowClient()
client.rename_registered_model(
    name="sk-learn-random-forest-reg-model",
    new_name="sk-learn-random-forest-reg-model-100",
)
Transitioning an MLflow Model’s Stage
Over the course of the model’s lifecycle, a model evolves—from development to staging to production.
You can transition a registered model to one of the stages: Staging, Production or Archived.
client = MlflowClient()
client.transition_model_version_stage(
    name="sk-learn-random-forest-reg-model", version=3, stage="Production"
)
The accepted values for <stage> are: Staging|Archived|Production|None.
Listing and Searching MLflow Models
You can fetch a list of registered models in the registry with a simple method.
from pprint import pprint

client = MlflowClient()
for rm in client.search_registered_models():
    pprint(dict(rm), indent=4)

This outputs:
{ 'creation_timestamp': 1582671933216,
'description': None,
'last_updated_timestamp': 1582671960712,
'latest_versions': [<ModelVersion: creation_timestamp=1582671933246, current_stage='Production', description='A random forest model containing 100 decision trees trained in scikit-learn', last_updated_timestamp=1582671960712, name='sk-learn-random-forest-reg-model', run_id='ae2cc01346de45f79a44a320aab1797b', source='./mlruns/0/ae2cc01346de45f79a44a320aab1797b/artifacts/sklearn-model', status='READY', status_message=None, user_id=None, version=1>,
<ModelVersion: creation_timestamp=1582671960628, current_stage='None', description=None, last_updated_timestamp=1582671960628, name='sk-learn-random-forest-reg-model', run_id='d994f18d09c64c148e62a785052e6723', source='./mlruns/0/d994f18d09c64c148e62a785052e6723/artifacts/sklearn-model', status='READY', status_message=None, user_id=None, version=2>],
'name': 'sk-learn-random-forest-reg-model'}
With hundreds of models, it can be cumbersome to peruse the results returned from this call. A more efficient approach would be to search for a specific model name and list its version
details using search_model_versions() method
and provide a filter string such as "name='sk-learn-random-forest-reg-model'"
client = MlflowClient()
for mv in client.search_model_versions("name='sk-learn-random-forest-reg-model'"):
    pprint(dict(mv), indent=4)

This outputs:
"creation_timestamp"
1582671933246
"current_stage"
"Production"
"description"
"A random forest model containing 100 decision trees "
"trained in scikit-learn"
"last_updated_timestamp"
1582671960712
"name"
"sk-learn-random-forest-reg-model"
"run_id"
"ae2cc01346de45f79a44a320aab1797b"
"source"
"./mlruns/0/ae2cc01346de45f79a44a320aab1797b/artifacts/sklearn-model"
"status"
"READY"
"status_message"
None
"user_id"
None
"version"
"creation_timestamp"
1582671960628
"current_stage"
"None"
"description"
None
"last_updated_timestamp"
1582671960628
"name"
"sk-learn-random-forest-reg-model"
"run_id"
"d994f18d09c64c148e62a785052e6723"
"source"
"./mlruns/0/d994f18d09c64c148e62a785052e6723/artifacts/sklearn-model"
"status"
"READY"
"status_message"
None
"user_id"
None
"version"
Archiving an MLflow Model
You can move models versions out of a Production stage into an Archived stage.
At a later point, if that archived model is not needed, you can delete it.
# Archive models version 3 from Production into Archived
client = MlflowClient()
client.transition_model_version_stage(
    name="sk-learn-random-forest-reg-model", version=3, stage="Archived"
)
Deleting MLflow Models
Note
Deleting registered models or model versions is irrevocable, so use it judiciously.
You can either delete specific versions of a registered model or you can delete a registered model and all its versions.
# Delete versions 1,2, and 3 of the model
client = MlflowClient()
versions = [1, 2, 3]
for version in versions:
    client.delete_model_version(
        name="sk-learn-random-forest-reg-model", version=version
    )

# Delete a registered model along with all its versions
client.delete_registered_model(name="sk-learn-random-forest-reg-model")
While the above workflow API demonstrates interactions with the Model Registry, two exceptional cases require attention.
One is when you have existing ML models saved from training without the use of MLflow. Serialized and persisted on disk
in sklearn’s pickled format, you want to register this model with the Model Registry. The second is when you use
an ML framework without a built-in MLflow model flavor support, for instance, vaderSentiment, and want to register the model.
Registering a Saved Model
Not everyone will start their model training with MLflow. So you may have some models trained before the use of MLflow.
Instead of retraining the models, all you want to do is register your saved models with the Model Registry.
This code snippet creates a sklearn model, which we assume that you had created and saved in native pickle format.
Note
The sklearn library and pickle versions with which the model was saved should be compatible with the
current MLflow supported built-in sklearn model flavor.
import numpy as np
import pickle

from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score

# source: https://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html

# Load the diabetes dataset
diabetes_X, diabetes_y = datasets.load_diabetes(return_X_y=True)

# Use only one feature
diabetes_X = diabetes_X[:, np.newaxis, 2]

# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]

# Split the targets into training/testing sets
diabetes_y_train = diabetes_y[:-20]
diabetes_y_test = diabetes_y[-20:]


def print_predictions(m, y_pred):
    # The coefficients
    print("Coefficients: \n", m.coef_)
    # The mean squared error
    print("Mean squared error: %.2f" % mean_squared_error(diabetes_y_test, y_pred))
    # The coefficient of determination: 1 is perfect prediction
    print("Coefficient of determination: %.2f" % r2_score(diabetes_y_test, y_pred))


# Create linear regression object
lr_model = linear_model.LinearRegression()

# Train the model using the training sets
lr_model.fit(diabetes_X_train, diabetes_y_train)

# Make predictions using the testing set
diabetes_y_pred = lr_model.predict(diabetes_X_test)
print_predictions(lr_model, diabetes_y_pred)

# save the model in the native sklearn format
filename = "lr_model.pkl"
pickle.dump(lr_model, open(filename, "wb"))
Coefficients:
[938.23786125]
Mean squared error: 2548.07
Coefficient of determination: 0.47
Once saved in pickled format, we can load the sklearn model into memory using pickle API and
register the loaded model with the Model Registry.
import mlflow

# load the model into memory
loaded_model = pickle.load(open(filename, "rb"))

# log and register the model using MLflow scikit-learn API
mlflow.set_tracking_uri("sqlite:///mlruns.db")
reg_model_name = "SklearnLinearRegression"
print("--")
mlflow.sklearn.log_model(
    loaded_model,
    "sk_learn",
    serialization_format="cloudpickle",
    registered_model_name=reg_model_name,
)
--
Successfully registered model 'SklearnLinearRegression'.
2021/04/02 16:30:57 INFO mlflow.tracking._model_registry.client: Waiting up to 300 seconds for model version to finish creation.
Model name: SklearnLinearRegression, version 1
Created version '1' of model 'SklearnLinearRegression'.
Now, using MLflow fluent APIs, we reload the model from the Model Registry and score.
# load the model from the Model Registry and score
model_uri = f"models:/{reg_model_name}/1"
loaded_model = mlflow.sklearn.load_model(model_uri)
print("--")

# Make predictions using the testing set
diabetes_y_pred = loaded_model.predict(diabetes_X_test)
print_predictions(loaded_model, diabetes_y_pred)

--
Coefficients:
[938.23786125]
Mean squared error: 2548.07
Coefficient of determination: 0.47
Registering an Unsupported Machine Learning Model
In some cases, you might use a machine learning framework without its built-in MLflow Model flavor support.
For instance, the vaderSentiment library is a standard Natural Language Processing (NLP) library used
for sentiment analysis. Since it lacks a built-in MLflow Model flavor, you cannot log or register the model
using MLflow Model fluent APIs.
To work around this problem, you can create an instance of a mlflow.pyfunc model flavor and embed your NLP model
inside it, allowing you to save, log or register the model. Once registered, load the model from the Model Registry
and score using the predict function.
The code sections below demonstrate how to create a PythonFuncModel class with a vaderSentiment model embedded in it,
save, log, register, and load from the Model Registry and score.
Note
To use this example, you will need to pip install vaderSentiment.
from sys import version_info

import cloudpickle
import pandas as pd

import mlflow.pyfunc
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Good and readable paper from the authors of this package
# http://comp.social.gatech.edu/papers/icwsm14.vader.hutto.pdf

INPUT_TEXTS = [
    {"text": "This is a bad movie. You don't want to see it! :-)"},
    {"text": "Ricky Gervais is smart, witty, and creative!!!!!! :D"},
    {"text": "LOL, this guy fell off a chair while sleeping and snoring in a meeting"},
    {"text": "Men shoots himself while trying to steal a dog, OMG"},
    {"text": "Yay!! Another good phone interview. I nailed it!!"},
    {"text": "This is INSANE! I can't believe it. How could you do such a horrible thing?"},
]

PYTHON_VERSION = "{major}.{minor}.{micro}".format(
    major=version_info.major, minor=version_info.minor, micro=version_info.micro
)


def score_model(model):
    # Use inference to predict output from the customized PyFunc model
    for i, text in enumerate(INPUT_TEXTS):
        text = INPUT_TEXTS[i]["text"]
        m_input = pd.DataFrame([text])
        scores = model.predict(m_input)
        print(f"<{text}> -- {str(scores[0])}")


# Define a class and extend from PythonModel
class SocialMediaAnalyserModel(mlflow.pyfunc.PythonModel):
    def __init__(self):
        super().__init__()
        # embed your vader model instance
        self._analyser = SentimentIntensityAnalyzer()

    # preprocess the input with prediction from the vader sentiment model
    def _score(self, txt):
        prediction_scores = self._analyser.polarity_scores(txt)
        return prediction_scores

    def predict(self, context, model_input):
        # Apply the preprocess function from the vader model to score
        model_output = model_input.apply(lambda col: self._score(col))
        return model_output


model_path = "vader"
reg_model_name = "PyFuncVaderSentiments"
vader_model = SocialMediaAnalyserModel()

# Set the tracking URI to use local SQLAlchemy db file and start the run
# Log MLflow entities and save the model
mlflow.set_tracking_uri("sqlite:///mlruns.db")

# Save the conda environment for this model.
conda_env = {
    "channels": ["defaults", "conda-forge"],
    "dependencies": ["python={}".format(PYTHON_VERSION), "pip"],
    "pip": [
        "mlflow",
        "cloudpickle=={}".format(cloudpickle.__version__),
        "vaderSentiment==3.3.2",
    ],
    "name": "mlflow-env",
}

# Save the model
with mlflow.start_run(run_name="Vader Sentiment Analysis") as run:
    model_path = f"{model_path}-{run.info.run_uuid}"
    mlflow.log_param("algorithm", "VADER")
    mlflow.log_param("total_sentiments", len(INPUT_TEXTS))
    mlflow.pyfunc.save_model(path=model_path, python_model=vader_model, conda_env=conda_env)

# Use the saved model path to log and register into the model registry
mlflow.pyfunc.log_model(
    artifact_path=model_path,
    python_model=vader_model,
    registered_model_name=reg_model_name,
    conda_env=conda_env,
)

# Load the model from the model registry and score
model_uri = f"models:/{reg_model_name}/1"
loaded_model = mlflow.pyfunc.load_model(model_uri)
score_model(loaded_model)
Successfully registered model 'PyFuncVaderSentiments'.
2021/04/05 10:34:15 INFO mlflow.tracking._model_registry.client: Waiting up to 300 seconds for model version to finish creation.
Created version '1' of model 'PyFuncVaderSentiments'.
<This is a bad movie. You don't want to see it! :-)> -- {'neg': 0.307, 'neu': 0.552, 'pos': 0.141, 'compound': -0.4047}
<Ricky Gervais is smart, witty, and creative!!!!!! :D> -- {'neg': 0.0, 'neu': 0.316, 'pos': 0.684, 'compound': 0.8957}
<LOL, this guy fell off a chair while sleeping and snoring in a meeting> -- {'neg': 0.0, 'neu': 0.786, 'pos': 0.214, 'compound': 0.5473}
<Men shoots himself while trying to steal a dog, OMG> -- {'neg': 0.262, 'neu': 0.738, 'pos': 0.0, 'compound': -0.4939}
<Yay!! Another good phone interview. I nailed it!!> -- {'neg': 0.0, 'neu': 0.446, 'pos': 0.554, 'compound': 0.816}
<This is INSANE! I can't believe it. How could you do such a horrible thing?> -- {'neg': 0.357, 'neu': 0.643, 'pos': 0.0, 'compound': -0.8034}