wjayesh committed
Commit db6a18c · verified · 1 Parent(s): c119039

Upload component-guide.txt with huggingface_hub

Files changed (1):
  1. component-guide.txt +14 -384
component-guide.txt CHANGED
@@ -157,11 +157,6 @@ description: Sending automated alerts to chat services.
 icon: message-exclamation
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Alerters

 **Alerters** allow you to send messages to chat services (like Slack, Discord, Mattermost, etc.) from within your
@@ -217,11 +212,6 @@ File: docs/book/component-guide/alerters/custom.md
 description: Learning how to develop a custom alerter.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a Custom Alerter

 {% hint style="info" %}
@@ -369,11 +359,6 @@ File: docs/book/component-guide/alerters/discord.md
 description: Sending automated alerts to a Discord channel.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Discord Alerter

 The `DiscordAlerter` enables you to send messages to a dedicated Discord channel
@@ -516,11 +501,6 @@ File: docs/book/component-guide/alerters/slack.md
 description: Sending automated alerts to a Slack channel.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Slack Alerter

 The `SlackAlerter` enables you to send messages or ask questions within a
@@ -862,11 +842,6 @@ File: docs/book/component-guide/annotators/argilla.md
 description: Annotating data using Argilla.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Argilla

 [Argilla](https://github.com/argilla-io/argilla) is a collaboration tool for AI engineers and domain experts who need to build high-quality datasets for their projects. It enables users to build robust language models through faster data curation using both human and machine feedback, providing support for each step in the MLOps cycle, from data labeling to model monitoring.
@@ -1010,11 +985,6 @@ File: docs/book/component-guide/annotators/custom.md
 description: Learning how to develop a custom annotator.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a Custom Annotator

 {% hint style="info" %}
@@ -1038,11 +1008,6 @@ File: docs/book/component-guide/annotators/label-studio.md
 description: Annotating data using Label Studio.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Label Studio

 Label Studio is one of the leading open-source annotation platforms available to data scientists and ML practitioners.
@@ -1195,11 +1160,6 @@ File: docs/book/component-guide/annotators/pigeon.md
 description: Annotating data using Pigeon.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Pigeon

 Pigeon is a lightweight, open-source annotation tool designed for quick and easy labeling of data directly within Jupyter notebooks. It provides a simple and intuitive interface for annotating various types of data, including:
@@ -1317,11 +1277,6 @@ File: docs/book/component-guide/annotators/prodigy.md
 description: Annotating data using Prodigy.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Prodigy

 [Prodigy](https://prodi.gy/) is a modern annotation tool for creating training
@@ -1461,11 +1416,6 @@ description: Setting up a persistent storage for your artifacts.
 icon: folder-closed
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Artifact Stores

 The Artifact Store is a central component in any MLOps stack. As the name suggests, it acts as a data persistence layer where artifacts (e.g. datasets, models) ingested or generated by the machine learning pipelines are stored.
@@ -1638,11 +1588,6 @@ File: docs/book/component-guide/artifact-stores/azure.md
 description: Storing artifacts using Azure Blob Storage
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Azure Blob Storage

 The Azure Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the Azure ZenML integration that uses [the Azure Blob Storage managed object storage service](https://azure.microsoft.com/en-us/services/storage/blobs/) to store ZenML artifacts in an Azure Blob Storage container.
@@ -1875,11 +1820,6 @@ File: docs/book/component-guide/artifact-stores/custom.md
 description: Learning how to develop a custom artifact store.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a custom artifact store

 {% hint style="info" %}
@@ -2072,11 +2012,6 @@ File: docs/book/component-guide/artifact-stores/gcp.md
 description: Storing artifacts using GCP Cloud Storage.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Google Cloud Storage (GCS)

 The GCS Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the GCP ZenML integration that uses [the Google Cloud Storage managed object storage service](https://cloud.google.com/storage/docs/introduction) to store ZenML artifacts in a GCP Cloud Storage bucket.
@@ -2281,11 +2216,6 @@ File: docs/book/component-guide/artifact-stores/local.md
 description: Storing artifacts on your local filesystem.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Local Artifact Store

 The local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) flavor that uses a folder on your local filesystem to store artifacts.
@@ -2374,11 +2304,6 @@ File: docs/book/component-guide/artifact-stores/s3.md
 description: Storing artifacts in an AWS S3 bucket.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Amazon Simple Cloud Storage (S3)

 The S3 Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the S3 ZenML integration that uses [the AWS S3 managed object storage service](https://aws.amazon.com/s3/) or one of the self-hosted S3 alternatives, such as [MinIO](https://min.io/) or [Ceph RGW](https://ceph.io/en/discover/technology/#object), to store artifacts in an S3 compatible object storage backend.
@@ -2603,11 +2528,6 @@ File: docs/book/component-guide/container-registries/aws.md
 description: Storing container images in Amazon ECR.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Amazon Elastic Container Registry (ECR)

 The AWS container registry is a [container registry](./container-registries.md) flavor provided with the ZenML `aws` integration and uses [Amazon ECR](https://aws.amazon.com/ecr/) to store container images.
@@ -2819,11 +2739,6 @@ File: docs/book/component-guide/container-registries/azure.md
 description: Storing container images in Azure.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Azure Container Registry

 The Azure container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) to store container images.
@@ -3078,11 +2993,6 @@ File: docs/book/component-guide/container-registries/custom.md
 description: Learning how to develop a custom container registry.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a custom container registry

 {% hint style="info" %}
@@ -3209,11 +3119,6 @@ File: docs/book/component-guide/container-registries/default.md
 description: Storing container images locally.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Default Container Registry

 The Default container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and allows container registry URIs of any format.
@@ -3391,11 +3296,6 @@ File: docs/book/component-guide/container-registries/dockerhub.md
 description: Storing container images in DockerHub.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # DockerHub

 The DockerHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses [DockerHub](https://hub.docker.com/) to store container images.
@@ -3469,11 +3369,6 @@ File: docs/book/component-guide/container-registries/gcp.md
 description: Storing container images in GCP.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Google Cloud Container Registry

 The GCP container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Google Artifact Registry](https://cloud.google.com/artifact-registry).
@@ -3570,13 +3465,9 @@ Stacks using the GCP Container Registry set up with local authentication are not
 {% endtab %}

 {% tab title="GCP Service Connector (recommended)" %}
- To set up the GCP Container Registry to authenticate to GCP and access a GCR registry, it is recommended to leverage the many features provided by [the GCP Service Connector](../../how-to/infrastructure-deployment/auth-management/gcp-service-connector.md) such as auto-configuration, local login, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.
+ To set up the GCP Container Registry to authenticate to GCP and access a Google Artifact Registry, it is recommended to leverage the many features provided by [the GCP Service Connector](../../how-to/infrastructure-deployment/auth-management/gcp-service-connector.md) such as auto-configuration, local login, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.

- {% hint style="warning" %}
- The GCP Service Connector does not support the Google Artifact Registry yet. If you need to connect your GCP Container Registry to a Google Artifact Registry, you can use the _Local Authentication_ method instead.
- {% endhint %}
-
- If you don't already have a GCP Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure a GCP Service Connector that can be used to access a GCR registry or even more than one type of GCP resource:
+ If you don't already have a GCP Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure a GCP Service Connector that can be used to access a Google Artifact Registry or even more than one type of GCP resource:

 ```sh
 zenml service-connector register --type gcp -i
@@ -3715,11 +3606,6 @@ File: docs/book/component-guide/container-registries/github.md
 description: Storing container images in GitHub.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # GitHub Container Registry

 The GitHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) to store container images.
@@ -3782,11 +3668,6 @@ File: docs/book/component-guide/data-validators/custom.md
 description: How to develop a custom data validator
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a custom data validator

 {% hint style="info" %}
@@ -3916,11 +3797,6 @@ description: >-
 suites
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Deepchecks

 The Deepchecks [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Deepchecks](https://deepchecks.com/) to run data integrity, data drift, model drift and model performance tests on the datasets and models circulated in your ZenML pipelines. The test results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
@@ -4345,11 +4221,6 @@ description: >-
 with Evidently profiling
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Evidently

 The Evidently [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Evidently](https://evidentlyai.com/) to perform data quality, data drift, model drift and model performance analyzes, to generate reports and run checks. The reports and check results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
@@ -4987,11 +4858,6 @@ description: >-
 document the results
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Great Expectations

 The Great Expectations [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Great Expectations](https://greatexpectations.io/) to run data profiling and data quality tests on the data circulated through your pipelines. The test results can be used to implement automated corrective actions in your pipelines. They are also automatically rendered into documentation for further visual interpretation and evaluation.
@@ -5304,11 +5170,6 @@ description: >-
 data with whylogs/WhyLabs profiling.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Whylogs

 The whylogs/WhyLabs [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [whylogs](https://whylabs.ai/whylogs) and [WhyLabs](https://whylabs.ai) to generate and track data profiles, highly accurate descriptive representations of your data. The profiles can be used to implement automated corrective actions in your pipelines, or to render interactive representations for further visual interpretation, evaluation and documentation.
@@ -5596,11 +5457,6 @@ File: docs/book/component-guide/experiment-trackers/comet.md
 description: Logging and visualizing experiments with Comet.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Comet

 The Comet Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Comet ZenML integration that uses [the Comet experiment tracking platform](https://www.comet.com/site/products/ml-experiment-tracking/) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
@@ -5896,11 +5752,6 @@ File: docs/book/component-guide/experiment-trackers/custom.md
 description: Learning how to develop a custom experiment tracker.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a custom experiment tracker

 {% hint style="info" %}
@@ -5967,11 +5818,6 @@ description: Logging and visualizing ML experiments.
 icon: clipboard
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Experiment Trackers

 Experiment trackers let you track your ML experiments by logging extended information about your models, datasets,
@@ -6065,11 +5911,6 @@ File: docs/book/component-guide/experiment-trackers/mlflow.md
 description: Logging and visualizing experiments with MLflow.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # MLflow

 The MLflow Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the MLflow ZenML integration that uses [the MLflow tracking service](https://mlflow.org/docs/latest/tracking.html) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
@@ -6288,11 +6129,6 @@ File: docs/book/component-guide/experiment-trackers/neptune.md
 description: Logging and visualizing experiments with neptune.ai
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Neptune

 The Neptune Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Neptune-ZenML integration that uses [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
@@ -6611,11 +6447,6 @@ File: docs/book/component-guide/experiment-trackers/vertexai.md
 description: Logging and visualizing experiments with Vertex AI Experiment Tracker.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Vertex AI Experiment Tracker

 The Vertex AI Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Vertex AI ZenML integration. It uses the [Vertex AI tracking service](https://cloud.google.com/vertex-ai/docs/experiments/intro-vertex-ai-experiments) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
@@ -6935,11 +6766,6 @@ File: docs/book/component-guide/experiment-trackers/wandb.md
 description: Logging and visualizing experiments with Weights & Biases.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Weights & Biases

 The Weights & Biases Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Weights & Biases ZenML integration that uses [the Weights & Biases experiment tracking platform](https://wandb.ai/site/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
@@ -7257,11 +7083,6 @@ File: docs/book/component-guide/feature-stores/custom.md
 description: Learning how to develop a custom feature store.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a Custom Feature Store

 {% hint style="info" %}
@@ -7285,11 +7106,6 @@ File: docs/book/component-guide/feature-stores/feast.md
 description: Managing data in Feast feature stores.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Feast

 Feast (Feature Store) is an operational data system for managing and serving machine learning features to models in production. Feast is able to serve feature data to models from a low-latency online store (for real-time prediction) or from an offline store (for scale-out batch scoring or model training).
@@ -7472,11 +7288,6 @@ File: docs/book/component-guide/image-builders/aws.md
 description: Building container images with AWS CodeBuild
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # AWS Image Builder

 The AWS image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `aws` integration that uses [AWS CodeBuild](https://aws.amazon.com/codebuild) to build container images.
@@ -7715,11 +7526,6 @@ File: docs/book/component-guide/image-builders/custom.md
 description: Learning how to develop a custom image builder.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a Custom Image Builder

 {% hint style="info" %}
@@ -7840,11 +7646,6 @@ File: docs/book/component-guide/image-builders/gcp.md
 description: Building container images with Google Cloud Build
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Google Cloud Image Builder

 The Google Cloud image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `gcp` integration that uses [Google Cloud Build](https://cloud.google.com/build) to build container images.
@@ -8098,11 +7899,6 @@ File: docs/book/component-guide/image-builders/kaniko.md
 description: Building container images with Kaniko.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Kaniko Image Builder

 The Kaniko image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `kaniko` integration that uses [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images.
@@ -8244,7 +8040,7 @@ List of some possible additional flags:

 * `--cache`: Set to `false` to disable caching. Defaults to `true`.
 * `--cache-dir`: Set the directory where to store cached layers. Defaults to `/cache`.
- * `--cache-repo`: Set the repository where to store cached layers. Defaults to `gcr.io/kaniko-project/executor`.
 * `--cache-ttl`: Set the cache expiration time. Defaults to `24h`.
 * `--cleanup`: Set to `false` to disable cleanup of the working directory. Defaults to `true`.
 * `--compressed-caching`: Set to `false` to disable compressed caching. Defaults to `true`.
@@ -8260,11 +8056,6 @@ File: docs/book/component-guide/image-builders/local.md
 description: Building container images locally.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Local Image Builder

 The local image builder is an [image builder](./image-builders.md) flavor that comes built-in with ZenML and uses the local Docker installation on your client machine to build container images.
@@ -8317,11 +8108,6 @@ File: docs/book/component-guide/model-deployers/bentoml.md
 description: Deploying your models locally with BentoML.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # BentoML

 BentoML is an open-source framework for machine learning model serving. it can be used to deploy models locally, in a cloud environment, or in a Kubernetes environment.
@@ -8369,7 +8155,7 @@ The recommended flow to use the BentoML model deployer is to first [create a Ben

 ### Create a BentoML Service

- The first step to being able to deploy your models and use BentoML is to create a [bento service](https://docs.bentoml.com/en/latest/guides/services.html) which is the main logic that defines how your model will be served. The

 The following example shows how to create a basic bento service that will be used to serve a torch model. Learn more about how to specify the inputs and outputs for the APIs and how to use validators in the [Input and output types BentoML docs](https://docs.bentoml.com/en/latest/guides/iotypes.html)

@@ -8708,11 +8494,6 @@ File: docs/book/component-guide/model-deployers/custom.md
 description: Learning how to develop a custom model deployer.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a Custom Model Deployer

 {% hint style="info" %}
@@ -8885,11 +8666,6 @@ description: >-
 Deploying models to Databricks Inference Endpoints with Databricks
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Databricks

@@ -9043,11 +8819,6 @@ description: >-
 :hugging_face:.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Hugging Face

 Hugging Face Inference Endpoints provides a secure production solution to easily deploy any `transformers`, `sentence-transformers`, and `diffusers` models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the [Hub](https://huggingface.co/models).
@@ -9240,11 +9011,6 @@ File: docs/book/component-guide/model-deployers/mlflow.md
 description: Deploying your models locally with MLflow.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # MLflow

 The MLflow Model Deployer is one of the available flavors of the [Model Deployer](./model-deployers.md) stack component. Provided with the MLflow integration it can be used to deploy and manage [MLflow models](https://www.mlflow.org/docs/latest/python\_api/mlflow.deployments.html) on a local running MLflow server.
@@ -9568,13 +9334,13 @@ zenml model-deployer register seldon --flavor=seldon \
 ```

 * Lifecycle Management: Provides mechanisms for comprehensive lifecycle management of model servers, including the ability to start, stop, and delete model servers, as well as to update existing servers with new model versions, thereby optimizing resource utilization and facilitating continuous delivery of model updates. Some core methods that can be used to interact with the remote model server include:
-
- `deploy_model` - Deploys a model to the serving environment and returns a Service object that represents the deployed model server.
- `find_model_server` - Finds and returns a list of Service objects that represent model servers that have been deployed to the serving environment, the
- services are stored in the DB and can be used as a reference to know what and where the model is deployed.
- `stop_model_server` - Stops a model server that is currently running in the serving environment.
- `start_model_server` - Starts a model server that has been stopped in the serving environment.
- `delete_model_server` - Deletes a model server from the serving environment and from the DB.

 {% hint style="info" %}
 ZenML uses the Service object to represent a model server that has been deployed to a serving environment. The Service object is saved in the DB and can be used as a reference to know what and where the model is deployed. The Service object consists of 2 main attributes, the `config` and the `status`. The `config` attribute holds all the deployment configuration attributes required to create a new deployment, while the `status` attribute holds the operational status of the deployment, such as the last error message, the prediction URL, and the deployment status.
@@ -9689,11 +9455,6 @@ File: docs/book/component-guide/model-deployers/seldon.md
 description: Deploying models to Kubernetes with Seldon Core.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Seldon

 [Seldon Core](https://github.com/SeldonIO/seldon-core) is a production grade source-available model serving platform. It packs a wide range of features built around deploying models to REST/GRPC microservices that include monitoring and logging, model explainers, outlier detectors and various continuous deployment strategies such as A/B testing, canary deployments and more.
@@ -10173,11 +9934,6 @@ File: docs/book/component-guide/model-deployers/vllm.md
 description: Deploying your LLM locally with vLLM.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # vLLM

 [vLLM](https://docs.vllm.ai/en/latest/) is a fast and easy-to-use library for LLM inference and serving.
@@ -10256,11 +10012,6 @@ File: docs/book/component-guide/model-registries/custom.md
 description: Learning how to develop a custom model registry.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a Custom Model Registry

 {% hint style="info" %}
@@ -10457,11 +10208,6 @@ File: docs/book/component-guide/model-registries/mlflow.md
 description: Managing MLFlow logged models and artifacts
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # MLflow Model Registry

 [MLflow](https://www.mlflow.org/docs/latest/tracking.html) is a popular tool that helps you track experiments, manage models and even deploy them to different environments. ZenML already provides a [MLflow Experiment Tracker](../experiment-trackers/mlflow.md) that you can use to track your experiments, and an [MLflow Model Deployer](../model-deployers/mlflow.md) that you can use to deploy your models locally.
@@ -10711,11 +10457,6 @@ File: docs/book/component-guide/orchestrators/airflow.md
 description: Orchestrating your pipelines to run on Airflow.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Airflow Orchestrator

 ZenML pipelines can be executed natively as [Airflow](https://airflow.apache.org/)
@@ -11025,11 +10766,6 @@ File: docs/book/component-guide/orchestrators/azureml.md
 description: Orchestrating your pipelines to run on AzureML.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # AzureML Orchestrator

 [AzureML](https://azure.microsoft.com/en-us/products/machine-learning) is a
@@ -11276,11 +11012,6 @@ File: docs/book/component-guide/orchestrators/custom.md
 description: Learning how to develop a custom orchestrator.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a custom orchestrator

 {% hint style="info" %}
@@ -11505,11 +11236,6 @@ File: docs/book/component-guide/orchestrators/databricks.md
 description: Orchestrating your pipelines to run on Databricks.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Databricks Orchestrator

 [Databricks](https://www.databricks.com/) is a unified data analytics platform that combines the best of data warehouses and data lakes to offer an integrated solution for big data processing and machine learning. It provides a collaborative environment for data scientists, data engineers, and business analysts to work together on data projects. Databricks offers optimized performance and scalability for big data workloads.
@@ -11706,11 +11432,6 @@ File: docs/book/component-guide/orchestrators/hyperai.md
 description: Orchestrating your pipelines to run on HyperAI.ai instances.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # HyperAI Orchestrator

 [HyperAI](https://www.hyperai.ai) is a cutting-edge cloud compute platform designed to make AI accessible for everyone. The HyperAI orchestrator is an [orchestrator](./orchestrators.md) flavor that allows you to easily deploy your pipelines on HyperAI instances.
@@ -11798,11 +11519,6 @@ File: docs/book/component-guide/orchestrators/kubeflow.md
 description: Orchestrating your pipelines to run on Kubeflow.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Kubeflow Orchestrator

 The Kubeflow orchestrator is an [orchestrator](./orchestrators.md) flavor provided by the ZenML `kubeflow` integration that uses [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) to run your pipelines.
@@ -12160,11 +11876,6 @@ File: docs/book/component-guide/orchestrators/kubernetes.md
 description: Orchestrating your pipelines to run on Kubernetes clusters.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Kubernetes Orchestrator

 Using the ZenML `kubernetes` integration, you can orchestrate and scale your ML pipelines on a [Kubernetes](https://kubernetes.io/) cluster without writing a single line of Kubernetes code.
@@ -12173,9 +11884,7 @@ This Kubernetes-native orchestrator is a minimalist, lightweight alternative to

 Overall, the Kubernetes orchestrator is quite similar to the Kubeflow orchestrator in that it runs each pipeline step in a separate Kubernetes pod. However, the orchestration of the different pods is not done by Kubeflow but by a separate master pod that orchestrates the step execution via topological sort.

- Compared to Kubeflow, this means that the Kubernetes-native orchestrator is faster and much simpler to start with since you do not need to install and maintain Kubeflow on your cluster. The Kubernetes-native orchestrator is an ideal choice for teams new to distributed orchestration that do not want to go with a fully-managed offering.
-
- However, since Kubeflow is much more mature, you should, in most cases, aim to move your pipelines to Kubeflow in the long run. A smooth way to production-grade orchestration could be to set up a Kubernetes cluster first and get started with the Kubernetes-native orchestrator. If needed, you can then install and set up Kubeflow later and simply switch out the orchestrator of your stack as soon as your full setup is ready.

 {% hint style="warning" %}
 This component is only meant to be used within the context of a [remote ZenML deployment scenario](../../getting-started/deploying-zenml/README.md). Usage with a local ZenML deployment may lead to unexpected behavior!
@@ -12470,11 +12179,6 @@ File: docs/book/component-guide/orchestrators/lightning.md
 description: Orchestrating your pipelines to run on Lightning AI.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-

 # Lightning AI Orchestrator

@@ -12674,11 +12378,6 @@ File: docs/book/component-guide/orchestrators/local-docker.md
 description: Orchestrating your pipelines to run in Docker.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Local Docker Orchestrator

 The local Docker orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally using Docker.
@@ -12756,11 +12455,6 @@ File: docs/book/component-guide/orchestrators/local.md
 description: Orchestrating your pipelines to run locally.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Local Orchestrator

 The local orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally.
@@ -12894,11 +12588,6 @@ File: docs/book/component-guide/orchestrators/sagemaker.md
 description: Orchestrating your pipelines to run on Amazon Sagemaker.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # AWS Sagemaker Orchestrator

 [Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) is a serverless ML workflow tool running on AWS. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
@@ -13064,7 +12753,7 @@ Additional configuration for the Sagemaker orchestrator can be passed via `Sagem
 * `sagemaker_session`
 * `entrypoint`
 * `base_job_name`
- * `env`

 For example, settings can be provided and applied in the following way:

@@ -13077,6 +12766,7 @@ from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import (
 sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
     instance_type="ml.m5.large",
     volume_size_in_gb=30,
 )

@@ -13447,11 +13137,6 @@ File: docs/book/component-guide/orchestrators/skypilot-vm.md
 description: Orchestrating your pipelines to run on VMs using SkyPilot.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Skypilot VM Orchestrator

 The SkyPilot VM Orchestrator is an integration provided by ZenML that allows you to provision and manage virtual machines (VMs) on any cloud provider supported by the [SkyPilot framework](https://skypilot.readthedocs.io/en/latest/index.html). This integration is designed to simplify the process of running machine learning workloads on the cloud, offering cost savings, high GPU availability, and managed execution, We recommend using the SkyPilot VM Orchestrator if you need access to GPUs for your workloads, but don't want to deal with the complexities of managing cloud infrastructure or expensive managed solutions.
@@ -13974,11 +13659,6 @@ File: docs/book/component-guide/orchestrators/tekton.md
 description: Orchestrating your pipelines to run on Tekton.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Tekton Orchestrator

 [Tekton](https://tekton.dev/) is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems.
@@ -14219,11 +13899,6 @@ File: docs/book/component-guide/orchestrators/vertex.md
 description: Orchestrating your pipelines to run on Vertex AI.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Google Cloud VertexAI Orchestrator

 [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
@@ -14543,11 +14218,6 @@ File: docs/book/component-guide/step-operators/azureml.md
 description: Executing individual steps in AzureML.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # AzureML

 [AzureML](https://azure.microsoft.com/en-us/products/machine-learning/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's AzureML step operator allows you to submit individual steps to be run on AzureML compute instances.
@@ -14709,11 +14379,6 @@ File: docs/book/component-guide/step-operators/custom.md
 description: Learning how to develop a custom step operator.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Develop a Custom Step Operator

 {% hint style="info" %}
@@ -14843,11 +14508,6 @@ File: docs/book/component-guide/step-operators/kubernetes.md
 description: Executing individual steps in Kubernetes Pods.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Kubernetes Step Operator

 ZenML's Kubernetes step operator allows you to submit individual steps to be run on Kubernetes pods.
@@ -15082,11 +14742,6 @@ File: docs/book/component-guide/step-operators/modal.md
 description: Executing individual steps in Modal.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Modal Step Operator

 [Modal](https://modal.com) is a platform for running cloud infrastructure. It offers specialized compute instances to run your code and has a fast execution time, especially around building Docker images and provisioning hardware. ZenML's Modal step operator allows you to submit individual steps to be run on Modal compute instances.
@@ -15204,11 +14859,6 @@ File: docs/book/component-guide/step-operators/sagemaker.md
 description: Executing individual steps in SageMaker.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Amazon SageMaker

 [SageMaker](https://aws.amazon.com/sagemaker/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's SageMaker step operator allows you to submit individual steps to be run on Sagemaker compute instances.
@@ -15574,11 +15224,6 @@ roleRef:
 name: edit
 apiGroup: rbac.authorization.k8s.io
 ---
-
- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
 ```

 And then execute the following command to create the resources:
@@ -15747,11 +15392,6 @@ File: docs/book/component-guide/step-operators/vertex.md
 description: Executing individual steps in Vertex AI.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Google Cloud VertexAI

 [Vertex AI](https://cloud.google.com/vertex-ai) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's Vertex AI step operator allows you to submit individual steps to be run on Vertex AI compute instances.
@@ -15942,11 +15582,6 @@ File: docs/book/component-guide/component-guide.md
 description: Overview of categories of MLOps components.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # 📜 Overview

 If you are new to the world of MLOps, it is often daunting to be immediately faced with a sea of tools that seemingly all promise and do the same things. It is useful in this case to try to categorize tools in various groups in order to understand their value in your toolchain in a more precise manner.
@@ -15985,11 +15620,6 @@ File: docs/book/component-guide/integration-overview.md
 description: Overview of third-party ZenML integrations.
 ---

- {% hint style="warning" %}
- This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
- {% endhint %}
-
-
 # Integration overview

 Categorizing the MLOps stack is a good way to write abstractions for an MLOps pipeline and standardize your processes. But ZenML goes further and also provides concrete implementations of these categories by **integrating** with various tools for each category. Once code is organized into a ZenML pipeline, you can supercharge your ML workflows with the best-in-class solutions from various MLOps areas.
 
157
  icon: message-exclamation
158
  ---
159
 
 
 
 
 
 
160
  # Alerters
161
 
162
  **Alerters** allow you to send messages to chat services (like Slack, Discord, Mattermost, etc.) from within your
 
212
  description: Learning how to develop a custom alerter.
213
  ---
214
 
 
 
 
 
 
215
  # Develop a Custom Alerter
216
 
217
  {% hint style="info" %}
 
359
  description: Sending automated alerts to a Discord channel.
360
  ---
361
 
 
 
 
 
 
362
  # Discord Alerter
363
 
364
  The `DiscordAlerter` enables you to send messages to a dedicated Discord channel
 
501
  description: Sending automated alerts to a Slack channel.
502
  ---
503
 
 
 
 
 
 
504
  # Slack Alerter
505
 
506
  The `SlackAlerter` enables you to send messages or ask questions within a
 
842
  description: Annotating data using Argilla.
843
  ---
844
 
 
 
 
 
 
845
  # Argilla
846
 
847
  [Argilla](https://github.com/argilla-io/argilla) is a collaboration tool for AI engineers and domain experts who need to build high-quality datasets for their projects. It enables users to build robust language models through faster data curation using both human and machine feedback, providing support for each step in the MLOps cycle, from data labeling to model monitoring.
 
985
  description: Learning how to develop a custom annotator.
986
  ---
987
 
 
 
 
 
 
988
  # Develop a Custom Annotator
989
 
990
  {% hint style="info" %}
 
1008
  description: Annotating data using Label Studio.
1009
  ---
1010
 
 
 
 
 
 
1011
  # Label Studio
1012
 
1013
  Label Studio is one of the leading open-source annotation platforms available to data scientists and ML practitioners.
 
1160
  description: Annotating data using Pigeon.
1161
  ---
1162
 
 
 
 
 
 
1163
  # Pigeon
1164
 
1165
  Pigeon is a lightweight, open-source annotation tool designed for quick and easy labeling of data directly within Jupyter notebooks. It provides a simple and intuitive interface for annotating various types of data, including:
 
1277
  description: Annotating data using Prodigy.
1278
  ---
1279
 
 
 
 
 
 
1280
  # Prodigy
1281
 
1282
  [Prodigy](https://prodi.gy/) is a modern annotation tool for creating training
 
1416
  icon: folder-closed
1417
  ---
1418
 
 
 
 
 
 
1419
  # Artifact Stores
1420
 
1421
  The Artifact Store is a central component in any MLOps stack. As the name suggests, it acts as a data persistence layer where artifacts (e.g. datasets, models) ingested or generated by the machine learning pipelines are stored.
 
1588
  description: Storing artifacts using Azure Blob Storage
1589
  ---
1590
 
 
 
 
 
 
1591
  # Azure Blob Storage
1592
 
1593
  The Azure Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the Azure ZenML integration that uses [the Azure Blob Storage managed object storage service](https://azure.microsoft.com/en-us/services/storage/blobs/) to store ZenML artifacts in an Azure Blob Storage container.
 
1820
  description: Learning how to develop a custom artifact store.
1821
  ---
1822
 
 
 
 
 
 
1823
  # Develop a custom artifact store
1824
 
1825
  {% hint style="info" %}
 
2012
  description: Storing artifacts using GCP Cloud Storage.
2013
  ---
2014
 
 
 
 
 
 
2015
  # Google Cloud Storage (GCS)
2016
 
2017
  The GCS Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the GCP ZenML integration that uses [the Google Cloud Storage managed object storage service](https://cloud.google.com/storage/docs/introduction) to store ZenML artifacts in a GCP Cloud Storage bucket.
 
description: Storing artifacts on your local filesystem.
---

# Local Artifact Store

  The local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) flavor that uses a folder on your local filesystem to store artifacts.
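Registration is a one-liner; the name below is illustrative, and omitting `--path` lets ZenML fall back to a default local folder:

```sh
# Register a local artifact store backed by a folder on this machine
zenml artifact-store register local_store --flavor=local

zenml stack update -a local_store
```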
 
description: Storing artifacts in an AWS S3 bucket.
---

# Amazon Simple Storage Service (S3)

  The S3 Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the S3 ZenML integration that uses [the AWS S3 managed object storage service](https://aws.amazon.com/s3/) or one of the self-hosted S3 alternatives, such as [MinIO](https://min.io/) or [Ceph RGW](https://ceph.io/en/discover/technology/#object), to store artifacts in an S3 compatible object storage backend.
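As a minimal sketch (bucket and component names are placeholders), registration looks like this:

```sh
# Register the artifact store against an existing S3-compatible bucket
zenml artifact-store register s3_store --flavor=s3 --path=s3://my-bucket

zenml stack update -a s3_store
```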
 
description: Storing container images in Amazon ECR.
---

# Amazon Elastic Container Registry (ECR)

  The AWS container registry is a [container registry](./container-registries.md) flavor provided with the ZenML `aws` integration and uses [Amazon ECR](https://aws.amazon.com/ecr/) to store container images.
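A registration sketch; the account ID and region below are placeholders:

```sh
# Register the registry using your account's ECR URI
zenml container-registry register ecr_registry --flavor=aws \
    --uri=<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com

zenml stack update -c ecr_registry
```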
 
description: Storing container images in Azure.
---

# Azure Container Registry

  The Azure container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) to store container images.
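Registration follows the same pattern as the other registry flavors (the registry name is a placeholder):

```sh
# Register the registry using the login server URI of your ACR instance
zenml container-registry register acr_registry --flavor=azure --uri=<REGISTRY_NAME>.azurecr.io

zenml stack update -c acr_registry
```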
 
description: Learning how to develop a custom container registry.
---

# Develop a custom container registry

  {% hint style="info" %}
 
description: Storing container images locally.
---

# Default Container Registry

  The Default container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and allows container registry URIs of any format.
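This makes it a good fit for self-hosted or local registries. An illustrative sketch with a registry running on `localhost:5000`:

```sh
# Register a registry with an arbitrary URI, e.g. a local registry
zenml container-registry register local_registry --flavor=default --uri=localhost:5000

zenml stack update -c local_registry
```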
 
description: Storing container images in DockerHub.
---

# DockerHub

  The DockerHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses [DockerHub](https://hub.docker.com/) to store container images.
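A registration sketch; the exact URI format (your account namespace on `docker.io`) is an assumption worth double-checking against your account:

```sh
# Register the registry against your DockerHub account namespace
zenml container-registry register dockerhub_registry --flavor=dockerhub --uri=docker.io/<ACCOUNT_NAME>
```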
 
description: Storing container images in GCP.
---

# Google Cloud Container Registry

The GCP container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Google Artifact Registry](https://cloud.google.com/artifact-registry) to store container images.
 
{% endtab %}

{% tab title="GCP Service Connector (recommended)" %}
To set up the GCP Container Registry to authenticate to GCP and access a Google Artifact Registry, it is recommended to leverage the many features provided by [the GCP Service Connector](../../how-to/infrastructure-deployment/auth-management/gcp-service-connector.md) such as auto-configuration, local login, best security practices regarding long-lived credentials and reusing the same credentials across multiple stack components.

If you don't already have a GCP Service Connector configured in your ZenML deployment, you can register one using the interactive CLI command. You have the option to configure a GCP Service Connector that can be used to access a Google Artifact Registry or even more than one type of GCP resource:

```sh
  zenml service-connector register --type gcp -i
 
description: Storing container images in GitHub.
---

# GitHub Container Registry

  The GitHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) to store container images.
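Registration typically points at your `ghcr.io` namespace (the placeholder below is illustrative):

```sh
# Register the registry against your GitHub user or organization namespace
zenml container-registry register github_registry --flavor=github --uri=ghcr.io/<ORG_OR_USERNAME>
```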
 
description: How to develop a custom data validator
---

# Develop a custom data validator

  {% hint style="info" %}
 
suites
---

# Deepchecks

  The Deepchecks [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Deepchecks](https://deepchecks.com/) to run data integrity, data drift, model drift and model performance tests on the datasets and models circulated in your ZenML pipelines. The test results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
 
with Evidently profiling
---

# Evidently

The Evidently [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Evidently](https://evidentlyai.com/) to perform data quality, data drift, model drift and model performance analyses, generate reports and run checks. The reports and check results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
 
document the results
---

# Great Expectations

  The Great Expectations [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Great Expectations](https://greatexpectations.io/) to run data profiling and data quality tests on the data circulated through your pipelines. The test results can be used to implement automated corrective actions in your pipelines. They are also automatically rendered into documentation for further visual interpretation and evaluation.
 
data with whylogs/WhyLabs profiling.
---

# Whylogs

  The whylogs/WhyLabs [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [whylogs](https://whylabs.ai/whylogs) and [WhyLabs](https://whylabs.ai) to generate and track data profiles, highly accurate descriptive representations of your data. The profiles can be used to implement automated corrective actions in your pipelines, or to render interactive representations for further visual interpretation, evaluation and documentation.
 
description: Logging and visualizing experiments with Comet.
---

# Comet

  The Comet Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Comet ZenML integration that uses [the Comet experiment tracking platform](https://www.comet.com/site/products/ml-experiment-tracking/) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
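A rough registration sketch; the configuration keys below (workspace, project name, API key) are assumptions based on the flavor's settings and should be verified against the flavor documentation:

```sh
zenml experiment-tracker register comet_tracker --flavor=comet \
    --workspace=<WORKSPACE> --project_name=<PROJECT_NAME> --api_key=<API_KEY>

zenml stack update -e comet_tracker
```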
 
description: Learning how to develop a custom experiment tracker.
---

# Develop a custom experiment tracker

  {% hint style="info" %}
 
icon: clipboard
---

# Experiment Trackers

  Experiment trackers let you track your ML experiments by logging extended information about your models, datasets,
 
description: Logging and visualizing experiments with MLflow.
---

# MLflow

  The MLflow Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the MLflow ZenML integration that uses [the MLflow tracking service](https://mlflow.org/docs/latest/tracking.html) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
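A registration sketch for a remote tracking server (URI and credentials are placeholders):

```sh
zenml experiment-tracker register mlflow_tracker --flavor=mlflow \
    --tracking_uri=<TRACKING_URI> --tracking_username=<USERNAME> --tracking_password=<PASSWORD>

zenml stack update -e mlflow_tracker
```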
 
description: Logging and visualizing experiments with neptune.ai
---

# Neptune

  The Neptune Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Neptune-ZenML integration that uses [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
 
description: Logging and visualizing experiments with Vertex AI Experiment Tracker.
---

# Vertex AI Experiment Tracker

  The Vertex AI Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Vertex AI ZenML integration. It uses the [Vertex AI tracking service](https://cloud.google.com/vertex-ai/docs/experiments/intro-vertex-ai-experiments) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
 
description: Logging and visualizing experiments with Weights & Biases.
---

# Weights & Biases

  The Weights & Biases Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Weights & Biases ZenML integration that uses [the Weights & Biases experiment tracking platform](https://wandb.ai/site/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
 
description: Learning how to develop a custom feature store.
---

# Develop a Custom Feature Store

  {% hint style="info" %}
 
description: Managing data in Feast feature stores.
---

# Feast

  Feast (Feature Store) is an operational data system for managing and serving machine learning features to models in production. Feast is able to serve feature data to models from a low-latency online store (for real-time prediction) or from an offline store (for scale-out batch scoring or model training).
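As a hedged sketch, registering the flavor against a local Feast repository might look like this (the `feast_repo` key is an assumption to verify against the flavor docs):

```sh
# Register the feature store, pointing it at an existing Feast repository
zenml feature-store register feast_store --flavor=feast --feast_repo=<PATH_TO_FEAST_REPO>
```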
 
description: Building container images with AWS CodeBuild
---

# AWS Image Builder

  The AWS image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `aws` integration that uses [AWS CodeBuild](https://aws.amazon.com/codebuild) to build container images.
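A hedged registration sketch; the `code_build_project` key is an assumption to check against the flavor docs, and the CodeBuild project must already exist:

```sh
zenml image-builder register aws_builder --flavor=aws --code_build_project=<CODEBUILD_PROJECT_NAME>
```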
 
description: Learning how to develop a custom image builder.
---

# Develop a Custom Image Builder

  {% hint style="info" %}
 
description: Building container images with Google Cloud Build
---

# Google Cloud Image Builder

  The Google Cloud image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `gcp` integration that uses [Google Cloud Build](https://cloud.google.com/build) to build container images.
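Registering the flavor itself is minimal; further GCP-specific options can be layered on top as needed:

```sh
zenml image-builder register gcp_builder --flavor=gcp
```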
 
description: Building container images with Kaniko.
---

# Kaniko Image Builder

  The Kaniko image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `kaniko` integration that uses [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images.
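A minimal registration sketch; builds then run as pods in the cluster selected by the given context:

```sh
zenml image-builder register kaniko_builder --flavor=kaniko --kubernetes_context=<KUBERNETES_CONTEXT>
```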
 

* `--cache`: Set to `false` to disable caching. Defaults to `true`.
* `--cache-dir`: Set the directory where to store cached layers. Defaults to `/cache`.
* `--cache-repo`: Set the repository where to store cached layers.
* `--cache-ttl`: Set the cache expiration time. Defaults to `24h`.
* `--cleanup`: Set to `false` to disable cleanup of the working directory. Defaults to `true`.
  * `--compressed-caching`: Set to `false` to disable compressed caching. Defaults to `true`.
 
description: Building container images locally.
---

# Local Image Builder

  The local image builder is an [image builder](./image-builders.md) flavor that comes built-in with ZenML and uses the local Docker installation on your client machine to build container images.
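Since it only needs a working Docker daemon, registration takes no extra configuration:

```sh
zenml image-builder register local_builder --flavor=local
```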
 
description: Deploying your models locally with BentoML.
---

# BentoML

BentoML is an open-source framework for machine learning model serving. It can be used to deploy models locally, in a cloud environment, or in a Kubernetes environment.
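The corresponding ZenML model deployer can be registered with no extra configuration (the component name is illustrative):

```sh
zenml model-deployer register bentoml_deployer --flavor=bentoml

zenml stack update -d bentoml_deployer
```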
 

### Create a BentoML Service

The first step to deploying your models with BentoML is to create a [bento service](https://docs.bentoml.com/en/latest/guides/services.html), the main logic that defines how your model will be served.

The following example shows how to create a basic bento service that serves a torch model. Learn more about how to specify the inputs and outputs for the APIs and how to use validators in the [Input and output types BentoML docs](https://docs.bentoml.com/en/latest/guides/iotypes.html).
 
 
description: Learning how to develop a custom model deployer.
---

# Develop a Custom Model Deployer

  {% hint style="info" %}
 
Deploying models to Databricks Inference Endpoints with Databricks
---

# Databricks

 
 
:hugging_face:.
---

# Hugging Face

  Hugging Face Inference Endpoints provides a secure production solution to easily deploy any `transformers`, `sentence-transformers`, and `diffusers` models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the [Hub](https://huggingface.co/models).
 
description: Deploying your models locally with MLflow.
---

# MLflow

The MLflow Model Deployer is one of the available flavors of the [Model Deployer](./model-deployers.md) stack component. Provided with the MLflow integration, it can be used to deploy and manage [MLflow models](https://www.mlflow.org/docs/latest/python_api/mlflow.deployments.html) on a locally running MLflow server.
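A minimal registration sketch (the component name is illustrative):

```sh
zenml model-deployer register mlflow_deployer --flavor=mlflow

zenml stack update -d mlflow_deployer
```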
 
```

* Lifecycle Management: Provides mechanisms for comprehensive lifecycle management of model servers, including the ability to start, stop, and delete model servers, as well as to update existing servers with new model versions, thereby optimizing resource utilization and facilitating continuous delivery of model updates. Some core methods that can be used to interact with the remote model server include:
  - `deploy_model` - Deploys a model to the serving environment and returns a Service object that represents the deployed model server.
  - `find_model_server` - Finds and returns a list of Service objects that represent model servers deployed to the serving environment. The services are stored in the DB and can be used as a reference to know what and where the model is deployed.
  - `stop_model_server` - Stops a model server that is currently running in the serving environment.
  - `start_model_server` - Starts a model server that has been stopped in the serving environment.
  - `delete_model_server` - Deletes a model server from the serving environment and from the DB.

  {% hint style="info" %}
  ZenML uses the Service object to represent a model server that has been deployed to a serving environment. The Service object is saved in the DB and can be used as a reference to know what and where the model is deployed. The Service object consists of 2 main attributes, the `config` and the `status`. The `config` attribute holds all the deployment configuration attributes required to create a new deployment, while the `status` attribute holds the operational status of the deployment, such as the last error message, the prediction URL, and the deployment status.
 
description: Deploying models to Kubernetes with Seldon Core.
---

# Seldon

[Seldon Core](https://github.com/SeldonIO/seldon-core) is a production-grade source-available model serving platform. It packs a wide range of features built around deploying models to REST/gRPC microservices, including monitoring and logging, model explainers, outlier detectors, and various continuous deployment strategies such as A/B testing and canary deployments.
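A hedged registration sketch for the ZenML flavor; the configuration keys below (context, namespace, ingress base URL) correspond to an existing Seldon Core installation but should be verified against the flavor docs:

```sh
zenml model-deployer register seldon_deployer --flavor=seldon \
    --kubernetes_context=<KUBERNETES_CONTEXT> \
    --kubernetes_namespace=<NAMESPACE> \
    --base_url=http://<INGRESS_HOST>
```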
 
description: Deploying your LLM locally with vLLM.
---

# vLLM

  [vLLM](https://docs.vllm.ai/en/latest/) is a fast and easy-to-use library for LLM inference and serving.
 
description: Learning how to develop a custom model registry.
---

# Develop a Custom Model Registry

  {% hint style="info" %}
 
description: Managing MLflow logged models and artifacts
---

# MLflow Model Registry

[MLflow](https://www.mlflow.org/docs/latest/tracking.html) is a popular tool that helps you track experiments, manage models and even deploy them to different environments. ZenML already provides an [MLflow Experiment Tracker](../experiment-trackers/mlflow.md) that you can use to track your experiments, and an [MLflow Model Deployer](../model-deployers/mlflow.md) that you can use to deploy your models locally.
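The model registry flavor rounds this out and registers with no mandatory configuration (the name is illustrative):

```sh
zenml model-registry register mlflow_registry --flavor=mlflow
```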
 
description: Orchestrating your pipelines to run on Airflow.
---

# Airflow Orchestrator

  ZenML pipelines can be executed natively as [Airflow](https://airflow.apache.org/)
 
description: Orchestrating your pipelines to run on AzureML.
---

# AzureML Orchestrator

  [AzureML](https://azure.microsoft.com/en-us/products/machine-learning) is a
 
description: Learning how to develop a custom orchestrator.
---

# Develop a custom orchestrator

  {% hint style="info" %}
 
description: Orchestrating your pipelines to run on Databricks.
---

# Databricks Orchestrator

  [Databricks](https://www.databricks.com/) is a unified data analytics platform that combines the best of data warehouses and data lakes to offer an integrated solution for big data processing and machine learning. It provides a collaborative environment for data scientists, data engineers, and business analysts to work together on data projects. Databricks offers optimized performance and scalability for big data workloads.
 
description: Orchestrating your pipelines to run on HyperAI instances.
---

# HyperAI Orchestrator

  [HyperAI](https://www.hyperai.ai) is a cutting-edge cloud compute platform designed to make AI accessible for everyone. The HyperAI orchestrator is an [orchestrator](./orchestrators.md) flavor that allows you to easily deploy your pipelines on HyperAI instances.
 
description: Orchestrating your pipelines to run on Kubeflow.
---

# Kubeflow Orchestrator

  The Kubeflow orchestrator is an [orchestrator](./orchestrators.md) flavor provided by the ZenML `kubeflow` integration that uses [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) to run your pipelines.
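A minimal registration sketch, assuming Kubeflow Pipelines is already installed in the target cluster:

```sh
zenml orchestrator register kubeflow_orchestrator --flavor=kubeflow --kubernetes_context=<KUBERNETES_CONTEXT>

zenml stack update -o kubeflow_orchestrator
```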
 
description: Orchestrating your pipelines to run on Kubernetes clusters.
---

# Kubernetes Orchestrator

  Using the ZenML `kubernetes` integration, you can orchestrate and scale your ML pipelines on a [Kubernetes](https://kubernetes.io/) cluster without writing a single line of Kubernetes code.
 
Overall, the Kubernetes orchestrator is quite similar to the Kubeflow orchestrator in that it runs each pipeline step in a separate Kubernetes pod. However, the orchestration of the different pods is not done by Kubeflow but by a separate master pod that orchestrates the step execution via topological sort.

Compared to Kubeflow, this means that the Kubernetes-native orchestrator is faster and much simpler since you do not need to install and maintain Kubeflow on your cluster. The Kubernetes-native orchestrator is an ideal choice for teams in need of distributed orchestration that do not want to go with a fully-managed offering.
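A minimal registration sketch (the context name is a placeholder for a context from your local kubeconfig):

```sh
zenml orchestrator register k8s_orchestrator --flavor=kubernetes --kubernetes_context=<KUBERNETES_CONTEXT>

zenml stack update -o k8s_orchestrator
```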
 
 
{% hint style="warning" %}
  This component is only meant to be used within the context of a [remote ZenML deployment scenario](../../getting-started/deploying-zenml/README.md). Usage with a local ZenML deployment may lead to unexpected behavior!
 
description: Orchestrating your pipelines to run on Lightning AI.
---

# Lightning AI Orchestrator
 
 
description: Orchestrating your pipelines to run in Docker.
---

# Local Docker Orchestrator

  The local Docker orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally using Docker.
 
description: Orchestrating your pipelines to run locally.
---

# Local Orchestrator

  The local orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally.
 
description: Orchestrating your pipelines to run on Amazon SageMaker.
---

# AWS SageMaker Orchestrator

[SageMaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) is a serverless ML workflow tool running on AWS. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
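A registration sketch; the execution role is the IAM role SageMaker assumes when running your pipeline (the ARN is a placeholder):

```sh
zenml orchestrator register sagemaker_orchestrator --flavor=sagemaker --execution_role=<ROLE_ARN>

zenml stack update -o sagemaker_orchestrator
```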
 
* `sagemaker_session`
* `entrypoint`
* `base_job_name`
* `environment`

For example, settings can be provided and applied in the following way:

sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(
    instance_type="ml.m5.large",
    volume_size_in_gb=30,
    environment={"MY_ENV_VAR": "my_value"},
)
 
 
description: Orchestrating your pipelines to run on VMs using SkyPilot.
---

# Skypilot VM Orchestrator

The SkyPilot VM Orchestrator is an integration provided by ZenML that allows you to provision and manage virtual machines (VMs) on any cloud provider supported by the [SkyPilot framework](https://skypilot.readthedocs.io/en/latest/index.html). This integration is designed to simplify the process of running machine learning workloads on the cloud, offering cost savings, high GPU availability, and managed execution. We recommend using the SkyPilot VM Orchestrator if you need access to GPUs for your workloads but don't want to deal with the complexities of managing cloud infrastructure or expensive managed solutions.
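A registration sketch for the AWS variant; the flavor name is cloud-specific (`vm_aws`, `vm_gcp`, `vm_azure`), which is worth verifying for your provider:

```sh
zenml orchestrator register skypilot_orchestrator --flavor=vm_aws

zenml stack update -o skypilot_orchestrator
```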
 
description: Orchestrating your pipelines to run on Tekton.
---

# Tekton Orchestrator

  [Tekton](https://tekton.dev/) is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems.
 
description: Orchestrating your pipelines to run on Vertex AI.
---

# Google Cloud VertexAI Orchestrator

  [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
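A minimal registration sketch (project and location are placeholders):

```sh
zenml orchestrator register vertex_orchestrator --flavor=vertex --project=<PROJECT_ID> --location=<REGION>

zenml stack update -o vertex_orchestrator
```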
 
description: Executing individual steps in AzureML.
---

# AzureML

  [AzureML](https://azure.microsoft.com/en-us/products/machine-learning/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's AzureML step operator allows you to submit individual steps to be run on AzureML compute instances.
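A hedged registration sketch; the workspace-related keys below are assumptions based on the flavor's configuration and should be checked against the flavor docs:

```sh
zenml step-operator register azureml_op --flavor=azureml \
    --subscription_id=<SUBSCRIPTION_ID> \
    --resource_group=<RESOURCE_GROUP> \
    --workspace_name=<WORKSPACE_NAME> \
    --compute_target_name=<COMPUTE_TARGET_NAME>
```

Individual steps then opt in via the `step_operator` argument of the `@step` decorator.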
 
description: Learning how to develop a custom step operator.
---

# Develop a Custom Step Operator

  {% hint style="info" %}
 
description: Executing individual steps in Kubernetes Pods.
---

# Kubernetes Step Operator

  ZenML's Kubernetes step operator allows you to submit individual steps to be run on Kubernetes pods.
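A minimal registration sketch; further Kubernetes-specific settings can be added as needed:

```sh
zenml step-operator register k8s_step_operator --flavor=kubernetes

zenml stack update -s k8s_step_operator
```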
 
description: Executing individual steps in Modal.
---

# Modal Step Operator

  [Modal](https://modal.com) is a platform for running cloud infrastructure. It offers specialized compute instances to run your code and has a fast execution time, especially around building Docker images and provisioning hardware. ZenML's Modal step operator allows you to submit individual steps to be run on Modal compute instances.
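A minimal registration sketch (Modal credentials are assumed to be configured on the client):

```sh
zenml step-operator register modal_op --flavor=modal

zenml stack update -s modal_op
```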
 
description: Executing individual steps in SageMaker.
---

# Amazon SageMaker

[SageMaker](https://aws.amazon.com/sagemaker/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's SageMaker step operator allows you to submit individual steps to be run on SageMaker compute instances.
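A registration sketch; the role is the IAM role assumed by the SageMaker job (the ARN is a placeholder):

```sh
zenml step-operator register sagemaker_op --flavor=sagemaker --role=<SAGEMAKER_ROLE_ARN>

zenml stack update -s sagemaker_op
```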
 
name: edit
apiGroup: rbac.authorization.k8s.io
---
```

  And then execute the following command to create the resources:
 
description: Executing individual steps in Vertex AI.
---

# Google Cloud VertexAI

  [Vertex AI](https://cloud.google.com/vertex-ai) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's Vertex AI step operator allows you to submit individual steps to be run on Vertex AI compute instances.
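A minimal registration sketch (the region key is an assumption to verify against the flavor docs):

```sh
zenml step-operator register vertex_op --flavor=vertex --region=<REGION>

zenml stack update -s vertex_op
```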
 
description: Overview of categories of MLOps components.
---

# 📜 Overview

  If you are new to the world of MLOps, it is often daunting to be immediately faced with a sea of tools that seemingly all promise and do the same things. It is useful in this case to try to categorize tools in various groups in order to understand their value in your toolchain in a more precise manner.
 
description: Overview of third-party ZenML integrations.
---

# Integration overview

  Categorizing the MLOps stack is a good way to write abstractions for an MLOps pipeline and standardize your processes. But ZenML goes further and also provides concrete implementations of these categories by **integrating** with various tools for each category. Once code is organized into a ZenML pipeline, you can supercharge your ML workflows with the best-in-class solutions from various MLOps areas.