wjayesh committed
Commit 5f11033 · verified · 1 Parent(s): 2e27e5b

Upload component-guide.txt with huggingface_hub

Files changed (1):
  component-guide.txt +377 -7
component-guide.txt CHANGED
@@ -1,5 +1,4 @@
1
- This file is a merged representation of the entire codebase, combining all repository files into a single document.
2
- Generated by Repomix on: 2025-02-07T15:55:37.705Z
3
 
4
  ================================================================
5
  File Summary
@@ -35,11 +34,11 @@ Usage Guidelines:
35
 
36
  Notes:
37
  ------
38
- - Some files may have been excluded based on .gitignore rules and Repomix's
39
- configuration.
40
- - Binary files are not included in this packed representation. Please refer to
41
- the Repository Structure section for a complete list of file paths, including
42
- binary files.
43
 
44
  Additional Info:
45
  ----------------
@@ -158,6 +157,11 @@ description: Sending automated alerts to chat services.
158
  icon: message-exclamation
159
  ---
160
 
 
 
 
 
 
161
  # Alerters
162
 
163
  **Alerters** allow you to send messages to chat services (like Slack, Discord, Mattermost, etc.) from within your
@@ -213,6 +217,11 @@ File: docs/book/component-guide/alerters/custom.md
213
  description: Learning how to develop a custom alerter.
214
  ---
215
 
 
 
 
 
 
216
  # Develop a Custom Alerter
217
 
218
  {% hint style="info" %}
@@ -360,6 +369,11 @@ File: docs/book/component-guide/alerters/discord.md
360
  description: Sending automated alerts to a Discord channel.
361
  ---
362
 
 
 
 
 
 
363
  # Discord Alerter
364
 
365
  The `DiscordAlerter` enables you to send messages to a dedicated Discord channel
@@ -502,6 +516,11 @@ File: docs/book/component-guide/alerters/slack.md
502
  description: Sending automated alerts to a Slack channel.
503
  ---
504
 
 
 
 
 
 
505
  # Slack Alerter
506
 
507
  The `SlackAlerter` enables you to send messages or ask questions within a
@@ -843,6 +862,11 @@ File: docs/book/component-guide/annotators/argilla.md
843
  description: Annotating data using Argilla.
844
  ---
845
 
 
 
 
 
 
846
  # Argilla
847
 
848
  [Argilla](https://github.com/argilla-io/argilla) is a collaboration tool for AI engineers and domain experts who need to build high-quality datasets for their projects. It enables users to build robust language models through faster data curation using both human and machine feedback, providing support for each step in the MLOps cycle, from data labeling to model monitoring.
@@ -986,6 +1010,11 @@ File: docs/book/component-guide/annotators/custom.md
986
  description: Learning how to develop a custom annotator.
987
  ---
988
 
 
 
 
 
 
989
  # Develop a Custom Annotator
990
 
991
  {% hint style="info" %}
@@ -1009,6 +1038,11 @@ File: docs/book/component-guide/annotators/label-studio.md
1009
  description: Annotating data using Label Studio.
1010
  ---
1011
 
 
 
 
 
 
1012
  # Label Studio
1013
 
1014
  Label Studio is one of the leading open-source annotation platforms available to data scientists and ML practitioners.
@@ -1161,6 +1195,11 @@ File: docs/book/component-guide/annotators/pigeon.md
1161
  description: Annotating data using Pigeon.
1162
  ---
1163
 
 
 
 
 
 
1164
  # Pigeon
1165
 
1166
  Pigeon is a lightweight, open-source annotation tool designed for quick and easy labeling of data directly within Jupyter notebooks. It provides a simple and intuitive interface for annotating various types of data, including:
@@ -1278,6 +1317,11 @@ File: docs/book/component-guide/annotators/prodigy.md
1278
  description: Annotating data using Prodigy.
1279
  ---
1280
 
 
 
 
 
 
1281
  # Prodigy
1282
 
1283
  [Prodigy](https://prodi.gy/) is a modern annotation tool for creating training
@@ -1417,6 +1461,11 @@ description: Setting up a persistent storage for your artifacts.
1417
  icon: folder-closed
1418
  ---
1419
 
 
 
 
 
 
1420
  # Artifact Stores
1421
 
1422
  The Artifact Store is a central component in any MLOps stack. As the name suggests, it acts as a data persistence layer where artifacts (e.g. datasets, models) ingested or generated by the machine learning pipelines are stored.
@@ -1589,6 +1638,11 @@ File: docs/book/component-guide/artifact-stores/azure.md
1589
  description: Storing artifacts using Azure Blob Storage
1590
  ---
1591
 
 
 
 
 
 
1592
  # Azure Blob Storage
1593
 
1594
  The Azure Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the Azure ZenML integration that uses [the Azure Blob Storage managed object storage service](https://azure.microsoft.com/en-us/services/storage/blobs/) to store ZenML artifacts in an Azure Blob Storage container.
@@ -1821,6 +1875,11 @@ File: docs/book/component-guide/artifact-stores/custom.md
1821
  description: Learning how to develop a custom artifact store.
1822
  ---
1823
 
 
 
 
 
 
1824
  # Develop a custom artifact store
1825
 
1826
  {% hint style="info" %}
@@ -2013,6 +2072,11 @@ File: docs/book/component-guide/artifact-stores/gcp.md
2013
  description: Storing artifacts using GCP Cloud Storage.
2014
  ---
2015
 
 
 
 
 
 
2016
  # Google Cloud Storage (GCS)
2017
 
2018
  The GCS Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the GCP ZenML integration that uses [the Google Cloud Storage managed object storage service](https://cloud.google.com/storage/docs/introduction) to store ZenML artifacts in a GCP Cloud Storage bucket.
@@ -2217,6 +2281,11 @@ File: docs/book/component-guide/artifact-stores/local.md
2217
  description: Storing artifacts on your local filesystem.
2218
  ---
2219
 
 
 
 
 
 
2220
  # Local Artifact Store
2221
 
2222
  The local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) flavor that uses a folder on your local filesystem to store artifacts.
@@ -2305,6 +2374,11 @@ File: docs/book/component-guide/artifact-stores/s3.md
2305
  description: Storing artifacts in an AWS S3 bucket.
2306
  ---
2307
 
 
 
 
 
 
2308
  # Amazon Simple Cloud Storage (S3)
2309
 
2310
  The S3 Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the S3 ZenML integration that uses [the AWS S3 managed object storage service](https://aws.amazon.com/s3/) or one of the self-hosted S3 alternatives, such as [MinIO](https://min.io/) or [Ceph RGW](https://ceph.io/en/discover/technology/#object), to store artifacts in an S3 compatible object storage backend.
@@ -2529,6 +2603,11 @@ File: docs/book/component-guide/container-registries/aws.md
2529
  description: Storing container images in Amazon ECR.
2530
  ---
2531
 
 
 
 
 
 
2532
  # Amazon Elastic Container Registry (ECR)
2533
 
2534
  The AWS container registry is a [container registry](./container-registries.md) flavor provided with the ZenML `aws` integration and uses [Amazon ECR](https://aws.amazon.com/ecr/) to store container images.
@@ -2740,6 +2819,11 @@ File: docs/book/component-guide/container-registries/azure.md
2740
  description: Storing container images in Azure.
2741
  ---
2742
 
 
 
 
 
 
2743
  # Azure Container Registry
2744
 
2745
  The Azure container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) to store container images.
@@ -2994,6 +3078,11 @@ File: docs/book/component-guide/container-registries/custom.md
2994
  description: Learning how to develop a custom container registry.
2995
  ---
2996
 
 
 
 
 
 
2997
  # Develop a custom container registry
2998
 
2999
  {% hint style="info" %}
@@ -3120,6 +3209,11 @@ File: docs/book/component-guide/container-registries/default.md
3120
  description: Storing container images locally.
3121
  ---
3122
 
 
 
 
 
 
3123
  # Default Container Registry
3124
 
3125
  The Default container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and allows container registry URIs of any format.
@@ -3297,6 +3391,11 @@ File: docs/book/component-guide/container-registries/dockerhub.md
3297
  description: Storing container images in DockerHub.
3298
  ---
3299
 
 
 
 
 
 
3300
  # DockerHub
3301
 
3302
  The DockerHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses [DockerHub](https://hub.docker.com/) to store container images.
@@ -3370,6 +3469,11 @@ File: docs/book/component-guide/container-registries/gcp.md
3370
  description: Storing container images in GCP.
3371
  ---
3372
 
 
 
 
 
 
3373
  # Google Cloud Container Registry
3374
 
3375
  The GCP container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Google Artifact Registry](https://cloud.google.com/artifact-registry).
@@ -3611,6 +3715,11 @@ File: docs/book/component-guide/container-registries/github.md
3611
  description: Storing container images in GitHub.
3612
  ---
3613
 
 
 
 
 
 
3614
  # GitHub Container Registry
3615
 
3616
  The GitHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) to store container images.
@@ -3673,6 +3782,11 @@ File: docs/book/component-guide/data-validators/custom.md
3673
  description: How to develop a custom data validator
3674
  ---
3675
 
 
 
 
 
 
3676
  # Develop a custom data validator
3677
 
3678
  {% hint style="info" %}
@@ -3802,6 +3916,11 @@ description: >-
3802
  suites
3803
  ---
3804
 
 
 
 
 
 
3805
  # Deepchecks
3806
 
3807
  The Deepchecks [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Deepchecks](https://deepchecks.com/) to run data integrity, data drift, model drift and model performance tests on the datasets and models circulated in your ZenML pipelines. The test results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
@@ -4226,6 +4345,11 @@ description: >-
4226
  with Evidently profiling
4227
  ---
4228
 
 
 
 
 
 
4229
  # Evidently
4230
 
4231
  The Evidently [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Evidently](https://evidentlyai.com/) to perform data quality, data drift, model drift and model performance analyzes, to generate reports and run checks. The reports and check results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
@@ -4863,6 +4987,11 @@ description: >-
4863
  document the results
4864
  ---
4865
 
 
 
 
 
 
4866
  # Great Expectations
4867
 
4868
  The Great Expectations [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Great Expectations](https://greatexpectations.io/) to run data profiling and data quality tests on the data circulated through your pipelines. The test results can be used to implement automated corrective actions in your pipelines. They are also automatically rendered into documentation for further visual interpretation and evaluation.
@@ -5175,6 +5304,11 @@ description: >-
5175
  data with whylogs/WhyLabs profiling.
5176
  ---
5177
 
 
 
 
 
 
5178
  # Whylogs
5179
 
5180
  The whylogs/WhyLabs [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [whylogs](https://whylabs.ai/whylogs) and [WhyLabs](https://whylabs.ai) to generate and track data profiles, highly accurate descriptive representations of your data. The profiles can be used to implement automated corrective actions in your pipelines, or to render interactive representations for further visual interpretation, evaluation and documentation.
@@ -5462,6 +5596,11 @@ File: docs/book/component-guide/experiment-trackers/comet.md
5462
  description: Logging and visualizing experiments with Comet.
5463
  ---
5464
 
 
 
 
 
 
5465
  # Comet
5466
 
5467
  The Comet Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Comet ZenML integration that uses [the Comet experiment tracking platform](https://www.comet.com/site/products/ml-experiment-tracking/) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
@@ -5757,6 +5896,11 @@ File: docs/book/component-guide/experiment-trackers/custom.md
5757
  description: Learning how to develop a custom experiment tracker.
5758
  ---
5759
 
 
 
 
 
 
5760
  # Develop a custom experiment tracker
5761
 
5762
  {% hint style="info" %}
@@ -5823,6 +5967,11 @@ description: Logging and visualizing ML experiments.
5823
  icon: clipboard
5824
  ---
5825
 
 
 
 
 
 
5826
  # Experiment Trackers
5827
 
5828
  Experiment trackers let you track your ML experiments by logging extended information about your models, datasets,
@@ -5916,6 +6065,11 @@ File: docs/book/component-guide/experiment-trackers/mlflow.md
5916
  description: Logging and visualizing experiments with MLflow.
5917
  ---
5918
 
 
 
 
 
 
5919
  # MLflow
5920
 
5921
  The MLflow Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the MLflow ZenML integration that uses [the MLflow tracking service](https://mlflow.org/docs/latest/tracking.html) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
@@ -6134,6 +6288,11 @@ File: docs/book/component-guide/experiment-trackers/neptune.md
6134
  description: Logging and visualizing experiments with neptune.ai
6135
  ---
6136
 
 
 
 
 
 
6137
  # Neptune
6138
 
6139
  The Neptune Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Neptune-ZenML integration that uses [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
@@ -6452,6 +6611,11 @@ File: docs/book/component-guide/experiment-trackers/vertexai.md
6452
  description: Logging and visualizing experiments with Vertex AI Experiment Tracker.
6453
  ---
6454
 
 
 
 
 
 
6455
  # Vertex AI Experiment Tracker
6456
 
6457
  The Vertex AI Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Vertex AI ZenML integration. It uses the [Vertex AI tracking service](https://cloud.google.com/vertex-ai/docs/experiments/intro-vertex-ai-experiments) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
@@ -6771,6 +6935,11 @@ File: docs/book/component-guide/experiment-trackers/wandb.md
6771
  description: Logging and visualizing experiments with Weights & Biases.
6772
  ---
6773
 
 
 
 
 
 
6774
  # Weights & Biases
6775
 
6776
  The Weights & Biases Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Weights & Biases ZenML integration that uses [the Weights & Biases experiment tracking platform](https://wandb.ai/site/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
@@ -7088,6 +7257,11 @@ File: docs/book/component-guide/feature-stores/custom.md
7088
  description: Learning how to develop a custom feature store.
7089
  ---
7090
 
 
 
 
 
 
7091
  # Develop a Custom Feature Store
7092
 
7093
  {% hint style="info" %}
@@ -7111,6 +7285,11 @@ File: docs/book/component-guide/feature-stores/feast.md
7111
  description: Managing data in Feast feature stores.
7112
  ---
7113
 
 
 
 
 
 
7114
  # Feast
7115
 
7116
  Feast (Feature Store) is an operational data system for managing and serving machine learning features to models in production. Feast is able to serve feature data to models from a low-latency online store (for real-time prediction) or from an offline store (for scale-out batch scoring or model training).
@@ -7293,6 +7472,11 @@ File: docs/book/component-guide/image-builders/aws.md
7293
  description: Building container images with AWS CodeBuild
7294
  ---
7295
 
 
 
 
 
 
7296
  # AWS Image Builder
7297
 
7298
  The AWS image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `aws` integration that uses [AWS CodeBuild](https://aws.amazon.com/codebuild) to build container images.
@@ -7531,6 +7715,11 @@ File: docs/book/component-guide/image-builders/custom.md
7531
  description: Learning how to develop a custom image builder.
7532
  ---
7533
 
 
 
 
 
 
7534
  # Develop a Custom Image Builder
7535
 
7536
  {% hint style="info" %}
@@ -7651,6 +7840,11 @@ File: docs/book/component-guide/image-builders/gcp.md
7651
  description: Building container images with Google Cloud Build
7652
  ---
7653
 
 
 
 
 
 
7654
  # Google Cloud Image Builder
7655
 
7656
  The Google Cloud image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `gcp` integration that uses [Google Cloud Build](https://cloud.google.com/build) to build container images.
@@ -7904,6 +8098,11 @@ File: docs/book/component-guide/image-builders/kaniko.md
7904
  description: Building container images with Kaniko.
7905
  ---
7906
 
 
 
 
 
 
7907
  # Kaniko Image Builder
7908
 
7909
  The Kaniko image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `kaniko` integration that uses [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images.
@@ -8061,6 +8260,11 @@ File: docs/book/component-guide/image-builders/local.md
8061
  description: Building container images locally.
8062
  ---
8063
 
 
 
 
 
 
8064
  # Local Image Builder
8065
 
8066
  The local image builder is an [image builder](./image-builders.md) flavor that comes built-in with ZenML and uses the local Docker installation on your client machine to build container images.
@@ -8113,6 +8317,11 @@ File: docs/book/component-guide/model-deployers/bentoml.md
8113
  description: Deploying your models locally with BentoML.
8114
  ---
8115
 
 
 
 
 
 
8116
  # BentoML
8117
 
8118
  BentoML is an open-source framework for machine learning model serving. it can be used to deploy models locally, in a cloud environment, or in a Kubernetes environment.
@@ -8499,6 +8708,11 @@ File: docs/book/component-guide/model-deployers/custom.md
8499
  description: Learning how to develop a custom model deployer.
8500
  ---
8501
 
 
 
 
 
 
8502
  # Develop a Custom Model Deployer
8503
 
8504
  {% hint style="info" %}
@@ -8671,6 +8885,11 @@ description: >-
8671
  Deploying models to Databricks Inference Endpoints with Databricks
8672
  ---
8673
 
 
 
 
 
 
8674
  # Databricks
8675
 
8676
 
@@ -8824,6 +9043,11 @@ description: >-
8824
  :hugging_face:.
8825
  ---
8826
 
 
 
 
 
 
8827
  # Hugging Face
8828
 
8829
  Hugging Face Inference Endpoints provides a secure production solution to easily deploy any `transformers`, `sentence-transformers`, and `diffusers` models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the [Hub](https://huggingface.co/models).
@@ -9016,6 +9240,11 @@ File: docs/book/component-guide/model-deployers/mlflow.md
9016
  description: Deploying your models locally with MLflow.
9017
  ---
9018
 
 
 
 
 
 
9019
  # MLflow
9020
 
9021
  The MLflow Model Deployer is one of the available flavors of the [Model Deployer](./model-deployers.md) stack component. Provided with the MLflow integration it can be used to deploy and manage [MLflow models](https://www.mlflow.org/docs/latest/python\_api/mlflow.deployments.html) on a local running MLflow server.
@@ -9460,6 +9689,11 @@ File: docs/book/component-guide/model-deployers/seldon.md
9460
  description: Deploying models to Kubernetes with Seldon Core.
9461
  ---
9462
 
 
 
 
 
 
9463
  # Seldon
9464
 
9465
  [Seldon Core](https://github.com/SeldonIO/seldon-core) is a production grade source-available model serving platform. It packs a wide range of features built around deploying models to REST/GRPC microservices that include monitoring and logging, model explainers, outlier detectors and various continuous deployment strategies such as A/B testing, canary deployments and more.
@@ -9939,6 +10173,11 @@ File: docs/book/component-guide/model-deployers/vllm.md
9939
  description: Deploying your LLM locally with vLLM.
9940
  ---
9941
 
 
 
 
 
 
9942
  # vLLM
9943
 
9944
  [vLLM](https://docs.vllm.ai/en/latest/) is a fast and easy-to-use library for LLM inference and serving.
@@ -10017,6 +10256,11 @@ File: docs/book/component-guide/model-registries/custom.md
10017
  description: Learning how to develop a custom model registry.
10018
  ---
10019
 
 
 
 
 
 
10020
  # Develop a Custom Model Registry
10021
 
10022
  {% hint style="info" %}
@@ -10213,6 +10457,11 @@ File: docs/book/component-guide/model-registries/mlflow.md
10213
  description: Managing MLFlow logged models and artifacts
10214
  ---
10215
 
 
 
 
 
 
10216
  # MLflow Model Registry
10217
 
10218
  [MLflow](https://www.mlflow.org/docs/latest/tracking.html) is a popular tool that helps you track experiments, manage models and even deploy them to different environments. ZenML already provides a [MLflow Experiment Tracker](../experiment-trackers/mlflow.md) that you can use to track your experiments, and an [MLflow Model Deployer](../model-deployers/mlflow.md) that you can use to deploy your models locally.
@@ -10462,6 +10711,11 @@ File: docs/book/component-guide/orchestrators/airflow.md
10462
  description: Orchestrating your pipelines to run on Airflow.
10463
  ---
10464
 
 
 
 
 
 
10465
  # Airflow Orchestrator
10466
 
10467
  ZenML pipelines can be executed natively as [Airflow](https://airflow.apache.org/)
@@ -10771,6 +11025,11 @@ File: docs/book/component-guide/orchestrators/azureml.md
10771
  description: Orchestrating your pipelines to run on AzureML.
10772
  ---
10773
 
 
 
 
 
 
10774
  # AzureML Orchestrator
10775
 
10776
  [AzureML](https://azure.microsoft.com/en-us/products/machine-learning) is a
@@ -11017,6 +11276,11 @@ File: docs/book/component-guide/orchestrators/custom.md
11017
  description: Learning how to develop a custom orchestrator.
11018
  ---
11019
 
 
 
 
 
 
11020
  # Develop a custom orchestrator
11021
 
11022
  {% hint style="info" %}
@@ -11241,6 +11505,11 @@ File: docs/book/component-guide/orchestrators/databricks.md
11241
  description: Orchestrating your pipelines to run on Databricks.
11242
  ---
11243
 
 
 
 
 
 
11244
  # Databricks Orchestrator
11245
 
11246
  [Databricks](https://www.databricks.com/) is a unified data analytics platform that combines the best of data warehouses and data lakes to offer an integrated solution for big data processing and machine learning. It provides a collaborative environment for data scientists, data engineers, and business analysts to work together on data projects. Databricks offers optimized performance and scalability for big data workloads.
@@ -11437,6 +11706,11 @@ File: docs/book/component-guide/orchestrators/hyperai.md
11437
  description: Orchestrating your pipelines to run on HyperAI.ai instances.
11438
  ---
11439
 
 
 
 
 
 
11440
  # HyperAI Orchestrator
11441
 
11442
  [HyperAI](https://www.hyperai.ai) is a cutting-edge cloud compute platform designed to make AI accessible for everyone. The HyperAI orchestrator is an [orchestrator](./orchestrators.md) flavor that allows you to easily deploy your pipelines on HyperAI instances.
@@ -11524,6 +11798,11 @@ File: docs/book/component-guide/orchestrators/kubeflow.md
11524
  description: Orchestrating your pipelines to run on Kubeflow.
11525
  ---
11526
 
 
 
 
 
 
11527
  # Kubeflow Orchestrator
11528
 
11529
  The Kubeflow orchestrator is an [orchestrator](./orchestrators.md) flavor provided by the ZenML `kubeflow` integration that uses [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) to run your pipelines.
@@ -11881,6 +12160,11 @@ File: docs/book/component-guide/orchestrators/kubernetes.md
11881
  description: Orchestrating your pipelines to run on Kubernetes clusters.
11882
  ---
11883
 
 
 
 
 
 
11884
  # Kubernetes Orchestrator
11885
 
11886
  Using the ZenML `kubernetes` integration, you can orchestrate and scale your ML pipelines on a [Kubernetes](https://kubernetes.io/) cluster without writing a single line of Kubernetes code.
@@ -12186,6 +12470,11 @@ File: docs/book/component-guide/orchestrators/lightning.md
12186
  description: Orchestrating your pipelines to run on Lightning AI.
12187
  ---
12188
 
 
 
 
 
 
12189
 
12190
  # Lightning AI Orchestrator
12191
 
@@ -12385,6 +12674,11 @@ File: docs/book/component-guide/orchestrators/local-docker.md
12385
  description: Orchestrating your pipelines to run in Docker.
12386
  ---
12387
 
 
 
 
 
 
12388
  # Local Docker Orchestrator
12389
 
12390
  The local Docker orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally using Docker.
@@ -12462,6 +12756,11 @@ File: docs/book/component-guide/orchestrators/local.md
12462
  description: Orchestrating your pipelines to run locally.
12463
  ---
12464
 
 
 
 
 
 
12465
  # Local Orchestrator
12466
 
12467
  The local orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally.
@@ -12595,6 +12894,11 @@ File: docs/book/component-guide/orchestrators/sagemaker.md
12595
  description: Orchestrating your pipelines to run on Amazon Sagemaker.
12596
  ---
12597
 
 
 
 
 
 
12598
  # AWS Sagemaker Orchestrator
12599
 
12600
  [Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) is a serverless ML workflow tool running on AWS. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
@@ -13143,6 +13447,11 @@ File: docs/book/component-guide/orchestrators/skypilot-vm.md
13143
  description: Orchestrating your pipelines to run on VMs using SkyPilot.
13144
  ---
13145
 
 
 
 
 
 
13146
  # Skypilot VM Orchestrator
13147
 
13148
  The SkyPilot VM Orchestrator is an integration provided by ZenML that allows you to provision and manage virtual machines (VMs) on any cloud provider supported by the [SkyPilot framework](https://skypilot.readthedocs.io/en/latest/index.html). This integration is designed to simplify the process of running machine learning workloads on the cloud, offering cost savings, high GPU availability, and managed execution, We recommend using the SkyPilot VM Orchestrator if you need access to GPUs for your workloads, but don't want to deal with the complexities of managing cloud infrastructure or expensive managed solutions.
@@ -13665,6 +13974,11 @@ File: docs/book/component-guide/orchestrators/tekton.md
13665
  description: Orchestrating your pipelines to run on Tekton.
13666
  ---
13667
 
 
 
 
 
 
13668
  # Tekton Orchestrator
13669
 
13670
  [Tekton](https://tekton.dev/) is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems.
@@ -13905,6 +14219,11 @@ File: docs/book/component-guide/orchestrators/vertex.md
13905
  description: Orchestrating your pipelines to run on Vertex AI.
13906
  ---
13907
 
 
 
 
 
 
13908
  # Google Cloud VertexAI Orchestrator
13909
 
13910
  [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.
@@ -14224,6 +14543,11 @@ File: docs/book/component-guide/step-operators/azureml.md
14224
  description: Executing individual steps in AzureML.
14225
  ---
14226
 
 
 
 
 
 
14227
  # AzureML
14228
 
14229
  [AzureML](https://azure.microsoft.com/en-us/products/machine-learning/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's AzureML step operator allows you to submit individual steps to be run on AzureML compute instances.
@@ -14385,6 +14709,11 @@ File: docs/book/component-guide/step-operators/custom.md
14385
  description: Learning how to develop a custom step operator.
14386
  ---
14387
 
 
 
 
 
 
14388
  # Develop a Custom Step Operator
14389
 
14390
  {% hint style="info" %}
@@ -14514,6 +14843,11 @@ File: docs/book/component-guide/step-operators/kubernetes.md
14514
  description: Executing individual steps in Kubernetes Pods.
14515
  ---
14516
 
 
 
 
 
 
14517
  # Kubernetes Step Operator
14518
 
14519
  ZenML's Kubernetes step operator allows you to submit individual steps to be run on Kubernetes pods.
@@ -14748,6 +15082,11 @@ File: docs/book/component-guide/step-operators/modal.md
14748
  description: Executing individual steps in Modal.
14749
  ---
14750
 
 
 
 
 
 
14751
  # Modal Step Operator
14752
 
14753
  [Modal](https://modal.com) is a platform for running cloud infrastructure. It offers specialized compute instances to run your code and has a fast execution time, especially around building Docker images and provisioning hardware. ZenML's Modal step operator allows you to submit individual steps to be run on Modal compute instances.
@@ -14865,6 +15204,11 @@ File: docs/book/component-guide/step-operators/sagemaker.md
14865
  description: Executing individual steps in SageMaker.
14866
  ---
14867
 
 
 
 
 
 
14868
  # Amazon SageMaker
14869
 
14870
  [SageMaker](https://aws.amazon.com/sagemaker/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's SageMaker step operator allows you to submit individual steps to be run on Sagemaker compute instances.
@@ -15230,6 +15574,11 @@ roleRef:
15230
  name: edit
15231
  apiGroup: rbac.authorization.k8s.io
15232
  ---
 
 
 
 
 
15233
  ```
15234
 
15235
  And then execute the following command to create the resources:
@@ -15398,6 +15747,11 @@ File: docs/book/component-guide/step-operators/vertex.md
15398
  description: Executing individual steps in Vertex AI.
15399
  ---
15400
 
 
 
 
 
 
15401
  # Google Cloud VertexAI
15402
 
15403
  [Vertex AI](https://cloud.google.com/vertex-ai) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's Vertex AI step operator allows you to submit individual steps to be run on Vertex AI compute instances.
@@ -15588,6 +15942,11 @@ File: docs/book/component-guide/component-guide.md
15588
  description: Overview of categories of MLOps components.
15589
  ---
15590
 
 
 
 
 
 
15591
  # 📜 Overview
15592
 
15593
  If you are new to the world of MLOps, it is often daunting to be immediately faced with a sea of tools that seemingly all promise and do the same things. It is useful in this case to try to categorize tools in various groups in order to understand their value in your toolchain in a more precise manner.
@@ -15626,6 +15985,11 @@ File: docs/book/component-guide/integration-overview.md
15626
  description: Overview of third-party ZenML integrations.
15627
  ---
15628
 
 
 
 
 
 
15629
  # Integration overview
15630
 
15631
  Categorizing the MLOps stack is a good way to write abstractions for an MLOps pipeline and standardize your processes. But ZenML goes further and also provides concrete implementations of these categories by **integrating** with various tools for each category. Once code is organized into a ZenML pipeline, you can supercharge your ML workflows with the best-in-class solutions from various MLOps areas.
@@ -15796,3 +16160,9 @@ There are countless tools in the ML / MLOps field. We have made an initial prior
  We also welcome community contributions. Check our [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for more details on how to best contribute to new integrations.

  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
 
 
 
 
 
 
 
+ This file is a merged representation of a subset of the codebase, containing specifically included files, combined into a single document by Repomix.

  ================================================================
  File Summary
 

  Notes:
  ------
+ - Some files may have been excluded based on .gitignore rules and Repomix's configuration
+ - Binary files are not included in this packed representation. Please refer to the Repository Structure section for a complete list of file paths, including binary files
+ - Only files matching these patterns are included: docs/book/component-guide/**/*.md
+ - Files matching patterns in .gitignore are excluded
+ - Files matching default ignore patterns are excluded

  Additional Info:
  ----------------
 
  icon: message-exclamation
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Alerters

  **Alerters** allow you to send messages to chat services (like Slack, Discord, Mattermost, etc.) from within your
 
  description: Learning how to develop a custom alerter.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Alerter

  {% hint style="info" %}
 
  description: Sending automated alerts to a Discord channel.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Discord Alerter

  The `DiscordAlerter` enables you to send messages to a dedicated Discord channel
 
  description: Sending automated alerts to a Slack channel.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Slack Alerter

  The `SlackAlerter` enables you to send messages or ask questions within a
 
  description: Annotating data using Argilla.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Argilla

  [Argilla](https://github.com/argilla-io/argilla) is a collaboration tool for AI engineers and domain experts who need to build high-quality datasets for their projects. It enables users to build robust language models through faster data curation using both human and machine feedback, providing support for each step in the MLOps cycle, from data labeling to model monitoring.
 
  description: Learning how to develop a custom annotator.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Annotator

  {% hint style="info" %}
 
  description: Annotating data using Label Studio.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Label Studio

  Label Studio is one of the leading open-source annotation platforms available to data scientists and ML practitioners.
 
  description: Annotating data using Pigeon.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Pigeon

  Pigeon is a lightweight, open-source annotation tool designed for quick and easy labeling of data directly within Jupyter notebooks. It provides a simple and intuitive interface for annotating various types of data, including:
 
  description: Annotating data using Prodigy.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Prodigy

  [Prodigy](https://prodi.gy/) is a modern annotation tool for creating training
 
  icon: folder-closed
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Artifact Stores

  The Artifact Store is a central component in any MLOps stack. As the name suggests, it acts as a data persistence layer where artifacts (e.g. datasets, models) ingested or generated by the machine learning pipelines are stored.
 
  description: Storing artifacts using Azure Blob Storage
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Azure Blob Storage

  The Azure Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the Azure ZenML integration that uses [the Azure Blob Storage managed object storage service](https://azure.microsoft.com/en-us/services/storage/blobs/) to store ZenML artifacts in an Azure Blob Storage container.
 
  description: Learning how to develop a custom artifact store.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a custom artifact store

  {% hint style="info" %}
 
  description: Storing artifacts using GCP Cloud Storage.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Google Cloud Storage (GCS)

  The GCS Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the GCP ZenML integration that uses [the Google Cloud Storage managed object storage service](https://cloud.google.com/storage/docs/introduction) to store ZenML artifacts in a GCP Cloud Storage bucket.
 
  description: Storing artifacts on your local filesystem.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Local Artifact Store

  The local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) flavor that uses a folder on your local filesystem to store artifacts.
 
  description: Storing artifacts in an AWS S3 bucket.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Amazon Simple Cloud Storage (S3)

  The S3 Artifact Store is an [Artifact Store](./artifact-stores.md) flavor provided with the S3 ZenML integration that uses [the AWS S3 managed object storage service](https://aws.amazon.com/s3/) or one of the self-hosted S3 alternatives, such as [MinIO](https://min.io/) or [Ceph RGW](https://ceph.io/en/discover/technology/#object), to store artifacts in an S3 compatible object storage backend.
 
  description: Storing container images in Amazon ECR.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Amazon Elastic Container Registry (ECR)

  The AWS container registry is a [container registry](./container-registries.md) flavor provided with the ZenML `aws` integration and uses [Amazon ECR](https://aws.amazon.com/ecr/) to store container images.
 
  description: Storing container images in Azure.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Azure Container Registry

  The Azure container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) to store container images.
 
  description: Learning how to develop a custom container registry.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a custom container registry

  {% hint style="info" %}
 
  description: Storing container images locally.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Default Container Registry

  The Default container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and allows container registry URIs of any format.
 
  description: Storing container images in DockerHub.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # DockerHub

  The DockerHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses [DockerHub](https://hub.docker.com/) to store container images.
 
  description: Storing container images in GCP.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Google Cloud Container Registry

  The GCP container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [Google Artifact Registry](https://cloud.google.com/artifact-registry).
 
  description: Storing container images in GitHub.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # GitHub Container Registry

  The GitHub container registry is a [container registry](./container-registries.md) flavor that comes built-in with ZenML and uses the [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) to store container images.
 
  description: How to develop a custom data validator
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a custom data validator

  {% hint style="info" %}
 
  suites
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Deepchecks

  The Deepchecks [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Deepchecks](https://deepchecks.com/) to run data integrity, data drift, model drift and model performance tests on the datasets and models circulated in your ZenML pipelines. The test results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
 
  with Evidently profiling
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Evidently

  The Evidently [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Evidently](https://evidentlyai.com/) to perform data quality, data drift, model drift and model performance analyzes, to generate reports and run checks. The reports and check results can be used to implement automated corrective actions in your pipelines or to render interactive representations for further visual interpretation, evaluation and documentation.
 
  document the results
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Great Expectations

  The Great Expectations [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [Great Expectations](https://greatexpectations.io/) to run data profiling and data quality tests on the data circulated through your pipelines. The test results can be used to implement automated corrective actions in your pipelines. They are also automatically rendered into documentation for further visual interpretation and evaluation.
 
  data with whylogs/WhyLabs profiling.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Whylogs

  The whylogs/WhyLabs [Data Validator](./data-validators.md) flavor provided with the ZenML integration uses [whylogs](https://whylabs.ai/whylogs) and [WhyLabs](https://whylabs.ai) to generate and track data profiles, highly accurate descriptive representations of your data. The profiles can be used to implement automated corrective actions in your pipelines, or to render interactive representations for further visual interpretation, evaluation and documentation.
 
  description: Logging and visualizing experiments with Comet.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Comet

  The Comet Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Comet ZenML integration that uses [the Comet experiment tracking platform](https://www.comet.com/site/products/ml-experiment-tracking/) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
 
  description: Learning how to develop a custom experiment tracker.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a custom experiment tracker

  {% hint style="info" %}
 
  icon: clipboard
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Experiment Trackers

  Experiment trackers let you track your ML experiments by logging extended information about your models, datasets,
 
  description: Logging and visualizing experiments with MLflow.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # MLflow

  The MLflow Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the MLflow ZenML integration that uses [the MLflow tracking service](https://mlflow.org/docs/latest/tracking.html) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
 
  description: Logging and visualizing experiments with neptune.ai
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Neptune

  The Neptune Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Neptune-ZenML integration that uses [neptune.ai](https://neptune.ai/product/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
 
  description: Logging and visualizing experiments with Vertex AI Experiment Tracker.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Vertex AI Experiment Tracker

  The Vertex AI Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Vertex AI ZenML integration. It uses the [Vertex AI tracking service](https://cloud.google.com/vertex-ai/docs/experiments/intro-vertex-ai-experiments) to log and visualize information from your pipeline steps (e.g., models, parameters, metrics).
 
  description: Logging and visualizing experiments with Weights & Biases.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Weights & Biases

  The Weights & Biases Experiment Tracker is an [Experiment Tracker](./experiment-trackers.md) flavor provided with the Weights & Biases ZenML integration that uses [the Weights & Biases experiment tracking platform](https://wandb.ai/site/experiment-tracking) to log and visualize information from your pipeline steps (e.g. models, parameters, metrics).
 
  description: Learning how to develop a custom feature store.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Feature Store

  {% hint style="info" %}
 
  description: Managing data in Feast feature stores.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Feast

  Feast (Feature Store) is an operational data system for managing and serving machine learning features to models in production. Feast is able to serve feature data to models from a low-latency online store (for real-time prediction) or from an offline store (for scale-out batch scoring or model training).
 
  description: Building container images with AWS CodeBuild
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # AWS Image Builder

  The AWS image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `aws` integration that uses [AWS CodeBuild](https://aws.amazon.com/codebuild) to build container images.
 
  description: Learning how to develop a custom image builder.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Image Builder

  {% hint style="info" %}
 
  description: Building container images with Google Cloud Build
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Google Cloud Image Builder

  The Google Cloud image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `gcp` integration that uses [Google Cloud Build](https://cloud.google.com/build) to build container images.
 
  description: Building container images with Kaniko.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Kaniko Image Builder

  The Kaniko image builder is an [image builder](./image-builders.md) flavor provided by the ZenML `kaniko` integration that uses [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images.
 
  description: Building container images locally.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Local Image Builder

  The local image builder is an [image builder](./image-builders.md) flavor that comes built-in with ZenML and uses the local Docker installation on your client machine to build container images.
 
  description: Deploying your models locally with BentoML.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # BentoML

  BentoML is an open-source framework for machine learning model serving. it can be used to deploy models locally, in a cloud environment, or in a Kubernetes environment.
 
  description: Learning how to develop a custom model deployer.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Model Deployer

  {% hint style="info" %}
 
  Deploying models to Databricks Inference Endpoints with Databricks
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Databricks

 
  :hugging_face:.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Hugging Face

  Hugging Face Inference Endpoints provides a secure production solution to easily deploy any `transformers`, `sentence-transformers`, and `diffusers` models on a dedicated and autoscaling infrastructure managed by Hugging Face. An Inference Endpoint is built from a model from the [Hub](https://huggingface.co/models).
 
  description: Deploying your models locally with MLflow.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # MLflow

  The MLflow Model Deployer is one of the available flavors of the [Model Deployer](./model-deployers.md) stack component. Provided with the MLflow integration it can be used to deploy and manage [MLflow models](https://www.mlflow.org/docs/latest/python\_api/mlflow.deployments.html) on a local running MLflow server.
 
  description: Deploying models to Kubernetes with Seldon Core.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Seldon

  [Seldon Core](https://github.com/SeldonIO/seldon-core) is a production grade source-available model serving platform. It packs a wide range of features built around deploying models to REST/GRPC microservices that include monitoring and logging, model explainers, outlier detectors and various continuous deployment strategies such as A/B testing, canary deployments and more.
 
  description: Deploying your LLM locally with vLLM.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # vLLM

  [vLLM](https://docs.vllm.ai/en/latest/) is a fast and easy-to-use library for LLM inference and serving.
 
  description: Learning how to develop a custom model registry.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Model Registry

  {% hint style="info" %}
 
  description: Managing MLFlow logged models and artifacts
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # MLflow Model Registry

  [MLflow](https://www.mlflow.org/docs/latest/tracking.html) is a popular tool that helps you track experiments, manage models and even deploy them to different environments. ZenML already provides a [MLflow Experiment Tracker](../experiment-trackers/mlflow.md) that you can use to track your experiments, and an [MLflow Model Deployer](../model-deployers/mlflow.md) that you can use to deploy your models locally.
 
  description: Orchestrating your pipelines to run on Airflow.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Airflow Orchestrator

  ZenML pipelines can be executed natively as [Airflow](https://airflow.apache.org/)
 
  description: Orchestrating your pipelines to run on AzureML.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # AzureML Orchestrator

  [AzureML](https://azure.microsoft.com/en-us/products/machine-learning) is a
 
  description: Learning how to develop a custom orchestrator.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a custom orchestrator

  {% hint style="info" %}
 
  description: Orchestrating your pipelines to run on Databricks.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Databricks Orchestrator

  [Databricks](https://www.databricks.com/) is a unified data analytics platform that combines the best of data warehouses and data lakes to offer an integrated solution for big data processing and machine learning. It provides a collaborative environment for data scientists, data engineers, and business analysts to work together on data projects. Databricks offers optimized performance and scalability for big data workloads.
 
  description: Orchestrating your pipelines to run on HyperAI.ai instances.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # HyperAI Orchestrator

  [HyperAI](https://www.hyperai.ai) is a cutting-edge cloud compute platform designed to make AI accessible for everyone. The HyperAI orchestrator is an [orchestrator](./orchestrators.md) flavor that allows you to easily deploy your pipelines on HyperAI instances.
 
  description: Orchestrating your pipelines to run on Kubeflow.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Kubeflow Orchestrator

  The Kubeflow orchestrator is an [orchestrator](./orchestrators.md) flavor provided by the ZenML `kubeflow` integration that uses [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) to run your pipelines.
 
12160
  description: Orchestrating your pipelines to run on Kubernetes clusters.
12161
  ---
12162
 
12163
+ {% hint style="warning" %}
12164
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
12165
+ {% endhint %}
12166
+
12167
+
12168
  # Kubernetes Orchestrator
12169
 
12170
  Using the ZenML `kubernetes` integration, you can orchestrate and scale your ML pipelines on a [Kubernetes](https://kubernetes.io/) cluster without writing a single line of Kubernetes code.
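In practice, "without writing a single line of Kubernetes code" comes down to registering the orchestrator through the ZenML CLI. A minimal sketch, assuming an existing kubectl context; the component name and context name below are placeholders for your own setup:

```shell
# Install the ZenML Kubernetes integration first.
zenml integration install kubernetes

# Register the orchestrator against your cluster (names are placeholders).
zenml orchestrator register my_k8s_orchestrator \
    --flavor=kubernetes \
    --kubernetes_context=my-cluster-context

# Swap it into the active stack.
zenml stack update -o my_k8s_orchestrator
```

From there, ZenML handles building the Docker images and launching the per-step pods.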
 
  description: Orchestrating your pipelines to run on Lightning AI.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+

  # Lightning AI Orchestrator


  description: Orchestrating your pipelines to run in Docker.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Local Docker Orchestrator

  The local Docker orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally using Docker.

  description: Orchestrating your pipelines to run locally.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Local Orchestrator

  The local orchestrator is an [orchestrator](./orchestrators.md) flavor that comes built-in with ZenML and runs your pipelines locally.
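Because the local flavor ships with ZenML, no integration install is needed; a sketch of registering it (component and stack names here are illustrative):

```shell
# "local" is the built-in flavor; names are placeholders.
zenml orchestrator register my_local_orchestrator --flavor=local
zenml stack register my_local_stack -o my_local_orchestrator -a default --set
```

The `-a default` assumes the default artifact store created by a fresh ZenML install.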
 
  description: Orchestrating your pipelines to run on Amazon Sagemaker.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # AWS Sagemaker Orchestrator

  [Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) is a serverless ML workflow tool running on AWS. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.

  description: Orchestrating your pipelines to run on VMs using SkyPilot.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Skypilot VM Orchestrator

  The SkyPilot VM Orchestrator is an integration provided by ZenML that allows you to provision and manage virtual machines (VMs) on any cloud provider supported by the [SkyPilot framework](https://skypilot.readthedocs.io/en/latest/index.html). This integration is designed to simplify the process of running machine learning workloads on the cloud, offering cost savings, high GPU availability, and managed execution. We recommend using the SkyPilot VM Orchestrator if you need access to GPUs for your workloads but don't want to deal with the complexities of managing cloud infrastructure or expensive managed solutions.
 
  description: Orchestrating your pipelines to run on Tekton.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Tekton Orchestrator

  [Tekton](https://tekton.dev/) is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premise systems.

  description: Orchestrating your pipelines to run on Vertex AI.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Google Cloud VertexAI Orchestrator

  [Vertex AI Pipelines](https://cloud.google.com/vertex-ai/docs/pipelines/introduction) is a serverless ML workflow tool running on the Google Cloud Platform. It is an easy way to quickly run your code in a production-ready, repeatable cloud orchestrator that requires minimal setup without provisioning and paying for standby compute.

  description: Executing individual steps in AzureML.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # AzureML

  [AzureML](https://azure.microsoft.com/en-us/products/machine-learning/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's AzureML step operator allows you to submit individual steps to be run on AzureML compute instances.

  description: Learning how to develop a custom step operator.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Develop a Custom Step Operator

  {% hint style="info" %}

  description: Executing individual steps in Kubernetes Pods.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Kubernetes Step Operator

  ZenML's Kubernetes step operator allows you to submit individual steps to be run on Kubernetes pods.

  description: Executing individual steps in Modal.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Modal Step Operator

  [Modal](https://modal.com) is a platform for running cloud infrastructure. It offers specialized compute instances to run your code and has a fast execution time, especially around building Docker images and provisioning hardware. ZenML's Modal step operator allows you to submit individual steps to be run on Modal compute instances.

  description: Executing individual steps in SageMaker.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Amazon SageMaker

  [SageMaker](https://aws.amazon.com/sagemaker/) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's SageMaker step operator allows you to submit individual steps to be run on Sagemaker compute instances.

  name: edit
  apiGroup: rbac.authorization.k8s.io
  ---
+
+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
  ```

  And then execute the following command to create the resources:

  description: Executing individual steps in Vertex AI.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Google Cloud VertexAI

  [Vertex AI](https://cloud.google.com/vertex-ai) offers specialized compute instances to run your training jobs and has a comprehensive UI to track and manage your models and logs. ZenML's Vertex AI step operator allows you to submit individual steps to be run on Vertex AI compute instances.

  description: Overview of categories of MLOps components.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # 📜 Overview

  If you are new to the world of MLOps, it is often daunting to be immediately faced with a sea of tools that seemingly all promise and do the same things. It is useful in this case to try to categorize tools in various groups in order to understand their value in your toolchain in a more precise manner.

  description: Overview of third-party ZenML integrations.
  ---

+ {% hint style="warning" %}
+ This is an older version of the ZenML documentation. To read and view the latest version please [visit this up-to-date URL](https://docs.zenml.io).
+ {% endhint %}
+
+
  # Integration overview

  Categorizing the MLOps stack is a good way to write abstractions for an MLOps pipeline and standardize your processes. But ZenML goes further and also provides concrete implementations of these categories by **integrating** with various tools for each category. Once code is organized into a ZenML pipeline, you can supercharge your ML workflows with the best-in-class solutions from various MLOps areas.

  We also welcome community contributions. Check our [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for more details on how to best contribute to new integrations.

  <figure><img src="https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc" alt="ZenML Scarf"><figcaption></figcaption></figure>
+
+
+
+ ================================================================
+ End of Codebase
+ ================================================================