=== File: docs/book/introduction.md === # ZenML Documentation Summary **ZenML** is an open-source MLOps framework designed for creating portable, production-ready machine learning pipelines. It decouples infrastructure from code, facilitating collaboration among developers. ## For MLOps Platform Engineers - **ZenML Pro**: Offers a managed instance with features like CI/CD, Model Control Plane, and RBAC. - **Self-hosted Deployment**: Deploy on any cloud provider using Terraform utilities. ```bash zenml stack register --provider aws zenml stack deploy --provider gcp ``` - **Standardization**: Register environments as ZenML stacks for consistent ML workflows. ```bash zenml orchestrator register kfp_orchestrator -f kubeflow zenml stack register production --orchestrator kubeflow ... ``` - **No Vendor Lock-In**: Easily switch between cloud providers. ```bash zenml stack set gcp python run.py # Run in GCP zenml stack set aws python run.py # Now in AWS ``` ## For Data Scientists - **Local Development**: Develop models locally and switch to production seamlessly. ```bash python run.py # Local development zenml stack set production python run.py # Production run ``` - **Pythonic SDK**: Use decorators to create ZenML pipelines. ```python from zenml import pipeline, step @step def step_1() -> str: return "world" @step def step_2(input_one: str, input_two: str) -> None: print(f"{input_one} {input_two}") @pipeline def my_pipeline(): step_2(input_one="hello", input_two=step_1()) my_pipeline() ``` - **Automatic Metadata Tracking**: Tracks runs, datasets, and models, providing visualizations via the ZenML dashboard. ## For ML Engineers - **ML Lifecycle Management**: Manage ML setups easily by defining workflows as Pipelines and infrastructures as Stacks. ```bash zenml stack set staging python run.py # Test on staging zenml stack set production python run.py # Run in production ``` - **Reproducibility**: Automatically tracks and versions stacks, pipelines, and artifacts. - **Automated Deployments**: Define workflows for automatic deployment to services like Seldon. ```python from zenml.integrations.seldon.steps import seldon_model_deployer_step @pipeline def my_pipeline(): data = data_loader_step() model = model_trainer_step(data) seldon_model_deployer_step(model) ``` ## Additional Resources - **For MLOps Engineers**: [Production Guide](user-guide/production-guide/cloud-orchestration.md), [Component Guide](./component-guide/README.md), [FAQ](reference/faq.md). - **For Data Scientists**: [Core Concepts](getting-started/core-concepts.md), [Starter Guide](user-guide/starter-guide/), [Quickstart in Colab](https://colab.research.google.com/github/zenml-io/zenml/blob/main/examples/quickstart/notebooks/quickstart.ipynb). - **For ML Engineers**: [How To](./how-to/pipeline-development/build-pipelines/README.md), [Examples](https://github.com/zenml-io/zenml-projects). ZenML provides a comprehensive framework for managing the ML lifecycle, ensuring reproducibility, and facilitating collaboration across teams. ================================================== === File: docs/book/component-guide/README.md === # Overview of MLOps Components and Integrations ZenML categorizes MLOps tools into **Stacks and Stack Components**, which standardize workflows in MLOps pipelines. Each stack component serves a specific function, and users can implement custom components or utilize built-in integrations. 
## Supported Stack Components | **Component Type** | **Description** | |-------------------------|-------------------------------------------------------| | [Orchestrator](orchestrators/orchestrators.md) | Manages pipeline runs | | [Artifact Store](artifact-stores/artifact-stores.md) | Stores artifacts from pipelines | | [Container Registry](container-registries/container-registries.md) | Stores container images | | [Data Validator](data-validators/data-validators.md) | Validates data and models | | [Experiment Tracker](experiment-trackers/experiment-trackers.md) | Tracks ML experiments | | [Model Deployer](model-deployers/model-deployers.md) | Online model serving platforms/services | | [Step Operator](step-operators/step-operators.md) | Executes individual steps in runtime environments | | [Alerter](alerters/alerters.md) | Sends alerts through specified channels | | [Image Builder](image-builders/image-builders.md) | Builds container images | | [Annotator](annotators/annotators.md) | Labels and annotates data | | [Model Registry](model-registries/model-registries.md) | Manages ML models | | [Feature Store](feature-stores/feature-stores.md) | Manages data/features | Each ZenML pipeline requires a **stack** with at least an orchestrator and an artifact store, while other components are optional. ## Custom Component Flavors Users can create custom components by writing **component flavors**. For guidance, refer to the [general guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific guides like the [custom orchestrator guide](orchestrators/custom.md). ## Integrations ZenML integrates with various tools to enhance MLOps processes. Examples include using [Airflow](orchestrators/airflow.md) for orchestration, [MLflow Tracking](experiment-trackers/mlflow.md) for experiment tracking, and deploying models with [Seldon Core](model-deployers/seldon.md). ZenML allows flexibility in tool selection without vendor lock-in. ### Available Integrations A list of supported integrations can be found on the [ZenML integrations page](https://zenml.io/integrations) or in the [GitHub integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations). ### Installing Integrations To install integrations, use: ```bash zenml integration install kubeflow mlflow seldon -y ``` This command installs preferred versions via pip: ```bash pip install kubeflow== mlflow== seldon== ``` The `-y` flag confirms installations without prompts. For a complete list of CLI commands, run `zenml integration --help`. ### Using `uv` for Package Installation You can use [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag: ```bash zenml integration install kubeflow --uv ``` ### Upgrading Integrations To upgrade integrations, use: ```bash zenml integration upgrade mlflow pytorch -y ``` If no integrations are specified, all installed integrations will be upgraded. ### Community Contributions ZenML welcomes contributions for new integrations. Refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and the [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for more information. 
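As a quick sanity check before running a pipeline, the active stack and its components (at minimum an orchestrator and an artifact store, as noted above) can be inspected from Python. This is a minimal sketch using the ZenML `Client`; it assumes a stack has already been registered and set as active:

```python
from zenml.client import Client

# Inspect the active stack and its two required components.
stack = Client().active_stack
print(f"Active stack:   {stack.name}")
print(f"Orchestrator:   {stack.orchestrator.name}")
print(f"Artifact store: {stack.artifact_store.name}")
```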
================================================== === File: docs/book/component-guide/integration-overview.md === ### ZenML Integrations Overview ZenML enhances MLOps pipelines by integrating with various tools across different categories, allowing users to optimize their ML workflows. Key integrations include: - **Orchestrators**: Use [Airflow](orchestrators/airflow.md) or [Kubeflow](orchestrators/kubeflow.md) for workflow orchestration. - **Experiment Trackers**: Track experiments with [MLflow Tracking](experiment-trackers/mlflow.md) or [Weights & Biases](experiment-trackers/wandb.md). - **Model Deployers**: Transition from local deployments with [MLflow](model-deployers/mlflow.md) to Kubernetes using [Seldon Core](model-deployers/seldon.md). ZenML allows flexibility in choosing MLOps tools without vendor lock-in, enabling easy tool switching as requirements evolve. ### Available Integrations A comprehensive list of supported ZenML integrations can be found on the [ZenML integrations webpage](https://zenml.io/integrations) or in the [integrations directory](https://github.com/zenml-io/zenml/tree/main/src/zenml/integrations) on GitHub. ### Installing ZenML Integrations To install integrations, use: ```bash zenml integration install kubeflow mlflow seldon -y ``` This command installs preferred versions via pip: ```bash pip install kubeflow== mlflow== seldon== ``` The `-y` flag auto-confirms installations. For a complete list of CLI commands, run `zenml integration --help`. ### Using `uv` for Package Installation You can use [`uv`](https://github.com/astral-sh/uv) as a package manager by adding the `--uv` flag: ```bash zenml integration install --uv ``` Ensure `uv` is installed. This is experimental, and some packages may not be compatible. ### Upgrading ZenML Integrations To upgrade integrations to their latest versions, use: ```bash zenml integration upgrade mlflow pytorch -y ``` The `-y` flag confirms upgrades without prompts. If no integrations are specified, all installed integrations will be upgraded. ### Community Contributions ZenML welcomes community contributions for new integrations. Check the public [roadmap](https://zenml.io/roadmap) and refer to the [Contribution Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) and [External Integration Guide](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/README.md) for more details. ================================================== === File: docs/book/component-guide/component-guide.md === # Overview of MLOps Components MLOps can be overwhelming due to the multitude of tools available. To simplify this, ZenML categorizes tools into **Stacks and Stack Components**, which standardize workflows in MLOps pipelines. Users can implement these components through custom implementations or built-in integrations. 
## Stack Components ZenML supports the following stack components, each serving a specific role in the MLOps process: | **Type of Stack Component** | **Description** | |-----------------------------|-----------------| | [Orchestrator](./orchestrators/orchestrators.md) | Manages pipeline runs | | [Artifact Store](./artifact-stores/artifact-stores.md) | Stores artifacts from pipelines | | [Container Registry](./container-registries/container-registries.md) | Stores container images | | [Step Operator](./step-operators/step-operators.md) | Executes steps in runtime environments | | [Model Deployer](./model-deployers/model-deployers.md) | Handles online model serving | | [Feature Store](./feature-stores/feature-stores.md) | Manages data/features | | [Experiment Tracker](./experiment-trackers/experiment-trackers.md) | Tracks ML experiments | | [Alerter](./alerters/alerters.md) | Sends alerts through channels | | [Annotator](./annotators/annotators.md) | Labels and annotates data | | [Data Validator](./data-validators/data-validators.md) | Validates data and models | | [Image Builder](./image-builders/image-builders.md) | Builds container images | | [Model Registry](./model-registries/model-registries.md) | Manages ML models | Each ZenML pipeline requires a **stack** that includes at least an orchestrator and an artifact store, while other components can be added as needed. ## Custom Component Flavors Users can create custom component **flavors** to control ZenML's behavior. For more information, refer to the [guide on writing component flavors](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) or specific guides for component types, such as the [custom orchestrator guide](orchestrators/custom.md). ================================================== === File: docs/book/component-guide/model-registries/custom.md === ### Custom Model Registry Development in ZenML #### Overview This documentation provides guidance on developing a custom model registry in ZenML. Familiarity with the general guide on writing custom component flavors is recommended. #### Important Notes - The `BaseModelRegistry` is an abstract class for creating custom model registries, offering a basic interface for model registration and retrieval. - The API is subject to change as the model registry component is still evolving. Feedback can be provided via Slack or GitHub. 
#### Base Abstraction The `BaseModelRegistry` class includes methods for model and version management, which must be implemented in subclasses: ```python from abc import ABC, abstractmethod from typing import Any, Dict, List, Optional class BaseModelRegistryConfig(StackComponentConfig): """Base config for model registries.""" class BaseModelRegistry(StackComponent, ABC): @abstractmethod def register_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel: """Registers a model.""" @abstractmethod def delete_model(self, name: str) -> None: """Deletes a model.""" @abstractmethod def update_model(self, name: str, description: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> RegisteredModel: """Updates a model.""" @abstractmethod def get_model(self, name: str) -> RegisteredModel: """Retrieves a model.""" @abstractmethod def list_models(self, name: Optional[str] = None, tags: Optional[Dict[str, str]] = None) -> List[RegisteredModel]: """Lists models.""" @abstractmethod def register_model_version(self, name: str, version: Optional[str] = None, **kwargs: Any) -> RegistryModelVersion: """Registers a model version.""" @abstractmethod def delete_model_version(self, name: str, version: str) -> None: """Deletes a model version.""" @abstractmethod def update_model_version(self, name: str, version: str, **kwargs: Any) -> RegistryModelVersion: """Updates a model version.""" @abstractmethod def list_model_versions(self, name: Optional[str] = None, **kwargs: Any) -> List[RegistryModelVersion]: """Lists model versions.""" @abstractmethod def get_model_version(self, name: str, version: str) -> RegistryModelVersion: """Retrieves a model version.""" @abstractmethod def load_model_version(self, name: str, version: str, **kwargs: Any) -> Any: """Loads a model version.""" @abstractmethod def get_model_uri_artifact_store(self, model_version: RegistryModelVersion) -> str: """Gets the URI artifact store for a model version.""" ``` #### Creating a Custom Model Registry To create a custom model registry: 1. Understand core concepts of model registries. 2. Subclass `BaseModelRegistry` and implement abstract methods. 3. Create a `ModelRegistryConfig` class inheriting from `BaseModelRegistryConfig`. 4. Combine implementation and configuration by inheriting from `BaseModelRegistryFlavor`. Register the custom flavor using: ```shell zenml model-registry flavor register ``` #### Workflow Integration - The `CustomModelRegistryFlavor` is used during flavor creation. - The `CustomModelRegistryConfig` validates user input during stack component registration. - The `CustomModelRegistry` is utilized when the component is in action. For a complete implementation example, refer to the [MLFlowModelRegistry](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== === File: docs/book/component-guide/model-registries/mlflow.md === # MLflow Model Registry Summary ## Overview MLflow is a tool for tracking experiments, managing models, and deploying them. The MLflow model registry allows users to manage and track ML models and their artifacts, providing a UI for browsing models. ## Use Cases - Track different model versions during development and deployment. - Manage model deployments across various environments. - Monitor and compare model performance over time. - Simplify model deployment to production or staging environments. 
## Deployment To use the MLflow model registry, install the MLflow integration: ```shell zenml integration install mlflow -y ``` Register the MLflow model registry component in your stack: ```shell zenml model-registry register mlflow_model_registry --flavor=mlflow zenml stack register custom_stack -r mlflow_model_registry ... --set ``` **Note:** The MLflow model registry uses the same configuration as the MLflow Experiment Tracker. Use MLflow version 2.2.1 or higher due to a critical vulnerability. ## Usage ### Register Models in a Pipeline Use the `mlflow_register_model_step` to register a model logged to MLflow: ```python from zenml import pipeline from zenml.integrations.mlflow.steps.mlflow_registry import mlflow_register_model_step @pipeline def mlflow_registry_training_pipeline(): model = ... mlflow_register_model_step(model=model, name="tensorflow-mnist-model") ``` ### Parameters for `mlflow_register_model_step` - `name`: Required model name. - `version`: Model version. - `trained_model_name`: Name of the model artifact in MLflow. - `model_source_uri`: Path to the model. - `description`: Model version description. - `metadata`: Metadata list for the model version. ### Register Models via CLI To manually register a model version: ```shell zenml model-registry models register-version Tensorflow-model \ --description="A new version with accuracy 98.88%" \ -v 1 \ --model-uri="file:///.../model" \ -m key1 value1 -m key2 value2 \ --zenml-pipeline-name="mlflow_training_pipeline" \ --zenml-step-name="trainer" ``` ### Deploy Registered Models After registration, models can be deployed as prediction services. Refer to the MLflow model deployer documentation for details. ### Interact with Registered Models List all registered models: ```shell zenml model-registry models list ``` List versions of a specific model: ```shell zenml model-registry models list-versions tensorflow-mnist-model ``` Get details of a specific model version: ```shell zenml model-registry models get-version tensorflow-mnist-model -v 1 ``` ### Deletion To delete a registered model or a specific version: ```shell zenml model-registry models delete REGISTERED_MODEL_NAME zenml model-registry models delete-version REGISTERED_MODEL_NAME -v VERSION ``` For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_registry.MLFlowModelRegistry). ================================================== === File: docs/book/component-guide/model-registries/model-registries.md === # Model Registries Model registries are centralized solutions for managing and tracking machine learning models throughout their development and deployment stages. They enable version control, configuration tracking, and reproducibility by storing metadata such as version, configuration, and metrics. In ZenML, model registries are Stack Components that facilitate the retrieval, loading, and deployment of trained models, along with information on the training pipeline. ### Key Concepts - **RegisteredModel**: A logical grouping of models to track different versions, containing metadata like name, description, and tags. It can be user-created or automatically generated. - **RegistryModelVersion**: A specific model version identified by a unique version number, containing metadata and a reference to the model artifact. It also includes the pipeline name, run ID, and step name. 
- **ModelVersionStage**: Represents the state of a model version, which can be `None`, `Staging`, `Production`, or `Archived`, tracking the model's lifecycle. ### Usage ZenML's Artifact Store allows for programmatic artifact management, but model registries provide a visual interface for managing model metadata, especially with remote orchestrators. They are ideal for centralizing model state management and simplifying model retrieval and deployment. ### Integration in ZenML Stack Model registries are optional components that work with an experiment tracker. They can only be used if an experiment tracker is also in place. If not, models can still be stored but must be manually retrieved from the artifact store. #### Model Registry Flavors Model registries can be integrated with different flavors: | Model Registry | Flavor | Integration | Notes | |----------------|--------|-------------|-------| | [MLflow](mlflow.md) | `mlflow` | `mlflow` | Add MLflow as Model Registry to your stack | | [Custom Implementation](custom.md) | _custom_ | | _custom_ | To view available flavors, use the command: ```shell zenml model-registry flavor list ``` ### How to Use 1. Register a model registry in your stack, matching the flavor of your experiment tracker. 2. Register your trained model using one of the following methods: - Built-in step in the pipeline. - ZenML CLI. - Model registry UI. 3. Retrieve and load models for deployment or experimentation. For more details on fetching runs, refer to the [documentation on fetching pipelines](../../how-to/pipeline-development/build-pipelines/fetching-pipelines.md). ================================================== === File: docs/book/component-guide/model-deployers/custom.md === ### Develop a Custom Model Deployer To deploy and manage machine-learning models, ZenML includes a `Model Deployer` stack component that interacts with deployment tools and serves as a model registry. It allows listing deployed models, managing their lifecycle (suspend, resume, delete), and supports continuous deployment by updating existing model servers. #### Base Abstraction The model deployer is defined by three main criteria: 1. **Efficient Deployment**: Holds configuration attributes for interacting with remote model serving tools. 2. **Continuous Deployment Logic**: Implements `deploy_model` to update existing model servers without creating new ones. It can be used in ZenML pipelines or for ad-hoc deployments. 3. **BaseService Registry**: Acts as a registry for remote model servers, allowing recreation of `BaseService` instances from persisted configurations. The model deployer also includes lifecycle management methods: `stop_model_server`, `start_model_server`, and `delete_model_server`. 
#### Interface ```python from abc import ABC, abstractmethod from typing import Dict, Optional, Type from uuid import UUID from zenml.enums import StackComponentType from zenml.services import BaseService, ServiceConfig from zenml.stack import StackComponent, StackComponentConfig, Flavor DEFAULT_DEPLOYMENT_TIMEOUT = 300 class BaseModelDeployerConfig(StackComponentConfig): """Base class for model deployer configurations.""" class BaseModelDeployer(StackComponent, ABC): @abstractmethod def perform_deploy_model(self, id: UUID, config: ServiceConfig, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT) -> BaseService: """Deploy a model.""" @abstractmethod def perform_stop_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT, force: bool = False) -> BaseService: """Stop a model server.""" @abstractmethod def perform_start_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT) -> BaseService: """Start a model server.""" @abstractmethod def perform_delete_model(self, service: BaseService, timeout: int = DEFAULT_DEPLOYMENT_TIMEOUT, force: bool = False) -> None: """Delete a model server.""" class BaseModelDeployerFlavor(Flavor): @property @abstractmethod def name(self): """Flavor name.""" @property def type(self) -> StackComponentType: return StackComponentType.MODEL_DEPLOYER @property def config_class(self) -> Type[BaseModelDeployerConfig]: return BaseModelDeployerConfig @property @abstractmethod def implementation_class(self) -> Type[BaseModelDeployer]: """Implementation class.""" ``` #### Building Custom Model Deployers To create a custom model deployer flavor: 1. Inherit from `BaseModelDeployer` and implement abstract methods. 2. Create a configuration class inheriting from `BaseModelDeployerConfig`. 3. Combine both in a class inheriting from `BaseModelDeployerFlavor`, providing a `name`. 4. Create a service class inheriting from `BaseService`. Register the flavor using the CLI: ```shell zenml model-deployer flavor register ``` Example registration: ```shell zenml model-deployer flavor register flavors.my_flavor.MyModelDeployerFlavor ``` #### Important Notes - The custom flavor class is utilized during flavor creation via CLI. - The configuration class is used for validation during stack component registration. - The implementation class is invoked when the component is in use, allowing separation of configuration from implementation. For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-model_deployers/#zenml.model_deployers.base_model_deployer.BaseModelDeployer). ================================================== === File: docs/book/component-guide/model-deployers/model-deployers.md === # Model Deployers Model deployment involves making machine learning models available for predictions on real-world data. There are two main types of predictions: **batch predictions** for large datasets and **real-time predictions** for individual data points. Model deployers serve these models either in real-time or batch mode. ## Key Concepts - **Online Serving**: Hosting models via a managed web service with API access (HTTP/GRPC) for low-latency inference. - **Batch Inference**: Making predictions on a batch of observations, typically storing results in files or databases. ## Use Cases Model deployers are optional components in the ZenML stack, primarily used for deploying models in development or production environments (local, Kubernetes, or cloud). They enable continuous training and deployment pipelines. 
## Model Deployer Flavors

ZenML includes a `local` MLflow model deployer and several integrations for production environments:

| Model Deployer | Flavor | Integration | Notes |
|----------------|---------------|---------------|-----------------------------------------------------|
| MLflow | `mlflow` | `mlflow` | Deploys ML models locally |
| BentoML | `bentoml` | `bentoml` | Deploys ML models locally or in production |
| Seldon Core | `seldon` | `seldon` | Deploys models in Kubernetes |
| Hugging Face | `huggingface` | `huggingface` | Deploys models on Hugging Face Inference Endpoints |
| Databricks | `databricks` | `databricks` | Deploys models to Databricks Inference Endpoints |
| vLLM | `vllm` | `vllm` | Deploys LLMs locally |
| Custom | _custom_ | | Provide your own implementation |

### Configuration Example

```shell
# Configure MLflow model deployer
zenml model-deployer register mlflow --flavor=mlflow

# Configure Seldon Core model deployer
zenml model-deployer register seldon --flavor=seldon \
  --kubernetes_context=zenml-eks --kubernetes_namespace=zenml-workloads \
  --base_url=http://your-url.com
```

## Role in ZenML Stack

- **Seamless Deployment**: Deploys models across various environments efficiently.
- **Lifecycle Management**: Manages model server lifecycle (start, stop, delete, update). Key methods include:
  - `deploy_model`: Deploys a model and returns a Service object.
  - `find_model_server`: Lists deployed model servers.
  - `stop_model_server`, `start_model_server`, `delete_model_server`: Manage server states.

### Service Object

Represents a deployed model server, stored in the database with attributes:
- `config`: Deployment configuration.
- `status`: Operational status (e.g., prediction URL).

### Interaction Example

```python
from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer

model_deployer = HuggingFaceModelDeployer.get_active_model_deployer()
services = model_deployer.find_model_server(
    pipeline_name="LLM_pipeline",
    pipeline_step_name="huggingface_model_deployer_step",
    model_name="LLAMA-7B",
)
if services:
    if services[0].is_running:
        print(f"Model server {services[0].config['model_name']} is running at {services[0].status['prediction_url']}")
    else:
        model_deployer.start_model_server(services[0])
else:
    service = model_deployer.deploy_model(
        pipeline_name="LLM_pipeline",
        pipeline_step_name="huggingface_model_deployer_step",
        model_name="LLAMA-7B",
        model_uri="s3://path/to/model",
        ...
    )
    print(f"Model server {service.config['model_name']} deployed at {service.status['prediction_url']}")
```

## CLI Interaction

You can manage model servers via CLI:

```shell
$ zenml model-deployer models list
$ zenml model-deployer models describe
$ zenml model-deployer models get-url
$ zenml model-deployer models delete
```

## Python Metadata Access

To access the prediction URL of a deployed model:

```python
from zenml.client import Client

pipeline_run = Client().get_pipeline_run("")
deployer_step = pipeline_run.steps[""]
deployed_model_url = deployer_step.run_metadata["deployed_model_url"].value
```

ZenML integrations provide standard pipeline steps for continuous model deployment, managing all aspects of deployment and saving configurations in the Artifact Store for future use.
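For example, the MLflow integration ships one such standard step. The sketch below shows how it might be wired into a training pipeline; `data_loader_step` and `model_trainer_step` are hypothetical placeholders for your own steps, and the active stack is assumed to contain an MLflow model deployer:

```python
from zenml import pipeline
from zenml.integrations.mlflow.steps import mlflow_model_deployer_step


@pipeline
def continuous_deployment_pipeline():
    # Placeholder steps: substitute your own data loading and training logic.
    data = data_loader_step()
    model = model_trainer_step(data)
    # Deploys the trained model to a local MLflow prediction server; on later
    # runs the existing server is updated rather than a new one being created.
    mlflow_model_deployer_step(model=model, workers=1)
```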
================================================== === File: docs/book/component-guide/model-deployers/bentoml.md === ### Summary: Deploying Models Locally with BentoML **BentoML Overview** BentoML is an open-source framework for serving machine learning models, supporting local, cloud, and Kubernetes deployments. The BentoML Model Deployer allows for the deployment and management of BentoML models on a local HTTP server. **Deployment Paths** - **Local HTTP Server**: For development and production use cases. - **Containerized Service**: For more complex production settings. Note that `bentoctl` is deprecated and may not work with the latest versions. **When to Use** Use the BentoML Model Deployer to standardize model deployment within an organization or to simplify the process of transforming models into production-ready solutions. **Getting Started** 1. Install the required Python packages: ```bash zenml integration install bentoml -y ``` 2. Register the BentoML model deployer: ```bash zenml model-deployer register bentoml_deployer --flavor=bentoml ``` **Deployment Process** 1. **Create a BentoML Service**: Define how your model will be served. ```python import bentoml from bentoml.validators import DType, Shape import numpy as np import torch @bentoml.service(name=SERVICE_NAME) class MNISTService: def __init__(self): self.model = bentoml.pytorch.load_model(MODEL_NAME) self.model.eval() @bentoml.api() async def predict_ndarray(self, inp: Annotated[np.ndarray, DType("float32"), Shape((28, 28))]) -> np.ndarray: inp = np.expand_dims(inp, (0, 1)) return to_numpy(await self.model(torch.tensor(inp))) @bentoml.api() async def predict_image(self, f: PILImage) -> np.ndarray: arr = np.array(f) / 255.0 arr = np.expand_dims(arr, (0, 1)).astype("float32") return to_numpy(await self.model(torch.tensor(arr))) ``` 2. **Build Your Own Bento**: You can customize the bento build process. ```python context = get_step_context() labels = {"model_uri": model.uri, "bento_uri": os.path.join(context.get_output_artifact_uri(), DEFAULT_BENTO_FILENAME)} model = load_artifact_from_response(model) bentoml.pytorch.save_model(model_name, model, labels=labels) bento = bentos.build(service=service, models=[model_name], version=version, labels=labels) ``` 3. **Use ZenML Bento Builder Step**: To build the bento bundle. ```python from zenml import pipeline, step from zenml.integrations.bentoml.steps import bento_builder_step @pipeline def bento_builder_pipeline(): bento = bento_builder_step(model=model, model_name="pytorch_mnist", service="service.py:CLASS_NAME") ``` 4. **Deploy the Bento**: Use `bentoml_model_deployer_step` for local or containerized deployment. - **Local Deployment**: ```python @pipeline def bento_deployer_pipeline(): deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", port=3001) ``` - **Containerized Deployment**: ```python @pipeline def bento_deployer_pipeline(): deployed_model = bentoml_model_deployer_step(bento=bento, model_name="pytorch_mnist", deployment_type="container", image="my-custom-image") ``` 5. **Predict with Deployed Model**: Use the BentoML client to send requests. 
```python @step def predictor(inference_data: Dict[str, List], service: BentoMLDeploymentService) -> None: service.start(timeout=10) for img, data in inference_data.items(): prediction = service.predict("predict_ndarray", np.array(data)) ``` **From Local to Cloud with `bentoctl`** `bentoctl` is deprecated but was used for deploying models to cloud environments like AWS Lambda, SageMaker, Google Cloud, and Azure. For more details, refer to the [BentoML documentation](https://docs.bentoml.org/en/latest/guides/model-store.html#manage-models) and the [ZenML SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-bentoml/#zenml.integrations.bentoml.model_deployers.bentoml_model_deployer). ================================================== === File: docs/book/component-guide/model-deployers/seldon.md === ### Summary: Deploying Models to Kubernetes with Seldon Core **Overview**: Seldon Core is a production-grade model serving platform that simplifies deploying machine learning models as REST/GRPC microservices. It offers features like monitoring, logging, model explainers, outlier detection, and advanced deployment strategies (e.g., A/B testing, canary deployments). **Limitations**: The Seldon Core model deployer is not supported on **MacOS**. #### When to Use Seldon Core - For advanced Kubernetes infrastructure deployment. - To manage model lifecycle with zero downtime (updates, scaling, monitoring). - To utilize advanced API endpoints (REST/GRPC). - For complex deployment processes using custom transformers and routers. #### Deployment Prerequisites 1. Access to a Kubernetes cluster (recommended: use a Service Connector). 2. Seldon Core must be installed and running in the cluster. 3. Models must be stored in persistent shared storage (e.g., AWS S3, GCS). #### Installation Steps for Seldon Core on EKS 1. Configure EKS cluster access: ```bash aws eks --region us-east-1 update-kubeconfig --name zenml-cluster --alias zenml-eks ``` 2. Install Istio: ```bash curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.5.0 sh - cd istio-1.5.0/ bin/istioctl manifest apply --set profile=demo ``` 3. Set up Istio gateway: ```bash curl https://raw.githubusercontent.com/SeldonIO/seldon-core/master/notebooks/resources/seldon-gateway.yaml | kubectl apply -f - ``` 4. Install Seldon Core: ```bash helm install seldon-core seldon-core-operator \ --repo https://storage.googleapis.com/seldon-charts \ --set usageMetrics.enabled=true \ --set istio.enabled=true \ --namespace seldon-system ``` 5. Test installation: ```bash kubectl apply -f iris.yaml ``` Example `iris.yaml`: ```yaml apiVersion: machinelearning.seldon.io/v1 kind: SeldonDeployment metadata: name: iris-model namespace: default spec: name: iris predictors: - graph: implementation: SKLEARN_SERVER modelUri: gs://seldon-models/v1.14.0-dev/sklearn/iris name: classifier name: default replicas: 1 ``` #### Service Connector Setup To authenticate to a remote Kubernetes cluster, use Service Connectors for auto-configuration and security. Options include AWS, GCP, Azure, or a generic Kubernetes connector. 
Register a Service Connector: ```bash zenml service-connector register --type aws --resource-type kubernetes-cluster --resource-name --auto-configure ``` #### Registering the Seldon Core Model Deployer ```bash zenml model-deployer register --flavor=seldon \ --kubernetes_namespace= \ --base_url=http://$INGRESS_HOST ``` Connect to the target cluster: ```bash zenml model-deployer connect -i ``` #### Managing Authentication Seldon Core requires access to persistent storage for models. If using explicit credentials, configure them in the Artifact Store. If not, Seldon Core will use implicit authentication based on the Kubernetes cluster configuration. #### Custom Code Deployment You can deploy custom pre- and post-processing code with your model by defining a custom predict function: ```python def custom_predict(model: Any, request: Array_Like) -> Array_Like: # Custom prediction logic ``` Register the custom function in a pipeline: ```python seldon_custom_model_deployer_step( model=model, predict_function="", service_config=SeldonDeploymentConfig( model_name="", replicas=1, implementation="custom", resources=SeldonResourceRequirements( limits={"cpu": "200m", "memory": "250Mi"} ), serviceAccountName="kubernetes-service-account", ), ) ``` #### Conclusion Seldon Core provides a robust framework for deploying machine learning models on Kubernetes, with extensive capabilities for managing model lifecycles and custom deployments. For more detailed configurations and options, refer to the official documentation. ================================================== === File: docs/book/component-guide/model-deployers/mlflow.md === ### Summary: Deploying Models Locally with MLflow **MLflow Overview** The MLflow Model Deployer is part of the ZenML integration, allowing local deployment and management of MLflow models on a local MLflow server. Currently, it is intended for development use only and not for production. **Use Cases** Utilize the MLflow Model Deployer for: - Easy local model deployment and real-time predictions. - Simple deployment without complex infrastructure like Kubernetes. **Installation** To deploy models, install the MLflow integration with: ```bash zenml integration install mlflow -y ``` Register the MLflow model deployer: ```bash zenml model-deployer register mlflow_deployer --flavor=mlflow ``` **Deployment Steps** 1. **Deploy a Logged Model** If you have the model URI: ```python from zenml import pipeline, step, get_step_context from zenml.client import Client @step def deploy_model() -> Optional[MLFlowDeploymentService]: zenml_client = Client() model_deployer = zenml_client.active_stack.model_deployer mlflow_deployment_config = MLFlowDeploymentConfig( name="mlflow-model-deployment-example", description="An example of deploying a model using the MLflow Model Deployer", pipeline_name=get_step_context().pipeline_name, pipeline_step_name=get_step_context().step_name, model_uri="runs://model" or "models://", model_name="model", workers=1, mlserver=False, timeout=DEFAULT_SERVICE_START_STOP_TIMEOUT ) service = model_deployer.deploy_model(config=mlflow_deployment_config) return service ``` 2. 
**Deploy Without Known URI** If the model URI is unknown: ```python from zenml import pipeline, step, get_step_context from zenml.client import Client from mlflow.tracking import MlflowClient, artifact_utils @step def deploy_model() -> Optional[MLFlowDeploymentService]: zenml_client = Client() model_deployer = zenml_client.active_stack.model_deployer experiment_tracker = zenml_client.active_stack.experiment_tracker mlflow_run_id = experiment_tracker.get_run_id( experiment_name=get_step_context().pipeline_name, run_name=get_step_context().run_name, ) experiment_tracker.configure_mlflow() client = MlflowClient() model_name = "model" model_uri = artifact_utils.get_artifact_uri(run_id=mlflow_run_id, artifact_path=model_name) mlflow_deployment_config = MLFlowDeploymentConfig( name="mlflow-model-deployment-example", description="An example of deploying a model using the MLflow Model Deployer", pipeline_name=get_step_context().pipeline_name, pipeline_step_name=get_step_context().step_name, model_uri=model_uri, model_name=model_name, workers=1, mlserver=False, timeout=300, ) service = model_deployer.deploy_model(config=mlflow_deployment_config) return service ``` **Configuration Options** In `MLFlowDeploymentService`, you can configure: - `name`, `description`, `pipeline_name`, `pipeline_step_name` - `model_name`, `model_version` - `silent_daemon`, `blocking` - `model_uri`, `workers`, `mlserver`, `timeout` **Running Inference** 1. **Load a Deployed Service** To run inference on a deployed model: ```python import json import requests from zenml import step from zenml.integrations.mlflow.model_deployers.mlflow_model_deployer import MLFlowModelDeployer @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, model_name: str = "model") -> None: model_deployer = MLFlowModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name) if not existing_services: raise RuntimeError("No running service found.") service = existing_services[0] payload = json.dumps({"inputs": {"messages": [{"role": "user", "content": "Tell a joke!"}]}, "params": {"temperature": 0.5, "max_tokens": 20}}) response = requests.post(url=service.get_prediction_url(), data=payload, headers={"Content-Type": "application/json"}) return response.json() ``` 2. **Use Service in Same Pipeline** For inference using a pre-built predict method: ```python from typing_extensions import Annotated import numpy as np from zenml import step from zenml.integrations.mlflow.services import MLFlowDeploymentService @step def predictor(service: MLFlowDeploymentService, data: np.ndarray) -> Annotated[np.ndarray, "predictions"]: prediction = service.predict(data) return prediction.argmax(axis=-1) ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.model_deployers). ================================================== === File: docs/book/component-guide/model-deployers/huggingface.md === ### Summary: Deploying Models to Hugging Face Inference Endpoints **Overview:** Hugging Face Inference Endpoints allow for secure deployment of `transformers`, `sentence-transformers`, and `diffusers` models on managed, autoscaling infrastructure. This service eliminates the need to manage containers and GPUs. **When to Use:** - Deploy Transformers, Sentence-Transformers, or Diffusion models on secure infrastructure. 
- Utilize a fully-managed solution for inference. - Create production-ready APIs with minimal MLOps involvement. - Focus on cost-effectiveness and enterprise security, especially for offline endpoints connected to VPCs. **Deployment Steps:** 1. **Install Hugging Face Integration:** ```bash zenml integration install huggingface -y ``` 2. **Register the Model Deployer:** ```bash zenml model-deployer register --flavor=huggingface --token= --namespace= ``` 3. **Update ZenML Stack:** ```bash zenml stack update --model-deployer= ``` **Using the Model Deployer:** - Deploy models using the `huggingface_model_deployer_step`. - Perform batch inference with `HuggingFaceDeploymentService`. **Example Deployment Pipeline:** ```python from zenml import pipeline from zenml.config import DockerSettings from zenml.integrations.huggingface.services import HuggingFaceServiceConfig from zenml.integrations.huggingface.steps import huggingface_model_deployer_step docker_settings = DockerSettings(required_integrations=["huggingface"]) @pipeline(enable_cache=True, settings={"docker": docker_settings}) def huggingface_deployment_pipeline(model_name: str = "hf", timeout: int = 1200): service_config = HuggingFaceServiceConfig(model_name=model_name) huggingface_model_deployer_step(service_config=service_config, timeout=timeout) ``` **Configurable Attributes in `HuggingFaceServiceConfig`:** - `model_name`: Name of the model. - `endpoint_name`: Name of the inference endpoint (prefixed with `zenml-`). - `repository`: User's or organization's model repository. - `framework`: ML framework (e.g., `"pytorch"`). - `accelerator`: Hardware for inference (e.g., `"cpu"`). - `instance_size` and `instance_type`: Size and type of instances for hosting. - `region`: Cloud region for the endpoint. - `vendor`: Cloud provider (e.g., `"aws"`). - `token`: Hugging Face authentication token. - `min_replica` and `max_replica`: Scaling settings. - `task`: Supported ML task (e.g., `"text-classification"`). **Running Inference:** Example code to run inference on a provisioned endpoint: ```python from zenml import step, pipeline from zenml.integrations.huggingface.model_deployers import HuggingFaceModelDeployer from zenml.integrations.huggingface.services import HuggingFaceDeploymentService @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> HuggingFaceDeploymentService: model_deployer = HuggingFaceModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name, running=running) if not existing_services: raise RuntimeError(f"No running service found for {model_name}.") return existing_services[0] @step def predictor(service: HuggingFaceDeploymentService, data: str) -> str: return service.predict(data) @pipeline def huggingface_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "huggingface_model_deployer_step"): inference_data = ... model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name) predictions = predictor(model_deployment_service, inference_data) ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-huggingface/) and Hugging Face endpoint [code](https://github.com/huggingface/huggingface_hub/blob/5e3b603ccc7cd6523d998e75f82848215abf9415/src/huggingface_hub/hf_api.py#L6957). 
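To make the configuration attributes above more concrete, here is a sketch of a more fully specified `HuggingFaceServiceConfig`; the repository, instance size/type, region, and vendor values are illustrative assumptions that depend on your Hugging Face account and quota:

```python
from zenml.integrations.huggingface.services import HuggingFaceServiceConfig

# Illustrative endpoint configuration; pass it to
# huggingface_model_deployer_step(...) inside a pipeline, as shown above.
service_config = HuggingFaceServiceConfig(
    model_name="my-sentiment-model",      # assumed model name
    repository="my-org/sentiment-model",  # assumed Hugging Face repository
    framework="pytorch",
    task="text-classification",
    accelerator="cpu",
    instance_size="x2",                   # assumed size available to your account
    instance_type="intel-icl",            # assumed CPU instance type
    region="us-east-1",
    vendor="aws",
    min_replica=0,
    max_replica=1,
)
```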
================================================== === File: docs/book/component-guide/model-deployers/databricks.md === ### Summary: Deploying Models to Databricks Inference Endpoints **Overview**: Databricks Model Serving provides a unified interface to deploy, manage, and query AI models as REST APIs without the need for containers or GPUs. It offers dedicated, autoscaling infrastructure managed by Databricks. **When to Use**: - If you're using Databricks for data and ML workloads. - To deploy AI models easily without managing infrastructure. - For enterprise security with offline endpoints connected to VPCs. - To create production-ready APIs with minimal MLOps involvement. **Deployment Steps**: 1. Install Databricks ZenML integration: ```bash zenml integration install databricks -y ``` 2. Register the Databricks model deployer: ```bash zenml model-deployer register --flavor=databricks --host= --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} ``` - **Note**: Create a Databricks service account for permissions and authentication. 3. Update your ZenML stack: ```bash zenml stack update --model-deployer= ``` **Configuration Options** (`DatabricksServiceConfig`): - `model_name`: Name of the model in the Databricks Model Registry. - `model_version`: Version of the model. - `workload_size`: Size of the workload (`Small`, `Medium`, `Large`). - `scale_to_zero_enabled`: Enable/disable scaling to zero. - `env_vars`: Environment variables for the model serving container. - `workload_type`: Type of workload (`CPU`, `GPU_LARGE`, `GPU_MEDIUM`, `GPU_SMALL`, `MULTIGPU_MEDIUM`). - `endpoint_secret_name`: Secret name for endpoint security. **Inference Example**: To run inference on a provisioned endpoint, use the following code: ```python from zenml import step, pipeline from zenml.integrations.databricks.model_deployers import DatabricksModelDeployer from zenml.integrations.databricks.services import DatabricksDeploymentService @step(enable_cache=False) def prediction_service_loader(pipeline_name: str, pipeline_step_name: str, running: bool = True, model_name: str = "default") -> DatabricksDeploymentService: model_deployer = DatabricksModelDeployer.get_active_model_deployer() existing_services = model_deployer.find_model_server(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name, model_name=model_name, running=running) if not existing_services: raise RuntimeError(f"No running inference endpoint found for '{model_name}'.") return existing_services[0] @step def predictor(service: DatabricksDeploymentService, data: str) -> str: return service.predict(data) @pipeline def databricks_deployment_inference_pipeline(pipeline_name: str, pipeline_step_name: str = "databricks_model_deployer_step"): inference_data = ... model_deployment_service = prediction_service_loader(pipeline_name=pipeline_name, pipeline_step_name=pipeline_step_name) predictions = predictor(model_deployment_service, inference_data) ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.model_deployers). ================================================== === File: docs/book/component-guide/model-deployers/vllm.md === ### vLLM Documentation Summary **vLLM Overview** [vLLM](https://docs.vllm.ai/en/latest/) is a library designed for efficient LLM inference and serving, suitable for: - Deploying large language models with high throughput via an OpenAI-compatible API server. 
- Continuous batching of requests. - Supporting quantization formats: GPTQ, AWQ, INT4, INT8, FP8. - Features like PagedAttention, Speculative decoding, and Chunked pre-fill. **Deployment Steps** 1. **Install vLLM Integration**: Run the following command to install the vLLM integration with ZenML: ```bash zenml integration install vllm -y ``` 2. **Register Model Deployer**: Register the vLLM model deployer: ```bash zenml model-deployer register vllm_deployer --flavor=vllm ``` This sets up a local vLLM deployment server as a daemon to serve the latest model. **Usage** To deploy an LLM, use the `vllm_model_deployer_step` in your pipeline. Here’s a concise example: ```python from zenml import pipeline from typing import Annotated from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> Annotated[VLLMDeploymentService, "GPT2"]: return vllm_model_deployer_step(model=model, timeout=timeout) ``` **Configuration Options** Within `VLLMDeploymentService`, you can configure: - `model`: Hugging Face model name or path. - `tokenizer`: Hugging Face tokenizer name or path (defaults to model if unspecified). - `served_model_name`: API model name (defaults to model name). - `trust_remote_code`: Trust remote code from Hugging Face. - `tokenizer_mode`: Options: ['auto', 'slow', 'mistral']. - `dtype`: Data type for weights/activations: ['auto', 'half', 'float16', 'bfloat16', 'float', 'float32']. - `revision`: Specific model version (branch, tag, or commit id; defaults to latest). For practical examples, refer to the [deployment pipeline](https://github.com/zenml-io/zenml-projects/blob/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer/pipelines/deploy_pipeline.py#L25) and a [GPT-2 model run](https://github.com/zenml-io/zenml-projects/tree/79f67ea52c3908b9b33c9a41eef18cb7d72362e8/llm-vllm-deployer). ================================================== === File: docs/book/component-guide/alerters/custom.md === ### Develop a Custom Alerter #### Overview To create a custom alerter in ZenML, it's essential to understand the component flavor concepts outlined in the [general guide](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction The base class for alerters defines two abstract methods: - `post(message: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message to a chat service, returning `True` if successful. - `ask(question: str, params: Optional[BaseAlerterStepParameters]) -> bool`: Posts a message and waits for approval, returning `True` only if approved. ```python class BaseAlerter(StackComponent, ABC): def post(self, message: str, params: Optional[BaseAlerterStepParameters]) -> bool: return True def ask(self, question: str, params: Optional[BaseAlerterStepParameters]) -> bool: return True ``` #### Creating a Custom Alerter 1. **Implement the Alerter Class**: Inherit from `BaseAlerter` and implement `post()` and `ask()` methods. ```python from typing import Optional from zenml.alerter import BaseAlerter, BaseAlerterStepParameters class MyAlerter(BaseAlerter): def post(self, message: str, config: Optional[BaseAlerterStepParameters]) -> bool: return "Hey, I implemented an alerter." def ask(self, question: str, config: Optional[BaseAlerterStepParameters]) -> bool: return True ``` 2. 
**Create a Configuration Class** (if needed): ```python from zenml.alerter.base_alerter import BaseAlerterConfig class MyAlerterConfig(BaseAlerterConfig): my_param: str ``` 3. **Define a Flavor Class**: ```python from typing import Type, TYPE_CHECKING from zenml.alerter import BaseAlerterFlavor if TYPE_CHECKING: from zenml.stack import StackComponent, StackComponentConfig class MyAlerterFlavor(BaseAlerterFlavor): @property def name(self) -> str: return "my_alerter" @property def config_class(self) -> Type[StackComponentConfig]: from my_alerter_config import MyAlerterConfig return MyAlerterConfig @property def implementation_class(self) -> Type[StackComponent]: from my_alerter import MyAlerter return MyAlerter ``` #### Registering the Alerter Flavor Register your new flavor via the CLI using dot notation: ```shell zenml alerter flavor register ``` Example: ```shell zenml alerter flavor register flavors.my_flavor.MyAlerterFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository to avoid resolution issues. - After registration, list available alerter flavors: ```shell zenml alerter flavor list ``` #### Workflow Integration - **MyAlerterFlavor** is used during flavor creation. - **MyAlerterConfig** is used for validating values during stack component registration. - **MyAlerter** is utilized when the component is in action, allowing separation of configuration from implementation. This structure enables registration of flavors and components without requiring all dependencies to be installed locally. ================================================== === File: docs/book/component-guide/alerters/discord.md === ### Discord Alerter Overview The `DiscordAlerter` allows sending messages to a Discord channel from ZenML pipelines. It includes two main steps: - **`discord_alerter_post_step`**: Sends a message to a Discord channel and returns success status. - **`discord_alerter_ask_step`**: Sends a message and waits for user feedback, returning `True` only if the user approves the action in Discord. ### Use Cases - **Immediate Notifications**: Get alerts on failures (e.g., model performance issues). - **Human-in-the-Loop**: Integrate user approval before executing critical steps (e.g., model deployment). ### Requirements Install the Discord integration: ```shell zenml integration install discord -y ``` ### Setting Up a Discord Bot 1. Create a Discord workspace and channel. 2. Create a Discord App with a bot and obtain the bot token. Ensure the bot has permissions to send and receive messages. ### Registering a Discord Alerter Register the `discord` alerter in ZenML: ```shell zenml alerter register discord_alerter \ --flavor=discord \ --discord_token= \ --default_discord_channel_id= ``` Add the alerter to your stack: ```shell zenml stack register ... -al discord_alerter ``` **Parameters**: - **DISCORD_CHANNEL_ID**: Copy from the channel settings (enable Developer Mode if not visible). - **DISCORD_TOKEN**: Obtain from the bot setup instructions. ### Using the Discord Alerter Import the steps and integrate them into your pipeline. A formatter step is typically needed to generate the message. Example: ```python from zenml.integrations.discord.steps.discord_alerter_ask_step import discord_alerter_ask_step from zenml import step, pipeline @step def my_formatter_step(artifact) -> str: return f"Here is my artifact {artifact}!" @pipeline def my_pipeline(...): ... artifact = ... message = my_formatter_step(artifact) approved = discord_alerter_ask_step(message) ... 
# Conditional behavior based on `approved` if __name__ == "__main__": my_pipeline() ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-discord/#zenml.integrations.discord.alerters.discord_alerter.DiscordAlerter). ================================================== === File: docs/book/component-guide/alerters/slack.md === ### Slack Alerter Documentation Summary The `SlackAlerter` allows sending messages and questions to a Slack channel from ZenML pipelines. #### Setup Instructions 1. **Create a Slack App**: - Set up a Slack workspace and create a Slack App with a bot. - Grant the following permissions in `OAuth & Permissions`: - `chat:write` - `channels:read` - `channels:history` - Invite the app to your desired channel using `/invite` or through channel settings. 2. **Registering Slack Alerter in ZenML**: - Install the Slack integration: ```shell zenml integration install slack -y ``` - Create a secret and register the alerter: ```shell zenml secret create slack_token --oauth_token= zenml alerter register slack_alerter \ --flavor=slack \ --slack_token={{slack_token.oauth_token}} \ --slack_channel_id= ``` - Parameters: - ``: Found in channel details (starts with `C...`). - ``: Found in Slack app settings. - Add the alerter to your stack: ```shell zenml stack register ... -al slack_alerter --set ``` #### Usage 1. **Direct Methods**: - Use `post()` and `ask()` methods: ```python from zenml import pipeline, step from zenml.client import Client @step def post_statement() -> None: Client().active_stack.alerter.post("Step finished!") @step def ask_question() -> bool: return Client().active_stack.alerter.ask("Should I continue?") @pipeline(enable_cache=False) def my_pipeline(): post_statement() ask_question() if __name__ == "__main__": my_pipeline() ``` 2. **Custom Settings**: - Specify channel ID at runtime: ```python @step(settings={"alerter": {"slack_channel_id": }}) def post_statement() -> None: Client().active_stack.alerter.post("Posting to another channel!") ``` 3. **Using `SlackAlerterParameters` and `SlackAlerterPayload`**: - Customize messages: ```python from zenml import pipeline, step, get_step_context from zenml.client import Client from zenml.integrations.slack.alerters.slack_alerter import ( SlackAlerterParameters, SlackAlerterPayload ) @step def post_statement() -> None: params = SlackAlerterParameters( payload=SlackAlerterPayload( pipeline_name=get_step_context().pipeline.name, step_name=get_step_context().step_run.name, stack_name=Client().active_stack.name, ), ) Client().active_stack.alerter.post("Message with pipeline info.", params=params) @step def ask_question() -> bool: message = ":tada: Should I continue? (Y/N)" blocks = [{"type": "header", "text": {"type": "plain_text", "text": message, "emoji": True}}] params = SlackAlerterParameters(blocks=blocks, approve_msg_options=["Y"], disapprove_msg_options=["N"]) return Client().active_stack.alerter.ask(question=message, params=params) ``` 4. **Predefined Steps**: - Use built-in steps for simplicity: ```python from zenml import pipeline from zenml.integrations.slack.steps import slack_alerter_post_step, slack_alerter_ask_step @pipeline(enable_cache=False) def my_pipeline(): slack_alerter_post_step("Posting a statement.") slack_alerter_ask_step("Asking a question. 
Should I continue?") if __name__ == "__main__": my_pipeline() ``` For further details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-slack/#zenml.integrations.slack.alerters.slack_alerter.SlackAlerter). ================================================== === File: docs/book/component-guide/alerters/alerters.md === ### Alerters Overview **Alerters** enable sending messages to chat services (e.g., Slack, Discord) from pipelines for notifications on failures, monitoring, and human-in-the-loop ML. ### Available Alerter Integrations - **SlackAlerter**: Interacts with Slack channels. - **DiscordAlerter**: Interacts with Discord channels. - **Custom Implementation**: Extend the alerter abstraction for other chat services. | Alerter | Flavor | Integration | Notes | |----------------------|----------|-------------|---------------------------------------| | [Slack](slack.md) | `slack` | `slack` | Interacts with a Slack channel | | [Discord](discord.md) | `discord`| `discord` | Interacts with a Discord channel | | [Custom](custom.md) | _custom_ | | Provide your own implementation | To view available alerter flavors, use: ```shell zenml alerter flavor list ``` ### Using Alerters with ZenML 1. Register an alerter component: ```shell zenml alerter register ... ``` 2. Add it to your stack: ```shell zenml stack register ... -al ``` 3. Import and use the alerter standard steps in your pipelines. ================================================== === File: docs/book/component-guide/container-registries/azure.md === ### Azure Container Registry Overview **Azure Container Registry** is a built-in container registry with ZenML that utilizes the [Azure Container Registry](https://azure.microsoft.com/en-us/services/container-registry/) for storing container images. #### Use Cases - Use when components of your stack need to pull/push container images. - Requires access to Azure. #### Deployment Steps 1. Visit [Azure Portal](https://portal.azure.com/#create/Microsoft.ContainerRegistry). 2. Select subscription, resource group, location, and registry name. 3. Click `Review + Create`. #### Registry URI Format The Azure container registry URI follows this format: ``` .azurecr.io ``` Example: ``` zenmlregistry.azurecr.io ``` To find your registry URI: - Go to Azure Portal and search for `container registries`. - Use the registry name to construct the URI. #### Usage Requirements - Install and run [Docker](https://www.docker.com). - Obtain the registry URI. **Register the container registry:** ```shell zenml container-registry register --flavor=azure --uri= zenml stack update -c ``` #### Authentication Methods Authentication is necessary to use Azure Container Registry: 1. **Local Authentication** (quick setup): - Requires Azure CLI installed. - Login command: ```shell az acr login --name= ``` **Note:** This method is not portable across environments. 2. **Azure Service Connector** (recommended): - Provides auto-configuration and better security. 
- Register with: ```sh zenml service-connector register --type azure -i ``` For a non-interactive setup: ```sh zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type docker-registry --resource-id ``` #### Connecting to Azure Container Registry After setting up the Azure Service Connector, register and connect the Azure Container Registry: ```sh zenml container-registry register -f azure --uri= zenml container-registry connect -i ``` For non-interactive connection: ```sh zenml container-registry connect --connector ``` #### Using in ZenML Stack To register and set a stack with the Azure Container Registry: ```sh zenml stack register -c ... --set ``` #### Local Docker Client Authentication To temporarily authenticate your local Docker client: ```sh zenml service-connector login --resource-type docker-registry --resource-id ``` For further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.azure_container_registry.AzureContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/custom.md === ### Develop a Custom Container Registry #### Overview Before creating a custom container registry in ZenML, review the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational concepts. #### Base Abstraction ZenML's container registries have a simple base abstraction comprising a `uri` in the configuration and a non-abstract `prepare_image_push` method for validation. ```python from abc import abstractmethod from typing import Type from zenml.enums import StackComponentType from zenml.stack import Flavor from zenml.stack.authentication_mixin import AuthenticationConfigMixin, AuthenticationMixin from zenml.utils import docker_utils class BaseContainerRegistryConfig(AuthenticationConfigMixin): """Base config for a container registry.""" uri: str class BaseContainerRegistry(AuthenticationMixin): """Base class for ZenML container registries.""" def prepare_image_push(self, image_name: str) -> None: """Prepare checks before pushing an image.""" def push_image(self, image_name: str) -> str: """Push a Docker image.""" if not image_name.startswith(self.config.uri): raise ValueError(f"Image `{image_name}` does not belong to registry `{self.config.uri}`.") self.prepare_image_push(image_name) return docker_utils.push_image(image_name) class BaseContainerRegistryFlavor(Flavor): """Base flavor for container registries.""" @property @abstractmethod def name(self) -> str: """Returns the flavor name.""" @property def type(self) -> StackComponentType: """Returns the flavor type.""" return StackComponentType.CONTAINER_REGISTRY @property def config_class(self) -> Type[BaseContainerRegistryConfig]: """Config class for this flavor.""" return BaseContainerRegistryConfig @property def implementation_class(self) -> Type[BaseContainerRegistry]: """Implementation class.""" return BaseContainerRegistry ``` #### Building Your Own Container Registry To create a custom container registry flavor: 1. Inherit from `BaseContainerRegistry` and implement `prepare_image_push` for validation. 2. Create a class inheriting from `BaseContainerRegistryConfig` for additional configuration. 3. Combine both by inheriting from `BaseContainerRegistryFlavor`. 
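For illustration, a minimal sketch of these three classes might look like the following. The flavor name `my_registry`, the `allowed_prefix` option, and the exact import path are assumptions made for this example only:

```python
from typing import Type

# Assumed location of the base classes shown above; adjust to your ZenML version.
from zenml.container_registries.base_container_registry import (
    BaseContainerRegistry,
    BaseContainerRegistryConfig,
    BaseContainerRegistryFlavor,
)


class MyContainerRegistryConfig(BaseContainerRegistryConfig):
    """Adds one hypothetical, flavor-specific option on top of `uri`."""

    allowed_prefix: str = "team-a/"


class MyContainerRegistry(BaseContainerRegistry):
    """Validates image names before every push."""

    def prepare_image_push(self, image_name: str) -> None:
        repository_path = image_name[len(self.config.uri):].lstrip("/")
        if not repository_path.startswith(self.config.allowed_prefix):
            raise ValueError(
                f"Image `{image_name}` must be pushed under the "
                f"`{self.config.allowed_prefix}` repository prefix."
            )


class MyContainerRegistryFlavor(BaseContainerRegistryFlavor):
    """Ties the config and the implementation together under a flavor name."""

    @property
    def name(self) -> str:
        return "my_registry"

    @property
    def config_class(self) -> Type[BaseContainerRegistryConfig]:
        return MyContainerRegistryConfig

    @property
    def implementation_class(self) -> Type[BaseContainerRegistry]:
        return MyContainerRegistry
```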
Register your flavor via CLI: ```shell zenml container-registry flavor register <path.to.MyContainerRegistryFlavor> ``` For example, if your flavor class is in `flavors/my_flavor.py`: ```shell zenml container-registry flavor register flavors.my_flavor.MyContainerRegistryFlavor ``` **Important:** Initialize ZenML at the root of your repository to ensure proper flavor resolution. After registration, list available flavors: ```shell zenml container-registry flavor list ``` #### Key Points - **CustomContainerRegistryFlavor** is used during flavor creation. - **CustomContainerRegistryConfig** is used for validating user inputs during registration. - **CustomContainerRegistry** is utilized when the component is in use, allowing separation of configuration from implementation. This design enables flavor and component registration without requiring all dependencies to be installed locally. ================================================== === File: docs/book/component-guide/container-registries/dockerhub.md === ### DockerHub Container Registry in ZenML **Overview**: DockerHub is a built-in container registry in ZenML for storing container images. #### When to Use - Use DockerHub if: - Your stack components need to pull or push container images. - You have a DockerHub account. #### Deployment 1. Create a DockerHub account. 2. Images built in ZenML will be published in a **public** repository by default. For **private** repositories, create one on DockerHub before running the pipeline. 3. The repository name is determined by the remote orchestrator or step operator in your stack. #### Finding the Registry URI The DockerHub registry URI formats: ```shell <ACCOUNT_NAME> # or docker.io/<ACCOUNT_NAME> ``` **Examples**: - zenml - my-username - docker.io/zenml - docker.io/my-username To get your URI: - Use your DockerHub account name in the format `docker.io/<ACCOUNT_NAME>`. #### Usage Requirements: - Docker installed and running. - Registry URI (refer to the previous section). Register the container registry: ```shell zenml container-registry register <NAME> \ --flavor=dockerhub \ --uri=<REGISTRY_URI> # Update the active stack zenml stack update -c <NAME> ``` Log in to DockerHub to enable image pulling and pushing: ```shell docker login ``` Use your DockerHub account name and password or a personal access token. For more details on configurable attributes of the DockerHub registry, refer to the [SDK Docs](https://apidocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.dockerhub_container_registry.DockerHubContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/default.md === ### Summary of Default Container Registry Documentation **Default Container Registry Overview** The Default Container Registry is a built-in feature of ZenML that supports local and remote container registries. It allows any URI format. **When to Use** Use the Default Container Registry for a local registry or if using a remote registry not covered by other flavors. **Local Registry URI Format** Specify a local registry URI as follows: ```shell localhost:<PORT> # Examples: localhost:5000 localhost:8000 localhost:9999 ``` **Usage Requirements** - Docker must be installed and running. - Provide the registry URI. **Registering the Container Registry** To register and use the Default Container Registry: ```shell zenml container-registry register <NAME> --flavor=default --uri=<REGISTRY_URI> zenml stack update -c <NAME> ``` **Authentication Methods** For private registries, configure authentication. 
Local Authentication is quick for local setups, but for remote registries, use a Docker Service Connector. For cloud providers (AWS, GCP, Azure), use the appropriate container registry flavor. **Local Authentication** Leverages Docker client credentials from the local environment. Log in using: ```shell docker login --username --password-stdin ``` *Note: Local authentication is not portable across environments.* **Docker Service Connector (Recommended)** Set up a Docker Service Connector for better authentication management: ```sh zenml service-connector register --type docker -i # Non-interactive zenml service-connector register --type docker --username= --password= ``` **Connecting to a Container Registry** After setting up the Service Connector, register and connect the Default Container Registry: ```sh zenml container-registry register -f default --uri= zenml container-registry connect -i # Non-interactive zenml container-registry connect --connector ``` **Using the Default Container Registry in a ZenML Stack** Register and set a stack with the new container registry: ```sh zenml stack register -c ... --set ``` **Local Login for Docker CLI** To interact with the remote registry via Docker CLI after connecting: ```sh zenml service-connector login ``` For further details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.default_container_registry.DefaultContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/gcp.md === ### Google Cloud Container Registry Overview The Google Cloud Container Registry (GCP Container Registry) is integrated with ZenML and utilizes the Google Artifact Registry. **Important:** Google Container Registry is being phased out in favor of Artifact Registry, which will fully replace it by March 18, 2025. ### When to Use GCP Container Registry Use the GCP Container Registry if: - Your stack components require pulling or pushing container images. - You have access to GCP. ### Deployment Steps 1. **Enable Google Artifact Registry**: [Enable it here](https://console.cloud.google.com/marketplace/product/google/artifactregistry.googleapis.com). 2. **Create a Docker Repository**: [Create it here](https://console.cloud.google.com/artifacts). ### Registry URI Format The URI format for the GCP container registry is: ```shell -docker.pkg.dev// ``` **Examples:** - `europe-west1-docker.pkg.dev/zenml/my-repo` - `southamerica-east1-docker.pkg.dev/zenml/zenml-test` ### Using GCP Container Registry Requirements: - **Docker** installed. - **Registry URI** obtained from the previous section. To register the container registry: ```shell zenml container-registry register --flavor=gcp --uri= zenml stack update -c ``` ### Authentication Methods Authentication is essential for using the GCP Container Registry: #### Local Authentication - Quick setup using local Docker client credentials. - Requires GCP CLI installation. - Configure Docker for Google Artifact Registry: ```shell gcloud auth configure-docker -docker.pkg.dev ``` **Note:** Local authentication is not portable across environments. #### GCP Service Connector (Recommended) - Provides auto-configuration and better security. 
- Register a Service Connector: ```sh zenml service-connector register --type gcp -i ``` - Non-interactive registration: ```sh zenml service-connector register --type gcp --resource-type docker-registry --auto-configure ``` ### Connecting GCP Container Registry To connect the GCP Container Registry to a Service Connector: ```sh zenml container-registry connect -i ``` For non-interactive connection: ```sh zenml container-registry connect --connector ``` ### Final Steps To use the GCP Container Registry in a ZenML Stack: ```sh zenml stack register -c ... --set ``` For detailed attributes and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.gcp_container_registry.GCPContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/aws.md === # Amazon Elastic Container Registry (ECR) Overview ## Description Amazon ECR is a container registry service integrated with ZenML's AWS integration, allowing storage of container images. ## When to Use Use AWS ECR if: - Your stack components need to pull or push container images. - You have access to AWS ECR. ## Deployment Steps 1. **Create a Repository**: - Go to the [ECR website](https://console.aws.amazon.com/ecr). - Select the correct region. - Click on `Create repository` and create a private repository. 2. **URI Format**: The ECR URI format is: ``` .dkr.ecr..amazonaws.com ``` Example URIs: ``` 123456789.dkr.ecr.eu-west-2.amazonaws.com ``` 3. **Get Your URI**: - Find your `Account ID` in the AWS console. - Choose the desired region from the [regional endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints). ## Usage 1. **Install ZenML AWS Integration**: ```shell zenml integration install aws ``` 2. **Register the Container Registry**: ```shell zenml container-registry register --flavor=aws --uri= zenml stack update -c ``` 3. **Authentication**: - **Local Authentication**: Quick setup using local AWS CLI credentials. ```shell aws ecr get-login-password --region | docker login --username AWS --password-stdin ``` - **AWS Service Connector (Recommended)**: For better security and management. ```shell zenml service-connector register --type aws -i ``` Non-interactive: ```shell zenml service-connector register --type aws --resource-type docker-registry --auto-configure ``` ## Connecting to ECR 1. **Register and Connect**: ```shell zenml container-registry register -f aws --uri= zenml container-registry connect -i ``` Non-interactive: ```shell zenml container-registry connect --connector ``` 2. **Use in ZenML Stack**: ```shell zenml stack register -c ... --set ``` 3. **Local Docker Client Login** (if needed): ```shell zenml service-connector login --resource-type docker-registry ``` ## Additional Information For a full list of configurable attributes and further details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.container_registries.aws_container_registry.AWSContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/github.md === ### GitHub Container Registry The GitHub Container Registry is integrated with ZenML for storing container images. #### When to Use Use the GitHub container registry if: - Your stack components need to pull or push container images. - You are using GitHub for your projects. 
If not using GitHub, consider other container registry options. #### Deployment The GitHub container registry is enabled by default upon creating a GitHub account. #### Finding the Registry URI The URI format is: ``` ghcr.io/ ``` **Examples:** - `ghcr.io/zenml` - `ghcr.io/my-username` - `ghcr.io/my-organization` To find your URI, replace `` with your GitHub username or organization name. #### Usage Requirements To use the GitHub container registry, ensure you have: - Docker installed and running. - The registry URI (see the previous section). - A configured Docker client for pulling and pushing images. Follow [this guide](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry) to create a personal access token and log in. #### Registering the Container Registry To register the container registry and update your active stack, use: ```shell zenml container-registry register \ --flavor=github \ --uri= zenml stack update -c ``` For more details and configurable attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-container_registries/#zenml.container_registries.github_container_registry.GitHubContainerRegistry). ================================================== === File: docs/book/component-guide/container-registries/container-registries.md === # Container Registries Container registries are crucial for storing Docker images used in remote MLOps stacks, enabling the containerization of machine learning pipelines for isolated execution. ### When to Use A container registry is required when components of your stack need to push or pull container images, particularly for ZenML's remote orchestrators, step operators, and some model deployers. Check the documentation of the specific component to determine if a container registry is necessary. ### Container Registry Flavors ZenML supports several container registry flavors: - **Default flavor**: Accepts any URI without validation; suitable for local or unsupported remote registries. - **Specific flavors**: Validates URIs and ensures push permissions. **Recommendation**: Use specific container registry flavors for additional URI validation. | Container Registry | Flavor | Integration | URI Example | |--------------------|---------|--------------|--------------------------------------| | DefaultContainerRegistry | `default` | _built-in_ | - | | DockerHubContainerRegistry | `dockerhub` | _built-in_ | docker.io/zenml | | GCPContainerRegistry | `gcp` | _built-in_ | gcr.io/zenml | | AzureContainerRegistry | `azure` | _built-in_ | zenml.azurecr.io | | GitHubContainerRegistry | `github` | _built-in_ | ghcr.io/zenml | | AWSContainerRegistry | `aws` | `aws` | 123456789.dkr.ecr.us-east-1.amazonaws.com | To view available container registry flavors, use: ```shell zenml container-registry flavor list ``` ================================================== === File: docs/book/component-guide/feature-stores/custom.md === ### Develop a Custom Feature Store **Overview**: Feature stores enable data teams to serve data through an offline store and an online low-latency store, ensuring synchronization between them. They also provide a centralized registry for features and feature schemas for team or organizational use. 
**Guidance**: Before creating a custom feature store, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) to understand ZenML's component flavor concepts. **Important Note**: The base abstraction for feature stores is currently in development, and extensions are not available at this time. For immediate use, refer to the list of existing feature stores. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/component-guide/feature-stores/feature-stores.md === ### Feature Stores Feature stores enable data teams to manage data through both offline and online low-latency stores, ensuring synchronization between them. They provide a centralized registry for features and feature schemas, facilitating access for data scientists during model training and real-time inference. Feast addresses the issue of train-serve skew, where training and serving data diverge. ### When to Use Feature Stores Feature stores are optional in the ZenML Stack and should be utilized to: - Productionalize new features - Reuse existing features across pipelines and models - Ensure consistency between training and serving data - Maintain a central registry of features and schemas ### Available Feature Stores ZenML features the following integrations for production use cases: | Feature Store | Flavor | Integration | Notes | |-----------------------------|---------|-------------|--------------------------------------------| | [FeastFeatureStore](feast.md) | `feast` | `feast` | Connect ZenML with existing Feast | | [Custom Implementation](custom.md) | _custom_ | | Extend the feature store abstraction | To view available feature store flavors, use: ```shell zenml feature-store flavor list ``` ### How to Use Feature Stores The feature store implementation is based on the Feast integration. For usage details, refer to the [Feast documentation](feast.md#how-do-you-use-it). ================================================== === File: docs/book/component-guide/feature-stores/feast.md === ### Summary of Feast Feature Store Documentation **Feast Overview** Feast (Feature Store) is designed for managing and serving machine learning features in production. It provides access to feature data from both low-latency online stores (for real-time predictions) and offline stores (for batch scoring or model training). **Use Cases** Feast enables: - Access to offline/batch data for training. - Access to online data during inference. **Deployment** To use Feast with ZenML, first ensure you have a Feast feature store. If not, refer to the [Feast Documentation](https://docs.feast.dev/how-to-guides/feast-snowflake-gcp-aws/deploy-a-feature-store) for deployment instructions. Install the Feast integration in ZenML: ```shell zenml integration install feast ``` Register the feature store as a ZenML stack component: ```shell zenml feature-store register feast_store --flavor=feast --feast_repo="" zenml stack register ... -f feast_store ``` **Usage** Currently, online data retrieval is supported locally but not in production settings. 
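For local experimentation, an online lookup can be wrapped in a step as well. This is only a sketch: the `get_online_features` method and its signature are assumed to mirror Feast's own API, and the `driver_id` entity is a placeholder, so verify against the SDK docs linked at the end of this section:

```python
from typing import Any, Dict, List

from zenml import step
from zenml.client import Client


@step
def get_online_features(features: List[str]) -> Dict[str, Any]:
    """Fetch the latest feature values for one entity from the online store."""
    feature_store = Client().active_stack.feature_store
    if not feature_store:
        raise RuntimeError("No feature store configured in the active stack.")
    # Assumed signature, analogous to Feast's `get_online_features`.
    return feature_store.get_online_features(
        entity_rows=[{"driver_id": 1001}],
        features=features,
    )
```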
To retrieve historical features, create a step that interfaces with the feature store: ```python from datetime import datetime from typing import Any, Dict, List, Union import pandas as pd from zenml import pipeline, step from zenml.client import Client from zenml.exceptions import DoesNotExistException @step def get_historical_features(entity_dict: Union[Dict[str, Any], str], features: List[str], full_feature_names: bool = False) -> pd.DataFrame: feature_store = Client().active_stack.feature_store if not feature_store: raise DoesNotExistException("Feast feature store component is not available.") entity_dict["event_timestamp"] = [datetime.fromisoformat(val) for val in entity_dict["event_timestamp"]] entity_df = pd.DataFrame.from_dict(entity_dict) return feature_store.get_historical_features(entity_df=entity_df, features=features, full_feature_names=full_feature_names) entity_dict = { "driver_id": [1001, 1002, 1003], "label_driver_reported_satisfaction": [1, 5, 3], "event_timestamp": [ datetime(2021, 4, 12, 10, 59, 42).isoformat(), datetime(2021, 4, 12, 8, 12, 10).isoformat(), datetime(2021, 4, 12, 16, 40, 26).isoformat(), ], } features = [ "driver_hourly_stats:conv_rate", "driver_hourly_stats:acc_rate", "driver_hourly_stats:avg_daily_trips", ] @pipeline def my_pipeline(): my_features = get_historical_features(entity_dict, features) ... ``` **Important Note** ZenML's use of Pydantic limits serialization to basic data types, necessitating conversions for complex types like `DataFrame` and `datetime`. For more details on configurable attributes of the Feast feature store, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-feast/#zenml.integrations.feast.feature_stores.feast_feature_store.FeastFeatureStore). ================================================== === File: docs/book/component-guide/step-operators/step-operators.md === # Step Operators The step operator allows execution of individual pipeline steps in specialized environments optimized for specific workloads, such as accessing GPUs or distributed processing frameworks like [Spark](https://spark.apache.org/). ### Comparison to Orchestrators The orchestrator is a mandatory component that executes all pipeline steps in order and provides scheduling features. In contrast, the step operator executes individual steps in separate environments when the orchestrator's environment is insufficient. ### When to Use It Use a step operator when pipeline steps require resources unavailable in the orchestrator's runtime environments. For example, if a step trains a computer vision model needing a GPU, but the orchestrator (e.g., a [Kubeflow orchestrator](../orchestrators/kubeflow.md)) runs on a cluster without GPU nodes, a step operator like [SageMaker](sagemaker.md), [Vertex](vertex.md), or [AzureML](azureml.md) should be used. 
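As a concrete illustration of that split, only the GPU-hungry training step below is routed to a step operator while the rest of the pipeline stays on the orchestrator. The step operator name `sagemaker` and the resource values are placeholders, and whether the requested resources are honored depends on the step operator flavor:

```python
from zenml import pipeline, step
from zenml.config import ResourceSettings


@step
def load_data() -> list:
    # Lightweight step: runs in the orchestrator environment as usual.
    return [1.0, 2.0, 3.0]


@step(
    step_operator="sagemaker",  # placeholder: name of a registered step operator
    settings={"resources": ResourceSettings(gpu_count=1, memory="16GB")},
)
def train_model(data: list) -> float:
    # Heavy step: executed by the step operator on more capable hardware.
    return sum(data) / len(data)


@pipeline
def training_pipeline():
    train_model(load_data())
```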
### Step Operator Flavors ZenML provides the following integrations for executing steps on major cloud providers: | Step Operator | Flavor | Integration | Notes | |---------------|------------|-------------|-------------------------------------| | [AzureML](azureml.md) | `azureml` | `azure` | Executes steps using AzureML | | [Kubernetes](kubernetes.md) | `kubernetes` | `kubernetes` | Executes steps using Kubernetes Pods | | [Modal](modal.md) | `modal` | `modal` | Executes steps using Modal | | [SageMaker](sagemaker.md) | `sagemaker` | `aws` | Executes steps using SageMaker | | [Spark](spark-kubernetes.md) | `spark` | `spark` | Executes steps in a distributed manner using Spark on Kubernetes | | [Vertex](vertex.md) | `vertex` | `gcp` | Executes steps using Vertex AI | | [Custom Implementation](custom.md) | _custom_ | | Allows custom step operator implementations | To view available flavors, use: ```shell zenml step-operator flavor list ``` ### How to Use It You don't need to interact directly with ZenML step operators. As long as the desired step operator is part of your active [ZenML stack](../../user-guide/production-guide/understand-stacks.md), specify it in the `@step` decorator: ```python from zenml import step @step(step_operator=) def my_step(...) -> ...: ... ``` #### Specifying Per-Step Resources For additional hardware resources, specify them on your steps as outlined [here](../../how-to/pipeline-development/training-with-gpus/README.md). #### Enabling CUDA for GPU-Backed Hardware To run steps on a GPU, follow [these instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for full GPU acceleration. ================================================== === File: docs/book/component-guide/step-operators/custom.md === ### Developing a Custom Step Operator in ZenML #### Overview To create a custom step operator in ZenML, familiarize yourself with the [general guide on custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Base Abstraction The `BaseStepOperator` is the abstract class for executing pipeline steps in a separate environment. It provides a basic interface: ```python from abc import ABC, abstractmethod from typing import List, Type from zenml.enums import StackComponentType from zenml.stack import StackComponent, StackComponentConfig, Flavor from zenml.config.step_run_info import StepRunInfo class BaseStepOperatorConfig(StackComponentConfig): """Base config for step operators.""" class BaseStepOperator(StackComponent, ABC): """Base class for ZenML step operators.""" @abstractmethod def launch(self, info: StepRunInfo, entrypoint_command: List[str]) -> None: """Executes a step synchronously.""" ``` #### Creating a Custom Step Operator To build a custom flavor for a step operator: 1. **Subclass `BaseStepOperator`** and implement the `launch` method: - Set up the execution environment (e.g., Docker image) with necessary `pip` dependencies from `info.pipeline.docker_settings`. - Run the entrypoint command using `entrypoint_command`. 2. **Handle Resources**: If applicable, manage resources defined in `info.config.resource_settings`. 3. **Configuration Class**: Create a class inheriting from `BaseStepOperatorConfig` for custom parameters. 4. 
**Flavor Class**: Inherit from `BaseStepOperatorFlavor`, implement the `name` property, and register it via CLI: ```shell zenml step-operator flavor register ``` Example registration: ```shell zenml step-operator flavor register flavors.my_flavor.MyStepOperatorFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository to avoid resolution issues. - After registration, verify the flavor with: ```shell zenml step-operator flavor list ``` #### Interaction in ZenML Workflow - The **CustomStepOperatorFlavor** is used during flavor creation. - The **CustomStepOperatorConfig** validates user inputs during registration. - The **CustomStepOperator** is utilized when the component is in action, allowing separation of configuration and implementation. #### Enabling CUDA for GPU For GPU execution, follow the [GPU training instructions](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for optimal performance. ================================================== === File: docs/book/component-guide/step-operators/spark-kubernetes.md === ### Executing Individual Steps on Spark #### Overview The `spark` integration provides two key step operators: - **SparkStepOperator**: Base class for all Spark-related step operators. - **KubernetesSparkStepOperator**: Launches ZenML steps as Spark applications using Kubernetes. #### SparkStepOperator Configuration The `SparkStepOperatorConfig` class defines the configuration parameters: ```python from typing import Optional, Dict, Any from zenml.step_operators import BaseStepOperatorConfig class SparkStepOperatorConfig(BaseStepOperatorConfig): master: str # Master URL for the cluster (Kubernetes, YARN, etc.) deploy_mode: str = "cluster" # 'cluster' or 'client' submit_kwargs: Optional[Dict[str, Any]] = None # Additional parameters ``` #### SparkStepOperator Implementation The `SparkStepOperator` class handles the Spark job execution with the following methods: ```python from typing import List from pyspark.conf import SparkConf from zenml.step_operators import BaseStepOperator class SparkStepOperator(BaseStepOperator): def _resource_configuration(self, spark_config: SparkConf, resource_configuration: "ResourceSettings") -> None: pass def _backend_configuration(self, spark_config: SparkConf, step_config: "StepConfiguration") -> None: pass def _io_configuration(self, spark_config: SparkConf) -> None: pass def _additional_configuration(self, spark_config: SparkConf) -> None: pass def _launch_spark_job(self, spark_config: SparkConf, entrypoint_command: List[str]) -> None: pass def launch(self, info: "StepRunInfo", entrypoint_command: List[str]) -> None: pass ``` #### Configuration Details - **master**: URL for the Spark cluster. - **deploy_mode**: Determines where the driver runs. - **submit_kwargs**: JSON string for additional parameters. The configuration is handled through methods that translate ZenML settings to Spark configurations. The `_launch_spark_job` method executes the Spark job via `spark-submit`. 
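Conceptually, that launch amounts to turning the accumulated `SparkConf` and the step's entrypoint command into a single `spark-submit` invocation. The snippet below is an illustrative sketch of that idea, not the actual ZenML implementation:

```python
import subprocess
from typing import List

from pyspark.conf import SparkConf


def launch_spark_job(
    master: str,
    deploy_mode: str,
    spark_config: SparkConf,
    entrypoint_command: List[str],
) -> None:
    """Build and run a spark-submit command from the collected configuration."""
    command = ["spark-submit", "--master", master, "--deploy-mode", deploy_mode]
    # Every accumulated Spark setting becomes a `--conf key=value` pair.
    for key, value in spark_config.getAll():
        command += ["--conf", f"{key}={value}"]
    # The ZenML step entrypoint is what the Spark driver ultimately executes.
    command += entrypoint_command
    subprocess.run(command, check=True)
```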
#### KubernetesSparkStepOperator The `KubernetesSparkStepOperator` extends `SparkStepOperator` and includes Kubernetes-specific configurations: ```python from typing import Optional from zenml.integrations.spark.step_operators.spark_step_operator import SparkStepOperatorConfig class KubernetesSparkStepOperatorConfig(SparkStepOperatorConfig): namespace: Optional[str] = None # Kubernetes namespace service_account: Optional[str] = None # Service account for Spark components ``` The `_backend_configuration` method is tailored for Kubernetes, ensuring proper image handling. #### Usage Guidelines Use the Spark step operator for: - Large data processing. - Steps benefiting from distributed computing. #### Deployment Steps 1. **Remote ZenML Server**: Follow the deployment guide. 2. **Kubernetes Cluster**: Set up using cloud providers or custom infrastructure. #### Spark EKS Setup Guide 1. Create an Amazon EKS cluster role and node role. 2. Attach `AmazonRDSFullAccess` and `AmazonS3FullAccess` policies. 3. Create the cluster and node group. 4. Build the Docker image for Spark drivers and executors using the `docker-image-tool`. #### RBAC Configuration Create a `rbac.yaml` file for Kubernetes access: ```yaml apiVersion: v1 kind: Namespace metadata: name: spark-namespace --- apiVersion: v1 kind: ServiceAccount metadata: name: spark-service-account namespace: spark-namespace --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: spark-role namespace: spark-namespace subjects: - kind: ServiceAccount name: spark-service-account namespace: spark-namespace roleRef: kind: ClusterRole name: edit apiGroup: rbac.authorization.k8s.io ``` Execute the following commands to apply RBAC settings: ```bash aws eks --region=$REGION update-kubeconfig --name=$EKS_CLUSTER_NAME kubectl create -f rbac.yaml ``` #### Registering the Step Operator To use the `KubernetesSparkStepOperator`: ```bash zenml step-operator register spark_step_operator \ --flavor=spark-kubernetes \ --master=k8s://$EKS_API_SERVER_ENDPOINT \ --namespace= \ --service_account= ``` Register the stack: ```bash zenml stack register spark_stack \ -o default \ -s spark_step_operator \ -a spark_artifact_store \ -c spark_container_registry \ -i local_builder \ --set ``` #### Using the Step Operator Define a step using the `@step` decorator: ```python from zenml import step @step(step_operator=) def step_on_spark(...) -> ...: ... ``` #### Additional Configuration For more configuration options, refer to the SDK documentation for `SparkStepOperatorSettings`. ================================================== === File: docs/book/component-guide/step-operators/sagemaker.md === ### Summary of Executing Individual Steps in SageMaker **Overview**: Amazon SageMaker provides compute instances for training jobs and a UI for model management. ZenML's SageMaker step operator allows submission of individual steps to SageMaker. #### When to Use - Use the SageMaker step operator if: - Your pipeline steps require computing resources not available in your orchestrator. - You have access to SageMaker. #### Deployment Requirements 1. **IAM Role**: Create an IAM role with `AmazonS3FullAccess` and `AmazonSageMakerFullAccess` policies. [Setup Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html#sagemaker-roles-create-execution-role). 2. **ZenML AWS Integration**: Install using: ```shell zenml integration install aws ``` 3. **Docker**: Ensure Docker is installed and running. 4. 
**AWS Container Registry**: Set up as part of your stack. [Setup Guide](../container-registries/aws.md#how-to-deploy-it). 5. **Remote Artifact Store**: Required for artifact read/write access. Refer to the specific documentation for setup. 6. **Instance Type**: Choose an instance type from the [available types](https://docs.aws.amazon.com/sagemaker/latest/dg/notebooks-available-instance-types.html). 7. **Optional**: Create an experiment to group SageMaker runs. [Experiment Guide](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments-create.html). #### Authentication Methods 1. **Service Connector** (Recommended): - Register a service connector and connect it to your step operator: ```shell zenml service-connector register --type aws -i zenml step-operator register --flavor=sagemaker --role= --instance_type= zenml step-operator connect --connector zenml stack register -s ... --set ``` 2. **Implicit Authentication**: - For local orchestrators, ZenML uses the `default` profile in `~/.aws/config`. - For remote orchestrators, ensure the environment can authenticate to AWS and assume the specified IAM role. ```shell zenml step-operator register --flavor=sagemaker --role= --instance_type= zenml stack register -s ... --set python run.py # Uses `default` profile ``` #### Using the SageMaker Step Operator To execute steps in SageMaker, specify it in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` **Note**: ZenML builds a Docker image `/zenml:` for running steps in SageMaker. #### Additional Configuration - Use `SagemakerStepOperatorSettings` for additional configurations. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-aws/#zenml.integrations.aws.flavors.sagemaker_step_operator_flavor.SagemakerStepOperatorSettings) for attributes and [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) for settings. #### Enabling CUDA for GPU To run steps on GPU, follow the instructions for enabling CUDA. This requires additional settings customization for full GPU acceleration. ================================================== === File: docs/book/component-guide/step-operators/modal.md === ### Modal Step Operator Overview **Modal** is a cloud infrastructure platform optimized for fast execution, particularly in building Docker images and provisioning hardware. The **ZenML Modal step operator** allows users to run individual steps on Modal compute instances. #### When to Use Utilize the Modal step operator if: - Fast execution is needed for resource-intensive steps (CPU, GPU, memory). - Specific hardware requirements must be defined (e.g., GPU type, CPU count). - You have access to Modal. #### Deployment Steps 1. **Sign Up**: Create a Modal account [here](https://modal.com/signup). 2. **Install CLI**: Run: ```shell pip install modal modal setup ``` #### Usage Requirements To use the Modal step operator: - Install the ZenML `modal` integration: ```shell zenml integration install modal ``` - Ensure Docker is installed and running. - Set up a cloud artifact store and a cloud container registry compatible with ZenML. #### Registering the Step Operator Register the step operator with: ```shell zenml step-operator register --flavor=modal zenml stack update -s ... ``` #### Executing Steps To execute a step, specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) 
-> ...: """Train a model.""" ``` ZenML builds a Docker image containing your code for execution on Modal. #### Additional Configuration Specify hardware requirements using the `ResourceSettings` class: ```python from zenml.config import ResourceSettings from zenml.integrations.modal.flavors import ModalStepOperatorSettings modal_settings = ModalStepOperatorSettings(gpu="A100") resource_settings = ResourceSettings(cpu=2, memory="32GB") @step( step_operator="modal", settings={ "step_operator": modal_settings, "resources": resource_settings } ) def my_modal_step(): ... ``` **Note**: The `cpu` parameter in `ResourceSettings` accepts a single integer, indicating a soft minimum limit. For example, 2 CPUs and 32GB memory costs approximately $1.03 per hour. This configuration runs `my_modal_step` on a Modal instance with 1 A100 GPU, 2 CPUs, and 32GB memory. For supported GPU types, refer to the [Modal docs](https://modal.com/docs/reference/modal.gpu). Settings for region and cloud provider are available for Modal Enterprise and Team plan customers, with certain combinations potentially restricted. For error handling, Modal provides detailed messages for troubleshooting. More information on region selection can be found in the [Modal docs](https://modal.com/docs/guide/region-selection). ================================================== === File: docs/book/component-guide/step-operators/azureml.md === ### AzureML Step Operator Overview AzureML provides compute instances for training jobs and a UI for model management. ZenML's AzureML step operator allows submission of individual pipeline steps to AzureML compute instances. #### When to Use Use the AzureML step operator if: - Your pipeline steps require computing resources not available from your orchestrator. - You have access to AzureML. For other cloud providers, consider using the SageMaker or Vertex step operators. #### Deployment Steps 1. **Create an AzureML Workspace**: Include an Azure container registry and an Azure storage account. 2. **(Optional)** Create a compute instance or cluster via Azure Machine Learning Studio. If not specified, the operator will use serverless compute or provision a new target. 3. **(Optional)** Create a Service Principal for authentication if using a service connector. #### Usage Requirements - Install the ZenML Azure integration: ```shell zenml integration install azure ``` - Ensure Docker is installed and running. - Set up an Azure container registry and artifact store as part of your stack. - Have an AzureML workspace and optional compute cluster. #### Authentication Methods 1. **Service Connector** (recommended): - Register a service connector and connect it to your step operator: ```shell zenml service-connector register --type azure -i zenml step-operator register --flavor=azureml --subscription_id= --resource_group= --workspace_name= zenml step-operator connect --connector zenml stack register -s ... --set ``` 2. **Implicit Authentication**: - For local orchestrators, ZenML uses the Azure CLI configuration. Ensure the CLI has necessary permissions. - For remote orchestrators, the environment must support implicit authentication. #### Executing Steps To execute a step using the AzureML step operator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` ZenML builds a Docker image `/zenml:` for step execution. #### Additional Configuration The `AzureMLStepOperatorSettings` class allows configuration of compute resources: 1. 
**Serverless Compute** (default): Set `mode` to `serverless`. 2. **Compute Instance**: Set `mode` to `compute-instance`, requires `compute_name`. 3. **Compute Cluster**: Set `mode` to `compute-cluster`, requires `compute_name`. Example for defining a compute instance: ```python from zenml.integrations.azure.flavors import AzureMLStepOperatorSettings azureml_settings = AzureMLStepOperatorSettings( mode="compute-instance", compute_name="MyComputeInstance", compute_size="Standard_NC6s_v3", ) @step(settings={"step_operator": azureml_settings}) def my_azureml_step(): # YOUR STEP CODE ... ``` #### GPU Support To enable CUDA for GPU-backed hardware, follow the specific instructions to ensure proper configuration for GPU acceleration. For more details, refer to the [AzureMLStepOperatorSettings SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.flavors.azureml_step_operator_flavor.AzureMLStepOperatorSettings). ================================================== === File: docs/book/component-guide/step-operators/kubernetes.md === ### Kubernetes Step Operator Overview ZenML's Kubernetes step operator allows execution of individual pipeline steps on Kubernetes pods. #### When to Use - If pipeline steps require additional computing resources (CPU, GPU, memory) not available from the orchestrator. - If you have access to a Kubernetes cluster. #### Deployment Requirements 1. **Kubernetes Cluster**: Must be deployed (refer to the cloud guide for options). 2. **ZenML Kubernetes Integration**: Install with: ```shell zenml integration install kubernetes ``` 3. **Docker**: Installed and running, or a remote image builder in your stack. 4. **Remote Artifact Store**: Required for reading/writing step artifacts. **Recommendation**: Set up a Service Connector for connecting the Kubernetes step operator to the cluster, especially for managed cloud providers (AWS, GCP, Azure). #### Registering the Step Operator You can register the step operator in two ways: 1. **Using a Service Connector**: ```shell zenml step-operator register --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml step-operator connect --connector ``` 2. **Using `kubectl` Client**: ```shell zenml step-operator register --flavor=kubernetes --kubernetes_context= ``` #### Updating the Active Stack Add the step operator to your active stack: ```shell zenml stack update -s ``` #### Defining Steps Specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) -> ...: """Train a model.""" ``` ZenML builds a Docker image to run your steps in Kubernetes. #### Interacting with Pods To debug, you can interact with Kubernetes pods using labels: ```shell kubectl delete pod -n zenml -l pipeline= ``` #### Additional Configuration Use `KubernetesStepOperatorSettings` for further configuration: - **pod_settings**: Node selectors, labels, affinity, tolerations, image pull secrets. - **service_account_name**: Specify the service account for the pods. 
Example configuration: ```python from zenml.integrations.kubernetes.flavors import KubernetesStepOperatorSettings kubernetes_settings = KubernetesStepOperatorSettings( pod_settings={ "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, "affinity": {...}, # Define affinity settings "tolerations": [...], # Define tolerations "resources": {...}, # Define resource requests and limits "annotations": {...}, # Define annotations "volumes": [...], # Define volumes "volume_mounts": [...], # Define volume mounts "host_ipc": True, "image_pull_secrets": ["regcred"], "labels": {...} # Define labels }, kubernetes_namespace="ml-pipelines", service_account_name="zenml-pipeline-runner" ) @step(settings={"step_operator": kubernetes_settings}) def my_kubernetes_step(): ... ``` #### Enabling CUDA for GPU To run steps on GPU, follow the instructions to enable CUDA for full acceleration. For a complete list of attributes and configurations, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.step_operators.kubernetes_step_operator.KubernetesStepOperator). ================================================== === File: docs/book/component-guide/step-operators/vertex.md === ### Summary of Executing Individual Steps in Vertex AI **Overview**: Google Cloud Vertex AI provides specialized compute instances for training jobs and a UI for managing models and logs. ZenML's Vertex AI step operator enables submission of individual steps to Vertex AI compute instances. #### When to Use - Use the Vertex step operator if: - Your pipeline steps require resources not available from your orchestrator. - You have access to Vertex AI. #### Deployment Steps 1. **Enable Vertex AI**: [Enable here](https://console.cloud.google.com/vertex-ai). 2. **Create a Service Account**: Grant permissions for Vertex AI jobs (`roles/aiplatform.admin`) and container registry (`roles/storage.admin`). #### Usage Requirements - Install ZenML GCP integration: ```shell zenml integration install gcp ``` - Ensure Docker is installed and running. - Enable Vertex AI and have a service account file. - Set up a GCR container registry. - Optionally specify a machine type (default: `n1-standard-4`). - Configure a remote artifact store for reading/writing artifacts. #### Authentication Methods 1. **Using `gcloud` CLI**: ```shell gcloud auth login zenml step-operator register --flavor=vertex --project= --region= ``` 2. **Using Service Account Key File**: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account_path= ``` 3. **Using GCP Service Connector** (recommended): ```shell zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@ zenml step-operator register --flavor=vertex --region= zenml step-operator connect --connector ``` #### Registering the Step Operator Update the active stack to include the step operator: ```shell zenml stack update -s ``` #### Using the Step Operator Specify the step operator in the `@step` decorator: ```python from zenml import step @step(step_operator=) def trainer(...) 
-> ...: """Train a model.""" ``` #### Additional Configuration Specify service account, network, and reserved IP ranges: ```shell zenml step-operator register --flavor=vertex --project= --region= --service_account= --network= --reserved_ip_ranges= ``` #### VertexStepOperatorSettings Example Customize settings for the step operator: ```python from zenml import step from zenml.integrations.gcp.flavors.vertex_step_operator_flavor import VertexStepOperatorSettings @step(step_operator=, settings={"step_operator": VertexStepOperatorSettings( accelerator_type="NVIDIA_TESLA_T4", accelerator_count=1, machine_type="n1-standard-2", disk_type="pd-ssd", disk_size_gb=100, )}) def trainer(...) -> ...: """Train a model.""" ``` #### Enabling CUDA for GPU Follow instructions to enable CUDA for GPU acceleration. #### Using Persistent Resources To speed up development: 1. Create a persistent resource via GCP UI or [GCP docs](https://cloud.google.com/vertex-ai/docs/training/persistent-resource-create). 2. Ensure the step operator is configured with the appropriate service account. 3. Use the persistent resource in your code: ```python @step(step_operator=, settings={"step_operator": VertexStepOperatorSettings( persistent_resource_id="my-persistent-resource", machine_type="n1-standard-4", accelerator_type="NVIDIA_TESLA_T4", accelerator_count=1, )}) def trainer(...) -> ...: """Train a model.""" ``` **Note**: Persistent resources incur costs while running, even when idle. Monitor usage accordingly. ================================================== === File: docs/book/component-guide/experiment-trackers/custom.md === ### Develop a Custom Experiment Tracker #### Overview To create a custom experiment tracker in ZenML, familiarize yourself with the [general guide to writing custom component flavors](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). Note that the base abstraction for the Experiment Tracker is currently in progress, and extensions are not recommended until its release. #### Steps to Build a Custom Experiment Tracker 1. **Create a Class**: Inherit from `BaseExperimentTracker` and implement the abstract methods. 2. **Configuration Class**: If needed, inherit from `BaseExperimentTrackerConfig` to add configuration parameters. 3. **Combine Implementation**: Inherit from `BaseExperimentTrackerFlavor` to bring both the implementation and configuration together. #### Registering the Custom Tracker Register your custom flavor using the CLI with the following command: ```shell zenml experiment-tracker flavor register ``` For example, if your flavor class `MyExperimentTrackerFlavor` is in `flavors/my_flavor.py`, use: ```shell zenml experiment-tracker flavor register flavors.my_flavor.MyExperimentTrackerFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository for proper path resolution. - After registration, list available flavors with: ```shell zenml experiment-tracker flavor list ``` #### Class Utilization - **CustomExperimentTrackerFlavor**: Used during flavor creation via CLI. - **CustomExperimentTrackerConfig**: Validates user input when registering/updating a stack component. - **CustomExperimentTracker**: Engaged when the component is in use, allowing separation of flavor configuration from implementation. This design enables registration of flavors and components even if their major dependencies are not installed locally. 
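Until the base abstraction is finalized, the relationship between the three classes mirrors the other custom flavors in this guide. The sketch below assumes the base classes can be imported from `zenml.experiment_trackers.base_experiment_tracker` and uses a hypothetical `tracking_uri` option and flavor name:

```python
from typing import Type

from zenml.experiment_trackers.base_experiment_tracker import (
    BaseExperimentTracker,
    BaseExperimentTrackerConfig,
    BaseExperimentTrackerFlavor,
)


class MyExperimentTrackerConfig(BaseExperimentTrackerConfig):
    """Hypothetical config: where the tracking backend lives."""

    tracking_uri: str


class MyExperimentTracker(BaseExperimentTracker):
    """Implement the behavior required by the base class here, e.g. opening
    and closing runs against `self.config.tracking_uri`."""


class MyExperimentTrackerFlavor(BaseExperimentTrackerFlavor):
    @property
    def name(self) -> str:
        return "my_experiment_tracker"

    @property
    def config_class(self) -> Type[BaseExperimentTrackerConfig]:
        return MyExperimentTrackerConfig

    @property
    def implementation_class(self) -> Type[BaseExperimentTracker]:
        return MyExperimentTracker
```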
================================================== === File: docs/book/component-guide/experiment-trackers/vertexai.md === ### Vertex AI Experiment Tracker Overview The Vertex AI Experiment Tracker is part of the ZenML integration for Vertex AI, utilizing the Vertex AI tracking service to log and visualize pipeline step information, including models, parameters, and metrics. #### Use Cases - Ideal for iterative ML experimentation and visualizing results from automated pipeline runs. - Best for users already familiar with Vertex AI or those building workflows within the Google Cloud ecosystem. - Consider alternative Experiment Tracker flavors if unfamiliar with Vertex AI or using other cloud providers. #### Configuration To configure the Vertex AI Experiment Tracker, install the GCP ZenML integration: ```shell zenml integration install gcp -y ``` **Main Configuration Options:** - `project`: GCP project name (inferred if `None`). - `location`: GCP location for experiments (defaults to `us-central1`). - `staging_bucket`: GCS bucket for staging artifacts (format: `gs://...`). - `service_account_path`: Path to service account credential JSON file. **Registering the Tracker:** ```shell zenml experiment-tracker register vertex_experiment_tracker \ --flavor=vertex \ --project= \ --location= \ --staging_bucket=gs:// zenml stack register custom_stack -e vertex_experiment_tracker ... --set ``` #### Authentication Methods - **Implicit Authentication**: Quick local setup using `gcloud auth login`. Not recommended for team or production use. - **GCP Service Connector (Recommended)**: Offers auto-configuration and security for long-lived credentials. Register using: ```shell zenml service-connector register --type gcp -i ``` After setting up the connector, register the tracker: ```shell zenml experiment-tracker register \ --flavor=vertex \ --project= \ --location= \ --staging_bucket=gs:// zenml experiment-tracker connect --connector ``` - **GCP Credentials**: Store a GCP Service Account Key in a ZenML Secret and reference it in the tracker configuration: ```shell zenml experiment-tracker register \ --flavor=vertex \ --project= \ --location= \ --staging_bucket=gs:// \ --service_account_path=path/to/service_account_key.json ``` #### Usage To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator. **Example 1: Logging Metrics** ```python from google.cloud import aiplatform class VertexAICallback(tf.keras.callbacks.Callback): def on_epoch_end(self, epoch, logs=None): aiplatform.log_time_series_metrics(metrics=logs) @step(experiment_tracker="") def train_model(...): aiplatform.autolog() model.fit(..., callbacks=[VertexAICallback()]) aiplatform.log_metrics(...) ``` **Example 2: Uploading TensorBoard Logs** ```python @step(experiment_tracker="") def train_model(...): aiplatform.start_upload_tb_log(...) model.fit(...) aiplatform.end_upload_tb_log() aiplatform.log_metrics(...) 
``` #### Accessing Experiment Tracker UI Retrieve the URL for the Vertex AI experiment linked to a ZenML run: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` #### Additional Configuration You can specify an experiment name or TensorBoard instance using `VertexExperimentTrackerSettings`: ```python from zenml.integrations.gcp.flavors.vertex_experiment_tracker_flavor import VertexExperimentTrackerSettings vertexai_settings = VertexExperimentTrackerSettings( experiment="", experiment_tensorboard="TENSORBOARD_RESOURCE_NAME" ) @step(experiment_tracker="", settings={"experiment_tracker": vertexai_settings}) def step_one(data: np.ndarray) -> np.ndarray: ... ``` For more details on configuration, refer to the ZenML documentation. ================================================== === File: docs/book/component-guide/experiment-trackers/neptune.md === ### Neptune Experiment Tracker Overview The Neptune Experiment Tracker integrates with [neptune.ai](https://neptune.ai/product/experiment-tracking) for logging and visualizing information from ZenML pipeline steps (models, parameters, metrics). It is useful during the ML experimentation phase and for production-ready model registries. #### Use Cases - Continuation of using neptune.ai for tracking results while adopting MLOps practices with ZenML. - Enhanced visual navigation of results from ZenML pipeline runs. - Sharing artifacts and metrics with teams or stakeholders. Consider other [Experiment Tracker flavors](./experiment-trackers.md#experiment-tracker-flavors) if unfamiliar with neptune.ai. ### Deployment 1. **Install the Integration:** ```shell zenml integration install neptune -y ``` 2. **Configure Authentication:** - **API Token**: Required for connecting to Neptune. Can be set via environment variables or ZenML secrets. - **Project**: Specify in the format "workspace-name/project-name". #### Authentication Methods - **ZenML Secret (Recommended)**: ```shell zenml secret create neptune_secret --api_token= ``` Register the tracker: ```shell zenml experiment-tracker register neptune_experiment_tracker \ --flavor=neptune \ --project= \ --api_token={{neptune_secret.api_token}} zenml stack register neptune_stack -e neptune_experiment_tracker ... --set ``` - **Basic Authentication** (not recommended for production): ```shell zenml experiment-tracker register neptune_experiment_tracker --flavor=neptune \ --project= --api_token= zenml stack register neptune_stack -e neptune_experiment_tracker ... --set ``` ### Usage To log information from a ZenML pipeline step: 1. Use the `@step` decorator with the experiment tracker. 2. Fetch the Neptune run object and log data. 
Example: ```python from zenml.integrations.neptune.experiment_trackers.run_state import get_neptune_run from zenml import step from sklearn.svm import SVC from sklearn.datasets import load_iris from zenml.client import Client @step(experiment_tracker="neptune_experiment_tracker") def train_model() -> SVC: iris = load_iris() model = SVC(kernel="rbf", C=1.0) model.fit(iris.data, iris.target) neptune_run = get_neptune_run() neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0} return model ``` #### Logging Metadata Use `get_step_context` to log ZenML metadata: ```python @step(experiment_tracker="neptune_tracker") def my_step(): neptune_run = get_neptune_run() context = get_step_context() neptune_run["pipeline_metadata"] = stringify_unsupported(context.pipeline_run.get_metadata().dict()) ``` #### Adding Tags Use `NeptuneExperimentTrackerSettings` to add tags: ```python from zenml.integrations.neptune.flavors import NeptuneExperimentTrackerSettings neptune_settings = NeptuneExperimentTrackerSettings(tags={"keras", "mnist"}) ``` ### Neptune UI Access the Neptune web UI for detailed insights about tracked experiments. Each pipeline run is logged as a separate experiment, viewable in the Neptune dashboard. ### Full Code Example ```python from zenml import pipeline, step from zenml.client import Client from zenml.integrations.neptune.experiment_trackers import NeptuneExperimentTracker from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.svm import SVC from sklearn.metrics import accuracy_score @step(experiment_tracker="neptune_experiment_tracker") def train_model() -> SVC: iris = load_iris() X_train, _, y_train, _ = train_test_split(iris.data, iris.target, test_size=0.2) model = SVC(kernel="rbf", C=1.0) model.fit(X_train, y_train) neptune_run = get_neptune_run() neptune_run["parameters"] = {"kernel": "rbf", "C": 1.0} return model @step(experiment_tracker="neptune_experiment_tracker") def evaluate_model(model: SVC): iris = load_iris() _, X_test, _, y_test = train_test_split(iris.data, iris.target, test_size=0.2) accuracy = accuracy_score(y_test, model.predict(X_test)) neptune_run = get_neptune_run() neptune_run["metrics/accuracy"] = accuracy return accuracy @pipeline def ml_pipeline(): model = train_model() evaluate_model(model) if __name__ == "__main__": ml_pipeline() ``` ### Further Reading Refer to [Neptune's documentation](https://docs.neptune.ai/integrations/zenml/) for more details on using this integration. ================================================== === File: docs/book/component-guide/experiment-trackers/mlflow.md === # MLflow Experiment Tracker Summary The MLflow Experiment Tracker, integrated with ZenML, utilizes the MLflow tracking service to log and visualize pipeline step information (models, parameters, metrics). ## Use Cases Use the MLflow Experiment Tracker if: - You are already using MLflow for tracking experiment results and want to integrate it with ZenML. - You need an interactive way to navigate results from ZenML pipeline runs. - Your team has a shared MLflow Tracking service and you want to connect ZenML to it. Consider other Experiment Tracker flavors if you are unfamiliar with MLflow. ## Configuration To configure the MLflow Experiment Tracker, install the integration: ```shell zenml integration install mlflow -y ``` ### Deployment Scenarios 1. **Localhost**: Requires a local Artifact Store. Not suitable for collaborative settings. 
```shell zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow zenml stack register custom_stack -e mlflow_experiment_tracker ... --set ``` 2. **Remote Tracking**: Requires a deployed MLflow Tracking Server with authentication parameters. - Use MLflow version 2.2.1 or higher due to a critical vulnerability. 3. **Databricks**: Requires a Databricks workspace with a managed MLflow Tracking server. Authentication parameters are needed. ### Authentication Methods - **Basic Authentication**: Directly configure credentials (not recommended for production). ```shell zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow --tracking_uri= --tracking_token= ``` - **ZenML Secret (Recommended)**: Store credentials securely. ```shell zenml secret create mlflow_secret --username= --password= zenml experiment-tracker register mlflow --flavor=mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ... ``` ## Usage To log information from a ZenML pipeline step, enable the experiment tracker using the `@step` decorator and utilize MLflow's logging capabilities: ```python import mlflow @step(experiment_tracker="") def tf_trainer(x_train: np.ndarray, y_train: np.ndarray) -> tf.keras.Model: mlflow.tensorflow.autolog() mlflow.log_param(...) mlflow.log_metric(...) mlflow.log_artifact(...) return model ``` You can dynamically reference the experiment tracker: ```python from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def tf_trainer(...): ... ``` ### MLflow UI Access the MLflow UI to view tracked experiments. The URL can be found in the metadata of the step: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` For local MLflow, start the UI with: ```bash mlflow ui --backend-store-uri ``` ### Additional Configuration You can pass `MLFlowExperimentTrackerSettings` for nested runs or additional tags: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings mlflow_settings = MLFlowExperimentTrackerSettings(nested=True, tags={"key": "value"}) @step(experiment_tracker="", settings={"experiment_tracker": mlflow_settings}) def step_one(data: np.ndarray) -> np.ndarray: ... ``` For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-mlflow/#zenml.integrations.mlflow.experiment_trackers.mlflow_experiment_tracker). ================================================== === File: docs/book/component-guide/experiment-trackers/experiment-trackers.md === ### Experiment Trackers in ZenML **Overview**: Experiment trackers in ZenML allow users to log and visualize ML experiments, linking pipeline runs to experiments through stack components. They enhance usability by providing intuitive UIs for browsing and comparing logged information. **Key Points**: - **Integration**: Experiment Trackers are optional stack components that need to be registered in your ZenML stack. ZenML already tracks artifacts via the mandatory Artifact Store. - **Usability**: While ZenML records artifacts programmatically, Experiment Trackers offer a user-friendly interface for visualizing data. - **Architecture**: Experiment Trackers fit into the ZenML stack architecture, facilitating tracking and visualization. 
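Since the Experiment Tracker is an optional stack component, a script or pipeline can first confirm that one is registered on the active stack before trying to log to it. A minimal sketch using the ZenML client (the printed messages are illustrative):

```python
from zenml.client import Client

# The experiment tracker is optional, so this may be None on a minimal stack.
experiment_tracker = Client().active_stack.experiment_tracker

if experiment_tracker is None:
    print("No experiment tracker is registered on the active stack.")
else:
    print(f"Experiment data will be logged to '{experiment_tracker.name}'.")
```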
**Available Experiment Tracker Flavors**: | Tracker | Flavor | Integration | Notes | |---------|--------|-------------|-------| | [Comet](comet.md) | `comet` | `comet` | Adds Comet tracking capabilities | | [MLflow](mlflow.md) | `mlflow` | `mlflow` | Adds MLflow tracking capabilities | | [Neptune](neptune.md) | `neptune` | `neptune` | Adds Neptune tracking capabilities | | [Weights & Biases](wandb.md) | `wandb` | `wandb` | Adds Weights & Biases tracking capabilities | | [Custom Implementation](custom.md) | _custom_ | | Custom tracker options | **Command to List Flavors**: ```shell zenml experiment-tracker flavor list ``` **Usage Steps**: 1. **Configure**: Add an Experiment Tracker to your ZenML stack. 2. **Enable**: Decorate pipeline steps to enable the Experiment Tracker. 3. **Log**: Explicitly log models, metrics, and data within your steps. 4. **Access UI**: Retrieve the Experiment Tracker UI URL for a specific pipeline step: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") step = pipeline_run.steps[""] experiment_tracker_url = step.run_metadata["experiment_tracker_url"].value ``` **Note**: If a pipeline step fails, the corresponding run will be marked as failed in the Experiment Tracker. For detailed usage, refer to the documentation for the specific Experiment Tracker flavor you are using. ================================================== === File: docs/book/component-guide/experiment-trackers/wandb.md === # Weights & Biases Experiment Tracker Summary ## Overview The Weights & Biases (W&B) Experiment Tracker is integrated with ZenML to log and visualize pipeline step information (models, parameters, metrics) using the W&B platform. It is ideal for iterative ML experimentation and can also be used for automated pipeline runs. ## When to Use - If you already use W&B for tracking and want to integrate it with ZenML. - For a visually interactive way to navigate results from ZenML pipeline runs. - To share artifacts and metrics with your team or stakeholders. ## Deployment To deploy the W&B Experiment Tracker, install the integration: ```shell zenml integration install wandb -y ``` ### Authentication Methods You need to configure the following credentials: - `api_key`: Mandatory API key for your W&B account. - `project_name`: Name of the project for the run. - `entity`: Username or team name for sending runs. #### Basic Authentication (Not Recommended for Production) ```shell zenml experiment-tracker register wandb_experiment_tracker --flavor=wandb \ --entity= --project_name= --api_key= zenml stack register custom_stack -e wandb_experiment_tracker ... --set ``` #### ZenML Secret (Recommended) Create a ZenML secret for secure storage: ```shell zenml secret create wandb_secret \ --entity= \ --project_name= \ --api_key= ``` Then register the tracker: ```shell zenml experiment-tracker register wandb_tracker \ --flavor=wandb \ --entity={{wandb_secret.entity}} \ --project_name={{wandb_secret.project_name}} \ --api_key={{wandb_secret.api_key}} ``` ## Usage To log information from a ZenML pipeline step, enable the experiment tracker with the `@step` decorator and use W&B logging: ```python import wandb from wandb.integration.keras import WandbCallback @step(experiment_tracker="") def tf_trainer(...): ... 
model.fit(..., callbacks=[WandbCallback(log_evaluation=True)]) wandb.log({"": metric}) ``` You can dynamically reference the experiment tracker: ```python from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def tf_trainer(...): ... ``` ### W&B UI Each ZenML step using W&B creates a separate experiment run, accessible via the W&B UI. The URL for a specific run can be retrieved: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value print(tracking_url) ``` ### Additional Configuration You can pass `WandbExperimentTrackerSettings` to customize settings or add tags: ```python wandb_settings = WandbExperimentTrackerSettings(tags=["some_tag"]) @step(experiment_tracker="", settings={"experiment_tracker": wandb_settings}) def my_step(...): ... ``` ## Full Code Example A complete example of a ZenML pipeline using W&B integration: ```python from zenml import pipeline, step from zenml.client import Client from zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor import WandbExperimentTrackerSettings from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments from datasets import load_dataset import wandb experiment_tracker = Client().active_stack.experiment_tracker @step def prepare_data(): dataset = load_dataset("imdb") ... return train_dataset, eval_dataset @step(experiment_tracker=experiment_tracker.name) def train_model(train_dataset, eval_dataset): model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2) training_args = TrainingArguments(...) trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset) trainer.train() wandb.log({"final_evaluation": eval_results}) return model @pipeline(enable_cache=False) def fine_tuning_pipeline(): train_dataset, eval_dataset = prepare_data() model = train_model(train_dataset, eval_dataset) if __name__ == "__main__": wandb_settings = WandbExperimentTrackerSettings(tags=["distilbert", "imdb"]) fine_tuning_pipeline.with_options(settings={"experiment_tracker": wandb_settings})() ``` For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-wandb/#zenml.integrations.wandb.flavors.wandb_experiment_tracker_flavor.WandbExperimentTrackerSettings). ================================================== === File: docs/book/component-guide/experiment-trackers/comet.md === # Comet Experiment Tracker with ZenML The Comet Experiment Tracker integrates with ZenML to log and visualize information from ML pipeline steps using the Comet platform. It is useful for tracking experiment results during iterative ML experimentation and for automated pipeline runs. ## Use Cases - Continue using Comet for projects already utilizing it. - Gain interactive visualization of ZenML pipeline results (models, metrics, datasets). - Share logged artifacts and metrics with teams or stakeholders. Consider other Experiment Tracker flavors if unfamiliar with Comet. ## Deployment To deploy the Comet Experiment Tracker, install the integration: ```bash zenml integration install comet -y ``` ### Authentication Methods 1. **ZenML Secret (Recommended)**: Store credentials securely. 
```bash zenml secret create comet_secret \ --workspace= \ --project_name= \ --api_key= ``` Register the tracker: ```bash zenml experiment-tracker register comet_tracker \ --flavor=comet \ --workspace={{comet_secret.workspace}} \ --project_name={{comet_secret.project_name}} \ --api_key={{comet_secret.api_key}} ``` 2. **Basic Authentication**: Directly configure credentials (not recommended for production). ```bash zenml experiment-tracker register comet_experiment_tracker --flavor=comet \ --workspace= --project_name= --api_key= ``` ## Usage Enable the experiment tracker in a ZenML pipeline step using the `@step` decorator: ```python from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step(experiment_tracker=experiment_tracker.name) def my_step(): experiment_tracker.log_metrics({"my_metric": 42}) experiment_tracker.log_params({"my_param": "hello"}) ``` ### Comet UI Each ZenML step using Comet creates a separate experiment, viewable in the Comet UI. The experiment URL can be accessed via: ```python tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value ``` ## Full Code Example Here’s a simplified script demonstrating the integration: ```python from comet_ml.integration.sklearn import log_model import numpy as np from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler from sklearn.svm import SVC from sklearn.metrics import accuracy_score from zenml import pipeline, step from zenml.client import Client experiment_tracker = Client().active_stack.experiment_tracker @step def load_data(): iris = load_iris() return iris.data, iris.target @step def preprocess_data(X, y): return train_test_split(X, y, test_size=0.2, random_state=42) @step(experiment_tracker=experiment_tracker.name) def train_model(X_train, y_train): model = SVC().fit(X_train, y_train) log_model(experiment=experiment_tracker.experiment, model_name="SVC", model=model) return model @step(experiment_tracker=experiment_tracker.name) def evaluate_model(model, X_test, y_test): accuracy = accuracy_score(y_test, model.predict(X_test)) experiment_tracker.log_metrics({"accuracy": accuracy}) return accuracy @pipeline def iris_classification_pipeline(): X, y = load_data() X_train, X_test, y_train, y_test = preprocess_data(X, y) model = train_model(X_train, y_train) evaluate_model(model, X_test, y_test) if __name__ == "__main__": iris_classification_pipeline()() ``` ### Additional Configuration For tagging experiments, use `CometExperimentTrackerSettings`: ```python comet_settings = CometExperimentTrackerSettings(tags=["some_tag"]) @step(experiment_tracker="", settings={"experiment_tracker": comet_settings}) def my_step(): ... ``` Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-comet/#zenml.integrations.comet.flavors.comet_experiment_tracker_flavor.CometExperimentTrackerSettings) for more attributes and configuration details. ================================================== === File: docs/book/component-guide/annotators/annotators.md === ### Annotators in ZenML **Overview**: Annotators are a component of the ZenML stack that facilitate data annotation within ML workflows. They enable users to launch annotation tasks, configure datasets, and track labeled tasks through CLI commands. Data annotation is crucial in MLOps, and ZenML aims to enhance iterative annotation workflows by integrating labeling processes into ML practices. **When to Annotate**: 1. 
**At the Start**: Begin labeling data to bootstrap models, clarifying rules and standards for consistent labeling. 2. **As New Data Arrives**: Regularly check and label incoming data to maintain model accuracy and adapt to changes. 3. **Inference Samples**: Store and label data from model predictions to compare with actual labels, aiding in model retraining. 4. **Ad Hoc Interventions**: Identify and correct bad labels or address class imbalances through targeted annotation. **Core Features**: - Seamless integration of labels in training steps. - Versioning of annotation data. - Conversion of annotation data to/from custom formats. - Generation of UI config files for web annotation interfaces. **Available Annotators**: ZenML supports several annotators through integrations: | Annotator | Flavor | Integration | Notes | |-------------------------|----------------|------------------|------------------------------------| | [ArgillaAnnotator](argilla.md) | `argilla` | `argilla` | Connect ZenML with Argilla | | [LabelStudioAnnotator](label-studio.md) | `label_studio` | `label_studio` | Connect ZenML with Label Studio | | [PigeonAnnotator](pigeon.md) | `pigeon` | `pigeon` | Notebook only; for image/text classification | | [ProdigyAnnotator](prodigy.md) | `prodigy` | `prodigy` | Connect ZenML with [Prodigy](https://prodi.gy/) | | [Custom Implementation](custom.md) | _custom_ | | Extend the annotator abstraction | **Command to List Annotator Flavors**: ```shell zenml annotator flavor list ``` **Usage**: The annotator implementation is primarily based on the Label Studio integration. For detailed usage, refer to the [Label Studio documentation](label-studio.md#how-do-you-use-it). Note that Pigeon has limited functionality and is Jupyter notebook-specific. **Naming Conventions**: - ZenML uses "Dataset" for groups of annotations/tasks (Label Studio calls this a "Project"). - The combination of an annotation and its source data is referred to as "tasks" in ZenML, aligning with Label Studio's terminology. Other terms like "annotation" and "prediction" are commonly used across tools. ================================================== === File: docs/book/component-guide/annotators/custom.md === ### Develop a Custom Annotator **Overview**: Annotators are a stack component in ZenML that facilitate data annotation within your pipelines. They can be launched via CLI commands to configure datasets and track labeled tasks. **Important Notes**: - Familiarize yourself with the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavors. - Currently, the base abstraction for annotators is under development, limiting the ability to extend them. Refer to the list of available feature stores for immediate use. **Key Features**: - Data annotation integration in ZenML stacks. - CLI commands for launching annotation and managing datasets. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/component-guide/annotators/argilla.md === ### Summary of Argilla Documentation **Argilla Overview** Argilla is a collaboration tool designed for AI engineers and domain experts to create high-quality datasets for machine learning projects. It facilitates data curation through human and machine feedback, supporting the entire MLOps cycle from data labeling to model monitoring. 
**Use Cases** Argilla is ideal for labeling textual data in ML workflows. It can be integrated into a ZenML stack, supporting both local (Docker-backed) and deployed instances, including deployment as a Hugging Face Space. **Deployment Instructions** To deploy Argilla with ZenML, install the integration: ```shell zenml integration install argilla ``` You can register the annotator with an API key directly or as a secret for security. For secret registration: ```shell zenml secret create argilla_secrets --api_key="" ``` Register the annotator: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --port=6900 ``` For a deployed instance, specify the URL without a trailing slash. If using a private Hugging Face Space, include the headers parameter: ```shell zenml annotator register argilla --flavor argilla --authentication_secret=argilla_secrets --instance_url="https://[your-owner-name]-[your_space_name].hf.space" --headers='{"Authorization": "Bearer {[your_hugging_face_token]}"}' ``` Add components to a stack: ```shell zenml stack copy default annotation zenml stack update annotation -an zenml stack set annotation ``` Verify with: ```shell zenml annotator dataset list ``` **Usage** Access data and annotations via the CLI or ZenML SDK. For dataset annotation, use: ```shell zenml annotator dataset annotate ``` **Argilla Annotator Component** The Argilla annotator inherits from `BaseAnnotator`, requiring core methods for dataset registration and state management. Key functionalities include dataset registration, annotation export, and starting the annotator daemon. **Argilla Annotator SDK** To use the SDK in Python: ```python from zenml.client import Client client = Client() annotator = client.active_stack.annotator # List dataset names dataset_names = annotator.get_dataset_names() # Get a specific dataset dataset = annotator.get_dataset("dataset_name") # Get annotations for a dataset annotations = annotator.get_labeled_data(dataset_name="dataset_name") ``` For further details, refer to the [Argilla documentation](https://docs.argilla.io/en/latest/). ================================================== === File: docs/book/component-guide/annotators/pigeon.md === ### Pigeon Annotation Tool Overview **Pigeon** is an open-source annotation tool for quick labeling of data within Jupyter notebooks, supporting: - Text Classification - Image Classification - Text Captioning #### Use Cases Pigeon is ideal for: - Labeling small to medium-sized datasets in ML workflows - Quick labeling tasks - Iterative labeling during exploratory phases - Collaborative labeling in Jupyter notebooks #### Deployment Steps 1. **Install Pigeon Integration:** ```shell zenml integration install pigeon ``` 2. **Register the Annotator:** ```shell zenml annotator register pigeon --flavor pigeon --output_dir="path/to/dir" ``` 3. 
**Update Your Stack:** ```shell zenml stack update --annotator pigeon ``` #### Usage Access the Pigeon annotator in your Jupyter notebook: - **Text Classification Example:** ```python from zenml.client import Client annotator = Client().active_stack.annotator annotations = annotator.annotate( data=['I love this movie', 'I was really disappointed by the book'], options=['positive', 'negative'] ) ``` - **Image Classification Example:** ```python from zenml.client import Client from IPython.display import display, Image annotator = Client().active_stack.annotator annotations = annotator.annotate( data=['/path/to/image1.png', '/path/to/image2.png'], options=['cat', 'dog'], display_fn=lambda filename: display(Image(filename)) ) ``` #### Dataset Management Commands - List datasets: `zenml annotator dataset list` - Delete a dataset: `zenml annotator dataset delete ` - Get dataset statistics: `zenml annotator dataset stats ` **Output:** Annotations are saved as JSON files in the specified output directory, with each file named after its dataset. #### Acknowledgements Pigeon was developed by [Anastasis Germanidis](https://github.com/agermanidis) and is available as a [Python package](https://pypi.org/project/pigeon-jupyter/) and [GitHub repository](https://github.com/agermanidis/pigeon). It is licensed under the Apache License and has been updated for compatibility with recent `ipywidgets` versions. ================================================== === File: docs/book/component-guide/annotators/label-studio.md === ### Label Studio Integration with ZenML **Overview** Label Studio is an open-source annotation platform for data scientists and ML practitioners, supporting various annotation types, including: - **Computer Vision**: image classification, object detection, semantic segmentation - **Audio & Speech**: classification, speaker diarization, emotion recognition, transcription - **Text/NLP**: classification, NER, question answering, sentiment analysis - **Time Series**: classification, segmentation, event recognition - **Multi-Modal**: dialogue processing, OCR, time series with reference **Use Case** Integrate Label Studio into your ML workflow for data labeling. It requires cloud artifact stores (AWS S3, GCP/GCS, Azure Blob Storage) and does not support purely local stacks. **Deployment Steps** 1. Install the Label Studio integration: ```shell zenml integration install label_studio ``` 2. Clone and run Label Studio locally: ```shell git clone https://github.com/HumanSignal/label-studio.git cd label-studio docker-compose up -d ``` 3. Access the web interface at [http://localhost:8080/](http://localhost:8080/) to obtain your API key from the account settings. 4. Register the API key: ```shell zenml secret create label_studio_secrets --api_key="" ``` 5. Register the annotator: ```shell zenml annotator register label_studio --flavor label_studio --authentication_secret="label_studio_secrets" --port=8080 ``` For deployed instances, include the instance URL: ```shell zenml annotator register label_studio --flavor label_studio --authentication_secret="" --instance_url="" --port=80 ``` 6. 
Create and set your stack: ```shell zenml stack copy default annotation zenml stack update annotation -a zenml stack update annotation -an zenml stack set annotation ``` **Usage** Use the CLI to manage datasets: - List datasets: `zenml annotator dataset list` - Annotate a dataset: `zenml annotator dataset annotate ` **Key Components** - **Label Studio Annotator**: Inherits from `BaseAnnotator`, handling dataset registration, annotation export, and daemon process management. - **Standard Steps**: - `LabelStudioDatasetRegistrationConfig`: Config for dataset registration. - `LabelStudioDatasetSyncConfig`: Config for syncing new data. - `get_or_create_dataset`: Registers or retrieves a dataset. - `get_labeled_data`: Fetches labeled data in Label Studio format. - `sync_new_data_to_label_studio`: Ensures data and annotations are synced with the artifact store. **Helper Functions** ZenML provides functions to generate 'label config' strings for object detection, image classification, and OCR. Refer to the `label_config_generators` module for implementation details. This integration facilitates a streamlined annotation process, essential for effective ML workflows. ================================================== === File: docs/book/component-guide/annotators/prodigy.md === ### Prodigy Overview Prodigy is a paid annotation tool designed for creating training and evaluation data for machine learning models. It facilitates data inspection, cleaning, error analysis, and the development of rule-based systems. The Prodigy Python library provides pre-built workflows and command-line commands for various tasks, allowing customization of data loading, saving, and the annotation interface. ### When to Use Prodigy Consider using Prodigy when you need to label data as part of your machine learning workflow, integrating it as an optional annotator stack component in ZenML. ### Deployment Steps 1. **Install Prodigy**: Requires a license. Follow the [Prodigy installation guide](https://prodi.gy/docs/install). Ensure `urllib3<2` is installed. 2. **Register Prodigy with ZenML**: ```shell zenml integration export-requirements --output-file prodigy-requirements.txt prodigy zenml annotator register prodigy --flavor prodigy ``` Optionally, specify a custom configuration file. 3. **Set Active Stack**: ```shell zenml stack copy default annotation zenml stack update annotation -an prodigy zenml stack set annotation ``` ### Using Prodigy Prodigy does not require pre-starting the annotator. Use it as per the [Prodigy documentation](https://prodi.gy). Access your data and annotations via the ZenML CLI commands: - List datasets: ```shell zenml annotator dataset list ``` - Annotate a dataset: ```shell zenml annotator dataset annotate your_dataset --command="textcat.manual news_topics ./news_headlines.jsonl --label Technology,Politics,Economy,Entertainment" ``` ### Importing Annotations in ZenML To import annotations within a ZenML step: ```python from typing import List, Dict, Any from zenml import step from zenml.client import Client @step def import_annotations() -> List[Dict[str, Any]]: zenml_client = Client() annotations = zenml_client.active_stack.annotator.get_labeled_data(dataset_name="my_dataset") return annotations ``` ### Prodigy Annotator Stack Component The Prodigy annotator component extends the `BaseAnnotator` class, requiring core methods for dataset registration and annotation export. It includes additional methods specific to Prodigy for state management and custom features. 
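To show how the imported annotations might feed a downstream step, here is a hedged sketch of a small pipeline built around the `import_annotations` step above. The `train_textcat` step and the `text`/`accept` fields it reads are illustrative assumptions about the exported Prodigy records, not part of the official integration:

```python
from typing import Any, Dict, List

from zenml import pipeline, step
from zenml.client import Client


@step
def import_annotations() -> List[Dict[str, Any]]:
    # Same pattern as above: fetch labeled examples from the Prodigy annotator.
    annotator = Client().active_stack.annotator
    return annotator.get_labeled_data(dataset_name="my_dataset")


@step
def train_textcat(annotations: List[Dict[str, Any]]) -> None:
    # Hypothetical downstream step: turn the Prodigy records into training data.
    texts = [record.get("text", "") for record in annotations]
    labels = [record.get("accept", []) for record in annotations]
    print(f"Training a text classifier on {len(texts)} annotated examples ...")
    # ... fit your model of choice here ...


@pipeline
def annotation_to_training_pipeline():
    annotations = import_annotations()
    train_textcat(annotations)


if __name__ == "__main__":
    annotation_to_training_pipeline()
```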
================================================== === File: docs/book/component-guide/image-builders/local.md === ### Local Image Builder Overview The Local Image Builder is a built-in feature of ZenML that utilizes the local Docker installation on your client machine to build container images. It employs the official Docker Python library, which retrieves authentication credentials from the default location: `$HOME/.docker/config.json`. To specify a different configuration directory, set the `DOCKER_CONFIG` environment variable: ```shell export DOCKER_CONFIG=/path/to/config_dir ``` Ensure that this directory contains a `config.json` file. ### When to Use Use the Local Image Builder if: - You can install and run Docker on your client machine. - You want to utilize remote components that require containerization without complex infrastructure setup. ### Deployment and Usage The Local Image Builder is included with ZenML and requires no additional setup. To use it, ensure: - Docker is installed and running. - The Docker client is authenticated to push to the desired container registry. To register the image builder and create a new stack, use: ```shell zenml image-builder register --flavor=local zenml stack register -i ... --set ``` For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-image_builders/#zenml.image_builders.local_image_builder.LocalImageBuilder). ================================================== === File: docs/book/component-guide/image-builders/custom.md === # Develop a Custom Image Builder ## Overview To create a custom image builder in ZenML, it's essential to understand the component flavor concepts outlined in the [general guide](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). ## Base Abstraction The `BaseImageBuilder` is an abstract class for building Docker images, requiring subclassing to implement specific functionality. It provides a basic interface: ```python from abc import ABC, abstractmethod from typing import Any, Dict, Optional, Type from zenml.container_registries import BaseContainerRegistry from zenml.image_builders import BuildContext from zenml.stack import StackComponent class BaseImageBuilder(StackComponent, ABC): """Base class for ZenML image builders.""" @property def build_context_class(self) -> Type["BuildContext"]: return BuildContext @abstractmethod def build( self, image_name: str, build_context: "BuildContext", docker_build_options: Dict[str, Any], container_registry: Optional["BaseContainerRegistry"] = None, ) -> str: """Builds a Docker image and optionally pushes it to a registry.""" ``` ## Building a Custom Image Builder To create a custom image builder: 1. **Subclass `BaseImageBuilder`**: Implement the `build` method to utilize the build context for Docker image creation and optional registry pushing. 2. **Configuration Class**: Inherit from `BaseImageBuilderConfig` to define any additional configuration parameters. 3. **Flavor Class**: Combine the implementation and configuration by inheriting from `BaseImageBuilderFlavor`, ensuring to provide a `name`. Register the flavor via CLI: ```shell zenml image-builder flavor register ``` For example: ```shell zenml image-builder flavor register flavors.my_flavor.MyImageBuilderFlavor ``` **Note**: Ensure ZenML is initialized at the root of your repository to avoid resolution issues. 
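As a concrete illustration of the three steps above, here is a minimal sketch of what a `flavors/my_flavor.py` module might contain, assuming the config and flavor base classes are importable from `zenml.image_builders` alongside `BaseImageBuilder`. The class names and the `builder_endpoint` option are placeholders, and the `build` body is left as a stub to be replaced with calls to your actual build backend:

```python
from typing import Any, Dict, Optional, Type

from zenml.container_registries import BaseContainerRegistry
from zenml.image_builders import (
    BaseImageBuilder,
    BaseImageBuilderConfig,
    BaseImageBuilderFlavor,
    BuildContext,
)


class MyImageBuilderConfig(BaseImageBuilderConfig):
    """Extra configuration options for the hypothetical builder."""

    builder_endpoint: str = "http://localhost:8080"  # illustrative option


class MyImageBuilder(BaseImageBuilder):
    """Builds images by delegating to an external build service (sketch only)."""

    def build(
        self,
        image_name: str,
        build_context: BuildContext,
        docker_build_options: Dict[str, Any],
        container_registry: Optional[BaseContainerRegistry] = None,
    ) -> str:
        # Send the build context to the external service, wait for the build,
        # optionally push to `container_registry`, and return the image digest.
        raise NotImplementedError("Replace with calls to your build backend.")


class MyImageBuilderFlavor(BaseImageBuilderFlavor):
    """Ties the config and implementation together under a flavor name."""

    @property
    def name(self) -> str:
        return "my_flavor"

    @property
    def config_class(self) -> Type[MyImageBuilderConfig]:
        return MyImageBuilderConfig

    @property
    def implementation_class(self) -> Type[MyImageBuilder]:
        return MyImageBuilder
```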
After registration, list available flavors: ```shell zenml image-builder flavor list ``` ## Important Considerations - **Flavor and Config Usage**: The `CustomImageBuilderFlavor` is used during flavor creation, while `CustomImageBuilderConfig` validates user inputs during registration. Custom validators can be added since `Config` objects are `pydantic` objects. - **Implementation Use**: The `CustomImageBuilder` is utilized when the component is in operation, allowing separation of flavor configuration from implementation. ## Custom Build Context If a different build context is needed, subclass `BuildContext` and override the `build_context_class` property in your image builder implementation to specify the custom context. ================================================== === File: docs/book/component-guide/image-builders/gcp.md === ### Google Cloud Image Builder Overview The Google Cloud Image Builder is a component of the ZenML `gcp` integration that utilizes [Google Cloud Build](https://cloud.google.com/build) to create container images. #### When to Use - If Docker cannot be installed or used on your local machine. - If you are already utilizing Google Cloud Platform (GCP). - If your architecture is primarily based on GCP components like the [GCS Artifact Store](../artifact-stores/gcp.md) or [Vertex Orchestrator](../orchestrators/vertex.md). #### Deployment Steps 1. Enable Google Cloud Build APIs in your GCP project. 2. Install the ZenML `gcp` integration: ```shell zenml integration install gcp ``` 3. Set up a GCP Artifact Store and a GCP container registry. 4. Optionally specify the GCP project ID and service account credentials. #### Registering the Image Builder To register the image builder: ```shell zenml image-builder register \ --flavor=gcp \ --cloud_builder_image= \ --network= \ --build_timeout= zenml stack register -i ... --set ``` #### Authentication Methods - **Local Authentication**: Quick setup using local GCP credentials. Requires Google Cloud CLI. - **GCP Service Connector** (recommended): Provides better security and reusability across stack components. Register with: ```shell zenml service-connector register --type gcp -i ``` For auto-configuration: ```shell zenml service-connector register --type gcp --resource-type gcp-generic --resource-name --auto-configure ``` #### Connecting the Image Builder After registration, connect the image builder to the GCP Service Connector: ```shell zenml image-builder connect -i ``` Or non-interactively: ```shell zenml image-builder connect --connector ``` #### Using GCP Credentials Alternatively, use a GCP Service Account Key: ```shell zenml image-builder register \ --flavor=gcp \ --project= \ --service_account_path= \ --cloud_builder_image= \ --network= \ --build_timeout= zenml stack register -i ... --set ``` #### Caveats - Google Cloud Build uses a `cloudbuild` network for build steps, allowing access to GCP services. - For private dependencies, use a custom base image with `keyrings.google-artifactregistry-auth`: ```dockerfile FROM zenmldocker/zenml:latest RUN pip install keyrings.google-artifactregistry-auth ``` **Note**: Specify the ZenML version and Python version in the base image tag. This summary provides essential details on using the Google Cloud Image Builder with ZenML, covering deployment, registration, authentication, and caveats. 
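Once such a custom base image has been built and pushed to a registry reachable by Cloud Build, it can be wired into a pipeline through ZenML's Docker settings. A hedged sketch (the image tag below is a placeholder):

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Placeholder tag for a custom base image built from a Dockerfile like the one above.
docker_settings = DockerSettings(
    parent_image="europe-west1-docker.pkg.dev/my-project/my-repo/zenml-base:0.1",
    requirements=["scikit-learn"],  # extra pip packages installed on top of the base image
)


@pipeline(settings={"docker": docker_settings})
def my_gcp_pipeline():
    ...
```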
================================================== === File: docs/book/component-guide/image-builders/kaniko.md === ### Kaniko Image Builder Overview The Kaniko image builder, part of the ZenML `kaniko` integration, utilizes [Kaniko](https://github.com/GoogleContainerTools/kaniko) to build container images, particularly when Docker cannot be used on the client machine and Kubernetes is available. ### Deployment Requirements - A deployed Kubernetes cluster. - ZenML `kaniko` integration installed: ```shell zenml integration install kaniko ``` - [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl) installed. - A remote container registry as part of your stack. - Optionally, a remote artifact store if using `store_context_in_artifact_store=True`. - Optional `pod_running_timeout` attribute to adjust Kaniko pod timeout. ### Registering the Image Builder To register the image builder: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= [ --pod_running_timeout= ] ``` To activate the stack: ```shell zenml stack register -i ... --set ``` Refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kaniko/#zenml.integrations.kaniko.image_builders.kaniko_image_builder.KanikoImageBuilder) for more attributes. ### Authentication The Kaniko pod must authenticate with: - Container registry for pushing images. - Private registries for pulling parent images. - Artifact store if configured to store build context. ### Cloud Provider Specific Configurations #### AWS 1. Attach `EC2InstanceProfileForImageBuilderECRContainerBuilds` policy to EKS node IAM role. 2. Set environment variables: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --env='[{"name": "AWS_SDK_LOAD_CONFIG", "value": "true"}, {"name": "AWS_EC2_METADATA_DISABLED", "value": "true"}]' ``` #### GCP 1. Enable workload identity. 2. Create a Google service account and bind it to a Kubernetes service account. 3. Set permissions for GCR and GCP bucket access: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --kubernetes_namespace= \ --service_account_name= ``` #### Azure 1. Create a Kubernetes `configmap` for Docker config: ```shell kubectl create configmap docker-config --from-literal='config.json={ "credHelpers": { "mycr.azurecr.io": "acr-env" } }' ``` 2. Configure the image builder to mount the `configmap`: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --volume_mounts='[{"name": "docker-config", "mountPath": "/kaniko/.docker/"}]' \ --volumes='[{"name": "docker-config", "configMap": {"name": "docker-config"}}]' ``` ### Additional Parameters To pass additional parameters to the Kaniko build: ```shell zenml image-builder register \ --flavor=kaniko \ --kubernetes_context= \ --executor_args='["--label", "key=value"]' # Adds a label to the final image ``` Common flags include: - `--cache`: Disable caching (default: true). - `--cache-dir`: Directory for cached layers (default: `/cache`). - `--cleanup`: Disable cleanup of the working directory (default: true). For more flags, refer to the [Kaniko additional flags](https://github.com/GoogleContainerTools/kaniko#additional-flags). ================================================== === File: docs/book/component-guide/image-builders/aws.md === ### AWS Image Builder with ZenML **Overview**: The AWS Image Builder is a component of the ZenML `aws` integration that utilizes [AWS CodeBuild](https://aws.amazon.com/codebuild) for building container images. 
#### When to Use - If you cannot install Docker locally. - If you are already using AWS. - If your stack includes AWS components like [S3 Artifact Store](../artifact-stores/s3.md) or [SageMaker Orchestrator](../orchestrators/sagemaker.md). #### Deployment For deploying a full ZenML cloud stack including the AWS image builder, refer to the [in-browser stack deployment wizard](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md) or the [ZenML AWS Terraform module](../../how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md). #### Usage Requirements 1. **Install ZenML AWS Integration**: ```shell zenml integration install aws ``` 2. **S3 Artifact Store**: Required for uploading build context. 3. **AWS Container Registry**: Recommended for pushing built images. 4. **AWS CodeBuild Project**: Must be created in the desired AWS region. Configuration settings include: - **Source Type**: `Amazon S3` - **Bucket**: Same as S3 Artifact Store - **Environment Type**: `Linux Container` - **Environment Image**: `bentolor/docker-dind-awscli` - **Privileged Mode**: `false` **Service Role Permissions**: Ensure the service role has permissions for S3 and ECR (if applicable): ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::/*" }, { "Effect": "Allow", "Action": ["ecr:PutImage"], "Resource": "arn:aws:ecr:::repository/" }, { "Effect": "Allow", "Action": ["ecr:GetAuthorizationToken"], "Resource": "*" } ] } ``` 5. **AWS Service Connector**: Recommended for triggering builds and managing permissions. #### Authentication Methods - **Local Authentication**: Quick setup but not portable. - **AWS Service Connector**: Recommended for better security and multi-component access. **Register AWS Service Connector**: ```shell zenml service-connector register --type aws -i ``` **Example of Non-Interactive Registration**: ```shell zenml service-connector register --type aws --resource-type aws-generic --auto-configure ``` **Permissions for CodeBuild**: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": ["codebuild:StartBuild"], "Resource": "arn:aws:codebuild:::project/" } ] } ``` **Register AWS Image Builder**: ```shell zenml image-builder register \ --flavor=aws \ --code_build_project= \ --connector ``` **Connect AWS Image Builder to Service Connector**: ```shell zenml image-builder connect --connector ``` #### Customizing AWS CodeBuild Builds You can customize the image builder with: - `build_image`: Default is `bentolor/docker-dind-awscli`. - `compute_type`: Default is `BUILD_GENERAL1_SMALL`. - `custom_env_vars`: Custom environment variables. - `implicit_container_registry_auth`: Use implicit (default) or explicit authentication. For explicit authentication, ensure the necessary permissions are granted to the service role for the ECR registry. This summary encapsulates the essential steps and configurations needed to utilize the AWS Image Builder within ZenML effectively. ================================================== === File: docs/book/component-guide/image-builders/image-builders.md === ### Image Builders in ZenML **Overview**: The image builder is crucial for building container images in remote MLOps environments, enabling the execution of machine-learning pipelines. 
**When to Use**: An image builder is required when components of your ZenML stack need to create container images, particularly for remote orchestrators, step operators, and some model deployers. **Image Builder Flavors**: ZenML provides several image builders: | Image Builder | Flavor | Integration | Notes | |-----------------------|----------|-------------|-----------------------------------------| | [LocalImageBuilder](local.md) | `local` | _built-in_ | Builds Docker images locally. | | [KanikoImageBuilder](kaniko.md) | `kaniko` | `kaniko` | Builds Docker images in Kubernetes. | | [GCPImageBuilder](gcp.md) | `gcp` | `gcp` | Uses Google Cloud Build. | | [AWSImageBuilder](aws.md) | `aws` | `aws` | Uses AWS Code Build. | | [Custom Implementation](custom.md) | _custom_ | | Allows for custom image builder implementations. | To view available image builder flavors, use: ```shell zenml image-builder flavor list ``` **Usage**: You do not need to interact with the image builder directly. It is automatically utilized by any component in your active ZenML stack that requires container image building. ================================================== === File: docs/book/component-guide/artifact-stores/azure.md === ### Azure Blob Storage - ZenML Artifact Store The Azure Artifact Store is a ZenML integration that utilizes Azure Blob Storage for storing artifacts. It is suitable for projects requiring shared storage, remote components, or production-grade MLOps. #### Use Cases Consider using Azure Blob Storage when: - You need to share pipeline results with team members or stakeholders. - Components in your stack run remotely (e.g., on Kubernetes). - Local storage is insufficient for your needs. - You are running pipelines at scale. If Azure Blob Storage is not accessible, consider other Artifact Store options. #### Deployment Steps 1. **Install Azure Integration**: ```shell zenml integration install azure -y ``` 2. **Register Azure Artifact Store**: The root path URI must point to an Azure Blob Storage container in the format `az://container-name` or `abfs://container-name`. ```shell zenml artifact-store register az_store -f azure --path=az://container-name zenml stack register custom_stack -a az_store ... --set ``` #### Authentication Methods Authentication is required to use Azure Artifact Store. Options include: - **Implicit Authentication**: Quick local setup using environment variables. - Set `AZURE_STORAGE_ACCOUNT_NAME` and either `AZURE_STORAGE_ACCOUNT_KEY` or `AZURE_STORAGE_SAS_TOKEN`. - **Azure Service Connector (Recommended)**: Provides better security and management. 
```shell zenml service-connector register --type azure -i ``` Non-interactive example: ```shell zenml service-connector register --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= --resource-type blob-container --resource-id ``` #### Connecting Artifact Store to Service Connector After setting up the Azure Service Connector: ```shell zenml artifact-store register -f azure --path='az://your-container' zenml artifact-store connect -i ``` For non-interactive connection: ```shell zenml artifact-store connect --connector ``` #### Using ZenML Secret for Credentials You can create a ZenML Secret to store Azure credentials: ```shell zenml secret create az_secret --account_name='' --account_key='' ``` Then register the Artifact Store using the secret: ```shell zenml artifact-store register az_store -f azure --path='az://your-container' --authentication_secret=az_secret zenml stack register custom_stack -a az_store ... --set ``` #### Conclusion Using the Azure Artifact Store is similar to other Artifact Store flavors in ZenML. For detailed implementation and configuration, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-azure/#zenml.integrations.azure.artifact_stores). ================================================== === File: docs/book/component-guide/artifact-stores/s3.md === ### Summary: Storing Artifacts in an AWS S3 Bucket #### Overview The S3 Artifact Store is a ZenML integration that utilizes AWS S3 or S3-compatible services (e.g., MinIO, Ceph RGW) for artifact storage. It is suitable for projects that require shared access, remote components, or scalable storage solutions. #### When to Use S3 Artifact Store - **Collaboration**: Share pipeline results with team members or stakeholders. - **Remote Components**: Integrate with cloud-based orchestrators (e.g., Kubeflow). - **Storage Limitations**: Overcome local storage constraints. - **Production Needs**: Support production-grade MLOps. #### Deployment Steps 1. **Install S3 Integration**: ```shell zenml integration install s3 -y ``` 2. **Register S3 Artifact Store**: - **URI Format**: `s3://bucket-name` - **Register Command**: ```shell zenml artifact-store register s3_store -f s3 --path=s3://bucket-name zenml stack register custom_stack -a s3_store ... --set ``` #### Authentication Methods - **Implicit Authentication**: Quick local setup using AWS CLI credentials. Requires AWS CLI installation. - **AWS Service Connector (Recommended)**: For better security and integration with AWS services. 
- **Register Connector**: ```sh zenml service-connector register --type aws -i ``` - **Auto-Configure Example**: ```sh zenml service-connector register --type aws --resource-type s3-bucket --resource-name --auto-configure ``` #### Connecting S3 Artifact Store to AWS - **Connect Command**: ```sh zenml artifact-store connect -i ``` - **Non-Interactive Version**: ```sh zenml artifact-store connect --connector ``` #### Using ZenML Secrets - Store AWS access keys in ZenML secrets for enhanced security: ```shell zenml secret create s3_secret --aws_access_key_id='' --aws_secret_access_key='' zenml artifact-store register s3_store -f s3 --path='s3://your-bucket' --authentication_secret=s3_secret ``` #### Advanced Configuration - Customize connection parameters using `client_kwargs`, `config_kwargs`, and `s3_additional_kwargs`: ```shell zenml artifact-store register minio_store -f s3 --path='s3://minio_bucket' --authentication_secret=s3_secret --client_kwargs='{"endpoint_url": "http://minio.cluster.local:9000", "region_name": "us-east-1"}' ``` #### Usage Using the S3 Artifact Store is similar to other Artifact Store flavors in ZenML. For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-s3/#zenml.integrations.s3.artifact_stores.s3_artifact_store). ================================================== === File: docs/book/component-guide/artifact-stores/local.md === ### Local Artifact Store The Local Artifact Store is a built-in ZenML [Artifact Store](./artifact-stores.md) that stores artifacts on your local filesystem. #### Use Cases - Ideal for getting started with ZenML without needing additional resources or managed object-store services (e.g., Amazon S3, Google Cloud Storage). - Suitable for evaluation or experimental phases where sharing artifacts is unnecessary. **Warning**: Not for production use. The local filesystem cannot be shared, limiting access to artifacts from other machines. Artifact visualizations are unavailable when using a local Artifact Store with a cloud-deployed ZenML instance. It lacks high-availability, scalability, and backup features expected in production-grade MLOps systems. **Compatibility**: - Only local Orchestrators (e.g., local, local Kubeflow, local Kubernetes) and local Model Deployers (e.g., MLflow) can be used with a Local Artifact Store. - Step Operators cannot be used due to their requirement for remote environments. Transitioning to a team or production setting allows for easy replacement of the Local Artifact Store with other flavors without code changes. #### Deployment The default stack in ZenML includes a local Artifact Store: ```shell $ zenml stack list # Output shows the default stack with local artifact store $ zenml artifact-store describe # Displays configuration details including the storage path ``` Artifacts are stored in a specified folder on your local filesystem. You can create additional local Artifact Store instances: ```shell # Register a new local artifact store zenml artifact-store register custom_local --flavor local # Register and set a stack with the new artifact store zenml stack register custom_stack -o default -a custom_local --set ``` **Warning**: The local Artifact Store accepts a `path` parameter during registration. It's recommended to use the default path to avoid unexpected results, as other local components depend on this convention. 
For more information, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.local_artifact_store). #### Usage Using the Local Artifact Store is similar to using any other Artifact Store flavor. ================================================== === File: docs/book/component-guide/artifact-stores/custom.md === ### Developing a Custom Artifact Store in ZenML ZenML provides built-in Artifact Store implementations for local and cloud storage (AWS, GCP, Azure). To create a custom Artifact Store, follow these guidelines. #### Base Abstraction The `BaseArtifactStore` class is central to ZenML's architecture. Key points include: 1. **Configuration Parameter**: The `path` parameter indicates the root path of the artifact store. 2. **Supported Schemes**: The `SUPPORTED_SCHEMES` variable must be defined in subclasses to specify supported file path schemes (e.g., Azure: `{"abfs://", "az://"}`). 3. **Abstract Methods**: Subclasses must implement the following methods: - `open`, `copyfile`, `exists`, `glob`, `isdir`, `listdir`, `makedirs`, `mkdir`, `remove`, `rename`, `rmtree`, `stat`, `walk`. #### Base Classes ```python from zenml.enums import StackComponentType from zenml.stack import StackComponent, StackComponentConfig class BaseArtifactStoreConfig(StackComponentConfig): path: str SUPPORTED_SCHEMES: ClassVar[Set[str]] class BaseArtifactStore(StackComponent): @abstractmethod def open(self, name: PathType, mode: str = "r") -> Any: pass @abstractmethod def copyfile(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: pass @abstractmethod def exists(self, path: PathType) -> bool: pass @abstractmethod def glob(self, pattern: PathType) -> List[PathType]: pass @abstractmethod def isdir(self, path: PathType) -> bool: pass @abstractmethod def listdir(self, path: PathType) -> List[PathType]: pass @abstractmethod def makedirs(self, path: PathType) -> None: pass @abstractmethod def mkdir(self, path: PathType) -> None: pass @abstractmethod def remove(self, path: PathType) -> None: pass @abstractmethod def rename(self, src: PathType, dst: PathType, overwrite: bool = False) -> None: pass @abstractmethod def rmtree(self, path: PathType) -> None: pass @abstractmethod def stat(self, path: PathType) -> Any: pass @abstractmethod def walk(self, top: PathType, topdown: bool = True, onerror: Optional[Callable[..., None]] = None) -> Iterable[Tuple[PathType, List[PathType], List[PathType]]]: pass class BaseArtifactStoreFlavor(Flavor): @property @abstractmethod def name(self) -> Type["BaseArtifactStore"]: pass @property def type(self) -> StackComponentType: return StackComponentType.ARTIFACT_STORE @property def config_class(self) -> Type[StackComponentConfig]: return BaseArtifactStoreConfig @property @abstractmethod def implementation_class(self) -> Type["BaseArtifactStore"]: pass ``` #### Creating a Custom Artifact Store To implement a custom Artifact Store: 1. Inherit from `BaseArtifactStore` and implement the abstract methods. 2. Inherit from `BaseArtifactStoreConfig` and define `SUPPORTED_SCHEMES`. 3. Combine both by inheriting from `BaseArtifactStoreFlavor`. Register your custom flavor using: ```shell zenml artifact-store flavor register ``` Example: ```shell zenml artifact-store flavor register flavors.my_flavor.MyArtifactStoreFlavor ``` #### Important Notes - Ensure ZenML is initialized at the root of your repository to resolve the flavor class correctly. 
- After registration, list available artifact store flavors with: ```shell zenml artifact-store flavor list ``` #### Artifact Visualizations ZenML saves visualizations alongside artifacts in the artifact store. Your custom store must authenticate to the backend without relying on local environment variables. For deployed instances, install necessary dependencies in the environment where ZenML is deployed. For further details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-artifact_stores/#zenml.artifact_stores.base_artifact_store.BaseArtifactStore). ================================================== === File: docs/book/component-guide/artifact-stores/gcp.md === ### Google Cloud Storage (GCS) Artifact Store The GCS Artifact Store is a ZenML integration that utilizes Google Cloud Storage (GCS) for storing ZenML artifacts. It is suitable for projects that require shared storage, remote components, or production-grade MLOps. #### When to Use GCS Artifact Store - **Team Collaboration**: Share pipeline results with team members or stakeholders. - **Remote Components**: Integrate with orchestrators like Kubeflow or Kubernetes. - **Storage Limitations**: Overcome local storage constraints. - **Scalability**: Handle large-scale pipeline demands. #### Deployment Steps 1. **Install GCP Integration**: ```shell zenml integration install gcp -y ``` 2. **Register GCS Artifact Store**: - **Mandatory Parameter**: Root path URI in the format `gs://bucket-name`. ```shell zenml artifact-store register gs_store -f gcp --path=gs://bucket-name zenml stack register custom_stack -a gs_store ... --set ``` #### Authentication Methods - **Implicit Authentication**: Quick local setup using existing Google Cloud CLI credentials. Not suitable for remote components. - **GCP Service Connector (Recommended)**: Provides better security and configuration management. Register using: ```shell zenml service-connector register --type gcp -i ``` For a specific GCS bucket: ```shell zenml service-connector register --type gcp --resource-type gcs-bucket --resource-name --auto-configure ``` #### Connecting GCS Artifact Store 1. **Connect to GCS Bucket**: ```shell zenml artifact-store connect -i ``` Non-interactive: ```shell zenml artifact-store connect --connector ``` 2. **Register Stack**: ```shell zenml stack register -a ... --set ``` #### Using GCP Credentials For enhanced security, create a GCP Service Account Key, store it as a ZenML Secret, and reference it in the Artifact Store configuration: ```shell zenml secret create gcp_secret --token=@path/to/service_account_key.json zenml artifact-store register gcs_store -f gcp --path='gs://your-bucket' --authentication_secret=gcp_secret zenml stack register custom_stack -a gs_store ... --set ``` #### Usage Using the GCS Artifact Store is similar to other Artifact Store flavors in ZenML. For detailed information, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.artifact_stores.gcp_artifact_store). ================================================== === File: docs/book/component-guide/artifact-stores/artifact-stores.md === # Artifact Stores ## Overview The Artifact Store is a key component of the MLOps stack, serving as a persistence layer for artifacts (datasets, models) generated by machine learning pipelines. ZenML serializes and saves data in the Artifact Store, enabling features like caching, provenance tracking, and pipeline reproducibility. 
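To make the persistence layer concrete, the following minimal sketch shows how a step's return value is handed to a materializer and stored in the active stack's Artifact Store, then loaded again for the next step (step and pipeline names are illustrative):

```python
from typing import Dict

from zenml import pipeline, step


@step
def produce_stats() -> Dict[str, float]:
    # The returned dictionary is serialized by a materializer and persisted
    # in the active stack's Artifact Store, so later runs can reuse (cache) it.
    return {"rows": 1000.0, "null_fraction": 0.02}


@step
def consume_stats(stats: Dict[str, float]) -> None:
    # ZenML loads the artifact back from the Artifact Store before this step runs.
    print(f"Dataset has {stats['rows']:.0f} rows")


@pipeline
def stats_pipeline():
    consume_stats(produce_stats())


if __name__ == "__main__":
    stats_pipeline()
```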
## Key Points - **Materializers**: Determine how artifacts are serialized and stored. Most default Materializers use the active Stack's Artifact Store. Custom Materializers can be created for specific storage needs. - **Stack Component**: The Artifact Store must be registered as part of your ZenML Stack. - **Mandatory Component**: Required for storing all artifacts produced by pipeline runs. ## Artifact Store Flavors ZenML supports various Artifact Store flavors: | Artifact Store | Flavor | Integration | URI Schema(s) | Notes | |----------------|--------|-------------|----------------|-------| | Local | `local`| Built-in | None | Default, local filesystem storage. | | Amazon S3 | `s3` | `s3` | `s3://` | Uses AWS S3 as backend. | | Google Cloud | `gcp` | `gcp` | `gs://` | Uses Google Cloud Storage. | | Azure | `azure`| `azure` | `abfs://`, `az://`| Uses Azure Blob Storage. | | Custom | _custom_| | _custom_ | Extend the Artifact Store abstraction. | To list available flavors: ```shell zenml artifact-store flavor list ``` ## Configuration Each Artifact Store requires a `path` attribute (URI) when registered: ```shell zenml artifact-store register s3_store -f s3 --path s3://my_bucket ``` ## Usage Typically, users interact with higher-level APIs for storing and retrieving artifacts. Direct interaction with the Artifact Store API is necessary for: - Implementing custom Materializers. - Storing custom objects. ### Artifact Store API All Artifact Stores implement a standard IO API similar to a file system. Access low-level API via: - `zenml.io.fileio`: Utilities for manipulating objects (e.g., `open`, `copy`, `remove`). - `zenml.utils.io_utils`: Higher-level utilities for transferring objects. **Example: Writing to the Artifact Store** ```python import os from zenml.client import Client from zenml.io import fileio root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") fileio.makedirs(os.path.dirname(artifact_uri)) with fileio.open(artifact_uri, "w") as f: f.write("example artifact") ``` **Example: Reading from the Artifact Store** ```python import os from zenml.client import Client from zenml.utils import io_utils root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.txt") artifact_contents = io_utils.read_file_contents_as_string(artifact_uri) ``` **Temporary File Usage** For serialization with external libraries: ```python import os import tempfile from zenml.client import Client from zenml.io import fileio root_path = Client().active_stack.artifact_store.path artifact_uri = os.path.join(root_path, "artifacts", "examples", "test.json") with tempfile.NamedTemporaryFile(mode="w", delete=True) as f: # Save to temporary file and copy to artifact store external_lib.external_object.save_to_file(f.name) fileio.copy(f.name, artifact_uri) ``` ================================================== === File: docs/book/component-guide/data-validators/custom.md === ### Developing a Custom Data Validator in ZenML #### Overview Before creating a custom data validator, review the [general guide to writing custom component flavors in ZenML](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md) for foundational knowledge on component flavors.
#### Important Notes - **Base Abstraction in Progress**: The base abstraction for Data Validators is under development. Avoid extending them until the update is complete. You can use existing data validator flavors or implement your own, but be prepared for potential refactoring. #### Steps to Build a Custom Data Validator 1. **Create a Class**: Inherit from the `BaseDataValidator` class and override necessary abstract methods based on the library/service you want to integrate. 2. **Configuration Class**: If needed, create a class inheriting from `BaseDataValidatorConfig`. 3. **Combine Classes**: Inherit from `BaseDataValidatorFlavor` to bring both classes together. 4. **Standard Steps (Optional)**: Provide standard steps for easy integration into pipelines. #### Registration Register your custom data validator flavor using the CLI with dot notation: ```shell zenml data-validator flavor register ``` For example, if your flavor class is defined in `flavors/my_flavor.py`: ```shell zenml data-validator flavor register flavors.my_flavor.MyDataValidatorFlavor ``` #### Best Practices - Initialize ZenML at the root of your repository using `zenml init` to ensure proper resolution of the flavor class. - After registration, verify the new flavor: ```shell zenml data-validator flavor list ``` #### Workflow Integration - **CustomDataValidatorFlavor**: Used during flavor creation via CLI. - **CustomDataValidatorConfig**: Validates user input during stack component registration. - **CustomDataValidator**: Engaged when the component is in use, allowing separation of flavor configuration from implementation. This structure enables registration of flavors and components even if major dependencies are not installed locally. ================================================== === File: docs/book/component-guide/data-validators/great-expectations.md === # Great Expectations with ZenML ## Overview Great Expectations is an open-source library for data quality checks, profiling, and documentation. The ZenML integration allows you to run data quality tests on `pandas.DataFrame` datasets in your pipelines, with results documented for visual interpretation. ## Use Cases Utilize Great Expectations when you need: - **Data Profiling**: Automatically generate validation rules (Expectations) from dataset properties. - **Data Quality Checks**: Validate datasets against predefined or inferred Expectations. - **Data Documentation**: Maintain human-readable documentation of validation rules and results. ## Deployment To use the Great Expectations Data Validator in ZenML, install the integration: ```shell zenml integration install great_expectations -y ``` ### Registering the Data Validator For a quick setup, register the Data Validator and stack: ```shell zenml data-validator register ge_data_validator --flavor=great_expectations zenml stack register custom_stack -dv ge_data_validator ... --set ``` ### Configuration Options 1. **Let ZenML Manage Configuration**: ZenML initializes and manages the Great Expectations configuration, storing all data in the ZenML Artifact Store. 2. **Use Existing Configuration**: Point to your existing `great_expectations.yaml` file: ```shell zenml data-validator register ge_data_validator --flavor=great_expectations --context_root_dir=/path/to/my/great_expectations ``` 3. 
**Migrate Configuration to ZenML**: Load your existing configuration using the `@` operator: ```shell zenml data-validator register ge_data_validator --flavor=great_expectations --context_config=@/path/to/my/great_expectations/great_expectations.yaml ``` ### Advanced Configuration - `configure_zenml_stores`: Automatically update configuration to use ZenML Artifact Store. - `configure_local_docs`: Set up a local Data Docs site for visualization. ## Usage in Pipelines ### Data Profiler Step Automatically generate an Expectation Suite from a `pandas.DataFrame`: ```python from zenml.integrations.great_expectations.steps import great_expectations_profiler_step ge_profiler_step = great_expectations_profiler_step.with_options( parameters={ "expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df", } ) ``` ### Data Validator Step Validate a dataset against an existing Expectation Suite: ```python from zenml.integrations.great_expectations.steps import great_expectations_validator_step ge_validator_step = great_expectations_validator_step.with_options( parameters={ "expectation_suite_name": "steel_plates_suite", "data_asset_name": "steel_plates_train_df", } ) ``` ### Example Pipeline A simple pipeline for profiling and validation: ```python from zenml import pipeline @pipeline def profiling_pipeline(): dataset, _ = importer() ge_profiler_step(dataset) @pipeline def validation_pipeline(): dataset, condition = importer() results = ge_validator_step(dataset, condition) ``` ## Direct Usage of Great Expectations You can directly interact with Great Expectations while using ZenML's managed context: ```python import great_expectations as ge from great_expectations.core import ExpectationSuite from zenml import step from zenml.integrations.great_expectations.data_validators import GreatExpectationsDataValidator @step def create_custom_expectation_suite() -> ExpectationSuite: context = GreatExpectationsDataValidator.get_data_context() suite = context.create_expectation_suite("custom_suite") # Add expectations... context.save_expectation_suite(suite) return suite ``` ## Visualizing Results Results can be visualized in the ZenML dashboard or using the `artifact.visualize()` method in Jupyter notebooks: ```python from zenml.client import Client def visualize_results(pipeline_name: str, step_name: str) -> None: pipeline = Client().get_pipeline(pipeline_name) last_run = pipeline.last_run validation_step = last_run.steps[step_name] validation_step.visualize() ```
**Multi-Model Performance Reports**: Summarize performance across multiple models. **Installation**: To install the Deepchecks integration: ```shell zenml integration install deepchecks -y ``` **Registering the Data Validator**: ```shell zenml data-validator register deepchecks_data_validator --flavor=deepchecks zenml stack register custom_stack -dv deepchecks_data_validator ... --set ``` **Usage**: Deepchecks validation checks are categorized based on input requirements: - **Data Integrity Checks**: Single dataset input. - **Data Drift Checks**: Two datasets (target and reference). - **Model Validation Checks**: Single dataset and model input. - **Model Drift Checks**: Two datasets and model input. **Standard Steps**: 1. `deepchecks_data_integrity_check_step`: For data integrity tests. 2. `deepchecks_data_drift_check_step`: For data drift tests. 3. `deepchecks_model_validation_check_step`: For model performance tests. 4. `deepchecks_model_drift_check_step`: For model comparison tests. **Example of Data Integrity Check Step**: ```python from zenml import pipeline from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step data_validator = deepchecks_data_integrity_check_step.with_options( parameters=dict(dataset_kwargs=dict(label="target", cat_features=[])), ) @pipeline def data_validation_pipeline(): df_train, df_test = data_loader() data_validator(dataset=df_train) data_validation_pipeline() ``` **Customizing Checks**: You can specify checks and additional parameters: ```python from zenml.integrations.deepchecks.steps import deepchecks_data_integrity_check_step from zenml.integrations.deepchecks.validation_checks import DeepchecksDataIntegrityCheck deepchecks_data_integrity_check_step( check_list=[DeepchecksDataIntegrityCheck.TABULAR_MIXED_DATA_TYPES], dataset_kwargs=dict(label='class', cat_features=['country', 'state']), check_kwargs={...}, ) ``` **Docker Configuration for Remote Orchestrators**: For remote orchestrators, extend the Docker image to include required binaries: ```shell # deepchecks-zenml.Dockerfile ARG ZENML_VERSION=0.20.0 FROM zenmldocker/zenml:${ZENML_VERSION} AS base RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y ``` **Visualizing Results**: Results can be visualized in the ZenML dashboard or Jupyter notebooks: ```python from zenml.client import Client def visualize_results(pipeline_name: str, step_name: str) -> None: pipeline = Client().get_pipeline(pipeline_name) last_run = pipeline.last_run step = last_run.steps[step_name] step.visualize() ``` ================================================== === File: docs/book/component-guide/data-validators/data-validators.md === # Data Validators Data Validators are essential tools in machine learning (ML) for ensuring data quality and monitoring model performance throughout the ML project lifecycle. They help prevent issues that arise from poor data quality, which can lead to unreliable model outcomes. Key functionalities of Data Validators include data profiling, integrity testing, and drift detection, applicable during data ingestion, model training, evaluation, and inference. ## Key Concepts - **Data Validator**: An optional component in ZenML stacks that monitors data quality and model performance. - **Data Profiles**: Generated reports that are versioned and stored in the Artifact Store for later retrieval and visualization. ## When to Use Data Validators - To log data quality and model performance during development.
- For pipelines that regularly ingest new data, to conduct integrity checks. - In continuous training pipelines, to compare new training data and model performance against references. - For batch inference or online inference pipelines, to analyze data drift and detect discrepancies. ## Data Validator Flavors ZenML offers various Data Validators, each with unique features and compatibility: | Data Validator | Features | Data Types | Model Types | Notes | Flavor/Integration | |----------------------|-------------------------------------------------|---------------------------------------------|-----------------------------------------|-----------------------------------------------------|--------------------------| | **Deepchecks** | data quality, drift, performance | tabular: `pandas.DataFrame`, CV: `torch.utils.data.dataloader.DataLoader` | tabular: `sklearn.base.ClassifierMixin`, CV: `torch.nn.Module` | Integrate Deepchecks for validation tests | `deepchecks` | | **Evidently** | data quality, drift, performance | tabular: `pandas.DataFrame` | N/A | Generate reports and visualizations | `evidently` | | **Great Expectations**| profiling, data quality | tabular: `pandas.DataFrame` | N/A | Perform testing and profiling | `great_expectations` | | **Whylogs/WhyLabs** | data drift | tabular: `pandas.DataFrame` | N/A | Generate data profiles and upload to WhyLabs | `whylogs` | To view available Data Validator flavors, use: ```shell zenml data-validator flavor list ``` ## How to Use Data Validators 1. Configure and add a Data Validator to your ZenML stack. 2. Utilize built-in validation steps in your pipelines or directly use libraries in custom steps to return results (data profiles, test reports) as artifacts. 3. Access validation artifacts in subsequent pipeline steps or fetch them later for processing or visualization. For detailed usage instructions, refer to the documentation for the specific Data Validator flavor you are using in your ZenML stack. ================================================== === File: docs/book/component-guide/data-validators/evidently.md === ### Summary of Evidently Data Validator Documentation **Overview**: The Evidently Data Validator, integrated with ZenML, utilizes the Evidently library for data quality, data drift, model drift, and performance analysis. It generates reports and checks that can trigger automated corrective actions or provide visual insights. **Use Cases**: Evidently is beneficial for: - Monitoring and debugging ML models by analyzing input data. - Running validation reports, including data integrity, model evaluation, data drift, and performance comparisons. - Supporting tabular data in `pandas.DataFrame` or CSV formats for regression and classification tasks. **Key Features**: - **Data Quality Reports**: Analyze feature statistics and compare datasets. - **Data Drift Reports**: Detect changes in feature distributions. - **Target Drift Reports**: Identify changes in target functions or model predictions. - **Performance Reports**: Evaluate model performance against past or alternative models. **Deployment**: To install the Evidently integration: ```shell zenml integration install evidently -y ``` To register the Data Validator: ```shell zenml data-validator register evidently_data_validator --flavor=evidently zenml stack register custom_stack -dv evidently_data_validator ... --set ``` **Usage**: 1. **Data Profiling**: Use `evidently_report_step` to generate reports from datasets. 
- Requires input datasets, and may need additional `target` and `prediction` columns. - Example configuration: ```python from zenml.integrations.evidently.steps import evidently_report_step text_data_report = evidently_report_step.with_options( parameters=dict( column_mapping=EvidentlyColumnMapping( target="Rating", numerical_features=["Age", "Positive_Feedback_Count"], categorical_features=["Division_Name", "Department_Name", "Class_Name"], text_features=["Review_Text", "Title"], ), metrics=[ EvidentlyMetricConfig.metric("DataQualityPreset"), EvidentlyMetricConfig.metric("TextOverviewPreset", column_name="Review_Text"), ], download_nltk_data=True, ), ) ``` 2. **Data Validation**: Similar to profiling, but for running validation tests. - Example configuration: ```python from zenml.integrations.evidently.steps import evidently_test_step text_data_test = evidently_test_step.with_options( parameters=dict( column_mapping=EvidentlyColumnMapping( target="Rating", numerical_features=["Age", "Positive_Feedback_Count"], categorical_features=["Division_Name", "Department_Name", "Class_Name"], text_features=["Review_Text", "Title"], ), tests=[ EvidentlyTestConfig.test("DataQualityTestPreset"), ], download_nltk_data=True, ), ) ``` 3. **Direct Usage of Evidently**: You can call Evidently functions directly in custom steps. ```python from evidently.report import Report report = Report(metrics=[metric_preset.DataQualityPreset()]) report.run(current_data=dataset, reference_data=dataset) ``` **Visualization**: Reports can be visualized in the ZenML dashboard or Jupyter notebooks using the `artifact.visualize()` method. **Conclusion**: The Evidently Data Validator provides a robust framework for monitoring and validating data quality and model performance, facilitating the integration of data validation into ML pipelines with minimal configuration. ================================================== === File: docs/book/component-guide/data-validators/whylogs.md === ### Summary of Whylogs/WhyLabs Profiling Documentation #### Overview The **whylogs/WhyLabs Data Validator** integrates with ZenML to generate and track data profiles from pipelines using the **whylogs** library. These profiles provide descriptive statistics of data, enabling automated corrective actions and visual interpretations. #### Use Cases Utilize Whylogs for: - **Data Quality**: Validate model input data quality. - **Data Drift**: Detect shifts in input features. - **Model Drift**: Identify training-serving skew and performance degradation. Currently, the integration supports **tabular data** in `pandas.DataFrame` format. #### Deployment To deploy the Whylogs Data Validator: 1. Install the integration: ```shell zenml integration install whylogs -y ``` 2. Register the Data Validator: ```shell zenml data-validator register whylogs_data_validator --flavor=whylogs zenml stack register custom_stack -dv whylogs_data_validator ... --set ``` For WhyLabs logging, create a ZenML Secret for authentication: ```shell zenml secret create whylabs_secret \ --whylabs_default_org_id= \ --whylabs_api_key= zenml data-validator register whylogs_data_validator --flavor=whylogs \ --authentication_secret=whylabs_secret ``` Enable logging in custom pipeline steps by setting `upload_to_whylabs=True`. #### Usage Whylogs profiling functions generate a `DatasetProfileView` from a `pandas.DataFrame`. There are three usage methods: 1. **Standard Step**: Use `WhylogsProfilerStep` for easy integration. 
```python from zenml.integrations.whylogs.steps import get_whylogs_profiler_step train_data_profiler = get_whylogs_profiler_step(dataset_id="model-2") ``` 2. **Custom Step with Data Validator**: Directly call validation methods. ```python from zenml.integrations.whylogs.data_validators.whylogs_data_validator import WhylogsDataValidator profile = WhylogsDataValidator.get_active_data_validator().data_profiling(dataset) ``` 3. **Direct Library Use**: Utilize the whylogs library directly. ```python import whylogs as why profile = why.log(dataset).profile().view() ``` #### Visualizing Profiles Visualize profiles in the ZenML dashboard or Jupyter notebooks using: ```python from zenml.client import Client def visualize_statistics(step_name: str, reference_step_name: Optional[str] = None): pipe = Client().get_pipeline(pipeline="data_profiling_pipeline") whylogs_step = pipe.last_run.steps[step_name] whylogs_step.visualize() visualize_statistics("data_loader") ``` #### Conclusion For detailed information on configuration parameters and additional features, refer to the official Whylogs documentation and ZenML SDK docs. ================================================== === File: docs/book/component-guide/orchestrators/local.md === ### Local Orchestrator Overview The local orchestrator in ZenML allows for running pipelines on your local machine without additional setup, making it ideal for beginners and quick debugging. #### When to Use - For newcomers to ZenML wanting to run pipelines without cloud infrastructure. - When developing new pipelines for rapid experimentation and debugging. #### Deployment The local orchestrator is built into ZenML and requires no extra setup. #### Usage To register and activate the local orchestrator in your stack, use the following commands: ```shell zenml orchestrator register --flavor=local zenml stack register -o ... --set ``` Run your ZenML pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` For detailed configuration options, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local.local_orchestrator.LocalOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/custom.md === # Develop a Custom Orchestrator ## Overview To create a custom orchestrator in ZenML, it is essential to understand the component flavor concepts outlined in the [general guide](../../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). 
## Base Implementation ZenML's `BaseOrchestrator` abstracts orchestration details and provides a simplified interface: ```python from abc import ABC, abstractmethod from typing import Any, Dict, Type from zenml.models import PipelineDeploymentResponseModel from zenml.enums import StackComponentType from zenml.stack import StackComponent, StackComponentConfig, Stack, Flavor class BaseOrchestratorConfig(StackComponentConfig): """Base class for all ZenML orchestrator configurations.""" class BaseOrchestrator(StackComponent, ABC): """Base class for all ZenML orchestrators.""" @abstractmethod def prepare_or_run_pipeline(self, deployment: PipelineDeploymentResponseModel, stack: Stack, environment: Dict[str, str]) -> Any: """Prepares and runs the pipeline or returns an intermediate representation.""" @abstractmethod def get_orchestrator_run_id(self) -> str: """Returns a unique run ID for the active orchestrator run.""" class BaseOrchestratorFlavor(Flavor): """Base orchestrator for all ZenML orchestrator flavors.""" @property @abstractmethod def name(self): """Returns the name of the flavor.""" @property def type(self) -> StackComponentType: return StackComponentType.ORCHESTRATOR @property def config_class(self) -> Type[BaseOrchestratorConfig]: return BaseOrchestratorConfig @property @abstractmethod def implementation_class(self) -> Type["BaseOrchestrator"]: """Implementation class for this flavor.""" ``` ## Building a Custom Orchestrator 1. **Create a Class**: Inherit from `BaseOrchestrator` and implement `prepare_or_run_pipeline(...)` and `get_orchestrator_run_id()`. 2. **Configuration Class**: Inherit from `BaseOrchestratorConfig` for any configuration parameters. 3. **Flavor Class**: Inherit from `BaseOrchestratorFlavor`, providing a name for the flavor. Register the flavor via CLI: ```shell zenml orchestrator flavor register ``` For example: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` ### Important Notes - Ensure ZenML is initialized at the root of your repository for proper flavor resolution. - After registration, check available flavors: ```shell zenml orchestrator flavor list ``` ## Implementation Guide 1. **Orchestrator Class**: Inherit from `BaseOrchestrator` or `ContainerizedOrchestrator` if using Docker images. 2. **Implement Methods**: - `prepare_or_run_pipeline(...)`: Schedule and run the pipeline, ensuring correct command execution and environment variable handling. - `get_orchestrator_run_id()`: Return a unique ID for each pipeline run. ### Optional Features - **Scheduling**: Handle `deployment.schedule` if supported; otherwise, log a warning or raise an exception. - **Resource Specification**: Manage CPU, GPU, or memory settings from `step.config.resource_settings`. ### Code Sample ```python from typing import Dict from zenml.entrypoints import StepEntrypointConfiguration from zenml.models import PipelineDeploymentResponseModel from zenml.orchestrators import ContainerizedOrchestrator from zenml.stack import Stack class MyOrchestrator(ContainerizedOrchestrator): def get_orchestrator_run_id(self) -> str: ... def prepare_or_run_pipeline(self, deployment: PipelineDeploymentResponseModel, stack: Stack, environment: Dict[str, str]) -> None: if deployment.schedule: ... 
for step_name, step in deployment.step_configurations.items(): image = self.get_image(deployment, step_name) command = StepEntrypointConfiguration.get_entrypoint_command() arguments = StepEntrypointConfiguration.get_entrypoint_arguments(step_name, deployment.id) upstream_steps = step.spec.upstream_steps step_settings = self.get_settings(step) if self.requires_resources_in_orchestration_environment(step): resources = step.config.resource_settings ``` ## Enabling CUDA for GPU Hardware To run steps on a GPU, follow the instructions on [CUDA setup](../../how-to/pipeline-development/training-with-gpus/README.md) to ensure proper configuration and acceleration. ================================================== === File: docs/book/component-guide/orchestrators/hyperai.md === # HyperAI Orchestrator Summary The HyperAI Orchestrator is a component of the HyperAI cloud compute platform designed for deploying AI pipelines on HyperAI instances. It is specifically intended for remote ZenML deployments and may cause issues with local deployments. ## When to Use - For managed pipeline execution. - If you are a HyperAI customer. ## Prerequisites 1. A running HyperAI instance accessible via the internet with SSH key-based access. 2. A recent version of Docker, including Docker Compose. 3. The appropriate NVIDIA Driver installed (if not pre-installed). 4. The NVIDIA Container Toolkit installed and configured (optional for GPU usage). ## Functionality The orchestrator utilizes Docker Compose to create and execute machine learning pipelines. It generates a Docker Compose file, creating services for each ZenML pipeline step and ensuring steps run only after successful upstream completion. It can connect to a container registry for Docker image transfers. ### Scheduled Pipelines Supports: - **Cron expressions** for periodic runs (requires `crontab`). - **One-time runs** at specified times (requires `at`). ## Deployment Steps 1. Configure a HyperAI Service Connector in ZenML with credentials for the HyperAI instance. 2. Register the orchestrator and link it to a stack containing a container registry and image builder. ### Service Connector Registration Example ```shell zenml service-connector register --type=hyperai --auth-method=rsa-key --base64_ssh_key= --hostnames=, --username= ``` ### Orchestrator Registration Example ```shell zenml orchestrator register --flavor=hyperai zenml stack register -o ... --set ``` ### Running a Pipeline ```shell python file_that_runs_a_zenml_pipeline.py ``` ## GPU Configuration For GPU usage, follow specific instructions to enable CUDA for full acceleration. This summary captures the essential details for understanding and using the HyperAI Orchestrator effectively. ================================================== === File: docs/book/component-guide/orchestrators/orchestrators.md === # Orchestrators in ZenML ## Overview The orchestrator is a crucial component in the MLOps stack, responsible for executing machine learning pipelines. It ensures that pipeline steps run only when all their required inputs are available. ### Key Features - **Artifact Storage**: The orchestrator stores all artifacts from pipeline runs. - **Mandatory Component**: It must be configured in all ZenML stacks. ### Orchestrator Flavors ZenML provides various orchestrators, including: | Orchestrator | Flavor | Integration | Notes | |-------------------------------|-----------------|---------------|--------------------------------| | LocalOrchestrator | `local` | _built-in_ | Runs pipelines locally. 
| | LocalDockerOrchestrator | `local_docker` | _built-in_ | Runs pipelines locally using Docker. | | KubernetesOrchestrator | `kubernetes` | `kubernetes` | Runs pipelines in Kubernetes. | | KubeflowOrchestrator | `kubeflow` | `kubeflow` | Runs pipelines using Kubeflow. | | VertexOrchestrator | `vertex` | `gcp` | Runs pipelines in Vertex AI. | | SagemakerOrchestrator | `sagemaker` | `aws` | Runs pipelines in Sagemaker. | | AzureMLOrchestrator | `azureml` | `azure` | Runs pipelines in AzureML. | | TektonOrchestrator | `tekton` | `tekton` | Runs pipelines using Tekton. | | AirflowOrchestrator | `airflow` | `airflow` | Runs pipelines using Airflow. | | SkypilotAWSOrchestrator | `vm_aws` | `skypilot[aws]` | Runs pipelines in AWS VMs using SkyPilot. | | SkypilotGCPOrchestrator | `vm_gcp` | `skypilot[gcp]` | Runs pipelines in GCP VMs using SkyPilot. | | SkypilotAzureOrchestrator | `vm_azure` | `skypilot[azure]` | Runs pipelines in Azure VMs using SkyPilot. | | HyperAIOrchestrator | `hyperai` | `hyperai` | Runs pipelines in HyperAI.ai instances. | | Custom Implementation | _custom_ | | Extend the orchestrator abstraction. | To view available orchestrator flavors, use: ```shell zenml orchestrator flavor list ``` ### Usage You do not need to interact directly with the orchestrator in your code. Simply ensure the desired orchestrator is part of your active ZenML stack and run your pipeline with: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Inspecting Runs For orchestrators with a UI (e.g., Kubeflow, Airflow, Vertex), retrieve the UI URL for a specific pipeline run: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ### Specifying Resources To execute steps on specific hardware, specify resources in your steps. Refer to the runtime configuration guide for details. If unsupported, consider using step operators. ================================================== === File: docs/book/component-guide/orchestrators/local-docker.md === ### Local Docker Orchestrator The Local Docker Orchestrator is a built-in orchestrator in ZenML that runs pipelines locally using Docker. #### When to Use - For running pipeline steps in isolated local environments. - For debugging pipeline issues without incurring costs for remote infrastructure. #### Deployment Ensure Docker is installed and running. #### Usage To register and activate the local Docker orchestrator in your stack: ```shell zenml orchestrator register --flavor=local_docker zenml stack register -o ... --set ``` Run your ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` #### Additional Configuration You can configure the Local Docker orchestrator using `LocalDockerOrchestratorSettings`. Refer to the [SDK docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-orchestrators/#zenml.orchestrators.local_docker.local_docker_orchestrator.LocalDockerOrchestratorSettings) for attributes and [runtime configuration](../../how-to/pipeline-development/use-configuration-files/runtime-configuration.md) for specifying settings. 
For example, to specify CPU count (Windows only): ```python from zenml import step, pipeline from zenml.orchestrators.local_docker.local_docker_orchestrator import LocalDockerOrchestratorSettings @step def return_one() -> int: return 1 settings = { "orchestrator": LocalDockerOrchestratorSettings(run_args={"cpu_count": 3}) } @pipeline(settings=settings) def simple_pipeline(): return_one() ``` #### Enabling CUDA for GPU To run steps on a GPU, follow the instructions [here](../../how-to/pipeline-development/training-with-gpus/README.md) to enable CUDA for GPU acceleration. ================================================== === File: docs/book/component-guide/orchestrators/skypilot-vm.md === # SkyPilot VM Orchestrator Overview The **SkyPilot VM Orchestrator** is a ZenML integration that enables provisioning and management of virtual machines (VMs) across various cloud providers supported by the SkyPilot framework. It simplifies running machine learning workloads on the cloud, providing cost efficiency, high GPU availability, and managed execution. It is recommended for users needing GPU access without the complexities of cloud infrastructure management. **Warning:** This component is intended for remote ZenML deployments only; using it locally may cause issues. ## Use Cases Utilize the SkyPilot VM Orchestrator if: - You aim to maximize cost savings via spot VMs and automatic selection of the cheapest options. - You require high GPU availability across multiple zones/regions/clouds. - You prefer not to maintain Kubernetes or pay for managed solutions like Sagemaker. ## Functionality The orchestrator uses the SkyPilot framework for VM provisioning and scaling, supporting both on-demand and managed spot VMs. It includes: - An optimizer for selecting the most cost-effective VM options. - An autostop feature to clean up idle clusters, reducing unnecessary costs. **Note:** Step-specific resources can be configured individually. For GPU support in Docker containers, use `docker_run_args=["--gpus=all"]`. ## Deployment No special steps are needed for deployment; simply use it like any other ZenML orchestrator. Ensure you have the necessary permissions for VM provisioning and configure the SkyPilot orchestrator using service connectors. **Supported Platforms:** AWS, GCP, Azure. ## Installation To use the SkyPilot VM Orchestrator, install the appropriate SkyPilot integration for your cloud provider: ```shell # For AWS pip install "zenml[connectors-aws]" zenml integration install aws skypilot_aws # For GCP pip install "zenml[connectors-gcp]" zenml integration install gcp skypilot_gcp # For Azure pip install "zenml[connectors-azure]" zenml integration install azure skypilot_azure ``` ## Configuration Examples ### AWS 1. Install integration: ```shell pip install "zenml[connectors-aws]" zenml integration install aws skypilot_aws ``` 2. Register service connector: ```shell zenml service-connector register aws-skypilot-vm --type aws --region=us-east-1 --auto-configure ``` 3. Register orchestrator: ```shell zenml orchestrator register --flavor vm_aws zenml orchestrator connect --connector aws-skypilot-vm ``` ### GCP 1. Install integration: ```shell pip install "zenml[connectors-gcp]" zenml integration install gcp skypilot_gcp ``` 2. Register service connector: ```shell gcloud auth application-default login zenml service-connector register gcp-skypilot-vm -t gcp --auth-method user-account --auto-configure ``` 3. 
Register orchestrator: ```shell zenml orchestrator register --flavor vm_gcp zenml orchestrator connect --connector gcp-skypilot-vm ``` ### Azure 1. Install integration: ```shell pip install "zenml[connectors-azure]" zenml integration install azure skypilot_azure ``` 2. Register service connector: ```shell zenml service-connector register azure-skypilot-vm -t azure --auth-method access-token --auto-configure ``` 3. Register orchestrator: ```shell zenml orchestrator register --flavor vm_azure zenml orchestrator connect --connector azure-skypilot-vm ``` ### Lambda Labs For Lambda Labs, install the integration: ```shell zenml integration install skypilot_lambda ``` Register the orchestrator: ```shell zenml secret create lambda_api_key --scope user --api_key= zenml orchestrator register --flavor vm_lambda --api_key={{lambda_api_key.api_key}} ``` ### Kubernetes 1. Install integration: ```shell zenml integration install skypilot_kubernetes ``` 2. Register service connector: ```shell zenml service-connector register kubernetes-skypilot --type kubernetes -i ``` 3. Register orchestrator: ```shell zenml orchestrator register --flavor sky_kubernetes zenml orchestrator connect --connector kubernetes-skypilot ``` ## Additional Configuration You can configure various attributes for the orchestrator, such as: - `instance_type`, `cpus`, `memory`, `accelerators`, `region`, `zone`, `image_id`, `disk_size`, `disk_tier`, `cluster_name`, `idle_minutes_to_autostop`, and `docker_run_args`. ### Example Configuration for AWS ```python from zenml.integrations.skypilot_aws.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings skypilot_settings = SkypilotAWSOrchestratorSettings( cpus="2", memory="16", accelerators="V100:2", use_spot=True, region="us-west-1", cluster_name="my_cluster", idle_minutes_to_autostop=60, docker_run_args=["--gpus=all"] ) @pipeline(settings={"orchestrator": skypilot_settings}) def my_pipeline(): # Pipeline implementation pass ``` ## Step-Specific Resources You can configure resources for each pipeline step individually, allowing for optimized resource allocation. If no specific settings are provided, the orchestrator defaults to the general settings. To disable step-based settings: ```shell zenml orchestrator update --disable_step_based_settings=True ``` ### Example for Step-Specific Resources ```python @step(settings={"orchestrator": high_resource_settings}) def my_resource_intensive_step(): # Step implementation pass ``` This orchestrator provides flexibility and control over resource allocation, optimizing both performance and cost for machine learning workloads. For detailed attributes and configurations, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-skypilot/#zenml.integrations.skypilot.flavors.skypilot_orchestrator_base_vm_flavor.SkypilotBaseOrchestratorSettings). ================================================== === File: docs/book/component-guide/orchestrators/sagemaker.md === ### AWS Sagemaker Orchestrator **Overview**: The ZenML Sagemaker orchestrator utilizes [Sagemaker Pipelines](https://aws.amazon.com/sagemaker/pipelines) for orchestrating machine learning workflows on AWS, providing a serverless, production-ready solution with minimal setup. **When to Use**: - If you are using AWS. - For a proven, production-grade orchestrator with UI tracking. - For managed and serverless pipeline execution. 
**Functionality**: Each ZenML pipeline step corresponds to a SageMaker `PipelineStep`, which can include Sagemaker Processing or Training jobs. ### Deployment Requirements 1. **ZenML Deployment**: Deploy ZenML to the cloud, ideally in the same region as Sagemaker. 2. **Permissions**: Ensure relevant IAM roles have `AmazonSageMakerFullAccess` and `sagemaker.amazonaws.com` as a Principal Service. 3. **Integrations**: Install ZenML `aws` and `s3` integrations: ```shell zenml integration install aws s3 ``` 4. **Docker**: Must be installed and running. 5. **Artifact Store**: A remote artifact store configured with `authentication_secret`. 6. **Container Registry**: A remote container registry configured. ### Authentication Methods 1. **Service Connector** (Recommended): ```shell zenml service-connector register --type aws -i zenml orchestrator register --flavor=sagemaker --execution_role= zenml orchestrator connect --connector zenml stack register -o ... --set ``` 2. **Explicit Authentication**: ```shell zenml orchestrator register --flavor=sagemaker --execution_role= --aws_access_key_id=... --aws_secret_access_key=... --region=... zenml stack register -o ... --set ``` 3. **Implicit Authentication**: Uses the `default` profile in the local AWS configuration file. ### Running Pipelines To execute a ZenML pipeline: ```shell python run.py ``` Output will indicate the status of the pipeline run. ### Sagemaker UI Access the Sagemaker Pipelines UI via Sagemaker Studio to view logs and details about pipeline runs. ### Debugging If a pipeline fails before starting, check the SageMaker UI for error messages and logs. ### Configuration - **Pipeline/Step Level Configuration**: Customize settings using `SagemakerOrchestratorSettings` for specific steps or the entire pipeline. - **Warm Pools**: Enable to reduce startup time: ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=300) ``` ### S3 Data Access - **Importing Data**: ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(input_data_s3_mode="File", input_data_s3_uri="s3://bucket/folder") ``` - **Exporting Data**: ```python sagemaker_orchestrator_settings = SagemakerOrchestratorSettings(output_data_s3_mode="EndOfJob", output_data_s3_uri="s3://results-bucket/results") ``` ### Tagging Add tags to pipelines and jobs for better resource management: ```python pipeline_settings = SagemakerOrchestratorSettings(pipeline_tags={"project": "my-ml-project"}) ``` ### Scheduling Pipelines Use SageMaker's scheduling capabilities with cron expressions or fixed intervals: ```python my_scheduled_pipeline.with_options(schedule=Schedule(cron_expression="0/5 * * * ? *"))() ``` ### IAM Permissions for Scheduling Ensure the IAM role has permissions for EventBridge Scheduler and SageMaker jobs. Configure trust relationships for the `scheduler_role`: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "scheduler.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } ``` ### Conclusion The ZenML Sagemaker orchestrator allows efficient orchestration of ML workflows on AWS, leveraging Sagemaker's capabilities while providing flexibility in configuration, scheduling, and resource management. 
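To make the pipeline/step-level configuration described above concrete, here is a minimal sketch of attaching `SagemakerOrchestratorSettings` to a step and a pipeline; the import path is assumed from the ZenML `aws` integration and the step name is illustrative.

```python
from zenml import pipeline, step
from zenml.integrations.aws.flavors.sagemaker_orchestrator_flavor import (
    SagemakerOrchestratorSettings,
)

# Keep a Warm Pool alive between jobs, as described in the Configuration section.
sagemaker_settings = SagemakerOrchestratorSettings(keep_alive_period_in_seconds=300)

@step(settings={"orchestrator": sagemaker_settings})
def train_model() -> None:
    ...

@pipeline(settings={"orchestrator": sagemaker_settings})
def my_pipeline():
    train_model()
```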
================================================== === File: docs/book/component-guide/orchestrators/kubeflow.md === ### Kubeflow Orchestrator Overview The Kubeflow orchestrator is a component of ZenML that utilizes [Kubeflow Pipelines](https://www.kubeflow.org/docs/components/pipelines/overview/) for running pipelines. It is designed for remote ZenML deployments and should not be used with local deployments. #### When to Use Use the Kubeflow orchestrator if: - You need a production-grade orchestrator. - You want a UI for tracking pipeline runs. - You are comfortable with Kubernetes setup and maintenance. - You can deploy and maintain Kubeflow Pipelines. #### Deployment Steps To deploy ZenML pipelines on Kubeflow, set up a Kubernetes cluster and install Kubeflow Pipelines. Here’s a brief guide for different cloud providers: **AWS:** 1. Set up an EKS cluster. 2. Install AWS CLI and configure it. 3. Install `kubectl` and configure it: ```powershell aws eks --region REGION update-kubeconfig --name CLUSTER_NAME ``` 4. Install Kubeflow Pipelines. **GCP:** 1. Set up a GKE cluster. 2. Install Google Cloud CLI and configure it. 3. Install `kubectl` and configure it: ```powershell gcloud container clusters get-credentials CLUSTER_NAME ``` 4. Install Kubeflow Pipelines. **Azure:** 1. Set up an AKS cluster. 2. Install `az` CLI and configure it. 3. Install `kubectl` and configure it: ```powershell az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME ``` 4. Install Kubeflow Pipelines. **Other Kubernetes:** 1. Set up a Kubernetes cluster. 2. Install `kubectl` and configure it. 3. Install Kubeflow Pipelines. #### Usage Requirements To use the Kubeflow orchestrator: - A Kubernetes cluster with Kubeflow Pipelines installed. - A remote ZenML server. - Install the ZenML `kubeflow` integration: ```shell zenml integration install kubeflow ``` - Docker installed (unless using a remote Image Builder). - Optional: `kubectl` installed. #### Registering the Orchestrator 1. With a Service Connector: ```shell zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator register --flavor kubeflow --connector --resource-id zenml stack register -o -a -c ``` 2. Without a Service Connector: ```shell zenml orchestrator register --flavor=kubeflow --kubernetes_context= zenml stack register -o -a -c ``` #### Running a Pipeline To run a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` #### Kubeflow UI Access To access the Kubeflow UI for pipeline run details: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] ``` #### Additional Configuration You can customize the Kubeflow orchestrator using `KubeflowOrchestratorSettings` for attributes like `client_args`, `user_namespace`, and `pod_settings`. Example: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings kubeflow_settings = KubeflowOrchestratorSettings( client_args={}, user_namespace="my_namespace", pod_settings={ "affinity": {...}, "tolerations": [...] } ) ``` #### Multi-Tenancy Considerations For multi-tenancy, set the `kubeflow_hostname` parameter when registering the orchestrator: ```shell zenml orchestrator register --flavor=kubeflow --kubeflow_hostname= ``` Ensure to pass the correct namespace and authentication credentials in `KubeflowOrchestratorSettings`. 
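A minimal sketch of passing the namespace and user credentials mentioned above through `KubeflowOrchestratorSettings` (the credential values are placeholders):

```python
from zenml import pipeline
from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import (
    KubeflowOrchestratorSettings,
)

kubeflow_settings = KubeflowOrchestratorSettings(
    client_username="admin",         # placeholder credentials
    client_password="abc123",
    user_namespace="namespace_name",
)

@pipeline(settings={"orchestrator": kubeflow_settings})
def my_pipeline():
    ...
```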
#### Using Secrets You can store sensitive information as secrets: ```shell zenml secret create kubeflow_secret --username=admin --password=abc123 ``` And reference them in your code: ```python kubeflow_settings = KubeflowOrchestratorSettings( client_username="{{kubeflow_secret.username}}", client_password="{{kubeflow_secret.password}}", user_namespace="namespace_name" ) ``` For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubeflow/#zenml.integrations.kubeflow.orchestrators.kubeflow_orchestrator.KubeflowOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/lightning.md === # Lightning AI Orchestrator The Lightning AI Orchestrator, integrated with ZenML, enables the execution of AI pipelines on Lightning AI's infrastructure, utilizing its scalable compute resources. This component is intended for remote ZenML deployments only. ## When to Use - Fast execution of pipelines on GPU instances. - Existing use of Lightning AI for machine learning projects. - Need for managed infrastructure for ML workflows. - Simplification of deployment and scaling of ML workflows. - Leverage Lightning AI's optimizations for workloads. ## Deployment To deploy the orchestrator: 1. Create a Lightning AI account and obtain credentials. 2. No additional infrastructure is required; it uses Lightning AI's managed resources. ## Functionality - Archives the ZenML repository and uploads it to Lightning AI Studio. - Creates a new studio using `lightning-sdk` and runs commands via `studio.run()`. - Supports both CPU and GPU machine types, specified in `LightningOrchestratorSettings`. - Can run in asynchronous mode, allowing background execution and status checks. ## Usage Requirements - Install the ZenML Lightning integration: ```shell zenml integration install lightning ``` - Set up a remote artifact store. - Obtain Lightning AI credentials: - `LIGHTNING_USER_ID` - `LIGHTNING_API_KEY` - Optional: `LIGHTNING_USERNAME`, `LIGHTNING_TEAMSPACE`, `LIGHTNING_ORG` ### Registering the Orchestrator Register the orchestrator with the following command: ```shell zenml orchestrator register lightning_orchestrator \ --flavor=lightning \ --user_id= \ --api_key= \ --username= \ # optional --teamspace= \ # optional --organization= # optional ``` Activate the stack: ```bash zenml stack register lightning_stack -o lightning_orchestrator ... --set ``` ## Pipeline Configuration Configure the orchestrator at the pipeline level: ```python from zenml.integrations.lightning.flavors.lightning_orchestrator_flavor import LightningOrchestratorSettings lightning_settings = LightningOrchestratorSettings( main_studio_name="my_studio", machine_type="cpu", async_mode=True, custom_commands=["pip install -r requirements.txt"] ) @pipeline(settings={"orchestrator.lightning": lightning_settings}) def my_pipeline(): ... ``` ## Running the Pipeline Execute the pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` ## Monitoring Monitor running applications through the Lightning AI UI. Retrieve the UI URL for a pipeline run: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ## Additional Configuration You can specify settings at the pipeline or step level: ```python @pipeline(settings={"orchestrator.lightning": lightning_settings}) def my_pipeline(): ... 
@step(settings={"orchestrator.lightning": lightning_settings}) def my_step(): ... ``` To use GPUs, specify a GPU-enabled machine type: ```python lightning_settings = LightningOrchestratorSettings(machine_type="gpu") # or "A10G" ``` Refer to [Lightning AI's documentation](https://lightning.ai/docs/overview/studios/change-gpus) for GPU-enabled machine types and specifications. ================================================== === File: docs/book/component-guide/orchestrators/azureml.md === # AzureML Orchestrator Summary **Overview**: AzureML is a cloud-based orchestration service by Microsoft for building, training, deploying, and managing machine learning models. It supports the entire ML lifecycle, from data preparation to monitoring. ## Use Cases Use AzureML if: - You are using Azure. - You need a production-grade orchestrator. - You want a UI to track pipeline runs. - You prefer a managed solution for pipelines. ## Functionality The ZenML AzureML orchestrator utilizes the AzureML Python SDK v2 to create pipelines by generating AzureML `CommandComponent` for each ZenML step. ## Deployment To deploy AzureML orchestrator: 1. Deploy ZenML to the cloud (preferably in the same region as AzureML). 2. Ensure connection to the remote ZenML server. ## Prerequisites To use AzureML orchestrator, you need: - ZenML `azure` integration: ```shell zenml integration install azure ``` - Docker installed or a remote image builder. - A remote artifact store and container registry. - An Azure resource group with an AzureML workspace. ### Authentication Methods 1. **Default Authentication**: Simplifies the process using Azure credentials. 2. **Service Principal Authentication (recommended)**: Requires creating a service principal on Azure with proper permissions. Register the ZenML Azure Service Connector: ```bash zenml service-connector register --type azure -i zenml orchestrator connect -c ``` ## Docker Usage ZenML builds a Docker image for each pipeline run at `/zenml:`. ## AzureML UI AzureML studio allows inspection, management, and debugging of pipelines. Double-clicking a step opens its overview page with configuration and execution logs. ## Settings The `AzureMLOrchestratorSettings` class configures compute resources with three modes: 1. **Serverless Compute (Default)**: ```python azureml_settings = AzureMLOrchestratorSettings(mode="serverless") ``` 2. **Compute Instance**: ```python azureml_settings = AzureMLOrchestratorSettings( mode="compute-instance", compute_name="my-gpu-instance", size="Standard_NC6s_v3", idle_time_before_shutdown_minutes=20, ) ``` 3. **Compute Cluster**: ```python azureml_settings = AzureMLOrchestratorSettings( mode="compute-cluster", compute_name="my-gpu-cluster", size="Standard_NC6s_v3", tier="Dedicated", min_instances=2, max_instances=10, idle_time_before_scaledown_down=60, ) ``` ## Scheduling Pipelines AzureML orchestrator supports scheduling pipelines using `JobSchedules` with cron expressions or intervals: ```python @pipeline def my_pipeline(): ... my_pipeline = my_pipeline.with_options( schedule=Schedule(cron_expression="*/5 * * * *") ) my_pipeline() ``` Note: Users must manage the lifecycle of the schedule through the Azure UI. For more details on compute sizes, refer to the [AzureML documentation](https://learn.microsoft.com/en-us/azure/machine-learning/concept-compute-target?view=azureml-api-2#supported-vm-series-and-sizes). 
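A minimal sketch of attaching one of the settings objects above to a pipeline; the import path is an assumption based on the ZenML `azure` integration:

```python
from zenml import pipeline
from zenml.integrations.azure.flavors.azureml_orchestrator_flavor import (
    AzureMLOrchestratorSettings,
)

azureml_settings = AzureMLOrchestratorSettings(mode="serverless")

@pipeline(settings={"orchestrator": azureml_settings})
def my_pipeline():
    ...
```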
================================================== === File: docs/book/component-guide/orchestrators/kubernetes.md === ### Kubernetes Orchestrator Overview The ZenML `kubernetes` integration allows you to orchestrate and scale ML pipelines on a Kubernetes cluster without writing Kubernetes code. It serves as a lightweight alternative to distributed orchestrators like Airflow or Kubeflow, executing each pipeline step in separate Kubernetes pods, managed by a master pod via topological sorting. This approach is simpler and faster than using Kubeflow, as it eliminates the need for Kubeflow installation and maintenance. **Warning:** This component is intended for remote ZenML deployments only; local deployments may cause unexpected behavior. ### When to Use the Kubernetes Orchestrator - For lightweight pipeline execution on Kubernetes. - If you prefer not to maintain Kubeflow Pipelines. - If you want to avoid managed solutions like Vertex. ### Deployment Requirements - A deployed Kubernetes cluster (refer to the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md)). - A remote ZenML server connection. ### Setup Instructions 1. **Install the ZenML Kubernetes Integration:** ```shell zenml integration install kubernetes ``` 2. **Prerequisites:** - Docker and kubectl installed. - A remote artifact store and container registry in your stack. - Optional: Configure a Service Connector for better portability. 3. **Register the Orchestrator:** - **With Service Connector:** ```shell zenml orchestrator register --flavor kubernetes zenml orchestrator connect --connector zenml stack register -o ... --set ``` - **Without Service Connector:** ```shell zenml orchestrator register --flavor=kubernetes --kubernetes_context= zenml stack register -o ... --set ``` ### Running Pipelines To run a ZenML pipeline: ```shell python file_that_runs_a_zenml_pipeline.py ``` Logs for all Kubernetes pods will be visible in the terminal. ### Interacting with Pods You can debug by interacting with Kubernetes pods using labels: ```shell kubectl delete pod -n zenml -l pipeline= ``` ### Configuration Options - Default namespace: `zenml` - Service account: `zenml-service-account` with `edit` RBAC role. **Custom Attributes:** - `kubernetes_namespace`: Specify the namespace. - `service_account_name`: Use an existing service account. **Pod Settings:** You can customize pod settings using `KubernetesOrchestratorSettings`: ```python from zenml.integrations.kubernetes.flavors.kubernetes_orchestrator_flavor import KubernetesOrchestratorSettings kubernetes_settings = KubernetesOrchestratorSettings( pod_settings={ "node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}, "resources": { "requests": {"cpu": "2", "memory": "4Gi"}, "limits": {"cpu": "4", "memory": "8Gi"} } }, kubernetes_namespace="ml-pipelines", service_account_name="zenml-pipeline-runner" ) ``` ### Step-Level Settings You can define settings at the step level to override pipeline settings: ```python @step(settings={"orchestrator": k8s_settings}) def train_model(data: dict) -> None: ... ``` ### GPU Configuration To run steps on GPU, follow additional instructions to enable CUDA. For more details on configuration options and attributes, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-kubernetes/#zenml.integrations.kubernetes.orchestrators.kubernetes_orchestrator.KubernetesOrchestrator). 
================================================== === File: docs/book/component-guide/orchestrators/databricks.md === # Databricks Orchestrator Overview **Databricks** is a unified data analytics platform that integrates data warehouses and lakes for big data processing and machine learning. The **Databricks orchestrator** is a feature of the ZenML Databricks integration, enabling the execution of ML pipelines on Databricks, leveraging its distributed computing capabilities. ### When to Use Use the Databricks orchestrator if: - You are using Databricks for data and ML workloads. - You want to utilize its distributed computing for ML pipelines. - You seek a managed solution that integrates with Databricks services. - You need optimization for big data processing. ### Prerequisites - An active Databricks workspace (AWS, Azure, GCP). - A Databricks account or service account with permissions to create and run jobs. ### How It Works 1. ZenML creates a Python **wheel package** containing your pipeline code and dependencies. 2. The package is uploaded to Databricks. 3. ZenML uses the Databricks SDK to define a job, specifying pipeline steps and cluster settings (e.g., Spark version, worker count). 4. The job runs on Databricks, executing steps in order based on dependencies. 5. ZenML retrieves logs and job status for monitoring. ### Usage 1. **Install the Databricks integration:** ```shell zenml integration install databricks ``` 2. **Register the orchestrator:** ```shell zenml orchestrator register databricks_orchestrator --flavor=databricks --host="https://xxxxx.x.azuredatabricks.net" --client_id={{databricks.client_id}} --client_secret={{databricks.client_secret}} ``` 3. **Add the orchestrator to your stack:** ```shell zenml stack register databricks_stack -o databricks_orchestrator ... --set ``` 4. **Run a ZenML pipeline:** ```shell python run.py ``` ### Databricks UI Access detailed logs and pipeline run information through the Databricks UI. Retrieve the UI URL in Python: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"].value ``` ### Scheduling Pipelines The orchestrator supports scheduling using a **cron expression**: ```python from zenml.config.schedule import Schedule pipeline_instance.run( schedule=Schedule(cron_expression="*/5 * * * *") ) ``` **Note:** Only `cron_expression` is supported, and Java Timezone IDs must be used. ### Additional Configuration Customize the orchestrator with `DatabricksOrchestratorSettings`: ```python from zenml.integrations.databricks.flavors.databricks_orchestrator_flavor import DatabricksOrchestratorSettings databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-scala2.12", num_workers="3", node_type_id="Standard_D4s_v5", schedule_timezone="America/Los_Angeles" ) ``` Specify settings at the pipeline or step level: ```python @pipeline(settings={"orchestrator": databricks_settings}) def my_pipeline(): ... ``` ### GPU Support To enable GPU support, adjust `spark_version` and `node_type_id`: ```python databricks_settings = DatabricksOrchestratorSettings( spark_version="15.3.x-gpu-ml-scala2.12", node_type_id="Standard_NC24ads_A100_v4" ) ``` Follow specific instructions to enable **CUDA** for GPU acceleration. 
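The GPU-enabled settings are attached in the same way as the earlier example; a minimal sketch (the Spark version and node type mirror the values shown above and may differ for your workspace):

```python
from zenml import pipeline
from zenml.integrations.databricks.flavors.databricks_orchestrator_flavor import (
    DatabricksOrchestratorSettings,
)

# GPU-enabled Databricks runtime with an A100 node type, as above.
databricks_settings = DatabricksOrchestratorSettings(
    spark_version="15.3.x-gpu-ml-scala2.12",
    node_type_id="Standard_NC24ads_A100_v4",
)

@pipeline(settings={"orchestrator": databricks_settings})
def my_gpu_pipeline():
    ...
```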
### Documentation For a complete list of attributes and additional configuration options, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-databricks/#zenml.integrations.databricks.flavors.databricks_orchestrator_flavor.DatabricksOrchestratorSettings). ================================================== === File: docs/book/component-guide/orchestrators/vertex.md === # Google Cloud Vertex AI Orchestrator ## Overview Vertex AI Pipelines is a serverless ML workflow tool on Google Cloud Platform (GCP) designed for easy, production-ready pipeline orchestration with minimal setup. It is intended for use within a remote ZenML deployment. ## When to Use Use the Vertex orchestrator if: - You are using GCP. - You need a production-grade orchestrator with a UI for tracking pipeline runs. - You prefer a managed, serverless solution for running pipelines. ## Deployment To deploy the Vertex AI orchestrator, first deploy ZenML to the cloud, ideally in the same GCP project as the Vertex infrastructure. Ensure you are connected to the remote ZenML server and enable relevant Vertex APIs. ## Requirements To use the Vertex orchestrator: - Install the ZenML `gcp` integration: ```shell zenml integration install gcp ``` - Install and run Docker. - Set up a remote artifact store and container registry. - Configure GCP credentials with appropriate permissions. ### GCP Credentials and Permissions You need a GCP user account or service accounts with permissions to run Vertex AI pipelines. Options for providing credentials include: 1. Using the `gcloud` CLI for local authentication. 2. Configuring the orchestrator with a service account key file. 3. Recommended: Using a GCP Service Connector linked to the orchestrator. ### Vertex AI Pipeline Components 1. **ZenML Client Environment**: Runs ZenML code for building and submitting pipelines. Requires permissions to create jobs in Vertex Pipelines. 2. **Vertex AI Pipeline Environment**: GCP environment for running pipeline steps, requiring a workload service account with permissions to run Vertex AI pipelines. ### Configuration Use Cases 1. **Local `gcloud` CLI with User Account**: ```shell zenml orchestrator register \ --flavor=vertex \ --project= \ --location= \ --synchronous=true ``` 2. **GCP Service Connector with Single Service Account**: ```shell zenml service-connector register --type gcp --auth-method=service-account --project_id= --service_account_json=@connectors-vertex-ai-workload.json --resource-type gcp-generic zenml orchestrator register \ --flavor=vertex \ --location= \ --synchronous=true \ --workload_service_account=@.iam.gserviceaccount.com zenml orchestrator connect --connector ``` 3. **GCP Service Connector with Different Service Accounts**: This setup applies the principle of least privilege using multiple service accounts. ### Configuring the Stack To register and activate a stack with the new orchestrator: ```shell zenml stack register -o ... --set ``` ### Running Pipelines To run a ZenML pipeline using the Vertex orchestrator: ```shell python file_that_runs_a_zenml_pipeline.py ``` ### Vertex UI The Vertex UI provides details about pipeline runs. 
Retrieve the URL to the Vertex UI: ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run("") orchestrator_url = pipeline_run.run_metadata["orchestrator_url"] ``` ### Scheduling Pipelines To schedule a pipeline: ```python from datetime import datetime, timedelta from zenml import pipeline from zenml.config.schedule import Schedule @pipeline def first_pipeline(): ... first_pipeline = first_pipeline.with_options( schedule=Schedule(cron_expression="*/5 * * * *") ) first_pipeline() @pipeline def second_pipeline(): ... second_pipeline = second_pipeline.with_options( schedule=Schedule( cron_expression="0 * * * *", start_time=datetime.now() + timedelta(days=1), end_time=datetime.now() + timedelta(days=3), ) ) second_pipeline() ``` **Note**: The orchestrator only supports `cron_expression`, `start_time`, and `end_time` in the `Schedule` object. ### Additional Configuration Configure labels for Vertex Pipeline jobs or specify GPU settings: ```python from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import VertexOrchestratorSettings vertex_settings = VertexOrchestratorSettings(labels={"key": "value"}) ``` Specify resource settings: ```python from zenml.config import ResourceSettings resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` For GPU usage: ```python from zenml import step, pipeline from zenml.config import ResourceSettings vertex_settings = VertexOrchestratorSettings( pod_settings={"node_selectors": {"cloud.google.com/gke-accelerator": "NVIDIA_TESLA_A100"}} ) resource_settings = ResourceSettings(gpu_count=1) @step(settings={"orchestrator": vertex_settings, "resources": resource_settings}) def my_step(): ... @pipeline(settings={"orchestrator": vertex_settings, "resources": resource_settings}) def my_pipeline(): ... ``` ### Enabling CUDA for GPU Follow specific instructions to enable CUDA for GPU-backed hardware to ensure proper acceleration. For more details, refer to the [ZenML SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-gcp/#zenml.integrations.gcp.orchestrators.vertex_orchestrator.VertexOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/tekton.md === # Tekton Orchestrator **Tekton** is an open-source framework for CI/CD systems, enabling developers to build, test, and deploy applications across various environments. This component is intended for use with a **remote ZenML deployment** only. ## When to Use Tekton Use the Tekton orchestrator if: - You need a production-grade orchestrator. - You want a UI to track pipeline runs. - You are comfortable with Kubernetes. - You can deploy and maintain Tekton Pipelines. ## Deployment Steps 1. **Set up a Kubernetes cluster** and deploy Tekton Pipelines. ### AWS - Set up an **EKS cluster**. - Install the **AWS CLI**. - Install `kubectl` and configure it: ```powershell aws eks --region REGION update-kubeconfig --name CLUSTER_NAME ``` - Install Tekton Pipelines. ### GCP - Set up a **GKE cluster**. - Install the **Google Cloud CLI**. - Install `kubectl` and configure it: ```powershell gcloud container clusters get-credentials CLUSTER_NAME ``` - Install Tekton Pipelines. ### Azure - Set up an **AKS cluster**. - Install the **Azure CLI**. - Install `kubectl` and configure it: ```powershell az aks get-credentials --resource-group RESOURCE_GROUP --name CLUSTER_NAME ``` - Install Tekton Pipelines. > **Note:** Ensure Tekton Pipelines version >=0.38.3 is used. 
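For the "Install Tekton Pipelines" step above, one common approach (taken from Tekton's own installation docs) is to apply the release manifest with `kubectl`; pin a specific release that satisfies the >=0.38.3 requirement if you need reproducibility:

```bash
kubectl apply --filename \
  https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

# Verify that the Tekton Pipelines controller and webhook pods are running
kubectl get pods --namespace tekton-pipelines
```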
## Usage Requirements To use the Tekton orchestrator: - Install the ZenML `tekton` integration: ```shell zenml integration install tekton -y ``` - Have **Docker** installed and running. - Deploy Tekton pipelines on a remote cluster. - Obtain the Kubernetes context name via `kubectl config get-contexts`. - Set up a **remote artifact store** and **container registry**. ### Registering the Orchestrator 1. **With Service Connector**: ```shell zenml orchestrator register --flavor tekton zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator connect --connector zenml stack register -o ... --set ``` 2. **Without Service Connector**: ```shell zenml orchestrator register --flavor=tekton --kubernetes_context= zenml stack register -o ... --set ``` ### Running a Pipeline Run any ZenML pipeline using: ```shell python file_that_runs_a_zenml_pipeline.py ``` ## Tekton UI Access the Tekton UI for pipeline run details: ```bash kubectl get ingress -n tekton-pipelines -o jsonpath='{.items[0].spec.rules[0].host}' ``` ## Additional Configuration Configure `TektonOrchestratorSettings` for node selectors, affinity, and tolerations: ```python from zenml.integrations.tekton.flavors.tekton_orchestrator_flavor import TektonOrchestratorSettings tekton_settings = TektonOrchestratorSettings( pod_settings={ "affinity": {...}, "tolerations": [...] } ) ``` Specify resource settings for pipeline steps: ```python resource_settings = ResourceSettings(cpu_count=8, memory="16GB") ``` ### Example Usage Specify settings at the pipeline or step level: ```python @pipeline(settings={"orchestrator": tekton_settings, "resources": resource_settings}) def my_pipeline(): ... @step(settings={"orchestrator": tekton_settings, "resources": resource_settings}) def my_step(): ... ``` ## Enabling CUDA for GPU For GPU usage, follow the instructions to enable CUDA for full acceleration. For more details, refer to the [SDK docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-tekton/#zenml.integrations.tekton.orchestrators.tekton_orchestrator.TektonOrchestrator). ================================================== === File: docs/book/component-guide/orchestrators/airflow.md === ### Airflow Orchestrator for ZenML Pipelines ZenML pipelines can be executed as Airflow DAGs, leveraging Airflow's orchestration capabilities alongside ZenML's ML-specific features. Each ZenML step operates in a separate Docker container managed by Airflow. #### When to Use Airflow Orchestrator - Proven production-grade orchestrator. - Already using Airflow. - Running pipelines locally. - Willing to deploy and maintain Airflow. #### Deployment Options - **Local Deployment**: No additional setup required. - **Remote Deployment**: Requires a remote ZenML deployment. Options include: - ZenML GCP Terraform module with Google Cloud Composer. - Managed services like Google Cloud Composer, Amazon MWAA, or Astronomer. - Manual Airflow deployment (refer to official Airflow docs). **Required Python Packages for Remote Deployment**: - `pydantic~=2.7.1` - `apache-airflow-providers-docker` or `apache-airflow-providers-cncf-kubernetes` based on the chosen operator. #### Using the Airflow Orchestrator 1. Install ZenML Airflow integration: ```shell zenml integration install airflow ``` 2. Ensure Docker is installed and running. 3. Register the orchestrator: ```shell zenml orchestrator register --flavor=airflow --local=True zenml stack register -o ... 
--set ``` **Local Setup**: - Create a virtual environment: ```bash python -m venv airflow_server_environment source airflow_server_environment/bin/activate pip install "apache-airflow==2.4.0" "apache-airflow-providers-docker<3.8.0" "pydantic~=2.7.1" ``` - Set environment variables (optional): - `AIRFLOW_HOME`: Default is `~/airflow`. - `AIRFLOW__CORE__DAGS_FOLDER`: Default is `/dags`. - `AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL`: Default is 30 seconds. **Start Local Airflow Server**: ```bash airflow standalone ``` Access the UI at [http://localhost:8080](http://localhost:8080). **Run ZenML Pipeline**: ```shell python file_that_runs_a_zenml_pipeline.py ``` Copy the generated `.zip` file to the Airflow DAGs directory. #### Remote Deployment Considerations - Requires a remote ZenML server, a deployed Airflow server, a remote artifact store, and a remote container registry. - Running a pipeline will create a `.zip` file for Airflow, which must be placed in the DAGs directory. #### Scheduling Pipelines Schedule runs in the past: ```python from datetime import datetime, timedelta from zenml.pipelines import Schedule scheduled_pipeline = fashion_mnist_pipeline.with_options( schedule=Schedule( start_time=datetime.now() - timedelta(hours=1), end_time=datetime.now() + timedelta(hours=1), interval_second=timedelta(minutes=15), catchup=False, ) ) scheduled_pipeline() ``` #### Airflow UI Access the UI at [http://localhost:8080](http://localhost:8080). Default username is `admin`. Password can be found in `/standalone_admin_password.txt`. #### Additional Configuration Use `AirflowOrchestratorSettings` for further customization. For GPU support, follow specific instructions to enable CUDA. #### Airflow Operators ZenML supports: - `DockerOperator`: Runs Docker images on the same machine. - `KubernetesPodOperator`: Runs Docker images on Kubernetes pods. Specify the operator: ```python from zenml.integrations.airflow.flavors.airflow_orchestrator_flavor import AirflowOrchestratorSettings airflow_settings = AirflowOrchestratorSettings( operator="docker", # or "kubernetes_pod" operator_args={} ) ``` **Custom Operators**: Specify custom operator paths in `AirflowOrchestratorSettings`. **Custom DAG Generator**: Provide a custom DAG generator file for more control over DAG creation. For more details, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/integration_code_docs/integrations-airflow/#zenml.integrations.airflow.orchestrators.airflow_orchestrator.AirflowOrchestrator). ================================================== === File: docs/book/how-to/debug-and-solve-issues.md === # Debugging ZenML Issues This guide provides best practices for debugging common issues with ZenML and seeking help effectively. ### When to Seek Help Before asking for assistance, follow this checklist: - Search Slack for relevant discussions. - Check [GitHub issues](https://github.com/zenml-io/zenml/issues). - Use the search bar on the [docs](https://docs.zenml.io). - Review the [common errors](debug-and-solve-issues.md#most-common-errors) section. - Analyze [additional logs](debug-and-solve-issues.md#41-additional-logs) and [client/server logs](debug-and-solve-issues.md#client-and-server-logs). If unresolved, post your question on [Slack](https://zenml.io/slack). ### How to Post on Slack Include the following information in your query: 1. **System Information**: Run the command below and attach the output: ```shell zenml info -a -s ``` For specific package issues, use: ```shell zenml info -p ``` 2. 
**What Happened?**: Describe your goal, expected outcome, and actual result. 3. **Reproducing the Error**: Provide step-by-step instructions to replicate the issue. 4. **Relevant Log Output**: Always include relevant logs and error tracebacks. Run: ```shell zenml status zenml stack describe ``` For additional logs, toggle the `ZENML_LOGGING_VERBOSITY` environment variable: ```shell export ZENML_LOGGING_VERBOSITY=DEBUG ``` ### Client and Server Logs For server-related issues, view logs with: ```shell zenml logs ``` ### Common Errors 1. **Error initializing rest store**: ```bash RuntimeError: Error initializing rest store with URL 'http://127.0.0.1:8237'... ``` Solution: Re-run `zenml login --local` after each machine restart. 2. **Column 'step_configuration' cannot be null**: ```bash sqlalchemy.exc.IntegrityError: (pymysql.err.IntegrityError) (1048, "Column 'step_configuration' cannot be null") ``` Solution: Ensure step configuration does not exceed length limits. 3. **'NoneType' object has no attribute 'name'**: ```shell AttributeError: 'NoneType' object has no attribute 'name' ``` Solution: Register an experiment tracker: ```shell zenml experiment-tracker register mlflow_tracker --flavor=mlflow zenml stack update -e mlflow_tracker ``` This guide aims to streamline the debugging process and enhance the efficiency of resolving issues within ZenML. ================================================== === File: docs/book/how-to/pipeline-development/README.md === # Pipeline Development in ZenML This section provides a comprehensive overview of pipeline development in ZenML, focusing on essential components and processes. ## Key Concepts - **Pipelines**: A sequence of steps that define the workflow for data processing and model training. - **Steps**: Individual tasks within a pipeline, such as data ingestion, preprocessing, training, and evaluation. - **Artifacts**: Outputs generated by steps, which can be used as inputs for subsequent steps. ## Pipeline Creation To create a pipeline, define the steps and their dependencies: ```python from zenml.pipelines import pipeline @pipeline def my_pipeline(): step1 = data_ingestion() step2 = data_preprocessing(step1) step3 = model_training(step2) step4 = model_evaluation(step3) ``` ## Step Definitions Steps are defined using decorators and can include parameters for customization: ```python from zenml.steps import step @step def data_ingestion() -> DataFrame: # Code to ingest data return data @step def data_preprocessing(data: DataFrame) -> DataFrame: # Code to preprocess data return processed_data @step def model_training(data: DataFrame) -> Model: # Code to train model return model @step def model_evaluation(model: Model) -> Metrics: # Code to evaluate model return metrics ``` ## Execution Pipelines can be executed using the ZenML CLI or programmatically: ```python from zenml.pipelines import run run(my_pipeline) ``` ## Best Practices - Modularize steps for reusability. - Use versioning for artifacts to maintain consistency. - Implement logging and monitoring for better observability. ## Conclusion ZenML provides a structured approach to pipeline development, enabling efficient data workflows and model management. By following the outlined concepts and practices, users can create robust and scalable pipelines. 
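As an alternative to the `run` helper shown above, a pipeline built with the `@pipeline` decorator (as in the other examples in this guide) can be executed by calling the decorated function directly on the active stack; a minimal, self-contained sketch with illustrative step bodies:

```python
from zenml import pipeline, step

@step
def data_ingestion() -> dict:
    # Stand-in for a real data-loading step
    return {"features": [[1, 2], [3, 4]], "labels": [0, 1]}

@step
def model_training(data: dict) -> None:
    print(f"Training on {len(data['features'])} samples")

@pipeline
def my_pipeline():
    model_training(data_ingestion())

if __name__ == "__main__":
    my_pipeline()  # Compiles the pipeline and runs it on the active stack
```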
================================================== === File: docs/book/how-to/pipeline-development/develop-locally/README.md === ### Develop Locally This section outlines best practices for developing pipelines locally, allowing for faster iteration and reduced costs. Developers often work with a smaller subset of data or synthetic data. ZenML supports local development, enabling users to build pipelines locally before deploying them on more powerful remote hardware. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/keep-your-dashboard-server-clean.md === ### Summary: Keeping Your ZenML Pipeline Runs Clean #### Overview To maintain a clean development environment in ZenML, especially during multiple pipeline runs, several strategies can be employed. #### 1. Running Locally To avoid cluttering a shared server, you can run ZenML locally by disconnecting from the remote server: ```bash zenml login --local ``` Reconnect to the server using: ```bash zenml login ``` #### 2. Pipeline Runs - **Unlisted Runs**: Create runs not associated with a pipeline using: ```python pipeline_instance.run(unlisted=True) ``` These runs won't appear on the pipeline's dashboard, keeping the history focused. - **Deleting Pipeline Runs**: Delete a specific run with: ```bash zenml pipeline runs delete ``` To delete all runs from the last 24 hours: ```python #!/usr/bin/env python3 import datetime from zenml.client import Client def delete_recent_pipeline_runs(): zc = Client() time_filter = (datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=24)).strftime("%Y-%m-%d %H:%M:%S") recent_runs = zc.list_pipeline_runs(created=f"gt:{time_filter}") for run in recent_runs: zc.delete_pipeline_run(run.id) print(f"Deleted {len(recent_runs)} pipeline runs.") if __name__ == "__main__": delete_recent_pipeline_runs() ``` #### 3. Pipelines - **Deleting Pipelines**: Remove unnecessary pipelines with: ```bash zenml pipeline delete ``` - **Unique Pipeline Names**: Assign unique names to runs using: ```python training_pipeline = training_pipeline.with_options(run_name="custom_pipeline_run_name") training_pipeline() ``` #### 4. Models To delete a model or its versions: ```bash zenml model delete ``` #### 5. Artifacts - **Pruning Artifacts**: Remove unreferenced artifacts with: ```bash zenml artifact prune ``` Use `--only-artifact` or `--only-metadata` flags for specific deletions. #### 6. Cleaning Environment For a complete reset on local data: ```bash zenml clean ``` Use the `--local` flag to delete local files related to the active stack. By utilizing these methods, you can effectively manage and maintain a clean pipeline dashboard in ZenML. ================================================== === File: docs/book/how-to/pipeline-development/develop-locally/local-prod-pipeline-variants.md === ### Summary: Creating Pipeline Variants for Local Development and Production in ZenML When developing ZenML pipelines, it's useful to create different variants for local development and production environments. This facilitates rapid iteration during development while ensuring a robust setup for production. The methods to achieve this include: 1. **Using Configuration Files** - Define pipeline configurations in YAML files. 
Example for development: ```yaml enable_cache: False parameters: dataset_name: "small_dataset" steps: load_data: enable_cache: False ``` - Apply the configuration in your pipeline: ```python from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def ml_pipeline(dataset_name: str): load_data(dataset_name) if __name__ == "__main__": ml_pipeline.with_options(config_path="path/to/config.yaml")() ``` 2. **Implementing Variants in Code** - Create variants directly in your code using a boolean flag: ```python import os from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def ml_pipeline(is_dev: bool = False): dataset = "small_dataset" if is_dev else "full_dataset" load_data(dataset) if __name__ == "__main__": is_dev = os.environ.get("ZENML_ENVIRONMENT") == "dev" ml_pipeline(is_dev=is_dev) ``` 3. **Using Environment Variables** - Determine the variant to run based on environment variables: ```python import os config_path = "config_dev.yaml" if os.environ.get("ZENML_ENVIRONMENT") == "dev" else "config_prod.yaml" ml_pipeline.with_options(config_path=config_path)() ``` - Run the pipeline with: ``` ZENML_ENVIRONMENT=dev python run.py ``` ### Development Variant Considerations When creating a development variant, optimize for faster iteration: - Use smaller datasets - Specify a local stack - Reduce training epochs and batch size - Use a smaller base model Example configuration for development: ```yaml parameters: dataset_path: "data/small_dataset.csv" epochs: 1 batch_size: 16 stack: local_stack ``` Or in code: ```python @pipeline def ml_pipeline(is_dev: bool = False): dataset = "data/small_dataset.csv" if is_dev else "data/full_dataset.csv" epochs = 1 if is_dev else 100 batch_size = 16 if is_dev else 64 load_data(dataset) train_model(epochs=epochs, batch_size=batch_size) ``` Creating different pipeline variants allows for efficient local testing and debugging while maintaining a comprehensive production configuration, enhancing the development workflow. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/retrieve-used-configuration-of-a-run.md === # Extracting Configuration from a Pipeline Run To retrieve the configuration used in a completed pipeline run, you can access the `config` attribute of the pipeline run or a specific step within it. ## Code Example ```python from zenml.client import Client pipeline_run = Client().get_pipeline_run() # General configuration for the pipeline pipeline_run.config # Configuration for a specific step pipeline_run.steps[].config ``` This allows you to easily obtain the configurations utilized during the execution of the pipeline. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/README.md === ZenML allows easy configuration and execution of pipelines using YAML files. These configuration files enable runtime adjustments for parameters, caching behavior, and stack components. Key areas of configuration include: - **What can be configured**: Details on configurable elements. - **Configuration hierarchy**: Structure of configuration settings. - **Autogenerate a template YAML file**: Instructions for creating a template. For more information, refer to the linked sections. 
================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/autogenerate-a-template-yaml-file.md === ### Summary of Documentation on Autogenerating a YAML Configuration Template To create a YAML configuration template for your pipeline, use the `.write_run_configuration_template()` method. This generates a YAML file with all options commented out, allowing you to customize the settings as needed. #### Code Example ```python from zenml import pipeline @pipeline(enable_cache=True) def simple_ml_pipeline(parameter: int): dataset = load_data(parameter=parameter) train_model(dataset) simple_ml_pipeline.write_run_configuration_template(path="") ``` #### Example of Generated YAML Configuration Template ```yaml build: Union[PipelineBuildBase, UUID, NoneType] enable_artifact_metadata: Optional[bool] enable_artifact_visualization: Optional[bool] enable_cache: Optional[bool] enable_step_logs: Optional[bool] extra: Mapping[str, Any] model: name: str save_models_to_registry: bool ... parameters: Optional[Mapping[str, Any]] run_name: Optional[str] schedule: catchup: bool cron_expression: Optional[str] ... settings: docker: apt_packages: List[str] ... resources: cpu_count: Optional[PositiveFloat] gpu_count: Optional[NonNegativeInt] memory: Optional[ConstrainedStrValue] steps: load_data: ... train_model: ... ``` #### Additional Configuration You can specify a stack when generating the template using: ```python simple_ml_pipeline.write_run_configuration_template(stack=) ``` This documentation provides a concise overview of how to generate and customize a YAML configuration template for a ZenML pipeline. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/runtime-configuration.md === ### Summary of ZenML Runtime Configuration Documentation **Overview** ZenML allows configuration of pipeline runtime settings through `BaseSettings`, enabling the customization of stack components and pipelines. Key configuration areas include: - Resource requirements for steps - Containerization process (e.g., Docker image requirements) - Stack component-specific configurations (e.g., experiment names) **Types of Settings** Settings are divided into two categories: 1. **General Settings**: Applicable to all ZenML pipelines. - `DockerSettings`: For Docker configurations. - `ResourceSettings`: For resource specifications. 2. **Stack-Component-Specific Settings**: Used for runtime configurations of specific components. The key format is `` or `.`. Examples include: - `SkypilotAWSOrchestratorSettings` - `KubeflowOrchestratorSettings` - `MLflowExperimentTrackerSettings` - `WandbExperimentTrackerSettings` - `WhylogsDataValidatorSettings` - `SagemakerStepOperatorSettings` - `VertexStepOperatorSettings` - `AzureMLStepOperatorSettings` **Registration-Time vs. Real-Time Settings** Settings passed during `zenml stack-component register` are static, while runtime settings can change for each pipeline run. For instance, `tracking_url` is fixed at registration, while `experiment_name` can vary. Default values can be set during registration, which apply unless overridden at runtime. **Key Specification for Settings** When defining stack-component-specific settings, use the appropriate key format. If only the category is specified (e.g., `step_operator`), settings apply to any flavor of that component in the stack. If incompatible, they will be ignored. 
**Code Examples** Using settings in Python: ```python @step(step_operator="nameofstepoperator", settings={"step_operator": {"estimator_args": {"instance_type": "m7g.medium"}}}) def my_step(): ... @step(step_operator="nameofstepoperator", settings={"step_operator": SagemakerStepOperatorSettings(instance_type="m7g.medium")}) def my_step(): ... ``` Using YAML: ```yaml steps: my_step: step_operator: "nameofstepoperator" settings: step_operator: estimator_args: instance_type: m7g.medium ``` This documentation provides a comprehensive guide to configuring runtime settings in ZenML, ensuring flexibility and adaptability in pipeline execution. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/configuration-hierarchy.md === ### Configuration Hierarchy in ZenML In ZenML, configuration settings follow a specific hierarchy: 1. **Code Configurations**: Override YAML file configurations. 2. **Step-Level Configurations**: Override pipeline-level configurations. 3. **Attribute Merging**: Dictionaries are merged for attributes. ### Example Code ```python from zenml import pipeline, step from zenml.config import ResourceSettings @step def load_data(parameter: int) -> dict: ... @step(settings={"resources": ResourceSettings(gpu_count=1, memory="2GB")}) def train_model(data: dict) -> None: ... @pipeline(settings={"resources": ResourceSettings(cpu_count=2, memory="1GB")}) def simple_ml_pipeline(parameter: int): ... # Merged configurations train_model.configuration.settings["resources"] # -> cpu_count: 2, gpu_count=1, memory="2GB" simple_ml_pipeline.configuration.settings["resources"] # -> cpu_count: 2, memory="1GB" ``` ### Key Points - Step configurations take precedence over pipeline configurations. - Resource settings can be specified at both the step and pipeline levels, with step settings overriding pipeline defaults. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/how-to-use-config.md === ### Configuration File Usage in ZenML **Best Practice**: Use a YAML configuration file to separate configuration from code, although settings can also be defined directly in code. **Applying Configuration**: Use the `with_options(config_path=)` method to apply configurations to a pipeline. **Example YAML Configuration**: ```yaml enable_cache: False parameters: dataset_name: "best_dataset" steps: load_data: enable_cache: False ``` **Example Python Code**: ```python from zenml import step, pipeline @step def load_data(dataset_name: str) -> dict: ... @pipeline def simple_ml_pipeline(dataset_name: str): load_data(dataset_name) if __name__ == "__main__": simple_ml_pipeline.with_options(config_path=)() ``` **Outcome**: This code runs `simple_ml_pipeline` with caching disabled for `load_data` and sets `dataset_name` to `best_dataset`. ================================================== === File: docs/book/how-to/pipeline-development/use-configuration-files/what-can-be-configured.md === # Configuration Overview This document provides a sample YAML configuration for a ZenML pipeline, highlighting key settings and their purposes. For a complete list of configuration keys, refer to the linked documentation. 
## Sample YAML Configuration ```yaml build: dcd6fafb-c200-4e85-8328-428bef98d804 # Docker image ID enable_artifact_metadata: True enable_artifact_visualization: False enable_cache: False enable_step_logs: True extra: any_param: 1 another_random_key: "some_string" model: name: "classification_model" version: production audience: "Data scientists" description: "This classifies hotdogs and not hotdogs" ethics: "No ethical implications" license: "Apache 2.0" limitations: "Only works for hotdogs" tags: ["sklearn", "hotdog", "classification"] parameters: dataset_name: "another_dataset" run_name: "my_great_run" schedule: catchup: true cron_expression: "* * * * *" settings: docker: apt_packages: ["curl"] copy_files: True dockerfile: "Dockerfile" dockerignore: ".dockerignore" environment: ZENML_LOGGING_VERBOSITY: DEBUG parent_image: "zenml-io/zenml-cuda" requirements: ["torch"] skip_build: False resources: cpu_count: 2 gpu_count: 1 memory: "4Gb" steps: train_model: parameters: data_source: "best_dataset" experiment_tracker: "mlflow_production" step_operator: "vertex_gpu" outputs: {} failure_hook_source: {} success_hook_source: {} enable_artifact_metadata: True enable_artifact_visualization: True enable_cache: False enable_step_logs: True extra: {} model: {} settings: docker: {} resources: {} step_operator.sagemaker: estimator_args: instance_type: m7g.medium ``` ## Key Configuration Details ### `enable_XXX` Parameters These boolean flags control various behaviors: - `enable_artifact_metadata`: Attach metadata to artifacts. - `enable_artifact_visualization`: Attach visualizations of artifacts. - `enable_cache`: Use caching. - `enable_step_logs`: Enable step logs tracking. ### `build` ID Specifies the UUID of the Docker image for the pipeline. If provided, Docker image building is skipped. ### Configuring the `model` Defines the ZenML model used in the pipeline: ```yaml model: name: "ModelName" version: "production" description: An example model tags: ["classifier"] ``` ### Pipeline and Step `parameters` Parameters can be defined at both the pipeline and step levels: ```yaml parameters: gamma: 0.01 steps: trainer: parameters: gamma: 0.001 ``` Step parameters take precedence over pipeline parameters. ### Setting the `run_name` Specify a unique `run_name` for each run to avoid conflicts: ```yaml run_name: ``` ### Stack Component Runtime Settings Settings for Docker and resource configurations are defined under `settings`: ```yaml settings: docker: requirements: - pandas resources: cpu_count: 2 gpu_count: 1 memory: "4Gb" ``` ### Step-specific Configuration Certain configurations apply only at the step level: - `experiment_tracker`: Name of the experiment tracker for the step. - `step_operator`: Name of the step operator for the step. - `outputs`: Configuration for output artifacts, including materializer sources. ### Hooks Specify `failure_hook_source` and `success_hook_source` for handling step outcomes. This summary encapsulates the essential configurations and their meanings, enabling effective use of the ZenML framework. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/README.md === # Run Remote Pipelines from Jupyter Notebooks ZenML allows you to define and execute steps and pipelines in Jupyter notebooks remotely. The code from notebook cells is extracted and run as Python modules in Docker containers. To ensure proper execution, the notebook cells must adhere to specific conditions. 
## Key Sections: - **Limitations of Defining Steps in Notebook Cells**: Important constraints to consider when defining steps. [Read more](limitations-of-defining-steps-in-notebook-cells.md). - **Run a Single Step from a Notebook**: Instructions on executing a single step. [Read more](run-a-single-step-from-a-notebook.md). For successful integration, ensure your notebook meets the outlined requirements. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/limitations-of-defining-steps-in-notebook-cells.md === # Limitations of Defining Steps in Notebook Cells To run ZenML steps defined in notebook cells remotely (using a remote orchestrator or step operator), the following conditions must be met: - The cell can only contain Python code; Jupyter magic commands or shell commands (starting with `%` or `!`) are not allowed. - The cell **must not** call code from other notebook cells. However, functions or classes imported from Python files are permitted. - The cell **must not** rely on imports from previous cells. It must perform all necessary imports, including ZenML imports like `from zenml import step`. ================================================== === File: docs/book/how-to/pipeline-development/run-remote-notebooks/run-a-single-step-from-a-notebook.md === ### Running a Single Step from a Notebook To execute a single step remotely from a notebook, call the step like a regular Python function. ZenML will create a pipeline with the step and run it on the active stack. Be aware of the [limitations](limitations-of-defining-steps-in-notebook-cells.md) when defining steps in notebook cells. #### Example Code ```python from zenml import step import pandas as pd from sklearn.base import ClassifierMixin from sklearn.svm import SVC from typing import Tuple, Annotated @step(step_operator="") def svc_trainer( X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001, ) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: """Train a sklearn SVC classifier.""" model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) train_acc = model.score(X_train.to_numpy(), y_train.to_numpy()) print(f"Train accuracy: {train_acc}") return model, train_acc X_train = pd.DataFrame(...) # Replace with actual data y_train = pd.Series(...) # Replace with actual data # Execute the step model, train_acc = svc_trainer(X_train=X_train, y_train=y_train) ``` This code defines a step for training an SVC classifier and demonstrates how to call it directly, creating a pipeline for execution. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/README.md === # Configure Python Environments ZenML deployments involve managing multiple environments. This guide outlines the key environments and their configurations. ## Environments Overview - **Client Environment (Runner Environment)**: Where ZenML pipelines are compiled, typically in a `run.py` script. Types include: - Local development - CI runner in production - ZenML Pro runner - `runner` image orchestrated by the ZenML server ### Key Steps in Client Environment: 1. Compile an intermediate pipeline representation using the `@pipeline` function. 2. Create/trigger pipeline and step build environments if running remotely. 3. Trigger a run in the orchestrator. The `@pipeline` function is called only in the client environment, focusing on compile-time logic. 
## ZenML Server Environment The ZenML server environment is a FastAPI application that manages pipelines and metadata, including the ZenML Dashboard. Dependencies should be installed during ZenML deployment, particularly for custom integrations. ## Execution Environments In local runs, the client, server, and execution environments are the same. For remote pipelines, ZenML transfers code to the remote orchestrator by creating Docker images (execution environments) starting from a base image containing ZenML and Python, then adding dependencies. Refer to the [containerize your pipeline](../../../how-to/customize-docker-builds/README.md) guide for managing Docker image configuration. ## Image Builder Environment Execution environments are typically created locally using the Docker client, which requires installation and permissions. ZenML provides image builders, a specialized stack component, to build and push Docker images in a dedicated image builder environment. If no image builder is configured, ZenML defaults to the local image builder, maintaining consistency across builds. ================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/handling-dependencies.md === # Handling Dependencies in ZenML This documentation addresses common issues with conflicting dependencies when using ZenML with other libraries. ZenML is designed to be stack- and integration-agnostic, allowing flexibility in running pipelines. However, this flexibility can lead to dependency conflicts. ## Installing Dependencies Use the command `zenml integration install ...` to install dependencies for specific integrations. After installing additional dependencies, verify that ZenML requirements are met by running `zenml integration list` and checking for the green tick symbol. ## Suggestions for Resolving Dependency Conflicts ### Use `pip-compile` Utilize `pip-compile` from the [pip-tools package](https://pip-tools.readthedocs.io/) to create a static `requirements.txt` file for consistent dependency management. For an example, refer to the [gitflow repository](https://github.com/zenml-io/zenml-gitflow#-software-requirements-management). ### Use `pip check` Run `pip check` to verify compatibility of your environment's dependencies. This command will list any conflicts, which may affect your use case. ### Known Dependency Issues ZenML integrations may have strict version requirements. For example, ZenML requires `click~=8.0.3` for its CLI. Using a version greater than 8.0.3 may cause issues. ## Manual Dependency Installation You can manually install dependencies instead of using ZenML's integration installation. This is not recommended but can be done at your own risk. The `zenml integration install ...` command executes a `pip install ...` for the specified integration. To manually install dependencies, use: ```bash # Export requirements to a file zenml integration export-requirements --output-file integration-requirements.txt INTEGRATION_NAME # Print requirements to console zenml integration export-requirements INTEGRATION_NAME ``` After exporting, you can modify the requirements as needed. If using a remote orchestrator, update the dependencies in a `DockerSettings` object to ensure compatibility. 
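A minimal sketch of pointing a pipeline's Docker build at the exported (and possibly hand-edited) requirements file via `DockerSettings`, assuming the file name used in the export command above:

```python
from zenml import pipeline
from zenml.config import DockerSettings

# Install the exported integration requirements into the execution image.
docker_settings = DockerSettings(requirements="integration-requirements.txt")

@pipeline(settings={"docker": docker_settings})
def my_pipeline():
    ...
```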
================================================== === File: docs/book/how-to/pipeline-development/configure-python-environments/configure-the-server-environment.md === ### Configure the Server Environment The ZenML server environment is configured using environment variables, which must be set prior to deploying your server instance. For a complete list of available environment variables, refer to the [environment variables documentation](../../../reference/environment-variables.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/configuring-a-pipeline-at-runtime.md === ### Runtime Configuration of a Pipeline To run a pipeline with a different configuration, use the `pipeline.with_options` method. There are two primary ways to configure options: 1. Explicitly configure options: ```python with_options(steps={"trainer": {"parameters": {"param1": 1}}}) ``` 2. Pass a YAML file: ```python with_options(config_file="path_to_yaml_file") ``` For detailed options, refer to the [configuration file documentation](../../pipeline-development/use-configuration-files/README.md). **Exception:** To trigger a pipeline from a client or another pipeline, use the `PipelineRunConfiguration` object. More information can be found [here](../../trigger-pipelines/use-templates-python.md#advanced-usage-run-a-template-from-another-pipeline). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/retry-steps.md === ### ZenML Step Retry Configuration ZenML offers a built-in retry mechanism to automatically retry steps upon failure, useful for handling intermittent issues, especially on GPU-backed hardware. You can configure three parameters for retries: - **max_retries:** Maximum retry attempts. - **delay:** Initial delay (in seconds) before the first retry. - **backoff:** Multiplier for the delay after each retry. #### Using the @step Decorator You can specify the retry configuration in your step definition as follows: ```python from zenml.config.retry_config import StepRetryConfig @step( retry=StepRetryConfig( max_retries=3, delay=10, backoff=2 ) ) def my_step() -> None: raise Exception("This is a test exception") ``` **Note:** Infinite retries are not supported. Setting `max_retries` to a large value or omitting it will still enforce an internal maximum to prevent infinite loops. It's advisable to set a reasonable `max_retries` based on your use case. ### See Also: - [Failure/Success Hooks](use-failure-success-hooks.md) - [Configure Pipelines](../../pipeline-development/use-configuration-files/how-to-use-config.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/README.md === ### ZenML Pipeline Documentation Summary **Overview**: Building pipelines in ZenML is straightforward using the `@step` and `@pipeline` decorators. #### Example Code ```python from zenml import pipeline, step @step def load_data() -> dict: return {'features': [[1, 2], [3, 4], [5, 6]], 'labels': [0, 1, 0]} @step def train_model(data: dict) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. 
" f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(): dataset = load_data() train_model(dataset) # Execute the pipeline simple_ml_pipeline() ``` #### Execution and Logging When executed, the pipeline logs its run to the ZenML dashboard, which requires a ZenML server (local or remote). The dashboard displays the Directed Acyclic Graph (DAG) and associated metadata. #### Advanced Features For more advanced interactions with pipelines, refer to the following topics: - Configure pipeline/step parameters - Name and annotate step outputs - Control caching behavior - Customize step invocation IDs - Name pipeline runs - Use failure/success hooks - Hyperparameter tuning - Attach and fetch metadata within steps - Enable or disable log storage - Access secrets in a step For detailed documentation on these features, consult the respective links provided in the original documentation. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/compose-pipelines.md === ### Summary: Reusing Steps Between Pipelines in ZenML ZenML enables the composition of pipelines to avoid code duplication by allowing the extraction of common functionality into separate functions. #### Example Code ```python from zenml import pipeline @pipeline def data_loading_pipeline(mode: str): data = training_data_loader_step() if mode == "train" else test_data_loader_step() return preprocessing_step(data) @pipeline def training_pipeline(): training_data = data_loading_pipeline(mode="train") model = training_step(data=training_data) test_data = data_loading_pipeline(mode="test") evaluation_step(model=model, data=test_data) ``` #### Key Points - The `data_loading_pipeline` is called within `training_pipeline`, functioning as a step in the latter. - Only the parent pipeline is visible in the dashboard. - For triggering a pipeline from another, refer to the advanced usage documentation. #### Additional Resources - Learn more about orchestrators [here](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/using-a-custom-step-invocation-id.md === ## Custom Step Invocation ID in ZenML When invoking a ZenML step in a pipeline, a unique **invocation ID** is generated. This ID can be used for defining the execution order of steps or fetching invocation details post-execution. ### Key Points: - The first invocation of a step uses its name as the invocation ID (e.g., `my_step`). - Subsequent invocations append a suffix (e.g., `my_step_2`, `my_step_3`) to ensure uniqueness. - A custom invocation ID can be specified by passing an `id` parameter, which must be unique across all invocations in the pipeline. ### Example Code: ```python from zenml import pipeline, step @step def my_step() -> None: ... @pipeline def example_pipeline(): my_step() # ID: my_step my_step() # ID: my_step_2 my_step(id="my_custom_invocation_id") # Custom ID ``` This setup allows for flexible management of step invocations within ZenML pipelines. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/get-past-pipeline-step-runs.md === To retrieve past pipeline or step runs in ZenML, use the `get_pipeline` method along with the `last_run` property or by indexing into the runs. 
### Example Code: ```python from zenml.client import Client client = Client() # Retrieve a pipeline by its name p = client.get_pipeline("mlflow_train_deploy_pipeline") # Get the latest run of this pipeline latest_run = p.last_run # Access runs by index first_run = p[0] ``` This code snippet demonstrates how to access the latest run and the first run of a specified pipeline. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/use-pipeline-step-parameters.md === ### Summary of Parameterization in ZenML Pipelines **Overview**: Steps and pipelines in ZenML can be parameterized similarly to Python functions. Parameters can be either **artifacts** (outputs from previous steps) or **explicit values** provided during the step invocation. **Key Points**: - **Artifacts**: Used to share data between steps. - **Parameters**: Explicitly defined values that allow for step behavior customization. Only JSON-serializable values (via Pydantic) can be passed as parameters. For non-JSON-serializable objects (e.g., NumPy arrays), use **External Artifacts**. **Example Code**: ```python from zenml import step, pipeline @step def my_step(input_1: int, input_2: int) -> None: pass @pipeline def my_pipeline(): int_artifact = some_other_step() my_step(input_1=int_artifact, input_2=42) ``` **YAML Configuration**: Parameters can also be defined in a YAML configuration file, allowing for easy updates without modifying the code. **YAML Example**: ```yaml parameters: environment: production steps: my_step: parameters: input_2: 42 ``` **Python Code with YAML**: ```python @pipeline def my_pipeline(environment: str): ... if __name__=="__main__": my_pipeline.with_options(config_path="config.yaml")() ``` **Conflict Handling**: If there are conflicting settings (e.g., parameters defined in both YAML and code), ZenML will notify you with details on resolving the conflict. **Caching Behavior**: - **Parameters**: A step is cached only if all parameter values match previous executions. - **Artifacts**: A step is cached only if all input artifacts match previous executions. If any upstream steps are not cached, the step will execute again. ### Additional Resources: - [Use configuration files to set parameters](use-pipeline-step-parameters.md) - [How caching works and how to control it](control-caching-behavior.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/step-output-typing-and-annotation.md === ### Step Output Typing and Annotation in ZenML #### Type Annotations - Type annotations are optional for ZenML step functions but provide several advantages: - **Type Validation**: Ensures correct input types from upstream steps. - **Better Serialization**: ZenML selects the most suitable materializer for outputs with type annotations. Custom materializers can be created if built-in ones are inadequate. **Warning**: The built-in `CloudpickleMaterializer` can handle any object but is not production-ready due to potential version compatibility issues and security risks. #### Code Examples ```python from typing import Tuple from zenml import step @step def square_root(number: int) -> float: return number ** 0.5 @step def divide(a: int, b: int) -> Tuple[int, int]: return a // b, a % b ``` - To enforce type annotations, set `ZENML_ENFORCE_TYPE_ANNOTATIONS=True`. ZenML will raise exceptions for missing annotations. 
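For example, enforcement can be switched on in the environment where the pipeline is compiled (the script name below is illustrative):

```bash
export ZENML_ENFORCE_TYPE_ANNOTATIONS=True
python run_pipeline.py  # steps missing type annotations now raise an error
```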
#### Tuple vs Multiple Outputs - A return statement with a tuple literal (e.g., `return (1, 2)`) indicates multiple outputs. Otherwise, it is treated as a single output of type `Tuple`. #### Output Naming - Default output names: - Single output: `output` - Multiple outputs: `output_0`, `output_1`, etc. - Custom output names can be defined using `Annotated`: ```python from typing_extensions import Annotated from typing import Tuple from zenml import step @step def square_root(number: int) -> Annotated[float, "custom_output_name"]: return number ** 0.5 @step def divide(a: int, b: int) -> Tuple[Annotated[int, "quotient"], Annotated[int, "remainder"]]: return a // b, a % b ``` - If no custom names are provided, artifacts are named in the format `{pipeline_name}::{step_name}::output`. #### Additional Resources - For more on output annotation: [return-multiple-outputs-from-a-step.md](../../data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md) - For custom data types: [handle-custom-data-types.md](../../data-artifact-management/handle-data-artifacts/handle-custom-data-types.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/schedule-a-pipeline.md === ### Summary: Scheduling Pipelines in ZenML #### Supported Orchestrators Not all orchestrators support scheduling. The following orchestrators do support it: - **Supported**: Airflow, AzureML, Databricks, HyperAI, Kubeflow, Kubernetes, Sagemaker, Vertex - **Not Supported**: Local, LocalDocker, Skypilot (AWS, Azure, GCP, Lambda), Tekton #### Setting a Schedule To set a schedule for a pipeline, you can use either cron expressions or human-readable notations. Below is an example: ```python from zenml.config.schedule import Schedule from zenml import pipeline from datetime import datetime @pipeline() def my_pipeline(...): ... # Using cron expression schedule = Schedule(cron_expression="5 14 * * 3") # Using human-readable notation schedule = Schedule(start_time=datetime.now(), interval_second=1800) my_pipeline = my_pipeline.with_options(schedule=schedule) my_pipeline() ``` For more scheduling options, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.schedule.Schedule). #### Pausing/Stopping a Schedule The method to pause or stop a scheduled pipeline depends on the orchestrator. For instance, with Kubeflow, you can use its UI. Users should consult their orchestrator's documentation for specific instructions. Note that ZenML schedules the run, but users are responsible for managing the lifecycle of the schedule. Running a pipeline with a schedule multiple times will create unique scheduled pipelines. #### Additional Resources For further information on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fan-in-fan-out.md === ### Summary of Fan-in and Fan-out Patterns in ZenML The fan-out/fan-in pattern is a pipeline architecture that splits a single step into multiple parallel operations (fan-out) and consolidates the results back into one step (fan-in). This pattern is beneficial for parallel processing, distributed workloads, and data transformations. #### Key Code Components: ```python from zenml import step, get_step_context, pipeline from zenml.client import Client @step def load_step() -> str: return "Hello from ZenML!" 
@step def process_step(input_data: str) -> str: return input_data @step def combine_step(step_prefix: str, output_name: str) -> None: run_name = get_step_context().pipeline_run.name run = Client().get_pipeline_run(run_name) processed_results = {step_info.name: step_info.outputs[output_name][0].load() for step_name, step_info in run.steps.items() if step_name.startswith(step_prefix)} print(",".join([f"{k}: {v}" for k, v in processed_results.items()])) @pipeline(enable_cache=False) def fan_out_fan_in_pipeline(parallel_count: int) -> None: input_data = load_step() after = [process_step(input_data, id=f"process_{i}") for i in range(parallel_count)] combine_step(step_prefix="process_", output_name="output", after=after) fan_out_fan_in_pipeline(parallel_count=8) ``` #### Use Cases: - Parallel data processing - Distributed model training - Ensemble methods - Batch processing - Data validation across multiple sources - Hyperparameter tuning #### Limitations: 1. Steps may run sequentially if the orchestrator does not support parallel execution (e.g., local orchestrator). 2. The number of steps must be predetermined; dynamic step creation is not supported. This pattern enhances resource utilization and is essential for efficient data processing workflows in ZenML. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/access-secrets-in-a-step.md === # Accessing Secrets in ZenML Steps ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, each identified by a **name** for easy retrieval in pipelines and stacks. For configuration and creation of secrets, refer to the [platform guide on secrets](../../../getting-started/deploying-zenml/secret-management.md). You can access secrets in your steps using the ZenML `Client` API, allowing for secure API queries without hard-coding access keys. ## Example Code ```python from zenml import step from zenml.client import Client from somewhere import authenticate_to_some_api @step def secret_loader() -> None: """Load the example secret from the server.""" secret = Client().get_secret("") authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ``` ### Additional Resources - [Creating and managing secrets](../../interact-with-secrets.md) - [Secrets backend in ZenML](../../../getting-started/deploying-zenml/secret-management.md) ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/delete-a-pipeline.md === ### Deleting Pipelines and Pipeline Runs #### Delete a Pipeline You can delete a pipeline using either the CLI or the Python SDK. **CLI Command:** ```shell zenml pipeline delete ``` **Python SDK:** ```python from zenml.client import Client Client().delete_pipeline() ``` **Note:** Deleting a pipeline does not remove its associated runs or artifacts. For bulk deletion, especially for pipelines with the same prefix, use the following script: ```python from zenml.client import Client client = Client() pipelines_list = client.list_pipelines(name="startswith:test_pipeline", size=100) target_pipeline_ids = [p.id for p in pipelines_list.items] if input(f"Found {len(target_pipeline_ids)} pipelines. Delete? (y/n): ").lower() == 'y': for pid in target_pipeline_ids: client.delete_pipeline(pid) print("Deletion complete") else: print("Deletion cancelled") ``` #### Delete a Pipeline Run To delete a pipeline run, use the CLI or the Python SDK. 
**CLI Command:** ```shell zenml pipeline runs delete ``` **Python SDK:** ```python from zenml.client import Client Client().delete_pipeline_run() ``` This documentation provides essential commands and a script for deleting pipelines and their runs, ensuring clarity and conciseness. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/name-your-pipeline-runs.md === ### Summary of Pipeline Run Naming in ZenML In ZenML, the name of a pipeline run is automatically generated using the current date and time, as shown in the example: ```bash Pipeline run training_pipeline-2023_05_24-12_41_04_576473 has finished in 3.742s. ``` To customize the run name, use the `run_name` parameter in the `with_options()` method: ```python training_pipeline = training_pipeline.with_options( run_name="custom_pipeline_run_name" ) training_pipeline() ``` Run names must be unique. To ensure uniqueness, compute the run name dynamically or use placeholders that ZenML replaces. Placeholders can be set in the `@pipeline` decorator or the `pipeline.with_options` function. Standard placeholders include: - `{date}`: current date (e.g., `2024_11_27`) - `{time}`: current UTC time (e.g., `11_07_09_326492`) Example of using placeholders in a custom run name: ```python training_pipeline = training_pipeline.with_options( run_name="custom_pipeline_run_name_{experiment_name}_{date}_{time}" ) training_pipeline() ``` This approach ensures meaningful and unique naming for each pipeline run. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/use-failure-success-hooks.md === ### Summary: Running Failure and Success Hooks After Step Execution **Overview**: Hooks in ZenML allow actions to be performed after a step's execution, useful for notifications, logging, or resource cleanup. There are two types of hooks: `on_failure` (triggers on step failure) and `on_success` (triggers on step success). #### Defining Hooks Hooks are defined as callback functions accessible within the pipeline repository. An optional `BaseException` argument can be added to failure hooks to capture the specific exception. **Example**: ```python from zenml import step def on_failure(exception: BaseException): print(f"Step failed: {str(exception)}") def on_success(): print("Step succeeded!") @step(on_failure=on_failure) def my_failing_step() -> int: raise ValueError("Error") @step(on_success=on_success) def my_successful_step() -> int: return 1 ``` #### Pipeline-Level Hooks Hooks can also be defined at the pipeline level to apply to all steps, with step-level hooks taking precedence. **Example**: ```python from zenml import pipeline @pipeline(on_failure=on_failure, on_success=on_success) def my_pipeline(...): ... ``` #### Accessing Step Information in Hooks Use `get_step_context()` to access pipeline run or step information within hooks. **Example**: ```python from zenml import step, get_step_context def on_failure(exception: BaseException): context = get_step_context() print(context.step_run.name) print(type(exception)) @step(on_failure=on_failure) def my_step(some_parameter: int = 1): raise ValueError("My exception") ``` #### Using Alerter Component Integrate the Alerter component to notify users on step success or failure. 
**Example**:
```python
from zenml import get_step_context
from zenml.client import Client

def notify_on_failure() -> None:
    step_context = get_step_context()
    alerter = Client().active_stack.alerter
    if alerter and step_context.pipeline_run.config.extra["notify_on_failure"]:
        # build_message is a helper (defined elsewhere) that formats the alert text.
        alerter.post(message=build_message(status="failed"))
```

#### OpenAI ChatGPT Failure Hook
Use the OpenAI integration to generate suggestions for fixing the exception that caused a step to fail. Ensure the OpenAI API key is stored in a ZenML secret.

**Installation**:
```shell
zenml integration install openai
zenml secret create openai --api_key=
```

**Usage**:
```python
from zenml.integrations.openai.hooks import openai_chatgpt_alerter_failure_hook
from zenml import step

@step(on_failure=openai_chatgpt_alerter_failure_hook)
def my_step(...):
    ...
```
This hook can provide suggestions to help resolve the issues that caused the step to fail.

==================================================

=== File: docs/book/how-to/pipeline-development/build-pipelines/run-an-individual-step.md ===

# Summary of ZenML Step Execution

## Running an Individual Step
To execute a single step in ZenML, call the step like a normal Python function. ZenML will create an unlisted pipeline to run the step on the active stack. Unlisted pipelines are visible in the "Runs" tab of the dashboard.

### Example Code
```python
from typing import Tuple, Annotated

import pandas as pd
from sklearn.base import ClassifierMixin
from sklearn.svm import SVC

from zenml import step

@step(step_operator="")
def svc_trainer(
    X_train: pd.DataFrame,
    y_train: pd.Series,
    gamma: float = 0.001,
) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]:
    """Train a sklearn SVC classifier."""
    model = SVC(gamma=gamma)
    model.fit(X_train.to_numpy(), y_train.to_numpy())
    train_acc = model.score(X_train.to_numpy(), y_train.to_numpy())
    print(f"Train accuracy: {train_acc}")
    return model, train_acc

X_train = pd.DataFrame(...)
y_train = pd.Series(...)
model, train_acc = svc_trainer(X_train=X_train, y_train=y_train)
```

## Running the Step Function Directly
To run the step function without ZenML, use the `entrypoint(...)` method:

### Example Code
```python
X_train = pd.DataFrame(...)
y_train = pd.Series(...)
model, train_acc = svc_trainer.entrypoint(X_train=X_train, y_train=y_train)
```

## Default Behavior
To make running steps without ZenML the default behavior, set the environment variable `ZENML_RUN_SINGLE_STEPS_WITHOUT_STACK` to `True`. Direct calls to the step function will then bypass the ZenML stack.

==================================================

=== File: docs/book/how-to/pipeline-development/build-pipelines/run-pipelines-asynchronously.md ===

### Summary: Running Pipelines Asynchronously
By default, pipelines run synchronously, meaning the terminal streams logs during execution. To run pipelines asynchronously, you can configure the orchestrator in two ways:

1. **Globally**: Set `synchronous=False` in the orchestrator configuration.
```python
from zenml import pipeline

@pipeline(settings={"orchestrator": {"synchronous": False}})
def my_pipeline():
    ...
```

2. **YAML Configuration**: Modify the pipeline configuration in a YAML file.
```yaml
settings:
  orchestrator.<ORCHESTRATOR_FLAVOR>:
    synchronous: false
```

For more details on orchestrators, refer to the [orchestrators documentation](../../../component-guide/orchestrators/orchestrators.md).
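If you'd rather not hard-code this in the decorator, the same orchestrator setting can also be supplied at run time via `with_options` (a minimal sketch mirroring the decorator example above; the trivial step is only there to make the pipeline runnable):

```python
from zenml import pipeline, step


@step
def say_hello() -> str:
    return "hello"


@pipeline
def my_pipeline():
    say_hello()


# Override the orchestrator setting for this submission only: the client
# returns as soon as the run is submitted instead of waiting for it to finish.
async_pipeline = my_pipeline.with_options(
    settings={"orchestrator": {"synchronous": False}}
)
async_pipeline()
```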
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-caching-behavior.md === ### ZenML Caching Behavior By default, ZenML caches steps in pipelines when code and parameters remain unchanged. #### Caching Configuration - **Step Level Caching:** ```python @step(enable_cache=True) # Cache enabled def load_data(parameter: int) -> dict: ... @step(enable_cache=False) # Cache disabled def train_model(data: dict) -> None: ... ``` - **Pipeline Level Caching:** ```python @pipeline(enable_cache=True) # Cache enabled def simple_ml_pipeline(parameter: int): ... ``` Caching occurs only if code and parameters are identical. You can modify caching settings after initial configuration: ```python my_step.configure(enable_cache=...) my_pipeline.configure(enable_cache=...) ``` For YAML configuration options, refer to the [configuration files documentation](../../pipeline-development/use-configuration-files/). ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/control-execution-order-of-steps.md === # Control Execution Order of Steps in ZenML ZenML determines the execution order of pipeline steps based on data dependencies. For instance, in the example below, `step_3` relies on the outputs of `step_1` and `step_2`, allowing both to run in parallel before `step_3` starts. ```python from zenml import pipeline @pipeline def example_pipeline(): step_1_output = step_1() step_2_output = step_2() step_3(step_1_output, step_2_output) ``` To enforce specific execution order constraints, you can use non-data dependencies by specifying invocation IDs. For example, to ensure `my_step` runs after `other_step`, use: `my_step(after="other_step")`. For multiple dependencies, pass a list: `my_step(after=["other_step", "other_step_2"])`. Refer to the [documentation](using-a-custom-step-invocation-id.md) for details on invocation IDs. ```python from zenml import pipeline @pipeline def example_pipeline(): step_1_output = step_1(after="step_2") step_2_output = step_2() step_3(step_1_output, step_2_output) ``` In this modified pipeline, `step_1` will only start after `step_2` has completed. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/fetching-pipelines.md === ### Summary of Pipeline Run Inspection Documentation #### Overview This documentation covers how to inspect finished pipeline runs and their outputs in ZenML, including accessing artifacts, metadata, and the lineage of pipeline runs. 
#### Pipeline Hierarchy The hierarchy consists of: - **Pipelines** (1:N) → **Runs** (1:N) → **Steps** (1:N) → **Artifacts** #### Fetching Pipelines - **Get a Specific Pipeline**: ```python from zenml.client import Client pipeline_model = Client().get_pipeline("first_pipeline") ``` - **List All Pipelines**: - **Python**: ```python pipelines = Client().list_pipelines() ``` - **CLI**: ```shell zenml pipeline list ``` #### Pipeline Runs - **Get All Runs of a Pipeline**: ```python runs = pipeline_model.runs ``` - **Get the Last Run**: ```python last_run = pipeline_model.last_run # or: pipeline_model.runs[0] ``` - **Execute a Pipeline**: ```python run = training_pipeline() ``` - **Fetch a Specific Run**: ```python pipeline_run = Client().get_pipeline_run("first_pipeline-2023_06_20-16_20_13_274466") ``` #### Run Information - **Status**: ```python status = run.status # Possible states: initialized, failed, completed, running, cached ``` - **Configuration**: ```python pipeline_config = run.config ``` - **Component-Specific Metadata**: ```python run_metadata = run.run_metadata orchestrator_url = run_metadata["orchestrator_url"].value ``` #### Step Information - **Access Steps of a Run**: ```python steps = run.steps step = run.steps["first_step"] ``` #### Artifacts - **Inspect Output Artifacts**: ```python output = step.outputs["output_name"] my_pytorch_model = output.load() ``` - **Fetch Artifacts Directly**: ```python artifact = Client().get_artifact('iris_dataset') output = artifact.versions['2022'] ``` #### Metadata and Visualizations - **Access Artifact Metadata**: ```python output_metadata = output.run_metadata storage_size_in_bytes = output_metadata["storage_size"].value ``` - **Visualize Artifacts**: ```python output.visualize() ``` #### Fetching Information During Run Execution You can fetch information about previous runs while a pipeline is executing: ```python from zenml import get_step_context from zenml.client import Client @step def my_step(): current_run_name = get_step_context().pipeline_run.name current_run = Client().get_pipeline_run(current_run_name) previous_run = current_run.pipeline.runs[1] ``` #### Code Example A complete code example demonstrating the loading of a trained model: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step from zenml.client import Client @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"]]: model = SVC(gamma=gamma).fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": last_run = training_pipeline() model = last_run.steps["svc_trainer"].outputs["trained_model"].load() ``` This summary retains all critical information while ensuring clarity and conciseness. 
================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/reference-environment-variables-in-configurations.md === ## Reference Environment Variables in Configurations ZenML allows referencing environment variables in configurations using the placeholder syntax `${ENV_VARIABLE_NAME}`. ### In-code Example ```python from zenml import step @step(extra={"value_from_environment": "${ENV_VAR}"}) def my_step() -> None: ... ``` ### Configuration File Example ```yaml extra: value_from_environment: ${ENV_VAR} combined_value: prefix_${ENV_VAR}_suffix ``` This feature enhances flexibility in both code and configuration files. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/tag-your-pipeline-runs.md === # Tagging Pipeline Runs You can specify tags for your pipeline runs in the following ways: 1. **Configuration File**: ```yaml # config.yaml tags: - tag_in_config_file ``` 2. **In Code**: - Using the `@pipeline` decorator: ```python @pipeline(tags=["tag_on_decorator"]) def my_pipeline(): ... ``` - Using the `with_options` method: ```python my_pipeline = my_pipeline.with_options(tags=["tag_on_with_options"]) ``` When you run the pipeline, tags from all specified locations will be merged and applied to the run. ================================================== === File: docs/book/how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md === ### Hyperparameter Tuning with ZenML ZenML allows for hyperparameter tuning through a simple pipeline, demonstrated with a grid search over learning rates. The process involves a `train_step` for training models with different learning rates and a `selection_step` to identify the best-performing hyperparameters. This utilizes the fan-in, fan-out method for pipeline construction. #### Code Overview ```python from typing import Annotated from sklearn.base import ClassifierMixin from zenml import step, pipeline, get_step_context from zenml.client import Client model_output_name = "my_model" @step def train_step(learning_rate: float) -> Annotated[ClassifierMixin, model_output_name]: return ... # Train and return the model. @step def selection_step(step_prefix: str, output_name: str) -> None: run_name = get_step_context().pipeline_run.name run = Client().get_pipeline_run(run_name) trained_models_by_lr = {} for step_name, step_info in run.steps.items(): if step_name.startswith(step_prefix): model = step_info.outputs[output_name][0].load() lr = step_info.config.parameters["learning_rate"] trained_models_by_lr[lr] = model for lr, model in trained_models_by_lr.items(): ... # Evaluate models to find the best one. @pipeline def my_pipeline(step_count: int) -> None: after = [] for i in range(step_count): train_step(learning_rate=i * 0.0001, id=f"train_step_{i}") after.append(f"train_step_{i}") selection_step(step_prefix="train_step_", output_name=model_output_name, after=after) my_pipeline(step_count=4) ``` #### Key Points - The `train_step` function trains a model with a specified learning rate. - The `selection_step` retrieves all models trained in previous steps and evaluates them to determine the best one. - A limitation is that a variable number of artifacts cannot be passed programmatically to a step; instead, the `selection_step` queries artifacts using the ZenML Client. 
#### Additional Resources For practical examples, refer to the E2E example in the ZenML GitHub repository, specifically: - [`hp_tuning_single_search`](https://github.com/zenml-io/zenml/blob/main/examples/e2e/steps/hp_tuning/hp_tuning_single_search.py) for randomized hyperparameter search. - [`hp_tuning_select_best_model`](https://github.com/zenml-io/zenml/blob/main/examples/e2e/steps/hp_tuning/hp_tuning_select_best_model.py) for selecting the best model based on previous results. ================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/README.md === # GPU Resource Management in ZenML ## Overview ZenML allows you to scale machine learning pipelines to the cloud using GPU-backed hardware. This involves specifying resource requirements for steps and ensuring the container environment is properly configured for GPU utilization. ## Specifying Resource Requirements To allocate resources for resource-intensive steps, use `ResourceSettings`: ```python from zenml.config import ResourceSettings from zenml import step @step(settings={"resources": ResourceSettings(cpu_count=8, gpu_count=2, memory="8GB")}) def training_step(...) -> ...: # train a model ``` For orchestrators like Skypilot that do not support `ResourceSettings`, use specific orchestrator settings: ```python from zenml import step from zenml.integrations.skypilot.flavors.skypilot_orchestrator_aws_vm_flavor import SkypilotAWSOrchestratorSettings skypilot_settings = SkypilotAWSOrchestratorSettings(cpus="2", memory="16", accelerators="V100:2") @step(settings={"orchestrator": skypilot_settings}) def training_step(...) -> ...: # train a model ``` Refer to each orchestrator's documentation for specific resource support. ## CUDA Configuration To utilize GPU capabilities, ensure your container is CUDA-enabled: 1. **Specify a CUDA-enabled parent image**: Example for PyTorch: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` For TensorFlow, use `tensorflow/tensorflow:latest-gpu`. 2. **Add ZenML as a pip requirement**: ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["zenml==0.39.1", "torchvision"] ) ``` Ensure that the chosen image is compatible with both local and remote environments. ## Resetting CUDA Cache To avoid GPU cache issues, consider resetting the CUDA cache between steps: ```python import gc import torch def cleanup_memory() -> None: while gc.collect(): torch.cuda.empty_cache() @step def training_step(...): cleanup_memory() # train a model ``` ## Multi-GPU Training ZenML supports training across multiple GPUs on a single node. To manage this: - Create a script or function for training logic that runs in parallel across GPUs. - Call this function from within the ZenML step. For further assistance, connect with ZenML support via Slack. This documentation ensures that you can effectively utilize GPU resources in your ZenML pipelines while maintaining performance and efficiency. 
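The multi-GPU notes above stop short of code. One possible shape for the "script or function called from within the step", assuming PyTorch on a single node (the worker body is illustrative, not a ZenML API):

```python
import torch
import torch.multiprocessing as mp

from zenml import step


def train_on_device(rank: int, world_size: int) -> None:
    """Hypothetical per-GPU worker. Real code would initialize torch.distributed
    here and wrap the model in DistributedDataParallel before training."""
    torch.cuda.set_device(rank)
    print(f"Worker {rank + 1}/{world_size} training on GPU {rank}")


@step
def multi_gpu_training_step() -> None:
    """Launch one training process per visible GPU from inside a ZenML step."""
    world_size = torch.cuda.device_count()
    # torch.multiprocessing.spawn passes the process rank as the first argument.
    mp.spawn(train_on_device, args=(world_size,), nprocs=world_size, join=True)
```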
================================================== === File: docs/book/how-to/pipeline-development/training-with-gpus/accelerate-distributed-training.md === ### Summary: Distributed Training with Hugging Face's Accelerate in ZenML ZenML integrates with Hugging Face's Accelerate library to facilitate distributed training in machine learning pipelines, allowing the use of multiple GPUs or nodes. #### Using `run_with_accelerate` Decorator To enable distributed execution for training steps, use the `run_with_accelerate` decorator: ```python from zenml import step, pipeline from zenml.integrations.huggingface.steps import run_with_accelerate @run_with_accelerate(num_processes=4, multi_gpu=True) @step def training_step(some_param: int, ...): ... @pipeline def training_pipeline(some_param: int, ...): training_step(some_param, ...) ``` The decorator accepts arguments similar to the `accelerate launch` CLI command. For a complete list of arguments, refer to the [Accelerate CLI documentation](https://huggingface.co/docs/accelerate/en/package_reference/cli#accelerate-launch). #### Configuration Options Key arguments for `run_with_accelerate` include: - `num_processes`: Number of processes for distributed training. - `cpu`: Force training on CPU. - `multi_gpu`: Enable distributed GPU training. - `mixed_precision`: Set mixed precision mode ('no', 'fp16', 'bf16'). #### Important Usage Notes 1. Use the `@` syntax for the decorator on steps; it cannot be used as a function inside the pipeline. 2. Use keyword arguments for calling steps. 3. Misuse will raise a `RuntimeError` with guidance. #### Container Configuration To run steps with Accelerate, ensure your environment is properly set up: 1. **Specify a CUDA-enabled parent image**: ```python from zenml.config import DockerSettings docker_settings = DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 2. **Add Accelerate as a requirement**: ```python docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["accelerate", "torchvision"] ) ``` #### Multi-GPU Training ZenML's Accelerate integration supports training with multiple GPUs on single or multiple nodes, enhancing performance for large datasets or complex models. Ensure your training code is compatible with distributed training. For assistance or more information, connect with ZenML on [Slack](https://zenml.io/slack). ================================================== === File: docs/book/how-to/customize-docker-builds/how-to-use-a-private-pypi-repository.md === ### How to Use a Private PyPI Repository To use a private PyPI repository that requires authentication, follow these steps: 1. **Store Credentials Securely**: Use environment variables for sensitive information. 2. **Configure Package Managers**: Set up `pip` or `poetry` to utilize these credentials during package installations. 3. **Custom Docker Images**: Consider using Docker images pre-configured with the necessary authentication. 
#### Example Code for Authentication Setup ```python import os from my_simple_package import important_function from zenml.config import DockerSettings from zenml import step, pipeline docker_settings = DockerSettings( requirements=["my-simple-package==0.1.0"], environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ['PYPI_TOKEN']}@my-private-pypi-server.com/{os.environ['PYPI_USERNAME']}/"} ) @step def my_step(): return important_function() @pipeline(settings={"docker": docker_settings}) def my_pipeline(): my_step() if __name__ == "__main__": my_pipeline() ``` **Important Note**: Handle credentials with care and use secure methods for managing and sharing authentication information within your team. ================================================== === File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-pipeline.md === ### Summary: Using Docker Images to Run Your Pipeline #### Overview When running a pipeline with a remote orchestrator, a Dockerfile is dynamically generated to build a Docker image using the ZenML image builder. The Dockerfile includes the following steps: 1. **Base Image**: Starts from a parent image with ZenML installed (default is the official ZenML image). 2. **Dependency Installation**: Automatically installs required pip dependencies based on stack integrations. 3. **Source Files**: Optionally copies source files into the Docker container for execution. 4. **Environment Variables**: Sets user-defined environment variables. For customization, refer to the [DockerSettings object](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.docker_settings.DockerSettings). #### Configuring Docker Settings Use the `DockerSettings` class to customize Docker builds: ```python from zenml.config import DockerSettings ``` **Pipeline Configuration**: Apply settings to all steps. ```python docker_settings = DockerSettings() @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() ``` **Step Configuration**: Allows for specialized Docker images for different steps. ```python @step(settings={"docker": docker_settings}) def my_step() -> None: pass ``` **YAML Configuration**: Define settings in a YAML file. ```yaml settings: docker: ... steps: step_name: settings: docker: ... ``` Refer to the configuration hierarchy [here](../pipeline-development/use-configuration-files/configuration-hierarchy.md). #### Specifying Docker Build Options To pass build options to the image builder: ```python docker_settings = DockerSettings(build_config={"build_options": {...}}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` **MacOS ARM Architecture**: Specify the target platform for local Docker caching: ```python docker_settings = DockerSettings(build_config={"build_options": {"platform": "linux/amd64"}}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` #### Using a Custom Parent Image You can specify a custom pre-built parent image or a Dockerfile. Ensure it has Python, pip, and ZenML installed. **Pre-built Parent Image**: ```python docker_settings = DockerSettings(parent_image="my_registry.io/image_name:tag") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` **Skip Build**: Use the image directly without rebuilding. ```python docker_settings = DockerSettings( parent_image="my_registry.io/image_name:tag", skip_build=True ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... 
``` **Warning**: This advanced feature may lead to unintended behavior. Ensure correct inclusion of code files in the specified image. For details, refer to [this guide](./use-a-prebuilt-image.md). ================================================== === File: docs/book/how-to/customize-docker-builds/README.md === ### Using Docker Images to Run Your Pipeline ZenML executes pipeline steps sequentially in the local Python environment. For remote orchestrators or step operators, ZenML builds Docker images to run pipelines in isolated environments. This section covers how to customize the Docker build process. **Key Points:** - **Docker Usage**: Docker images are used for running pipelines in a controlled environment. - **Customization**: Users can control the dockerization process for their pipelines. For more details, refer to the sections on [cloud orchestration](../../user-guide/production-guide/cloud-orchestration.md) and [step operators](../../component-guide/step-operators/step-operators.md). ================================================== === File: docs/book/how-to/customize-docker-builds/which-files-are-built-into-the-image.md === ### ZenML Image Building and File Management ZenML determines the root directory for source files in the following order: 1. If `zenml init` has been executed in the current or parent directory, that directory is used as the root. 2. If not, the parent directory of the executing Python file is used as the source root (e.g., for `python /path/to/file.py`, the root is `/path/to`). You can manage file handling in the root directory using attributes from the `DockerSettings` class: - **`allow_download_from_code_repository`**: If `True` and the files are in a registered code repository without local changes, files are downloaded from the repository instead of being included in the image. - **`allow_download_from_artifact_store`**: If the previous option is `False` or no suitable repository exists, and this is `True`, ZenML archives and uploads code to the artifact store. - **`allow_including_files_in_images`**: If both previous options are `False`, and this is `True`, files are included in the Docker image, necessitating a new image build for code changes. > **Warning**: Setting all attributes to `False` is not recommended and may lead to unintended behavior. You must ensure all files are correctly placed in the Docker images used for pipeline execution. ### File Exclusion and Inclusion - **Excluding Files**: Use a `.gitignore` file to exclude files when downloading from a code repository. - **Including Files**: To exclude files from the Docker image, utilize a `.dockerignore` file: - Place a `.dockerignore` file in the source root directory. - Alternatively, specify a `.dockerignore` file explicitly: ```python docker_settings = DockerSettings(build_config={"dockerignore": "/path/to/.dockerignore"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` This setup helps manage which files are included or excluded during the image building process, optimizing the Docker image size and ensuring the correct files are available for pipeline execution. ================================================== === File: docs/book/how-to/customize-docker-builds/use-a-prebuilt-image.md === ### Summary of ZenML Pipeline Prebuilt Image Documentation This documentation outlines how to skip building a Docker image for ZenML pipelines by using a prebuilt image, which can save time and costs during pipeline execution. #### Key Points: 1. 
**Pipeline Execution with Prebuilt Image**:
   - When running a pipeline on a remote Stack, ZenML typically builds a Docker image with project dependencies.
   - To avoid this, you can use a prebuilt image by configuring the `DockerSettings`.

2. **Configuration**:
   - Set the `parent_image` attribute and `skip_build` to `True` in `DockerSettings`:
   ```python
   docker_settings = DockerSettings(
       parent_image="my_registry.io/image_name:tag",
       skip_build=True
   )

   @pipeline(settings={"docker": docker_settings})
   def my_pipeline(...):
       ...
   ```

3. **Image Requirements**:
   - The prebuilt image must contain all necessary dependencies and, optionally, your code files if no code repository is registered and `allow_download_from_artifact_store` is `False`.

4. **Stack and Integration Requirements**:
   - Ensure the image includes all stack and integration dependencies:
   ```python
   from zenml.client import Client

   # Activate the target stack first (e.g. `zenml stack set <STACK_NAME>` on the CLI),
   # then read its requirements:
   stack_requirements = Client().active_stack.requirements()
   ```
   For integration dependencies:
   ```python
   import itertools

   from zenml.integrations.constants import HUGGINGFACE, PYTORCH
   from zenml.integrations.registry import integration_registry

   required_integrations = [PYTORCH, HUGGINGFACE]

   integration_requirements = set(
       itertools.chain.from_iterable(
           integration_registry.select_integration_requirements(
               integration_name=integration,
               target_os=OperatingSystemType.LINUX,
           )
           for integration in required_integrations
       )
   )
   ```

5. **Project-Specific and System Packages**:
   - Include project-specific dependencies in your `Dockerfile`:
   ```Dockerfile
   RUN pip install -r FILE
   ```
   - Include system packages:
   ```Dockerfile
   RUN apt-get update && apt-get install -y --no-install-recommends YOUR_APT_PACKAGES
   ```

6. **Code Files**:
   - If a code repository is registered, ZenML handles code retrieval.
   - If not, ensure your code files are included in the image, ideally in the `/app` directory, and that Python, `pip`, and `zenml` are installed.

#### Important Notes:
- Using a prebuilt image means you won't receive updates to your code or dependencies unless they are included in the image.
- Ensure the prebuilt image is accessible to the orchestrator and other components without ZenML's involvement.

==================================================

=== File: docs/book/how-to/customize-docker-builds/use-your-own-docker-files.md ===

# Custom Docker Files in ZenML
ZenML allows users to specify a custom Dockerfile, build context directory, and build options for dynamically creating a parent Docker image during pipeline execution. The build process is as follows:

- **No Dockerfile Specified**: If requirements or environment settings necessitate an image build, ZenML builds a new image; otherwise, it uses the existing `parent_image`.
- **Dockerfile Specified**: ZenML builds an image from the specified Dockerfile. If additional requirements necessitate another image, ZenML will build a second image. If not, the initial image is used for the pipeline.

### DockerSettings Configuration
The `DockerSettings` object installs requirements in the following order (each step is optional):
1. Packages from the local Python environment.
2. Packages from the `requirements` attribute.
3. Packages from `required_integrations` and stack requirements.
*Note: The intermediate image may also be used directly to execute pipeline steps depending on the Docker settings configuration.* ### Example Code ```python docker_settings = DockerSettings( dockerfile="/path/to/dockerfile", build_context_root="/path/to/build/context", parent_image_build_config={ "build_options": ..., "dockerignore": ... } ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` ================================================== === File: docs/book/how-to/customize-docker-builds/how-to-reuse-builds.md === ### Summary: Reusing Builds in ZenML #### Overview This documentation explains how to reuse builds in ZenML to enhance pipeline efficiency. A build encapsulates a pipeline and its stack, including Docker images and optionally the pipeline code. #### What is a Build? A build is a combination of a pipeline and the stack it runs on, containing necessary Docker images and configurations. You can list builds using the CLI: ```bash zenml pipeline builds list --pipeline_id='startswith:ab53ca' ``` To create a build manually: ```bash zenml pipeline build --stack vertex-stack my_module.my_pipeline_instance ``` #### Reusing Builds ZenML automatically reuses existing builds that match the pipeline and stack. You can specify a build ID in the pipeline configuration to force a specific build. Note that reusing a build executes the code in the Docker image, not local changes. To include local code changes, disconnect your code from the build by registering a code repository or using the artifact store. #### Using the Artifact Store ZenML can upload your code to the artifact store by default if no code repository is detected. #### Code Repositories Connecting a git repository can speed up Docker builds. This allows ZenML to build images without source files and download them before execution. If a clean repository state is maintained, ZenML will automatically reuse builds without needing to specify a build ID. Ensure the relevant integrations (e.g., GitHub) are installed: ```sh zenml integration install github ``` #### Detecting Local Code Repositories ZenML checks if the files used in the pipeline are tracked in registered code repositories by computing the source root and verifying its inclusion in a local checkout. #### Tracking Code Versions If a local repository is detected, ZenML stores the current commit reference for the pipeline run, provided the local checkout is clean. This ensures the pipeline runs with the exact code from the repository. To ignore untracked files, set the environment variable: ```bash export ZENML_CODE_REPOSITORY_IGNORE_UNTRACKED_FILES=True ``` #### Best Practices - Ensure the local checkout is clean and the latest commit is pushed to the remote repository for successful file downloads. - For options to enforce or disable file downloading, refer to the relevant documentation. This guide provides essential practices for maximizing build reuse and efficiency in ZenML pipelines. ================================================== === File: docs/book/how-to/customize-docker-builds/specify-pip-dependencies-and-apt-packages.md === # Summary of Specifying Pip Dependencies and Apt Packages The configuration for specifying pip and apt dependencies is applicable only in remote pipelines, not local ones. When using a remote orchestrator, a Dockerfile is generated at runtime to build the Docker image. ## DockerSettings Import `DockerSettings` from `zenml.config` to manage dependencies. 
ZenML installs all required packages by default, but you can specify additional packages in several ways: 1. **Replicate Local Environment**: ```python docker_settings = DockerSettings(replicate_local_python_environment="pip_freeze") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 2. **Custom Command for Requirements**: ```python docker_settings = DockerSettings(replicate_local_python_environment=[ "poetry", "export", "--extras=train", "--format=requirements.txt" ]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 3. **Specify Requirements in Code**: ```python docker_settings = DockerSettings(requirements=["torch==1.12.0", "torchvision"]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 4. **Use a Requirements File**: ```python docker_settings = DockerSettings(requirements="/path/to/requirements.txt") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 5. **Specify ZenML Integrations**: ```python from zenml.integrations.constants import PYTORCH, EVIDENTLY docker_settings = DockerSettings(required_integrations=[PYTORCH, EVIDENTLY]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 6. **Specify Apt Packages**: ```python docker_settings = DockerSettings(apt_packages=["git"]) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 7. **Disable Automatic Stack Requirements Installation**: ```python docker_settings = DockerSettings(install_stack_requirements=False) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` 8. **Custom Docker Settings for Steps**: ```python docker_settings = DockerSettings(requirements=["tensorflow"]) @step(settings={"docker": docker_settings}) def my_training_step(...): ... ``` ## Installation Order ZenML installs packages in the following order: - Local Python environment packages - Stack requirements (if not disabled) - Required integrations - Specified requirements ## Additional Installer Arguments You can specify additional arguments for the Python package installer: ```python docker_settings = DockerSettings(python_package_installer_args={"timeout": 1000}) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` ## Experimental: Use of `uv` To use `uv` for faster package installation: ```python docker_settings = DockerSettings(python_package_installer="uv") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` Note: `uv` is experimental and may lead to installation errors; switch back to `pip` if needed. For more details on `uv` with PyTorch, refer to the [Astral Docs](https://docs.astral.sh/uv/guides/integration/pytorch/). ================================================== === File: docs/book/how-to/customize-docker-builds/docker-settings-on-a-step.md === ### Summary of Docker Settings Customization in ZenML You can customize Docker settings at the step level in a ZenML pipeline, allowing different steps to use distinct Docker images. By default, all steps inherit the Docker image defined at the pipeline level. To specify a different image for a step, use the `DockerSettings` in the step decorator or within the configuration file. #### Step Decorator Example ```python from zenml import step from zenml.config import DockerSettings @step( settings={ "docker": DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime" ) } ) def training(...): ... 
``` #### Configuration File Example ```yaml steps: training: settings: docker: parent_image: pytorch/pytorch:2.2.0-cuda11.8-cudnn8-runtime required_integrations: - gcp - github requirements: - zenml - numpy ``` This customization allows for flexibility in managing dependencies and environments for different steps within a pipeline. ================================================== === File: docs/book/how-to/customize-docker-builds/define-where-an-image-is-built.md === ### ZenML Image Builder Overview ZenML executes pipeline steps sequentially in the local Python environment. For remote orchestrators or step operators, it builds Docker images to run pipelines in isolated environments. By default, execution environments are created locally using the local Docker client, which requires Docker installation and permissions. ZenML provides **image builders**, a specialized stack component, to build and push Docker images in a dedicated image builder environment. If no image builder is configured, ZenML defaults to the local image builder, ensuring consistency across builds. The image builder environment aligns with the client environment. Users do not need to interact directly with image builders in their code. As long as the desired image builder is part of the active ZenML stack, it will be automatically utilized by any component requiring container image builds. ================================================== === File: docs/book/how-to/manage-zenml-server/README.md === # Manage Your ZenML Server This section provides guidelines for upgrading your ZenML server, using it in production, and troubleshooting. It includes: - **Best Practices for Upgrading**: Recommended steps for upgrading your ZenML server. - **Production Usage Tips**: Strategies for effectively using ZenML in a production environment. - **Troubleshooting**: Common issues and their resolutions. - **Migration Guides**: Instructions for transitioning between specific versions of ZenML. ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/how-to/manage-zenml-server/best-practices-upgrading-zenml.md === ### Best Practices for Upgrading ZenML Upgrading ZenML can be smooth if best practices are followed. Here are key strategies for both server and code upgrades. #### Upgrading Your Server 1. **Data Backups**: - **Database Backup**: Always back up your MySQL database before upgrading. - **Automated Backups**: Set up daily automated backups using services like AWS RDS, Google Cloud SQL, or Azure Database for MySQL. 2. **Upgrade Strategies**: - **Staged Upgrade**: Use two ZenML server instances (old and new) to migrate services gradually. - **Team Coordination**: If multiple teams share an instance, coordinate upgrade timing to reduce disruption. - **Separate ZenML Servers**: Consider dedicated servers for teams needing different upgrade schedules. ZenML Pro supports multi-tenancy for this purpose. 3. **Minimizing Downtime**: - **Upgrade Timing**: Schedule upgrades during low-activity periods. - **Avoid Mid-Pipeline Upgrades**: Be cautious of automated upgrades that may disrupt long-running pipelines. #### Upgrading Your Code 1. **Testing and Compatibility**: - **Local Testing**: Test locally after upgrading (`pip install zenml --upgrade`) and run old pipelines for compatibility. - **End-to-End Testing**: Develop simple tests to ensure compatibility with your pipeline code. 
Refer to ZenML’s [test suite](https://github.com/zenml-io/zenml/tree/main/tests). - **Artifact Compatibility**: Be cautious with pickle-based materializers. Load older artifacts using: ```python from zenml.client import Client artifact = Client().get_artifact_version('YOUR_ARTIFACT_ID') loaded_artifact = artifact.load() ``` 2. **Dependency Management**: - **Python Version**: Ensure your Python version is compatible with the new ZenML version. Refer to the [installation guide](../../getting-started/installation.md). - **External Dependencies**: Check for compatibility issues with external dependencies, especially if older versions are no longer supported. Review the [release notes](https://github.com/zenml-io/zenml/releases). 3. **Handling API Changes**: - **Changelog Review**: Review the [changelog](https://github.com/zenml-io/zenml/releases) for breaking changes and new syntax. - **Migration Scripts**: Use available [migration scripts](migration-guide/migration-guide.md) for database schema changes. By adhering to these practices, you can minimize risks and ensure a smoother upgrade process for your ZenML server. Adapt these guidelines to fit your specific environment and infrastructure. ================================================== === File: docs/book/how-to/manage-zenml-server/using-zenml-server-in-prod.md === # Using ZenML Server in Production This guide outlines best practices for deploying ZenML server in production environments, focusing on performance, scalability, and reliability. ## Autoscaling Replicas To handle larger pipelines and high traffic, enable autoscaling based on your deployment environment: - **Kubernetes with Helm**: ```yaml autoscaling: enabled: true minReplicas: 1 maxReplicas: 10 targetCPUUtilizationPercentage: 80 ``` - **ECS**: Use the ECS console to enable autoscaling for your ZenML service, setting minimum and maximum tasks. - **Cloud Run**: Set a minimum instance count to 1 in the Cloud Run console to maintain "warm" instances. - **Docker Compose**: Scale your service using: ```bash docker compose up --scale zenml-server=N ``` ## High Connection Pool Values Increase server performance by adjusting thread pool size: ```yaml zenml: threadPoolSize: 100 ``` Ensure `zenml.database.poolSize` and `zenml.database.maxOverflow` are set to accommodate the thread pool size. ## Scaling the Backing Database Monitor and scale your database based on: - **CPU Utilization**: Scale if consistently above 50%. - **Freeable Memory**: Scale if below 100-200 MB. ## Setting Up Ingress/Load Balancer Securely expose your ZenML server: - **Kubernetes with Helm**: ```yaml zenml: ingress: enabled: true className: "nginx" ``` - **ECS**: Use Application Load Balancers as per AWS documentation. - **Cloud Run**: Utilize Cloud Load Balancing following GCP documentation. - **Docker Compose**: Set up an NGINX reverse proxy. ## Monitoring Implement monitoring tools based on your environment: - **Kubernetes**: Use Prometheus and Grafana. Example query for CPU utilization: ``` sum by(namespace) (rate(container_cpu_usage_seconds_total{namespace=~"zenml.*"}[5m])) ``` - **ECS**: Use CloudWatch for metrics like CPU and memory utilization. - **Cloud Run**: Utilize Cloud Monitoring for metrics in the console. ## Backups Establish a backup strategy to protect critical data: - Automate backups with a retention period (e.g., 30 days). - Export data periodically to external storage (e.g., S3, GCS). - Perform manual backups before server upgrades. 
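One way to script the periodic export mentioned above, assuming a MySQL backing store and an S3 bucket (the host, user, and bucket names are placeholders; `mysqldump` is expected to pick up credentials from an option file or the `MYSQL_PWD` environment variable):

```python
import subprocess
from datetime import datetime, timezone

import boto3

# Placeholder values - substitute your own database host, user, and bucket.
DB_HOST = "zenml-db.example.internal"
DB_USER = "zenml"
DB_NAME = "zenml"
BUCKET = "my-zenml-backups"


def backup_zenml_database() -> None:
    """Dump the ZenML MySQL database and upload the dump to S3."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    dump_file = f"zenml-{stamp}.sql"
    with open(dump_file, "w") as f:
        # Credentials are read from ~/.my.cnf or MYSQL_PWD rather than the command line.
        subprocess.run(
            ["mysqldump", "-h", DB_HOST, "-u", DB_USER, DB_NAME],
            stdout=f,
            check=True,
        )
    boto3.client("s3").upload_file(dump_file, BUCKET, f"backups/{dump_file}")


if __name__ == "__main__":
    backup_zenml_database()
```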
==================================================

=== File: docs/book/how-to/manage-zenml-server/upgrade-zenml-server.md ===

### ZenML Server Upgrade Guide

#### Overview
How you upgrade your ZenML server depends on the deployment method. Always review the [best practices for upgrading ZenML](best-practices-upgrading-zenml.md) before proceeding, and upgrade promptly after a new version is released to benefit from improvements and fixes.

#### Upgrade Methods

**1. Docker**
To upgrade using Docker:
- Ensure data is persisted (on persistent storage or an external MySQL instance) and optionally back up before upgrading.
- Delete the existing container:
```bash
docker ps  # find your container ID
docker stop <CONTAINER_ID>
docker rm <CONTAINER_ID>
```
- Deploy the new version of the `zenml-server` image:
```bash
docker run -it -d -p 8080:8080 --name <CONTAINER_NAME> zenmldocker/zenml-server:<VERSION>
```

**2. Kubernetes with Helm**
- **In-place Upgrade** (no configuration changes):
```bash
helm -n <namespace> upgrade zenml-server oci://public.ecr.aws/zenml/zenml --version <VERSION> --reuse-values
```
- **Upgrade with Configuration Changes**:
  1. Extract the current configuration:
  ```bash
  helm -n <namespace> get values zenml-server > custom-values.yaml
  ```
  2. Modify `custom-values.yaml` as needed.
  3. Upgrade using the modified values:
  ```bash
  helm -n <namespace> upgrade zenml-server oci://public.ecr.aws/zenml/zenml --version <VERSION> -f custom-values.yaml
  ```

> **Note**: Avoid changing the container image tag in the Helm chart unless necessary, as the chart is tested with the default tag.

#### Important Notes
- Downgrading is unsupported and may cause issues.
- The Python client version should match the server version.

==================================================

=== File: docs/book/how-to/manage-zenml-server/troubleshoot-your-deployed-server.md ===

# Troubleshooting ZenML Deployment
This document provides troubleshooting tips for common issues encountered during ZenML deployment.

## Viewing Logs

### Kubernetes
To view logs for the ZenML server in a Kubernetes deployment:
1. List all pods:
```bash
kubectl -n <namespace> get pods
```
2. If pods aren't running, retrieve logs for all pods:
```bash
kubectl -n <namespace> logs -l app.kubernetes.io/name=zenml
```
3. For logs of a specific container (either `zenml-db-init` or `zenml`):
```bash
kubectl -n <namespace> logs -l app.kubernetes.io/name=zenml -c <CONTAINER_NAME>
```
Use `--tail` to limit log lines or `--follow` for real-time logs.

### Docker
For Docker deployments, use the following commands based on your deployment method:
- **Local Docker CLI**:
```shell
zenml logs -f
```
- **Manual Docker Run**:
```shell
docker logs zenml -f
```
- **Docker Compose**:
```shell
docker compose -p zenml logs -f
```

## Fixing Database Connection Problems
Common MySQL connection issues and solutions:
- **Access Denied**:
```
ERROR 1045 (28000): Access denied for user
```
Ensure the username and password are correct.
- **Can't Connect to MySQL**:
```
ERROR 2003 (HY000): Can't connect to MySQL server on '<host>'
```
Verify the host configuration. Test the connection:
```bash
mysql -h <host> -u <username> -p
```
For Kubernetes, use `kubectl port-forward` to connect to the MySQL database locally.

## Fixing Database Initialization Problems
If you encounter `Revision not found` errors after migrating between versions, recreate the database:
1. Log in to MySQL:
```bash
mysql -h <host> -u <username> -p
```
2. Drop the existing database:
```sql
drop database <database>;
```
3. Create a new database:
```sql
create database <database>;
```
4. Restart the Kubernetes pods or Docker container to reinitialize the database.
================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-thirty.md === ### Migration Guide: ZenML 0.20.0-0.23.0 to 0.30.0-0.39.1 **Warning:** Migrating to ZenML `0.30.0` involves non-reversible database changes. Downgrading to versions `<=0.23.0` is not possible post-migration. If using an older version, first consult the [0.20.0 Migration Guide](migration-zero-twenty.md) to avoid database migration issues. **Key Changes:** - The `ml-pipelines-sdk` dependency has been removed. - Pipeline runs and artifacts are now stored natively in the ZenML database. **Migration Steps:** 1. Install ZenML `0.30.0`: ```bash pip install zenml==0.30.0 zenml version # Should output 0.30.0 ``` 2. The database migration will occur automatically upon executing any `zenml ...` CLI command after installation. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-twenty.md === # Migration Guide: ZenML 0.13.2 to 0.20.0 **Last Updated: 2023-07-24** ZenML 0.20.0 introduces significant architectural changes that are not backward compatible. This guide outlines the migration process for existing ZenML stacks and pipelines. ## Key Changes: - **Metadata Store**: ZenML now manages its own metadata, eliminating the need for external Metadata Stores. Users must migrate to a ZenML server deployment if using remote stores. - **ZenML Dashboard**: A new dashboard is available for all deployments. - **Profiles Removed**: ZenML Profiles are replaced by Projects. Existing Profiles must be manually migrated. - **Decoupled Configuration**: Stack component configuration is now separate from implementation, requiring updates for custom components. - **Collaborative Features**: The ZenML server allows sharing of stacks and components among users. ## Migration Steps: ### 1. **Backup and Upgrade** - Backup existing metadata stores. - Upgrade ZenML using: ```bash pip install zenml==0.20.0 ``` ### 2. **Migrate Pipeline Runs** Use the CLI command to migrate pipeline runs: - For SQLite: ```bash zenml pipeline runs migrate PATH/TO/LOCAL/STORE/metadata.db ``` - For other databases (MySQL): ```bash zenml pipeline runs migrate DATABASE_NAME \ --database_type=mysql \ --mysql_host=URL/TO/MYSQL \ --mysql_username=MYSQL_USERNAME \ --mysql_password=MYSQL_PASSWORD ``` ### 3. **Migrate Profiles** 1. Update to ZenML 0.20.0 (this invalidates existing Profiles). 2. Connect to a ZenML server: ```bash zenml connect ``` 3. Use the CLI commands to migrate: ```bash zenml profile list zenml profile migrate /path/to/profile ``` ### 4. **Configuration Changes** - Rename `Repository` to `Client` in your code. - Rename `BaseStepConfig` to `BaseParameters`. - Use the new `BaseSettings` class for pipeline configurations. ### 5. **Deprecations** - Remove `enable_xxx` decorators; use settings directly in the `@step` decorator. - Replace `pipeline.with_config(...)` with `pipeline.run(config_path=...)`. ### 6. **Post-Execution Changes** - Use new methods for fetching pipelines and runs: ```python from zenml.post_execution import get_pipelines, get_pipeline ``` ## Important Notes: - The ZenML Dashboard currently only displays information from the `default` Project. - Local ZenML servers cannot track remote runs unless configured for cloud access. - Future changes may include moving the secrets manager out of the stack. 
For detailed deployment options and further instructions, refer to the [ZenML deployment documentation](../../../getting-started/deploying-zenml/README.md). For bug reports or feature requests, engage with the ZenML community on [Slack](https://zenml.io/slack) or submit a [GitHub Issue](https://github.com/zenml-io/zenml/issues/new/choose). ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-guide.md === ### ZenML Migration Guide Migrations are required for ZenML releases with breaking changes, specifically for minor version increments (e.g., `0.X` to `0.Y`) and major version increments (e.g., `0.1.X` to `0.2.X`). #### Release Type Examples - **No Breaking Changes**: `0.40.2` to `0.40.3` (no migration needed) - **Minor Breaking Changes**: `0.40.3` to `0.41.0` (migration required) - **Major Breaking Changes**: `0.39.1` to `0.40.0` (significant code changes) #### Major Migration Guides Follow these guides sequentially for major version migrations: - [0.13.2 → 0.20.0](migration-zero-twenty.md) - [0.23.0 → 0.30.0](migration-zero-thirty.md) - [0.39.1 → 0.41.0](migration-zero-forty.md) - [0.58.2 → 0.60.0](migration-zero-sixty.md) #### Release Notes For minor breaking changes (e.g., `0.40.3` to `0.41.0`), refer to the [ZenML Release Notes](https://github.com/zenml-io/zenml/releases) for details on changes. ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-sixty.md === ### Migration Guide: ZenML 0.58.2 to 0.60.0 (Pydantic 2) **Overview**: ZenML has upgraded to Pydantic v2, introducing critical updates that may affect user experience due to stricter validation and dependency changes. #### Key Changes: - **Pydantic Upgrade**: ZenML now uses Pydantic v2, which enhances performance and introduces new features. Users may encounter new validation errors. - **SQLModel Upgrade**: Updated from version `0.0.8` to `0.0.18` to ensure compatibility with Pydantic v2, necessitating an upgrade of SQLAlchemy from v1 to v2. Refer to [SQLAlchemy migration guide](https://docs.sqlalchemy.org/en/20/changelog/migration_20.html) for details. #### Integration Changes: - **Airflow**: Removed dependencies due to Airflow's continued use of SQLAlchemy v1. Users must run Airflow in a separate environment. - **AWS**: Updated SageMaker to version `2.172.0` to support `protobuf` 4. - **Evidently**: Updated to version `0.4.16` for Pydantic v2 compatibility. - **Feast**: Removed the extra `redis` dependency for compatibility. - **GCP & Kubeflow**: Upgraded `kfp` dependency to v2, removing Pydantic dependency. Functional changes may occur; see [Kubeflow migration guide](https://www.kubeflow.org/docs/components/pipelines/v2/migration/). - **Great Expectations**: Updated dependency to `great-expectations>=0.17.15,<1.0` for Pydantic v2 support. - **MLflow**: Compatible with both Pydantic versions, but installation order may cause downgrades to v1. - **Label Studio**: Updated to version 1.0, now supporting Pydantic v2. - **Skypilot**: The `skypilot[azure]` integration is deactivated due to incompatibility with `azurecli`. - **TensorFlow**: Requires TensorFlow `>=2.12.0` due to dependency changes. Issues may arise on Python 3.8. - **Tekton**: Updated to use `kfp` v2, aligning with other integrations. #### Recommendations: - Users may encounter dependency issues upon upgrading to ZenML 0.60.0, especially with integrations not supporting Pydantic v2. 
It is advised to set up a fresh Python environment for a smoother transition. For further assistance, users can reach out via [GitHub](https://github.com/zenml-io/zenml) or [Slack](https://zenml.io/slack-invite). ================================================== === File: docs/book/how-to/manage-zenml-server/migration-guide/migration-zero-forty.md === ### Migration Guide: ZenML 0.39.1 to 0.41.0 ZenML versions 0.40.0 to 0.41.0 introduced a new syntax for defining steps and pipelines. The old syntax is deprecated and will be removed in future versions. #### Overview **Old Syntax Example:** ```python from typing import Optional from zenml.steps import BaseParameters, Output, StepContext, step from zenml.pipelines import pipeline class MyStepParameters(BaseParameters): param_1: int param_2: Optional[float] = None @step def my_step(params: MyStepParameters, context: StepContext) -> Output(int_output=int, str_output=str): result = int(params.param_1 * (params.param_2 or 1)) result_uri = context.get_output_artifact_uri() return result, result_uri @pipeline def my_pipeline(my_step): my_step() step_instance = my_step(params=MyStepParameters(param_1=17)) pipeline_instance = my_pipeline(my_step=step_instance) pipeline_instance.run(schedule=Schedule(...)) ``` **New Syntax Example:** ```python from typing import Annotated, Optional, Tuple from zenml import get_step_context, pipeline, step @step def my_step(param_1: int, param_2: Optional[float] = None) -> Tuple[Annotated[int, "int_output"], Annotated[str, "str_output"]]: result = int(param_1 * (param_2 or 1)) result_uri = get_step_context().get_output_artifact_uri() return result, result_uri @pipeline def my_pipeline(): my_step(param_1=17) my_pipeline = my_pipeline.with_options(enable_cache=False) my_pipeline() ``` ### Key Changes 1. **Defining Steps:** - Old: Use `BaseParameters` for parameters. - New: Directly define parameters in the step function or use `pydantic.BaseModel`. 2. **Calling Steps:** - Old: Use `my_step.entrypoint()`. - New: Call the step directly with `my_step()`. 3. **Defining Pipelines:** - Old: Steps are arguments of the pipeline function. - New: Call steps directly within the pipeline function. 4. **Configuring Pipelines:** - Old: Use `pipeline_instance.configure(...)`. - New: Use `with_options(...)` method. 5. **Running Pipelines:** - Old: Create an instance and call `run(...)`. - New: Call the pipeline directly. 6. **Scheduling Pipelines:** - Old: Set schedule in `run(...)`. - New: Use `with_options(...)` to set the schedule. 7. **Fetching Pipeline Runs:** - Old: Use `get_runs()`. - New: Access `last_run` directly from the pipeline object. 8. **Controlling Execution Order:** - Old: Use `step.after(...)`. - New: Pass `after` argument when calling a step. 9. **Multiple Outputs:** - Old: Use `Output` class. - New: Use `Tuple` with optional custom names. 10. **Accessing Run Information:** - Old: Pass `StepContext` as an argument. - New: Use `get_step_context()` to access run information. For more detailed information on specific topics, refer to the respective sections in the ZenML documentation. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-in-with-your-user-interactive.md === ### ZenML Server Connection Guide To authenticate with the ZenML Server using the ZenML CLI and web-based login, execute: ```bash zenml login https://... ``` This command initiates a browser-based validation process. 
You can choose to trust your device; if you do not, a 24-hour token is issued. Trusting the device results in a 30-day token. **Note:** Device management for ZenML Pro tenants is not currently supported but is planned for future updates. To view authorized devices, use: ```bash zenml authorized-device list ``` For detailed information on a specific device: ```bash zenml authorized-device describe <DEVICE_ID> ``` To enhance security, invalidate a token with: ```bash zenml authorized-device lock <DEVICE_ID> ``` ### Summary Steps: 1. Run `zenml login <SERVER_URL>` to connect. 2. Decide whether to trust the device. 3. List devices with `zenml authorized-device list`. 4. Lock a device with `zenml authorized-device lock <DEVICE_ID>`. ### Important Notice Using the ZenML CLI ensures secure interaction with your ZenML tenants. Regularly manage device trust levels and revoke access by locking devices when necessary, as each token represents potential access to sensitive data and infrastructure. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/README.md === # Connecting to ZenML Once ZenML is deployed, there are several methods to connect to the server. For detailed connection methods, refer to the relevant sections in the documentation. ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-an-api-token.md === ### Connect with an API Token API tokens authenticate with the ZenML server for temporary automation tasks, valid for a maximum of 1 hour and scoped to your user account. #### Generating an API Token To generate a new API token: 1. Go to the server's Settings page in your ZenML dashboard. 2. Select "API Tokens" from the left sidebar. 3. Click "Create new token." A dialog will display your new API token upon generation. #### Programmatic Access Use the generated API tokens for programmatic access to the ZenML server's REST API. This method is ideal for quick access without using the ZenML CLI or Python client. Detailed documentation is available in the [API reference section](../../../reference/api-reference.md#using-a-short-lived-api-token). ================================================== === File: docs/book/how-to/manage-zenml-server/connecting-to-zenml/connect-with-a-service-account.md === ### ZenML Service Account and API Key Authentication To authenticate to a ZenML server from a non-interactive environment (e.g., CI/CD), create a service account and an API key: ```bash zenml service-account create <SERVICE_ACCOUNT_NAME> ``` The API key will be displayed once and cannot be retrieved later. Use the API key to connect your ZenML client via: 1. **CLI Method**: ```bash zenml login https://... --api-key ``` 2. **Environment Variables** (recommended for automated environments): ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY=<API_KEY> ``` Setting these variables allows immediate interaction with the server without needing to run `zenml login`.
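For example, a CI job could authenticate entirely through these variables and immediately issue client calls (the URL and key below are placeholders; `zenml status` is used here only as a connectivity check):

```bash
# Authenticate via environment variables -- no `zenml login` step required
export ZENML_STORE_URL=https://your-zenml-server.example.com
export ZENML_STORE_API_KEY=<SERVICE_ACCOUNT_API_KEY>

# Any subsequent client call talks to the server directly
zenml status
```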
### Managing Service Accounts and API Keys - List service accounts: ```bash zenml service-account list ``` - List API keys for a specific service account: ```bash zenml service-account api-key <SERVICE_ACCOUNT_NAME> list ``` - Describe a service account or API key: ```bash zenml service-account describe <SERVICE_ACCOUNT_NAME> zenml service-account api-key <SERVICE_ACCOUNT_NAME> describe <API_KEY_NAME> ``` ### Key Rotation and Deactivation API keys do not expire but should be rotated regularly for security: - Rotate an API key: ```bash zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate <API_KEY_NAME> ``` - Retain the old API key for a specified period (e.g., 60 minutes): ```bash zenml service-account api-key <SERVICE_ACCOUNT_NAME> rotate <API_KEY_NAME> --retain 60 ``` Deactivate a service account or API key to prevent authentication: ```bash zenml service-account update <SERVICE_ACCOUNT_NAME> --active false zenml service-account api-key <SERVICE_ACCOUNT_NAME> update <API_KEY_NAME> --active false ``` ### Summary of Steps 1. Create a service account and API key: `zenml service-account create <SERVICE_ACCOUNT_NAME>`. 2. Connect using the API key: `zenml login <SERVER_URL> --api-key` or set environment variables. 3. List service accounts and API keys. 4. Rotate API keys regularly. 5. Deactivate unused service accounts or API keys. ### Programmatic Access Use a service account's API key to obtain short-lived API tokens for programmatic access to the ZenML REST API, as detailed in the [API reference section](../../../reference/api-reference.md#using-a-service-account-and-an-api-key). ### Security Notice Regularly rotate API keys and deactivate or delete unnecessary service accounts and API keys to protect your data and infrastructure. ================================================== === File: docs/book/how-to/infrastructure-deployment/README.md === # Infrastructure and Deployment Summary This section details the infrastructure setup and deployment processes for ZenML. Key components include: 1. **Infrastructure Requirements**: Outline of necessary hardware and software prerequisites for deploying ZenML. 2. **Deployment Options**: Various methods for deploying ZenML, including: - Local deployment for development and testing. - Cloud-based deployment for scalability and production use. 3. **Configuration**: Steps to configure ZenML, including environment variables and configuration files. 4. **Integration**: Instructions for integrating ZenML with cloud services and other tools, ensuring compatibility and functionality. 5. **Best Practices**: Recommendations for optimizing deployment, including resource allocation and monitoring. 6. **Troubleshooting**: Common issues and solutions encountered during deployment. This summary encapsulates the essential technical information needed for understanding ZenML's infrastructure and deployment processes. ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/README.md === ### Integrate with Infrastructure as Code **Infrastructure as Code (IaC)** is the practice of managing and provisioning infrastructure using code rather than manual processes. This section details how to integrate ZenML with popular IaC tools like [Terraform](https://www.terraform.io/).
![ZenML stack on Terraform Registry](../../../.gitbook/assets/terraform_providers_screenshot.png) ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/terraform-stack-management.md === # Summary: Registering Existing Infrastructure with ZenML - A Guide for Terraform Users ## Overview This guide explains how to integrate ZenML with existing Terraform setups, focusing on advanced users who manage custom Terraform code. It outlines a two-phase approach: Infrastructure Deployment and ZenML Registration. ## Two-Phase Approach 1. **Infrastructure Deployment**: Create cloud resources (e.g., GCP, AWS, Azure). 2. **ZenML Registration**: Register these resources as ZenML stack components. ## Phase 1: Infrastructure Deployment Example of existing GCP infrastructure: ```hcl resource "google_storage_bucket" "ml_artifacts" { name = "company-ml-artifacts" location = "US" } resource "google_artifact_registry_repository" "ml_containers" { repository_id = "ml-containers" format = "DOCKER" } ``` ## Phase 2: ZenML Registration ### Setup the ZenML Provider Configure the ZenML provider: ```hcl terraform { required_providers { zenml = { source = "zenml-io/zenml" } } } provider "zenml" { # Load configuration from environment variables } ``` Generate an API key: ```bash zenml service-account create ``` ### Create Service Connectors Create a service connector for authentication: ```hcl resource "zenml_service_connector" "gcp_connector" { name = "gcp-${var.environment}-connector" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id service_account_json = file("service-account.json") } } ``` ### Register Stack Components Register components using a generic pattern: ```hcl locals { component_configs = { artifact_store = { type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${google_storage_bucket.ml_artifacts.name}" } } container_registry = { type = "container_registry" flavor = "gcp" configuration = { uri = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.ml_containers.repository_id}" } } orchestrator = { type = "orchestrator" flavor = "vertex" configuration = { project = var.project_id, region = var.region } } } } resource "zenml_stack_component" "components" { for_each = local.component_configs name = "existing-${each.key}" type = each.value.type flavor = each.value.flavor configuration = each.value.configuration connector_id = zenml_service_connector.gcp_connector.id } ``` ### Assemble the Stack Combine components into a stack: ```hcl resource "zenml_stack" "ml_stack" { name = "${var.environment}-ml-stack" components = { for k, v in zenml_stack_component.components : k => v.id } } ``` ## Practical Walkthrough: Registering Existing GCP Infrastructure ### Prerequisites - GCS bucket for artifacts - Artifact Registry repository - Service account for ML operations - Vertex AI enabled ### Step 1: Variables Configuration Define variables in `variables.tf`: ```hcl variable "zenml_server_url" { type = string } variable "zenml_api_key" { type = string, sensitive = true } variable "project_id" { type = string } variable "region" { type = string, default = "us-central1" } variable "environment" { type = string } variable "gcp_service_account_key" { type = string, sensitive = true } ``` ### Step 2: Main Configuration Configure providers and resources in `main.tf`: ```hcl terraform { required_providers { zenml = { source = "zenml-io/zenml" } google = { source = 
"hashicorp/google" } } } provider "zenml" { server_url = var.zenml_server_url; api_key = var.zenml_api_key } provider "google" { project = var.project_id; region = var.region } resource "google_storage_bucket" "artifacts" { name = "${var.project_id}-zenml-artifacts-${var.environment}" location = var.region } resource "google_artifact_registry_repository" "containers" { location = var.region repository_id = "zenml-containers-${var.environment}" format = "DOCKER" } resource "zenml_service_connector" "gcp" { name = "gcp-${var.environment}" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id region = var.region service_account_json = var.gcp_service_account_key } } resource "zenml_stack_component" "artifact_store" { name = "gcs-${var.environment}" type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${google_storage_bucket.artifacts.name}/artifacts" } connector_id = zenml_service_connector.gcp.id } resource "zenml_stack" "gcp_stack" { name = "gcp-${var.environment}" components = { artifact_store = zenml_stack_component.artifact_store.id container_registry = zenml_stack_component.container_registry.id orchestrator = zenml_stack_component.orchestrator.id } } ``` ### Step 3: Outputs Configuration Define outputs in `outputs.tf`: ```hcl output "stack_id" { value = zenml_stack.gcp_stack.id } output "stack_name" { value = zenml_stack.gcp_stack.name } output "artifact_store_path" { value = "${google_storage_bucket.artifacts.name}/artifacts" } output "container_registry_uri" { value = "${var.region}-docker.pkg.dev/${var.project_id}/${google_artifact_registry_repository.containers.repository_id}" } ``` ### Step 4: terraform.tfvars Configuration Create `terraform.tfvars`: ```hcl zenml_server_url = "https://your-zenml-server.com" project_id = "your-gcp-project-id" region = "us-central1" environment = "dev" ``` Store sensitive variables in environment variables: ```bash export TF_VAR_zenml_api_key="your-zenml-api-key" export TF_VAR_gcp_service_account_key=$(cat path/to/service-account-key.json) ``` ### Usage Instructions 1. Initialize Terraform: ```bash terraform init ``` 2. Install ZenML integrations: ```bash zenml integration install gcp ``` 3. Review planned changes: ```bash terraform plan ``` 4. Apply configuration: ```bash terraform apply ``` 5. Set the newly created stack as active: ```bash zenml stack set $(terraform output -raw stack_name) ``` 6. Verify configuration: ```bash zenml stack describe ``` ## Best Practices - Use appropriate IAM roles and permissions. - Follow security practices for handling credentials. - Consider Terraform workspaces for multiple environments. - Regularly back up Terraform state files. - Version control Terraform configurations (excluding sensitive files). For more details, visit the [ZenML provider](https://registry.terraform.io/providers/zenml-io/zenml/latest). ================================================== === File: docs/book/how-to/infrastructure-deployment/infrastructure-as-code/best-practices.md === # Best Practices for Using IaC with ZenML ## Overview This documentation outlines best practices for architecting scalable ML infrastructure using ZenML and Terraform, focusing on supporting multiple teams, environments, and compliance standards. ## ZenML Approach ZenML utilizes stack components as abstractions over infrastructure resources, allowing for a component-based architecture that promotes reusability and consistency. 
### Part 1: Stack Component Architecture **Problem:** Different teams require varied infrastructure configurations. **Solution:** Break down infrastructure into reusable modules. **Base Infrastructure Example:** ```hcl terraform { required_providers { zenml = { source = "zenml-io/zenml" } google = { source = "hashicorp/google" } } } resource "random_id" "suffix" { byte_length = 6 } module "base_infrastructure" { source = "./modules/base_infra" environment = var.environment project_id = var.project_id region = var.region resource_prefix = "zenml-${var.environment}-${random_id.suffix.hex}" } resource "zenml_service_connector" "base_connector" { name = "${var.environment}-base-connector" type = "gcp" auth_method = "service-account" configuration = { project_id = var.project_id region = var.region service_account_json = module.base_infrastructure.service_account_key } } resource "zenml_stack_component" "artifact_store" { name = "${var.environment}-artifact-store" type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${module.base_infrastructure.artifact_store_bucket}/artifacts" } connector_id = zenml_service_connector.base_connector.id } resource "zenml_stack" "base_stack" { name = "${var.environment}-base-stack" components = { artifact_store = zenml_stack_component.artifact_store.id } } ``` ### Part 2: Environment Management and Authentication **Problem:** Different environments require unique configurations and authentication methods. **Solution:** Use a flexible service connector setup. **Environment-Specific Connector Example:** ```hcl locals { env_config = { dev = { machine_type = "n1-standard-4", gpu_enabled = false, auth_method = "service-account", auth_configuration = { service_account_json = file("dev-sa.json") } } prod = { machine_type = "n1-standard-8", gpu_enabled = true, auth_method = "external-account", auth_configuration = { external_account_json = file("prod-sa.json") } } } } resource "zenml_service_connector" "env_connector" { name = "${var.environment}-connector" type = "gcp" auth_method = local.env_config[var.environment].auth_method dynamic "configuration" { for_each = try(local.env_config[var.environment].auth_configuration, {}) content { key = configuration.key; value = configuration.value } } } ``` ### Part 3: Resource Sharing and Isolation **Problem:** Projects require strict data isolation for compliance. **Solution:** Implement resource scoping with project isolation. **Project Isolation Example:** ```hcl locals { project_paths = { fraud_detection = "projects/fraud_detection/${var.environment}", recommendation = "projects/recommendation/${var.environment}" } } resource "zenml_stack_component" "project_artifact_stores" { for_each = local.project_paths name = "${each.key}-artifact-store" type = "artifact_store" flavor = "gcp" configuration = { path = "gs://${var.shared_bucket}/${each.value}" } connector_id = zenml_service_connector.env_connector.id } resource "zenml_stack" "project_stacks" { for_each = local.project_paths name = "${each.key}-stack" components = { artifact_store = zenml_stack_component.project_artifact_stores[each.key].id } } ``` ### Part 4: Advanced Stack Management Practices 1. **Stack Component Versioning:** ```hcl locals { stack_version = "1.2.0" } resource "zenml_stack" "versioned_stack" { name = "stack-v${local.stack_version}" } ``` 2. 
**Service Connector Management:** ```hcl resource "zenml_service_connector" "env_connector" { name = "${var.environment}-${var.purpose}-connector" type = var.connector_type auth_method = var.environment == "prod" ? "workload-identity" : "service-account" } ``` 3. **Component Configuration Management:** ```hcl locals { base_configs = { orchestrator = { location = var.region, project = var.project_id } } env_configs = { dev = { orchestrator = { machine_type = "n1-standard-4" } }, prod = { orchestrator = { machine_type = "n1-standard-8" } } } } resource "zenml_stack_component" "configured_component" { name = "${var.environment}-${var.component_type}" type = var.component_type configuration = merge(local.base_configs[var.component_type], try(local.env_configs[var.environment][var.component_type], {})) } ``` 4. **Stack Organization and Dependencies:** ```hcl module "ml_stack" { source = "./modules/ml_stack" depends_on = [module.base_infrastructure, module.security] components = { artifact_store = module.storage.artifact_store_id } } ``` 5. **State Management:** ```hcl terraform { backend "gcs" { prefix = "terraform/state" } } data "terraform_remote_state" "infrastructure" { backend = "gcs" config = { bucket = var.state_bucket, prefix = "terraform/infrastructure" } } ``` ## Conclusion Using ZenML and Terraform for ML infrastructure allows for a flexible, maintainable, and secure environment. The ZenML provider simplifies infrastructure management while adhering to best practices. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/service-connectors-guide.md === # Service Connectors Guide Summary This documentation provides a comprehensive guide for managing Service Connectors in ZenML, enabling connections to external resources. Key sections include: ## Overview - **Service Connectors**: Facilitate authentication and connection to various external resources. - **Terminology**: Introduces essential terms like Service Connector Types, Resource Types, and Resource Names. ## Key Sections 1. **Terminology**: Defines concepts related to Service Connectors, including: - **Service Connector Types**: Templates for specific implementations (e.g., AWS, GCP). - **Resource Types**: Logical classifications of resources (e.g., `kubernetes-cluster`, `docker-registry`). - **Resource Names**: Unique identifiers for resource instances. 2. **Service Connector Types**: - Examples include AWS, GCP, Azure, Kubernetes, and Docker connectors. - Commands to list and describe types: ```sh zenml service-connector list-types zenml service-connector describe-type <CONNECTOR_TYPE> ``` 3. **Registering Service Connectors**: - Connectors can be registered in multi-type, multi-instance, or single-instance modes. - Example command for a multi-type connector: ```sh zenml service-connector register <CONNECTOR_NAME> --type <CONNECTOR_TYPE> --auto-configure ``` 4. **Connecting Stack Components**: - Stack Components can connect to resources via Service Connectors. - Use the interactive CLI for ease: ```sh zenml artifact-store connect <COMPONENT_NAME> -i ``` 5. **Verification**: - Verify the configuration and credentials of Service Connectors. - Commands for verification: ```sh zenml service-connector verify <CONNECTOR_NAME> ``` 6. **Local Client Configuration**: - Configure local CLI tools (e.g., `kubectl`, Docker) with credentials from Service Connectors. - Example command: ```sh zenml service-connector login <CONNECTOR_NAME> --resource-type <RESOURCE_TYPE> --resource-id <RESOURCE_ID> ``` 7. **Resource Discovery**: - Discover accessible resources via Service Connectors.
- Commands for listing resources: ```sh zenml service-connector list-resources zenml service-connector list-resources --resource-type ``` ## Best Practices - Follow security best practices for authentication methods. - Use auto-configuration where applicable to streamline setup. ## End-to-End Examples - Detailed examples for AWS, GCP, and Azure Service Connectors are provided to illustrate the complete workflow from registration to resource access. This guide serves as a foundational resource for effectively managing Service Connectors in ZenML, ensuring secure and efficient connections to external services. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/gcp-service-connector.md === ### GCP Service Connector Overview The ZenML GCP Service Connector enables authentication and access to various GCP resources, including GCS buckets, GKE clusters, and GCR registries. It supports multiple authentication methods: GCP user accounts, service accounts, OAuth 2.0 tokens, and implicit authentication. By default, it issues short-lived OAuth 2.0 tokens for enhanced security. ### Key Features - **Authentication Methods**: Supports implicit authentication, user accounts, service accounts, and OAuth 2.0 tokens. - **Automatic Configuration**: Can auto-detect credentials configured via the GCP CLI. - **Resource Types**: Includes generic GCP resources, GCS buckets, GKE clusters, and Docker registries. ### Prerequisites - Install the GCP Service Connector: - `pip install "zenml[connectors-gcp]"` - `zenml integration install gcp` - GCP CLI is not required for basic functionality but recommended for auto-configuration. ### Resource Types 1. **Generic GCP Resource**: Provides access to any GCP service using OAuth 2.0 tokens. 2. **GCS Bucket**: Requires specific permissions (e.g., `storage.buckets.list`, `storage.objects.create`). 3. **GKE Cluster**: Requires permissions like `container.clusters.list`. 4. **GAR/GCR**: Supports both Google Artifact Registry and legacy Google Container Registry. ### Authentication Methods - **Implicit Authentication**: Automatically discovers credentials from environment variables or local files. - **User Account**: Uses long-lived credentials; generates temporary OAuth 2.0 tokens by default. - **Service Account**: Similar to user account but uses service account keys. - **Impersonation**: Generates temporary credentials by impersonating another service account. - **External Account**: Uses GCP Workload Identity for authentication with AWS IAM or Azure AD credentials. ### Example Commands - **List Service Connector Types**: ```shell zenml service-connector list-types --type gcp ``` - **Register a GCP Service Connector**: ```shell zenml service-connector register gcp-implicit --type gcp --auth-method implicit --auto-configure ``` - **Describe a Service Connector**: ```shell zenml service-connector describe gcp-implicit ``` ### Auto-Configuration Auto-configuration allows the GCP Service Connector to use credentials from the local GCP CLI. For example: ```shell zenml service-connector register gcp-auto --type gcp --auto-configure ``` ### Local Client Provisioning The GCP Service Connector can configure local `gcloud`, `kubectl`, and Docker CLIs with credentials. 
For example, to configure `kubectl`: ```shell zenml service-connector login gcp-user-account --resource-type kubernetes-cluster --resource-id zenml-test-cluster ``` ### Stack Components Integration The GCP Service Connector can connect various ZenML Stack Components, such as: - GCS Artifact Store to a GCS bucket. - Kubernetes Orchestrator to a GKE cluster. - GCP Container Registry to a GAR or GCR. ### End-to-End Example 1. **Install ZenML and Configure GCP CLI**: ```shell zenml integration install -y gcp gcloud auth application-default login ``` 2. **Register a Multi-Type GCP Service Connector**: ```shell zenml service-connector register gcp-demo-multi --type gcp --auto-configure ``` 3. **Connect Stack Components**: - Register and connect a GCS Artifact Store: ```shell zenml artifact-store register gcs-zenml-bucket-sl --flavor gcp --path=gs://zenml-bucket-sl zenml artifact-store connect gcs-zenml-bucket-sl --connector gcp-demo-multi ``` - Register and connect a Kubernetes Orchestrator: ```shell zenml orchestrator register gke-zenml-test-cluster --flavor kubernetes --synchronous=true --kubernetes_namespace=zenml-workloads zenml orchestrator connect gke-zenml-test-cluster --connector gcp-demo-multi ``` - Register and connect a GCP Container Registry: ```shell zenml container-registry register gcr-zenml-core --flavor gcp --uri=europe-west1-docker.pkg.dev/zenml-core/test zenml container-registry connect gcr-zenml-core --connector gcp-demo-multi ``` This summary encapsulates the essential information about configuring and using the GCP Service Connector in ZenML, including commands and examples for practical implementation. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/README.md === ### ZenML Service Connectors Overview ZenML allows seamless integration with cloud providers and infrastructure services, facilitating the operation of MLOps platforms by managing authentication and authorization complexities. #### Key Concepts: - **Service Connectors**: Abstractions that simplify the connection between ZenML and various external services (e.g., AWS, GCP, Azure, Kubernetes) while implementing security best practices. - **Authentication Complexity**: Managing credentials across multiple services can lead to security risks and operational challenges. Service Connectors mitigate these issues by centralizing credential management and providing short-lived tokens. #### Use Case Example: To connect ZenML to an AWS S3 bucket using the AWS Service Connector, follow these steps: 1. **List Available Service Connector Types**: ```sh zenml service-connector list-types ``` 2. **Register a Service Connector**: Automatically configure the AWS Service Connector using existing AWS CLI credentials: ```sh zenml service-connector register aws-s3 --type aws --auto-configure --resource-type s3-bucket ``` 3. **Connect Stack Components**: Create and connect an S3 Artifact Store to the registered Service Connector: ```sh zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles zenml artifact-store connect s3-zenfiles --connector aws-s3 ``` 4. **Run a Simple Pipeline**: Example pipeline code: ```python from zenml import step, pipeline @step def simple_step_one() -> str: return "Hello World!" @step def simple_step_two(msg: str) -> None: print(msg) @pipeline def simple_pipeline() -> None: message = simple_step_one() simple_step_two(msg=message) if __name__ == "__main__": simple_pipeline() ``` 5. 
**Execute the Pipeline**: ```sh python run.py ``` #### Alternatives to Service Connectors: 1. **Direct Credential Embedding** (not recommended for security): ```sh zenml artifact-store register s3 --flavor s3 --path=s3://BUCKET_NAME --key=AWS_ACCESS_KEY --secret=AWS_SECRET_KEY ``` 2. **Using ZenML Secrets**: ```sh zenml secret create aws --aws_access_key_id=AWS_ACCESS_KEY --aws_secret_access_key=AWS_SECRET_KEY zenml artifact-store register s3 --flavor s3 --path=s3://BUCKET_NAME --authentication_secret=aws ``` #### Security Best Practices: - Avoid embedding long-lived credentials directly in Stack Components. - Use Service Connectors for better security and management of temporary credentials. - Ensure proper IAM permissions are configured for accessing resources. #### Additional Resources: - [Service Connector Guide](./service-connectors-guide.md) - [Security Best Practices](./best-security-practices.md) - [Docker Service Connector](./docker-service-connector.md) - [Kubernetes Service Connector](./kubernetes-service-connector.md) - [AWS Service Connector](./aws-service-connector.md) - [GCP Service Connector](./gcp-service-connector.md) - [Azure Service Connector](./azure-service-connector.md) This summary encapsulates the essential details of using ZenML Service Connectors, providing a clear path for integration and best practices for secure operations. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/best-security-practices.md === ### Summary of Best Practices for Service Connector Authentication Methods Service Connectors for cloud providers support various authentication methods, but there is no unified standard. This document outlines best practices for selecting authentication methods based on security and usability. #### Key Authentication Methods 1. **Username and Password** - Avoid using primary account passwords for authentication. Use session tokens, API keys, or API tokens instead. - Passwords should not be shared or used for automated workloads. Cloud platforms typically require exchanging passwords for long-lived credentials. 2. **Implicit Authentication** - Provides immediate access to resources using locally stored credentials, environment variables, or configuration files. - Disabled by default due to security risks. Enable by setting `ZENML_ENABLE_IMPLICIT_AUTH_METHODS` to `true`. - Caveats include limited portability and accessibility across different environments. 3. **Long-lived Credentials (API Keys, Account Keys)** - Ideal for production environments. Avoid sharing user credentials; prefer service credentials. - Different cloud providers have their own long-lived credential types: - **AWS:** Account Access Keys, IAM User Access Keys. - **GCP:** User Account Credentials, Service Account Credentials. - Use service credentials for automated processes to enforce the least-privilege principle. 4. **Generating Temporary and Down-scoped Credentials** - Temporary credentials are issued from long-lived credentials, enhancing security by limiting exposure. - Down-scoped credentials restrict permissions to the minimum necessary for specific resources. 5. **Impersonating Accounts and Assuming Roles** - Offers flexibility and control but requires setup of multiple permission-bearing accounts. - Use a primary account with minimal permissions to impersonate secondary accounts with necessary permissions. 6. **Short-lived Credentials** - Temporary credentials can be generated manually or through auto-configuration. 
They expire after a set time, making them impractical for long-term use but useful for granting temporary access. ### Example Commands - **Registering a GCP Service Connector with Implicit Authentication:** ```sh zenml service-connector register gcp-implicit --type gcp --auth-method implicit --project_id=zenml-core ``` - **Registering a GCP Service Connector with Account Impersonation:** ```sh zenml service-connector register gcp-impersonate-sa --type gcp --auth-method impersonation --service_account_json=@empty-connectors@zenml-core.json --project_id=zenml-core --target_principal=zenml-bucket-sl@zenml-core.iam.gserviceaccount.com --resource-type gcs-bucket --resource-id gs://zenml-bucket-sl ``` - **Registering an AWS Service Connector with Short-lived Credentials:** ```sh AWS_PROFILE=connectors zenml service-connector register aws-sts-token --type aws --auto-configure --auth-method sts-token ``` ### Conclusion When configuring Service Connectors, prioritize security by avoiding primary account passwords, utilizing long-lived and temporary credentials, and implementing least-privilege access through impersonation and role assumption. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/hyperai-service-connector.md === ### HyperAI Service Connector Documentation Summary The ZenML HyperAI Service Connector enables authentication with HyperAI instances for deploying pipeline runs. It provides pre-authenticated Paramiko SSH clients to associated Stack Components. #### Command to List Connector Types ```shell $ zenml service-connector list-types --type hyperai ``` #### Supported Resource Types and Authentication Methods - **Resource Types**: HyperAI instances. - **Authentication Methods**: 1. RSA key 2. DSA (DSS) key 3. ECDSA key 4. ED25519 key **Note**: SSH private keys are distributed to all clients running pipelines, granting unrestricted access to HyperAI instances. #### Configuration Requirements - At least one `hostname` and `username` are required for login. - An optional `ssh_passphrase` can be provided. - Options for configuration: 1. One connector per HyperAI instance with different SSH keys. 2. A reused SSH key for multiple instances, selecting the instance during orchestrator component creation. #### Limitations - The connector does not support auto-discovery of authentication credentials from HyperAI instances. Feedback on this feature can be provided via [Slack](https://zenml.io/slack) or by creating an issue on [GitHub](https://github.com/zenml-io/zenml/issues). #### Usage The HyperAI Service Connector is utilized by the HyperAI Orchestrator to deploy pipeline runs to HyperAI instances. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/aws-service-connector.md === ### Summary of AWS Service Connector Documentation for ZenML The **ZenML AWS Service Connector** enables authentication and access to AWS resources, including S3 buckets, EKS clusters, and ECR registries. It supports multiple authentication methods: long-lived AWS secret keys, IAM roles, STS tokens, and implicit authentication. The connector generates temporary STS tokens with minimal permissions for security and can auto-configure credentials from the AWS CLI. #### Key Features: - **Resource Types**: - **Generic AWS Resource**: Connects to any AWS service using a pre-configured boto3 session. - **S3 Bucket**: Requires specific IAM permissions (e.g., `s3:ListBucket`, `s3:GetObject`). 
- **EKS Cluster**: Requires permissions like `eks:ListClusters` and `eks:DescribeCluster`. - **ECR Registry**: Requires permissions such as `ecr:DescribeRepositories` and `ecr:PutImage`. - **Authentication Methods**: - **Implicit Authentication**: Uses environment variables or local configuration files; requires enabling for security. - **AWS Secret Key**: Long-lived credentials, not recommended for production. - **AWS STS Token**: Temporary tokens that require regular updates. - **AWS IAM Role**: Assumes a role to generate temporary STS tokens. - **AWS Session Token**: Generates temporary session tokens for IAM users. - **AWS Federation Token**: Generates tokens for federated users, requiring specific permissions. #### Configuration and Usage: - **Installation**: - Install with `pip install "zenml[connectors-aws]"` or `zenml integration install aws`. - **Registering a Service Connector**: ```shell zenml service-connector register --type aws --auth-method --auto-configure ``` - **Example Commands**: - List available service connector types: ```shell zenml service-connector list-types --type aws ``` - Register a connector: ```shell AWS_PROFILE=connectors zenml service-connector register aws-implicit --type aws --auth-method implicit --region=us-east-1 ``` - **Local Client Configuration**: - Configure local AWS CLI, Kubernetes CLI, and Docker CLI using credentials from the Service Connector. #### Stack Components: - Connect S3 Artifact Store, EKS Orchestrator, and ECR Container Registry to AWS resources using a single Service Connector. - Example of connecting an S3 Artifact Store: ```shell zenml artifact-store register s3-zenfiles --flavor s3 --path=s3://zenfiles zenml artifact-store connect s3-zenfiles --connector aws-demo-multi ``` #### End-to-End Example: 1. Configure AWS CLI with IAM user credentials. 2. Register a multi-type AWS Service Connector. 3. List accessible resources (S3 buckets, EKS clusters, ECR registries). 4. Connect Stack Components to the registered Service Connector. 5. Run a simple pipeline to validate the setup. This documentation provides a comprehensive overview of configuring and using the AWS Service Connector with ZenML, ensuring secure and efficient access to AWS resources. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/docker-service-connector.md === ### Docker Service Connector Overview The ZenML Docker Service Connector enables authentication with Docker or OCI container registries and manages Docker clients for those registries. It provides pre-authenticated Python clients to Stack Components linked to it. #### Command to List Docker Service Connector Types ```shell zenml service-connector list-types --type docker ``` #### Supported Resource Types - **Resource Type**: `docker-registry` - **Registry Formats**: - DockerHub: `docker.io` or `https://index.docker.io/v1/` - Generic OCI: `https://host:port/` #### Authentication Methods - **Methods**: Username and password or access token (API tokens recommended). #### Registering a Docker Service Connector ```sh zenml service-connector register dockerhub --type docker -in ``` **Example Command Output**: - Prompts for service connector name, description, type, and authentication details. #### Important Notes - This connector does not generate short-lived credentials; configured credentials are directly used by clients. - It does not support auto-discovery of credentials from local Docker clients. 
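A non-interactive registration might look like the following sketch (passing the credentials as `--username`/`--password` flags is an assumption about the connector's configuration attributes; the values are placeholders):

```sh
# Register a DockerHub connector without the interactive prompt (flag names assumed)
zenml service-connector register dockerhub --type docker \
    --username=<DOCKERHUB_USERNAME> --password=<DOCKERHUB_ACCESS_TOKEN>
```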
#### Local Client Provisioning To configure the local Docker client: ```sh zenml service-connector login dockerhub ``` **Example Command Output**: - Warns that the password will be stored unencrypted in the Docker config. #### Stack Components Usage The Docker Service Connector can be utilized by all Container Registry stack components for authenticating to remote registries, facilitating image building and publishing without explicit Docker credentials in the environment. #### Future Enhancements - ZenML will support automatic Docker credential configuration in container runtimes (e.g., Kubernetes) in future releases. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/azure-service-connector.md === ### Azure Service Connector Overview The ZenML Azure Service Connector enables authentication and access to Azure resources like Blob storage, AKS Kubernetes clusters, and ACR container registries. It supports automatic configuration and credential detection via the Azure CLI and facilitates specialized authentication for various Azure services. #### Prerequisites - To use the Azure Service Connector, install it via: - `pip install "zenml[connectors-azure]"` for the connector only. - `zenml integration install azure` for the full Azure integration. - Azure CLI installation is recommended for quick setup and auto-configuration, but not mandatory. #### Resource Types 1. **Generic Azure Resource**: Connects to any Azure service using generic azure-identity credentials. 2. **Azure Blob Storage**: Requires permissions like `Storage Blob Data Contributor` and `Reader and Data Access`. Resource names can be in URI format or just the container name. 3. **AKS Kubernetes Cluster**: Requires permissions such as `Azure Kubernetes Service Cluster Admin Role`. Resource names can include resource group context. 4. **ACR Container Registry**: Requires permissions like `AcrPull` and `AcrPush`. Resource names can be in URI format or just the registry name. #### Authentication Methods - **Implicit Authentication**: Uses environment variables or Azure CLI credentials. Enabled by setting `ZENML_ENABLE_IMPLICIT_AUTH_METHODS`. - **Service Principal**: Requires Azure client ID and secret. Recommended for production use. - **Access Token**: Temporary tokens that must be refreshed regularly. Not suitable for Azure Blob storage. #### Example Commands - **List Connector Types**: ```shell zenml service-connector list-types --type azure ``` - **Register Service Connector with Implicit Auth**: ```sh zenml service-connector register azure-implicit --type azure --auth-method implicit --auto-configure ``` - **Register Service Connector with Service Principal**: ```sh zenml service-connector register azure-service-principal --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` #### Local Client Configuration - Local Azure CLI, Kubernetes CLI, and Docker CLI can be configured with credentials from the Azure Service Connector. - Example for Kubernetes CLI: ```sh zenml service-connector login azure-service-principal --resource-type kubernetes-cluster --resource-id= ``` #### Stack Components Usage - Connect Azure Artifact Store, Orchestrator, and Container Registry Stack Components to Azure resources using the Service Connector. 
- Example of connecting an Azure Blob Storage Artifact Store: ```sh zenml artifact-store register azure-demo --flavor azure --path=az:// zenml artifact-store connect azure-demo --connector azure-service-principal ``` #### End-to-End Example 1. Set up an Azure service principal with necessary permissions. 2. Register a multi-type Azure Service Connector. 3. Connect an Azure Blob Storage Artifact Store, AKS Orchestrator, and ACR Container Registry. 4. Register and set an active stack with the connected components. 5. Run a simple pipeline to validate the setup. This concise summary captures the essential details of configuring the Azure Service Connector with ZenML, including commands and resource types, while omitting verbose explanations. ================================================== === File: docs/book/how-to/infrastructure-deployment/auth-management/kubernetes-service-connector.md === ### Kubernetes Service Connector Overview The ZenML Kubernetes Service Connector enables authentication and connection to Kubernetes clusters. It provides pre-authenticated Kubernetes Python clients to Stack Components and allows configuration of the local Kubernetes CLI (`kubectl`). ### Prerequisites - Install the Kubernetes Service Connector: - `pip install "zenml[connectors-kubernetes]"` for prerequisites only. - `zenml integration install kubernetes` for the full integration. - Local `kubectl` configuration is not required for accessing Kubernetes clusters. ### Command to List Service Connector Types ```shell $ zenml service-connector list-types --type kubernetes ``` ### Resource Types - Supports generic Kubernetes clusters identified by the `kubernetes-cluster` Resource Type. ### Authentication Methods 1. Username and password (not recommended for production). 2. Authentication token (with or without client certificates). For local K3D clusters, an empty token can be used. **Warning**: The Service Connector does not generate short-lived credentials; use API tokens with client certificates when possible. ### Auto-configuration Fetch credentials from the local `kubectl` during registration: ```sh zenml service-connector register kube-auto --type kubernetes --auto-configure ``` **Example Output**: ```text Successfully registered service connector `kube-auto` with access to: ┏━━━━━━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━┓ ┃ RESOURCE TYPE │ RESOURCE NAMES ┃ ┠───────────────────────┼────────────────┨ ┃ 🌀 kubernetes-cluster │ 35.185.95.223 ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━┛ ``` ### Describe Service Connector ```sh zenml service-connector describe kube-auto ``` **Example Output**: ```text Service connector 'kube-auto' of type 'kubernetes' with id '4315e8eb...' is owned by user 'default' and is 'private'. ┏━━━━━━━━━━━━━━━━━━┯━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ PROPERTY │ VALUE ┃ ┠──────────────────┼──────────────────────────────────────┨ ┃ ID │ 4315e8eb... ┃ ┃ NAME │ kube-auto ┃ ┃ AUTH METHOD │ token ┃ ┃ RESOURCE TYPES │ 🌀 kubernetes-cluster ┃ ┃ RESOURCE NAME │ 35.175.95.223 ┃ ┃ OWNER │ default ┃ ┗━━━━━━━━━━━━━━━━━━┷━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ ``` **Configuration Details**: - Server: `https://35.175.95.223` - Token and certificate authority are hidden. **Info**: Credentials may have limited lifetimes, especially with third-party authentication providers. ### Local Client Provisioning Configure the local Kubernetes client with: ```sh zenml service-connector login kube-auto ``` **Example Output**: ```text Updated local kubeconfig with the cluster details. 
Current context set to '35.185.95.223'. ``` ### Stack Components Use The Kubernetes Service Connector is utilized in Orchestrator and Model Deployer stack components, allowing management of Kubernetes workloads without explicit configuration of `kubectl` contexts and credentials. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md === # Custom Stack Component Flavor in ZenML ## Overview ZenML allows for the creation of custom stack component flavors, enhancing the composability and reusability of MLOps platforms. This guide outlines the key abstractions and steps to implement a custom flavor. ## Component Flavors - **Component Type**: A broad category defining stack component functionality (e.g., `artifact_store`). - **Flavors**: Specific implementations of a component type (e.g., `local`, `s3`). ## Core Abstractions 1. **StackComponent**: Defines core functionality. Example: ```python from zenml.stack import StackComponent class BaseArtifactStore(StackComponent): @abstractmethod def open(self, path, mode="r"): pass @abstractmethod def exists(self, path): pass ``` 2. **StackComponentConfig**: Configures a stack component instance, separating configuration from implementation. ```python from zenml.stack import StackComponentConfig class BaseArtifactStoreConfig(StackComponentConfig): path: str SUPPORTED_SCHEMES: ClassVar[Set[str]] ``` 3. **Flavor**: Combines `StackComponent` and `StackComponentConfig`, defining flavor properties. ```python from zenml.stack import Flavor class LocalArtifactStoreFlavor(Flavor): @property def name(self) -> str: return "local" @property def type(self) -> StackComponentType: return StackComponentType.ARTIFACT_STORE @property def config_class(self): return LocalArtifactStoreConfig @property def implementation_class(self): return LocalArtifactStore ``` ## Implementing a Custom Flavor ### Configuration Class Define the configuration for the custom flavor: ```python from zenml.artifact_stores import BaseArtifactStoreConfig from zenml.utils.secret_utils import SecretField class MyS3ArtifactStoreConfig(BaseArtifactStoreConfig): SUPPORTED_SCHEMES: ClassVar[Set[str]] = {"s3://"} key: Optional[str] = SecretField(default=None) secret: Optional[str] = SecretField(default=None) token: Optional[str] = SecretField(default=None) client_kwargs: Optional[Dict[str, Any]] = None config_kwargs: Optional[Dict[str, Any]] = None s3_additional_kwargs: Optional[Dict[str, Any]] = None ``` ### Implementation Class Implement the abstract methods: ```python import s3fs from zenml.artifact_stores import BaseArtifactStore class MyS3ArtifactStore(BaseArtifactStore): _filesystem: Optional[s3fs.S3FileSystem] = None @property def filesystem(self) -> s3fs.S3FileSystem: if not self._filesystem: self._filesystem = s3fs.S3FileSystem( key=self.config.key, secret=self.config.secret, token=self.config.token, client_kwargs=self.config.client_kwargs, config_kwargs=self.config.config_kwargs, s3_additional_kwargs=self.config.s3_additional_kwargs, ) return self._filesystem def open(self, path, mode="r"): return self.filesystem.open(path=path, mode=mode) def exists(self, path): return self.filesystem.exists(path=path) ``` ### Flavor Class Combine the configuration and implementation: ```python from zenml.artifact_stores import BaseArtifactStoreFlavor class MyS3ArtifactStoreFlavor(BaseArtifactStoreFlavor): @property def name(self): return 'my_s3_artifact_store' @property def implementation_class(self): 
from ... import MyS3ArtifactStore return MyS3ArtifactStore @property def config_class(self): from ... import MyS3ArtifactStoreConfig return MyS3ArtifactStoreConfig ``` ## Registering the Flavor Register your flavor via the ZenML CLI: ```shell zenml artifact-store flavor register <path.to.MyS3ArtifactStoreFlavor> ``` Example: ```shell zenml artifact-store flavor register flavors.my_flavor.MyS3ArtifactStoreFlavor ``` ## Usage After registration, use the custom flavor in stacks: ```shell zenml artifact-store register <ARTIFACT_STORE_NAME> \ --flavor=my_s3_artifact_store \ --path='some-path' zenml stack register <STACK_NAME> \ --artifact-store <ARTIFACT_STORE_NAME> ``` ## Best Practices - Execute `zenml init` consistently to avoid resolution issues. - Use the ZenML CLI to check required configuration values. - Test thoroughly before production use. - Keep code clean and well-documented. - Follow language and library best practices. ## Further Learning For more on specific stack components, refer to the ZenML documentation for orchestrators, artifact stores, container registries, and more. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/README.md === ### Managing Stacks & Components #### What is a Stack? A **stack** in ZenML represents the configuration of infrastructure and tooling for pipeline execution. It consists of various components, each responsible for specific tasks, such as: - **Container Registry**: For image management. - **Kubernetes Cluster**: As an orchestrator. - **Artifact Store**: For storing artifacts. - **Experiment Tracker**: e.g., MLflow for tracking experiments. #### Organizing Execution Environments ZenML allows running pipelines across multiple stacks, facilitating testing in different environments. This enables: - Local experimentation. - Transitioning to a staging cloud environment for advanced testing. - Final deployment to a production-grade stack. **Benefits of Separate Stacks**: - Prevents accidental production deployments. - Reduces costs by using less powerful resources in staging. - Controls access by assigning permissions to specific users. #### Managing Credentials for Stacks Most stack components require credentials for infrastructure interaction. The recommended method is using **Service Connectors**, which abstract credentials and enhance security. **Recommended Roles**: - Limit Service Connector creation to individuals with direct access to cloud resources to minimize credential leakage and simplify auditing. **Recommended Workflow**: 1. Designate a limited group to create Service Connectors. 2. Create a connector for development/staging. 3. Create a separate connector for production to protect resources (a sketch follows at the end of this section). #### Deploying and Managing Stacks Deploying MLOps stacks involves complexities: - Each tool has specific requirements (e.g., Kubernetes for Kubeflow). - Setting defaults for infrastructure parameters can be challenging. - Standard installations may require additional configurations for security. - Components must have the correct permissions to communicate. - Resource cleanup post-experimentation is crucial to avoid unnecessary costs.
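As a sketch of the recommended workflow above, a platform team might register one Service Connector per environment (connector names and the bucket are illustrative; the flags mirror the GCP examples shown earlier in the security best practices summary):

```shell
# Development/staging connector, auto-configured from local credentials (names are illustrative)
zenml service-connector register staging-gcp --type gcp --auto-configure

# Separate, locked-down connector for production, scoped to a single bucket
zenml service-connector register prod-gcp --type gcp --auth-method service-account \
    --service_account_json=@prod-sa.json --project_id=<GCP_PROJECT_ID> \
    --resource-type gcs-bucket --resource-id gs://<PROD_BUCKET>
```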
#### Key Documentation Links - [Deploy a Cloud Stack](./deploy-a-cloud-stack.md) - [Register a Cloud Stack](./register-a-cloud-stack.md) - [Deploy a Cloud Stack with Terraform](./deploy-a-cloud-stack-with-terraform.md) - [Export and Install Stack Requirements](./export-stack-requirements.md) - [Reference Secrets in Stack Configuration](./reference-secrets-in-stack-configuration.md) - [Implement a Custom Stack Component](./implement-a-custom-stack-component.md) This summary provides an overview of managing stacks and components in ZenML, emphasizing the importance of configuration, security, and deployment practices. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/export-stack-requirements.md === ### Summary of Exporting Stack Requirements To obtain the `pip` requirements for a specific ZenML stack, use the following CLI command: ```bash zenml stack export-requirements <STACK_NAME> --output-file stack_requirements.txt pip install -r stack_requirements.txt ``` This command exports the requirements to a file named `stack_requirements.txt`, which can then be used to install the necessary packages. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/reference-secrets-in-stack-configuration.md === ### Summary: Referencing Secrets in Stack Configuration In ZenML, sensitive information like passwords or tokens can be securely referenced in stack components using secret references. Instead of specifying values directly, use the syntax `{{<SECRET_NAME>.<SECRET_KEY>}}`. #### Example Usage **Registering a Secret:** ```shell # Create a secret named `mlflow_secret` with username and password zenml secret create mlflow_secret --username=admin --password=abc123 # Reference the secret in an experiment tracker component zenml experiment-tracker register mlflow \ --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} \ ... ``` #### Secret Validation ZenML validates the existence of secrets and keys before running a pipeline to prevent failures. The validation can be controlled using the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: - `NONE`: Disables validation. - `SECRET_EXISTS`: Validates only the existence of secrets. - `SECRET_AND_KEY_EXISTS`: (default) Validates both the existence of the secret and the key-value pair. #### Fetching Secret Values in Steps When using centralized secrets management, secrets can be accessed within steps via the ZenML `Client` API: ```python from zenml import step from zenml.client import Client @step def secret_loader() -> None: """Load the example secret from the server.""" secret = Client().get_secret("<SECRET_NAME>") authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ``` ### Additional Resources - **Interact with Secrets**: Learn how to create, list, and delete secrets using the ZenML CLI and Python SDK. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack.md === ### Deploy a Cloud Stack with a Single Click In ZenML, a **stack** represents the configuration of your infrastructure. Traditionally, creating a stack involves deploying infrastructure components and defining them in ZenML, which can be complex. To simplify this, ZenML offers a **1-click deployment feature** that allows you to deploy necessary infrastructure on your chosen cloud provider easily.
#### Prerequisites - You must have a deployed instance of ZenML (not a local server via `zenml login --local`). For setup instructions, refer to the [ZenML deployment guide](../../../getting-started/deploying-zenml/README.md). #### Using the 1-Click Deployment Tool You can deploy a stack via the **dashboard** or **CLI**. ##### Dashboard 1. Navigate to the stacks page and click "+ New Stack". 2. Select "New Infrastructure". 3. Choose your cloud provider (AWS, GCP, Azure) and configure the stack: - **AWS**: Select a region and stack name, then confirm the deployment on the AWS CloudFormation page. - **GCP**: Select a region and stack name, then follow the Cloud Shell prompts to authenticate and deploy using Deployment Manager. - **Azure**: Select a location and stack name, then use the provided `main.tf` file in the Azure Cloud Shell to deploy with Terraform. ##### CLI To create a remote stack, use: ```shell zenml stack deploy -p {aws|gcp|azure} ``` - For **AWS**, follow the prompts to deploy a CloudFormation stack. - For **GCP**, follow the prompts to deploy using Deployment Manager in Cloud Shell. - For **Azure**, use Terraform commands to deploy the stack. #### What Will Be Deployed? The deployment will prepare the following resources based on your cloud provider: - **AWS**: - S3 bucket (Artifact Store) - ECR (Container Registry) - CloudBuild project (Image Builder) - SageMaker permissions - IAM roles and access keys - **GCP**: - GCS bucket (Artifact Store) - GCP Artifact Registry (Container Registry) - Vertex AI and Cloud Build permissions - GCP Service Account - **Azure**: - Resource Group - Azure Storage Account (Artifact Store) - Azure Container Registry (Container Registry) - AzureML Workspace (Orchestrator) - Azure Service Principal This streamlined process allows you to deploy a cloud stack and start running your pipelines in a remote environment with minimal effort. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/register-a-cloud-stack.md === ### Summary of ZenML Stack Registration Documentation **Overview**: ZenML's stack represents the configuration of your infrastructure. Traditionally, creating a stack involves deploying infrastructure and defining stack components with authentication, which can be complex. The **Stack Wizard** simplifies this by allowing users to register a ZenML cloud stack using existing infrastructure. **Deployment Options**: - **1-click Deployment Tool**: For users without existing infrastructure. - **Terraform Modules**: For users who want to manage infrastructure as code. ### Using the Stack Wizard **Access**: The Stack Wizard is available via the CLI and dashboard. #### Dashboard Steps: 1. Navigate to the stacks page and click "+ New Stack". 2. Select "Use existing Cloud" and choose your cloud provider. 3. Choose an authentication method and fill in the required fields. #### CLI Command: To register a remote stack: ```shell zenml stack register -p {aws|gcp|azure} ``` - Provide an existing service connector ID or name with `-sc `, or the wizard will create one. ### Service Connector Configuration - The wizard checks for cloud provider credentials in the local environment. If found, you can use them or opt for manual configuration. ### Authentication Methods #### AWS: - **Options**: AWS Secret Key, STS Token, IAM Role, Session Token, Federation Token. - **Required Fields**: Vary by method, typically include `aws_access_key_id`, `aws_secret_access_key`, and `region`. 
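To illustrate the AWS options above, a connector created up front with the Secret Key method can be handed to the Stack Wizard via `-sc` (a sketch; all values are placeholders and the connector flags follow the Service Connector CLI shown earlier in this summary):

```shell
# Register an AWS service connector using the Secret Key auth method (placeholder values)
zenml service-connector register aws_wizard_connector \
    --type aws \
    --auth-method secret-key \
    --aws_access_key_id=<AWS_ACCESS_KEY_ID> \
    --aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> \
    --region=<AWS_REGION>

# Reuse the connector when registering a stack with the wizard
zenml stack register my_aws_stack -p aws -sc aws_wizard_connector
```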
#### GCP: - **Options**: User Account, Service Account, External Account, OAuth 2.0 Token, Service Account Impersonation. - **Required Fields**: Include `user_account_json`, `service_account_json`, and `project_id`. #### Azure: - **Options**: Service Principal, Access Token. - **Required Fields**: Include `client_secret`, `tenant_id`, and `client_id`. ### Defining Cloud Components You will define three essential components: 1. **Artifact Store** 2. **Orchestrator** 3. **Container Registry** For each component, you can: - Reuse existing components connected via the defined service connector. - Create new components from available resources. ### Conclusion Using the Stack Wizard, you can efficiently register a cloud stack and begin running pipelines in a remote setting. ================================================== === File: docs/book/how-to/infrastructure-deployment/stack-deployment/deploy-a-cloud-stack-with-terraform.md === ### Summary: Deploy a Cloud Stack with Terraform ZenML provides a collection of [Terraform modules](https://registry.terraform.io/modules/zenml-io/zenml-stack) to facilitate the provisioning of cloud resources and their integration with ZenML Stacks, enhancing machine learning infrastructure deployment. #### Prerequisites - A deployed ZenML server instance accessible from your cloud provider. - Create a service account and API key for Terraform access: ```shell zenml service-account create ``` - Install Terraform (version 1.9 or higher) on your machine. - Authenticate with your cloud provider through their CLI or SDK. #### Steps to Use Terraform Modules 1. **Set Environment Variables**: ```shell export ZENML_SERVER_URL="https://your-zenml-server.com" export ZENML_API_KEY="" ``` 2. **Create Terraform Configuration File (`main.tf`)**: ```hcl terraform { required_providers { aws = { source = "hashicorp/aws" } zenml = { source = "zenml-io/zenml" } } } provider "zenml" {} module "zenml_stack" { source = "zenml-io/zenml-stack/" zenml_stack_name = "" orchestrator = "" } output "zenml_stack_id" { value = module.zenml_stack.zenml_stack_id } output "zenml_stack_name" { value = module.zenml_stack.zenml_stack_name } ``` 3. **Run Terraform Commands**: ```shell terraform init terraform apply ``` Confirm changes by typing `yes`. 4. **Install Required Integrations**: ```shell zenml integration install zenml stack set ``` #### Cloud Provider Specifics - **AWS**: Requires AWS CLI authentication. Example configuration includes S3 and ECR setup. - **GCP**: Requires `gcloud` CLI authentication. Example configuration includes GCS and Google Artifact Registry. - **Azure**: Requires Azure CLI authentication. Example configuration includes Azure Storage and ACR. #### Cleanup To remove all resources and the registered ZenML stack: ```shell terraform destroy ``` This concise guide covers the essential steps and configurations needed to deploy a cloud stack using Terraform with ZenML, ensuring users can efficiently set up their machine learning infrastructure. ================================================== === File: docs/book/how-to/configuring-zenml/configuring-zenml.md === ### Configuring ZenML's Default Behavior This guide explains how to configure ZenML's behavior in various situations. Key Points: - Users can adapt ZenML's functionality through configuration settings. - The guide provides instructions for modifying specific aspects of ZenML. 
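As one concrete illustration of such configuration settings, several behaviors covered in the control-logging pages later in this summary are toggled purely through environment variables:

```bash
# Behavior toggles documented in the logging guides below
export ZENML_LOGGING_VERBOSITY=DEBUG        # raise client-side log verbosity
export ZENML_ENABLE_RICH_TRACEBACK=false    # plain-text tracebacks instead of rich output
export ZENML_LOGGING_COLORS_DISABLED=true   # disable colorful log output
```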
================================================== === File: docs/book/how-to/project-setup-and-management/README.md === # Project Setup and Management This section details the setup and management of ZenML projects, focusing on key processes and configurations. ## Key Components 1. **Project Initialization**: - Use `zenml init` to create a new ZenML project. - This command sets up the necessary directory structure and configuration files. 2. **Configuration**: - ZenML stores project-level settings in the `.zen` directory created by `zenml init`. - Key configurations include: - **Artifact Store**: Define where artifacts are stored. - **Metadata Store**: Specify the metadata storage backend. - **Orchestrator**: Choose the orchestration tool (e.g., Airflow, Kubeflow). 3. **Version Control**: - It is recommended to use Git for version control of your ZenML project. - Include the `.zen` directory in your version control to track configurations. 4. **Environment Management**: - Use virtual environments (e.g., `venv`, `conda`) to manage dependencies. - Install ZenML via pip: `pip install zenml`. 5. **Running Pipelines**: - Define pipelines using the ZenML pipeline decorator. - Example: ```python @pipeline def my_pipeline(): step1 = step1_function() step2 = step2_function(step1) ``` 6. **Project Management**: - Use `zenml pipeline` commands to manage and execute pipelines. - Commands include `zenml pipeline run` to execute a pipeline and `zenml pipeline list` to view available pipelines. By following these guidelines, users can effectively set up and manage their ZenML projects, ensuring a streamlined workflow for machine learning operations. ================================================== === File: docs/book/how-to/project-setup-and-management/interact-with-secrets.md === # ZenML Secrets Management Documentation Summary ## Overview of ZenML Secrets ZenML secrets are secure groupings of **key-value pairs** stored in the ZenML secrets store, identified by a **name** for easy reference in pipelines and stacks. ## Creating a Secret ### CLI Commands 1. **Basic Creation**: ```shell zenml secret create <SECRET_NAME> --<KEY_1>=<VALUE_1> --<KEY_2>=<VALUE_2> ``` 2. **Using JSON/YAML**: ```shell zenml secret create <SECRET_NAME> --values='{"key1":"value1","key2":"value2"}' ``` 3. **Interactive Creation**: ```shell zenml secret create <SECRET_NAME> -i ``` 4. **File Input for Large Values**: ```bash zenml secret create <SECRET_NAME> --<KEY>=@path/to/file.txt zenml secret create <SECRET_NAME> --values=@path/to/file.txt ``` ### Additional CLI Commands Commands are available to list, update, and delete secrets. For interactive registration of missing secrets in a stack: ```shell zenml stack register-secrets [<STACK_NAME>] ``` ### Python SDK Using the ZenML client API: ```python from zenml.client import Client client = Client() client.create_secret(name="my_secret", values={"username": "admin", "password": "abc123"}) ``` Other methods include `get_secret`, `update_secret`, `list_secrets`, and `delete_secret`. ## Scoping Secrets Secrets can be scoped to a user, defaulting to the active user. To create a user-scoped secret: ```shell zenml secret create <SECRET_NAME> --scope user --<KEY>=<VALUE> ``` ## Accessing Registered Secrets ### Secret References Components can reference secrets securely without hardcoding sensitive values: ```shell zenml experiment-tracker register mlflow --flavor=mlflow --tracking_username={{mlflow_secret.username}} --tracking_password={{mlflow_secret.password}} ``` ### Validation of Secrets ZenML validates the existence of secrets and keys before running a pipeline.
Control validation with the `ZENML_SECRET_VALIDATION_LEVEL` environment variable: - `NONE`: Disables validation. - `SECRET_EXISTS`: Validates only the existence of secrets. - `SECRET_AND_KEY_EXISTS`: (default) Validates both secrets and keys. ### Fetching Secret Values in Steps Access secrets within steps using the ZenML `Client` API: ```python from zenml import step from zenml.client import Client @step def secret_loader() -> None: secret = Client().get_secret() authenticate_to_some_api( username=secret.secret_values["username"], password=secret.secret_values["password"], ) ``` This summary encapsulates the essential information regarding ZenML secrets management, including creation, scoping, access, and validation. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/README.md === # Setting Up a Well-Architected ZenML Project ## Overview This guide outlines best practices for structuring ZenML projects to enhance scalability, maintainability, and team collaboration in machine learning operations (MLOps). ## Importance A well-architected ZenML project is essential for efficient development, deployment, and maintenance of ML models. ## Key Components ### Repository Structure - Organize folders for pipelines, steps, and configurations. - Maintain clear separation of concerns and consistent naming conventions. ### Version Control and Collaboration - Integrate with Git for code management and collaboration. - Accelerate pipeline builds by reusing images and code from the repository. ### Stacks, Pipelines, Models, and Artifacts - **Stacks**: Infrastructure and tool configurations. - **Models**: ML models and metadata. - **Pipelines**: ML workflows. - **Artifacts**: Data and model outputs tracking. ### Access Management and Roles - Define roles (e.g., data scientists, MLOps engineers). - Set up service connectors and manage authorizations. - Use ZenML Pro Teams for role assignment. ### Shared Components and Libraries - Promote code reuse with custom flavors, steps, and shared libraries. - Handle authentication for internal libraries. ### Project Templates - Utilize pre-made and custom templates for consistency in projects. ### Migration and Maintenance - Strategies for migrating legacy code and upgrading ZenML servers. ## Getting Started Explore the guides in this section for detailed information on project setup and management. Regularly review and refine your project to adapt to evolving team needs. Following these guidelines will help create a robust and collaborative MLOps environment. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/connect-your-git-repository.md === ### Summary of ZenML Code Repository Integration **Overview**: Connecting a Git repository to ZenML helps track code versions and speeds up Docker image builds by avoiding unnecessary rebuilds. #### Registering a Code Repository To use a code repository, install the necessary ZenML integration: ```shell zenml integration install ``` Then, register the repository via CLI: ```shell zenml code-repository register --type= [--CODE_REPOSITORY_OPTIONS] ``` #### Available Implementations ZenML supports built-in implementations for GitHub and GitLab, as well as custom repositories. **GitHub Integration**: 1. Install the GitHub integration: ```shell zenml integration install github ``` 2. 
Register the repository: ```shell zenml code-repository register --type=github \ --owner= --repository= \ --token= ``` - For self-hosted GitHub, include `--api_url=` and `--host=`. **GitLab Integration**: 1. Install the GitLab integration: ```shell zenml integration install gitlab ``` 2. Register the repository: ```shell zenml code-repository register --type=gitlab \ --group= --project= \ --token= ``` - For self-hosted GitLab, include `--instance_url=` and `--host=`. #### Token Management For both GitHub and GitLab, securely store access tokens using ZenML's secrets management: ```shell zenml secret create --pa_token= ``` Then reference the token during registration: ```shell zenml code-repository register ... --token={{.pa_token}} ``` #### Custom Code Repository Development To create a custom code repository, subclass `zenml.code_repositories.BaseCodeRepository` and implement the required methods: ```python class BaseCodeRepository(ABC): @abstractmethod def login(self) -> None: pass @abstractmethod def download_files(self, commit: str, directory: str, repo_sub_directory: Optional[str]) -> None: pass @abstractmethod def get_local_context(self, path: str) -> Optional["LocalRepositoryContext"]: pass ``` Register the custom repository: ```shell zenml code-repository register --type=custom --source=my_module.MyRepositoryClass [--CODE_REPOSITORY_OPTIONS] ``` This integration allows ZenML to track source files and commit hashes for each pipeline run, facilitating efficient development workflows. ================================================== === File: docs/book/how-to/project-setup-and-management/setting-up-a-project-repository/set-up-repository.md === ### Recommended Repository Structure and Best Practices for ZenML #### Project Structure A recommended structure for ZenML projects is as follows: ``` . ├── .dockerignore ├── Dockerfile ├── steps │ ├── loader_step │ │ ├── loader_step.py │ │ └── requirements.txt (optional) │ └── training_step ├── pipelines │ ├── training_pipeline │ │ ├── training_pipeline.py │ │ └── requirements.txt (optional) │ └── deployment_pipeline ├── notebooks │ └── *.ipynb ├── requirements.txt ├── .zen └── run.py ``` - **Steps** and **Pipelines**: Organize steps and pipelines in separate folders. For simpler projects, steps can be kept at the top level. - **Code Repository**: Registering your repository allows ZenML to track code versions and speeds up Docker image builds. #### Steps - Store each step in separate Python files to manage utilities and dependencies effectively. - Use the `logging` module for logging, which will be recorded in the ZenML dashboard: ```python from zenml.logger import get_logger logger = get_logger(__name__) @step def training_data_loader(): logger.info("My logs") ``` #### Pipelines - Keep pipelines in separate Python files and separate execution from definition. - Avoid naming pipelines "pipeline" to prevent conflicts with the imported `pipeline` decorator. - Unique pipeline names are essential for maintaining distinct run histories. #### .dockerignore - Exclude unnecessary files (e.g., data, virtual environments) to optimize Docker image sizes. #### Dockerfile (optional) - ZenML uses an official Docker image by default. Customize with your own `Dockerfile` if needed. #### Notebooks - Organize all notebooks in a dedicated folder. #### .zen Directory - Initialize with `zenml init` to define the project scope. This is crucial for resolving import paths and storing configurations. 
- Ensure a `.zen` directory exists in the project root when running pipelines from Jupyter notebooks; it is also highly recommended for Python scripts to avoid import path issues. #### run.py - Place pipeline runners in the root to ensure correct import resolution and define the implicit source root if no `.zen` directory is present. This structure and these practices will help maintain an organized and efficient ZenML project. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/README.md === *(This page contains no content to summarize.)* ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/access-management.md === # Access Management and Roles in ZenML ## Overview This guide outlines user roles and access management in ZenML, essential for project security and efficiency. ## Typical Roles - **Data Scientists**: Develop and run pipelines. - **MLOps Platform Engineers**: Manage infrastructure and stack components. - **Project Owners**: Oversee ZenML deployment and manage user access. Roles can vary by project, but responsibilities remain similar. ## Service Connectors Service connectors integrate cloud services with ZenML, abstracting credentials and configurations. Only MLOps Platform Engineers should manage these connectors due to their infrastructure knowledge. Data Scientists can use connectors for stack components without accessing credentials. ### Example Permissions - **Data Scientist Role**: - Can use connectors to create stack components and run pipelines. - Cannot create, update, delete connectors, or read secret values. - **MLOps Platform Engineer Role**: - Can create, update, delete connectors, and read secret values. RBAC features are available in ZenML Pro. ## Server Upgrade Responsibilities - **Project Owners**: Decide on server upgrades after team consultations. - **MLOps Platform Engineers**: Execute upgrades, ensuring data backup and no service disruption. For detailed upgrade practices, refer to the [Best Practices for Upgrading ZenML Servers](../../../how-to/manage-zenml-server/best-practices-upgrading-zenml.md). ## Pipeline Migration and Maintenance - **Data Scientists**: Own pipeline code and must verify compatibility with new ZenML versions. - **Platform Engineers**: Ensure safe testing environments for new changes. Refer to the [Best Practices for Upgrading ZenML Servers](../../../how-to/manage-zenml-server/best-practices-upgrading-zenml.md) for more information. ## Best Practices for Access Management - Conduct regular audits of user access and permissions. - Implement Role-Based Access Control (RBAC). - Grant least privilege necessary for each role. - Maintain clear documentation of roles and access policies. RBAC is exclusive to ZenML Pro users. Following these guidelines ensures a secure and collaborative ZenML environment. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/stacks-pipelines-models.md === # Organizing Stacks, Pipelines, Models, and Artifacts in ZenML This guide outlines the organization of stacks, pipelines, models, and artifacts in ZenML, which are essential for structuring your ML projects effectively.
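Before the key concepts below, a minimal sketch shows how a pipeline can be attached to a ZenML Model so that its runs and artifacts are grouped under one model version (the model name is illustrative and assumes a recent ZenML version where `@pipeline` accepts a `model=Model(...)` argument):

```python
from zenml import Model, pipeline

# Sketch: group this pipeline's runs and artifacts under the "churn_predictor" model
@pipeline(model=Model(name="churn_predictor"))
def training_pipeline():
    ...
```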
## Key Concepts - **Stacks**: Configuration of tools and infrastructure for running pipelines, consisting of components like orchestrators and artifact stores. Stacks enable consistent execution environments across local, staging, and production settings. - **Pipelines**: Sequences of tasks in your ML workflow, automating processes like data preparation and model training. It's advisable to separate pipelines by task (e.g., training vs. inference) for modularity and easier management. - **Models**: Collections of pipelines, artifacts, and metadata representing a project or workspace. Models facilitate data transfer between pipelines and help manage versions through the Model Control Plane. - **Artifacts**: Outputs of pipeline steps that need tracking and reuse, such as datasets and trained models. Each pipeline run generates a new version of artifacts, ensuring traceability. ## Organizing Stacks - A single stack can support multiple pipelines, reducing configuration overhead and promoting reproducibility. - Stacks should be reused across users and pipelines to maintain a consistent environment. ## Organizing Pipelines, Models, and Artifacts - **Pipelines**: Structure your pipelines to cover the entire ML workflow. Use separate pipelines for different tasks to enhance modularity and manageability. - **Models**: Use models to connect related pipelines and facilitate data handover. The Model Control Plane aids in managing model versions and stages. - **Artifacts**: Name artifacts clearly for easy identification and reuse. Log metadata for better organization and visibility. ## Example Workflow 1. Team members create pipelines for feature engineering, training, and inference. 2. They use a shared stack for local testing to iterate quickly. 3. Ensure preprocessing steps are consistent across pipelines. 4. Use a ZenML Model to connect artifacts from training to inference. 5. Track model versions with the Model Control Plane and promote the best-performing model to production. 6. Inference pipelines generate new artifacts, such as prediction datasets. ## Best Practices ### Models - One model per ML use-case. - Group related pipelines and artifacts. - Manage versions using the Model Control Plane. ### Stacks - Separate stacks for different environments. - Share production and staging stacks for consistency. - Simplify local development stacks for rapid iterations. ### Naming and Organization - Consistent naming conventions for resources. - Use tags for organization (e.g., `environment:production`). - Document configurations and dependencies. - Keep code modular and reusable. Following these guidelines will help maintain a scalable and organized MLOps workflow as your project evolves. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/shared-components-for-teams.md === # Shared Libraries and Logic for Teams ## Overview This guide outlines how teams can share code libraries using ZenML to enhance collaboration, standardization, and robustness in projects. It covers what can be shared and how to distribute shared components. ## What Can Be Shared ZenML supports sharing several custom components: ### Custom Flavors - **Definition**: Integrations not built-in with ZenML. - **Steps**: 1. Create in a shared repository. 2. Implement as per ZenML documentation. 3. Register using CLI: ```bash zenml artifact-store flavor register ``` ### Custom Steps - Created in a separate repository and referenced like Python modules. 
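As a sketch of the shared custom step pattern above, a step published in an internal package (the package and module names here are hypothetical) is imported like any other Python module:

```python
from zenml import pipeline

# "my_org_ml_steps" is a hypothetical shared package distributed as a private wheel
from my_org_ml_steps.loaders import training_data_loader


@pipeline
def my_pipeline():
    training_data_loader()
```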
### Custom Materializers - Common components for sharing. - **Steps**: 1. Create in a shared repository. 2. Implement as per ZenML documentation. 3. Import and use in projects. ## How to Distribute Shared Components ### Shared Private Wheels - **Benefits**: - Easy installation via pip. - Simplified version and dependency management. - Privacy through internal hosting. - **Setup**: 1. Create a private PyPI server (e.g., AWS CodeArtifact). 2. Build code into wheel format. 3. Upload to the server. 4. Configure pip to use the private server. 5. Install packages via pip. ### Using Shared Libraries with `DockerSettings` - Specify shared libraries in a `Dockerfile` using `DockerSettings`. #### Installing Shared Libraries - **Using requirements list**: ```python import os from zenml.config import DockerSettings from zenml import pipeline docker_settings = DockerSettings( requirements=["my-simple-package==0.1.0"], environment={'PIP_EXTRA_INDEX_URL': f"https://{os.environ.get('PYPI_TOKEN', '')}@my-private-pypi-server.com/{os.environ.get('PYPI_USERNAME', '')}/"} ) @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` - **Using a requirements file**: ```python docker_settings = DockerSettings(requirements="/path/to/requirements.txt") @pipeline(settings={"docker": docker_settings}) def my_pipeline(...): ... ``` - **Example of requirements.txt**: ``` --extra-index-url https://YOURTOKEN@my-private-pypi-server.com/YOURUSERNAME/ my-simple-package==0.1.0 ``` ## Best Practices - **Version Control**: Use Git for shared code repositories. - **Access Controls**: Implement security measures for private servers. - **Documentation**: Maintain clear and comprehensive documentation. - **Regular Updates**: Keep libraries updated and communicate changes. - **Continuous Integration**: Set up CI to ensure quality and compatibility. By following these guidelines, teams can effectively share code and libraries, enhancing collaboration and accelerating development within the ZenML framework. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/create-your-own-template.md === ### Creating Your Own ZenML Template To create a ZenML template for standardizing and sharing ML workflows, follow these steps: 1. **Create a Repository**: Set up a new repository to store your template code and configuration files. 2. **Define ML Workflows**: Use existing ZenML templates (e.g., [starter template](https://github.com/zenml-io/template-starter)) as a reference to define your ML steps and pipelines. 3. **Create `copier.yml`**: This configuration file defines template parameters and their default values. Refer to the [Copier documentation](https://copier.readthedocs.io/en/stable/creating/) for details. 4. **Test Your Template**: Generate a new project from your template using the Copier command: ```bash copier copy https://github.com/your-username/your-template.git your-project ``` Replace the URL and project name accordingly. 5. **Use Your Template with ZenML**: Initialize a ZenML project with your template: ```bash zenml init --template https://github.com/your-username/your-template.git ``` For a specific version, use: ```bash zenml init --template https://github.com/your-username/your-template.git --template-tag v1.0.0 ``` Replace `v1.0.0` with your desired git tag. ### Additional Notes - Keep your template updated with best practices. 
- For practical examples, install the `e2e_batch` template: ```bash mkdir e2e_batch cd e2e_batch zenml init --template e2e_batch --template-with-defaults ``` This guide helps you create a ZenML project template efficiently, enabling quick setup for new ML projects. ================================================== === File: docs/book/how-to/project-setup-and-management/collaborate-with-team/project-templates/README.md === ### ZenML Project Templates Overview ZenML provides project templates to help users quickly understand the framework and build ML pipelines. These templates include essential steps, pipelines, and a simple CLI, covering major use cases. #### Available Project Templates | Project Template [Short name] | Tags | Description | |-------------------------------|------|-------------| | [Starter template](https://github.com/zenml-io/template-starter) [code: `starter`] | `basic`, `scikit-learn` | Basic ML components for starting with ZenML, including parameterized steps, a model training pipeline, and a simple CLI, centered around a scikit-learn use case. | | [E2E Training with Batch Predictions](https://github.com/zenml-io/template-e2e-batch) [code: `e2e_batch`] | `etl`, `hp-tuning`, `model-promotion`, `drift-detection`, `batch-prediction`, `scikit-learn` | A comprehensive template with two pipelines covering data loading, preprocessing, hyperparameter tuning, model training, evaluation, promotion, drift detection, and batch inference. | | [NLP Training Pipeline](https://github.com/zenml-io/template-nlp) [code: `nlp`] | `nlp`, `hp-tuning`, `model-promotion`, `training`, `pytorch`, `gradio`, `huggingface` | A straightforward NLP pipeline for tokenization, training, hyperparameter tuning, evaluation, and deployment of BERT or GPT-2 models, with local testing via Gradio. | #### Collaboration Opportunity ZenML invites users with personal projects to contribute templates. Interested parties can join the [ZenML Slack](https://zenml.io/slack/) for collaboration. #### Using a Project Template To use the templates, install ZenML with the templates extras: ```bash pip install zenml[templates] ``` **Note:** These templates differ from 'Run Templates' used for triggering pipelines. More on Run Templates can be found [here](https://docs.zenml.io/how-to/trigger-pipelines). To generate a project from a template, use the following command: ```bash zenml init --template # Example: zenml init --template e2e_batch ``` For default values, add the `--template-with-defaults` flag: ```bash zenml init --template --template-with-defaults # Example: zenml init --template e2e_batch --template-with-defaults ``` ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-rest-api.md === ### ZenML REST API: Creating and Running a Template **Note:** This feature is available only in [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. #### Triggering a Pipeline via REST API To trigger a pipeline from the REST API, you must have at least one run template for that pipeline. Follow these steps: 1. **Get Pipeline ID:** - Call: `GET /pipelines?name=` - Response includes ``. 2. **Get Template ID:** - Call: `GET /run_templates?pipeline_id=` - Response includes ``. 3. **Run the Pipeline:** - Call: `POST /run_templates//runs` - Include `PipelineRunConfiguration` in the request body. #### Example To re-run a pipeline named `training`, execute the following commands: 1. 
**Get Pipeline ID:** ```shell curl -X 'GET' \ '/api/v1/pipelines?hydrate=false&name=training' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` 2. **Get Template ID:** ```shell curl -X 'GET' \ '/api/v1/run_templates?hydrate=false&pipeline_id=' \ -H 'accept: application/json' \ -H 'Authorization: Bearer ' ``` 3. **Trigger the Pipeline:** ```shell curl -X 'POST' \ '/api/v1/run_templates//runs' \ -H 'accept: application/json' \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "steps": {"model_trainer": {"parameters": {"model_type": "rf"}}} }' ``` A successful response indicates the pipeline has been re-triggered with the specified configuration. For more information on obtaining a bearer token, refer to the [API reference](../../../reference/api-reference.md#using-a-bearer-token-to-access-the-api-programmatically). ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-cli.md === ### ZenML CLI: Create a Run Template **Feature Access**: This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. #### Command to Create a Template Use the following command to create a run template with the ZenML CLI: ```bash zenml pipeline create-run-template --name= ``` - ``: Use `run.my_pipeline` if your pipeline is named `my_pipeline` in `run.py`. **Important Note**: Ensure you have an active **remote stack** when executing this command, or specify one using the `--stack` option. ================================================== === File: docs/book/how-to/trigger-pipelines/README.md === ### Trigger a Pipeline (Run Templates) In ZenML, you can execute a pipeline using the pipeline function. Below is a concise example: ```python from zenml import step, pipeline @step def load_data() -> dict: return {'features': [[1, 2], [3, 4], [5, 6]], 'labels': [0, 1, 0]} @step def train_model(data: dict) -> None: print(f"Trained model using {len(data['features'])} data points.") @pipeline def simple_ml_pipeline(): train_model(load_data()) if __name__ == "__main__": simple_ml_pipeline() ``` ### Run Templates Run Templates are pre-defined, parameterized configurations for ZenML pipelines, allowing easy execution from the ZenML dashboard or via the Client/REST API. They serve as customizable blueprints for pipeline runs. **Note:** This feature is exclusive to ZenML Pro users. #### Additional Resources: - Use templates: [Python SDK](use-templates-python.md) - Use templates: [CLI](use-templates-cli.md) - Use templates: [Dashboard](use-templates-dashboard.md) - Use templates: [REST API](use-templates-rest-api.md) ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-dashboard.md === ### ZenML Dashboard: Creating and Running Templates **Note**: This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. #### Creating a Template 1. Navigate to a pipeline run executed on a remote stack (requires a remote orchestrator, artifact store, and container registry). 2. Click `+ New Template`, provide a name, and select `Create`. #### Running a Template - To run a template: - Click `Run a Pipeline` on the main `Pipelines` page, or - Go to a specific template page and select `Run Template`. You will be directed to the `Run Details` page where you can: - Upload a `.yaml` configuration file or - Modify the configuration using the editor. 
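As a sketch of the kind of `.yaml` configuration that can be uploaded there (mirroring the step and parameter names used in the REST API example above, which are themselves illustrative):

```yaml
# Run configuration overriding a single step parameter
steps:
  model_trainer:
    parameters:
      model_type: rf
```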
Upon running the template, a new run will execute on the same stack as the original. ================================================== === File: docs/book/how-to/trigger-pipelines/use-templates-python.md === ### ZenML Template Creation and Execution **Note:** This feature is exclusive to [ZenML Pro](https://zenml.io/pro). [Sign up here](https://cloud.zenml.io) for access. #### Create a Template To create a run template using the ZenML client: 1. **From an Existing Pipeline Run:** ```python from zenml.client import Client run = Client().get_pipeline_run("<RUN_NAME_OR_ID>") Client().create_run_template(name="<TEMPLATE_NAME>", deployment_id=run.deployment_id) ``` **Important:** Select a pipeline run executed on a remote stack (with a remote orchestrator, artifact store, and container registry). 2. **From a Pipeline Definition:** ```python from zenml import pipeline @pipeline def my_pipeline(): ... template = my_pipeline.create_run_template(name="<TEMPLATE_NAME>") ``` #### Run a Template To execute a template: ```python from zenml.client import Client template = Client().get_run_template("<TEMPLATE_NAME_OR_ID>") config = template.config_template # [OPTIONAL] Modify the config here Client().trigger_pipeline(template_id=template.id, run_configuration=config) ``` This triggers a new run on the same stack as the original. #### Advanced Usage: Run a Template from Another Pipeline You can trigger a pipeline from within another pipeline: ```python import pandas as pd from zenml import pipeline, step from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml.artifacts.utils import load_artifact from zenml.client import Client from zenml.config.pipeline_run_configuration import PipelineRunConfiguration @step def trainer(data_artifact_id: str): df = load_artifact(data_artifact_id) @pipeline def training_pipeline(): trainer() @step def load_data() -> pd.DataFrame: ... @step def trigger_pipeline(df: UnmaterializedArtifact): run_config = PipelineRunConfiguration( steps={"trainer": {"parameters": {"data_artifact_id": df.id}}} ) Client().trigger_pipeline("training_pipeline", run_configuration=run_config) @pipeline def loads_data_and_triggers_training(): df = load_data() trigger_pipeline(df) # Triggers the training pipeline ``` For more details, refer to the [PipelineRunConfiguration](https://sdkdocs.zenml.io/latest/core_code_docs/core-config/#zenml.config.pipeline_run_configuration.PipelineRunConfiguration) and [`trigger_pipeline`](https://sdkdocs.zenml.io/latest/core_code_docs/core-client/#zenml.client.Client) documentation, as well as information on Unmaterialized Artifacts [here](../data-artifact-management/complex-usecases/unmaterialized-artifacts.md). ================================================== === File: docs/book/how-to/contribute-to-zenml/README.md === # Contribute to ZenML Thank you for considering contributing to ZenML! We welcome contributions such as new features, documentation improvements, integrations, or bug reports. ## How to Contribute Refer to the [ZenML contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md) for best practices and conventions for contributing features, including custom integrations. ================================================== === File: docs/book/how-to/contribute-to-zenml/implement-a-custom-integration.md === ### Summary: Creating an External Integration for ZenML ZenML aims to organize the MLOps landscape by providing numerous integrations with popular tools.
This guide outlines how to contribute your own integration to ZenML. #### Step 1: Plan Your Integration Identify the categories your integration belongs to from the [ZenML component guide](../../component-guide/README.md). An integration may span multiple categories, such as cloud integrations (AWS/GCP/Azure) and their components (e.g., container registries, artifact stores). #### Step 2: Create Stack Component Flavors Develop individual stack component flavors corresponding to your chosen categories. Test them as custom flavors before packaging. For example, to register a custom orchestrator flavor: ```shell zenml orchestrator flavor register flavors.my_flavor.MyOrchestratorFlavor ``` Ensure ZenML is initialized at the root of your repository to avoid resolution issues. Verify available flavors with: ```shell zenml orchestrator flavor list ``` Refer to the extensibility documentation or existing integrations like the [MLflow experiment tracker](../../component-guide/experiment-trackers/mlflow.md) for guidance. #### Step 3: Create an Integration Class 1. **Clone Repo**: Clone the [ZenML repository](https://github.com/zenml-io/zenml) and set up your development environment following the [contributing guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). 2. **Create Integration Directory**: Create a new folder in `src/zenml/integrations/` for your integration, structured as follows: ``` /src/zenml/integrations/ / ├── artifact-stores/ ├── flavors/ └── __init__.py ``` 3. **Define Integration Name**: In `zenml/integrations/constants.py`, add: ```python EXAMPLE_INTEGRATION = "" ``` 4. **Create Integration Class**: In `src/zenml/integrations//__init__.py`, subclass the `Integration` class: ```python from zenml.integrations.constants import from zenml.integrations.integration import Integration from zenml.stack import Flavor class ExampleIntegration(Integration): NAME = REQUIREMENTS = [""] @classmethod def flavors(cls) -> List[Type[Flavor]]: from zenml.integrations. import return [] ExampleIntegration.check_installation() ``` Refer to the [MLflow Integration](https://github.com/zenml-io/zenml/blob/main/src/zenml/integrations/mlflow/__init__.py) for an example. 5. **Import Integration**: Ensure your integration is imported in `src/zenml/integrations/__init__.py`. #### Step 4: Create a PR Submit a [PR](https://github.com/zenml-io/zenml/compare) to ZenML for review by core maintainers. Thank you for your contribution! ================================================== === File: docs/book/how-to/control-logging/disable-rich-traceback.md === ### Disabling Rich Traceback Output in ZenML ZenML uses the [`rich`](https://rich.readthedocs.io/en/stable/traceback.html) library for enhanced traceback output by default. To disable this feature, set the following environment variable: ```bash export ZENML_ENABLE_RICH_TRACEBACK=false ``` This change only affects local pipeline runs. To disable rich tracebacks for remote pipeline runs, set the environment variable in the pipeline's environment: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_ENABLE_RICH_TRACEBACK": "false"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure pipeline options my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) ``` This setup ensures plain text traceback output for both local and remote runs. 
================================================== === File: docs/book/how-to/control-logging/view-logs-on-the-dasbhoard.md === # Viewing Logs on the Dashboard ZenML captures logs during step execution using a logging handler. Users can utilize the default Python logging module or print statements, which ZenML will log. ```python import logging from zenml import step @step def my_step() -> None: logging.warning("`Hello`") # Using the logging module. print("World.") # Using print statements. ``` Logs are stored in the artifact store of your stack and can be viewed on the dashboard only if the ZenML server has access to this store. Access conditions include: - **Local ZenML Server**: Both local and remote artifact stores may be accessible based on client configuration. - **Deployed ZenML Server**: Logs from runs on a local artifact store are not accessible. Logs from a remote artifact store may be accessible if configured with a service connector. For configuring a remote artifact store with a service connector, refer to the production guide. If configured correctly, logs will be displayed in the dashboard. **Note**: To disable log storage for performance or storage reasons, follow the provided instructions. ================================================== === File: docs/book/how-to/control-logging/README.md === ### Configuring ZenML's Default Logging Behavior ZenML generates different types of logs across its components: 1. **ZenML Server Logs**: Produced by the ZenML Server, similar to any FastAPI server. 2. **Client or Runner Logs**: Generated during pipeline execution, capturing events before, during, and after a pipeline run. 3. **Execution Environment Logs**: Created at the orchestrator level during the execution of each pipeline step, typically using Python's `logging` module. This section outlines how users can manage logging behavior across these environments. ================================================== === File: docs/book/how-to/control-logging/set-logging-verbosity.md === ### Setting Logging Verbosity in ZenML ZenML defaults to a logging verbosity level of `INFO`. To change this, set the environment variable: ```bash export ZENML_LOGGING_VERBOSITY=INFO ``` Available levels are `INFO`, `WARN`, `ERROR`, `CRITICAL`, and `DEBUG`. Note that changes made in the client environment (e.g., local machine) do not affect remote pipeline runs. To set logging verbosity for remote runs, configure the environment variable in your pipeline's environment: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_LOGGING_VERBOSITY": "DEBUG"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure options my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) ``` This setup allows you to control logging verbosity for both local and remote pipeline executions. ================================================== === File: docs/book/how-to/control-logging/enable-or-disable-logs-storing.md === # ZenML Logging Configuration ZenML captures logs during step execution using a default logging handler. Users can utilize either the Python logging module or print statements, which ZenML will store in the artifact store. 
## Example Code ```python import logging from zenml import step @step def my_step() -> None: logging.warning("`Hello`") # Using logging module print("World.") # Using print statement ``` Logs can be viewed on the dashboard, but require a connected cloud artifact store with a configured service connector. For more details, refer to the [log viewing documentation](./view-logs-on-the-dasbhoard.md). ## Disabling Log Storage Logs can be disabled in two ways: 1. **Using Decorators**: - Disable logging for a specific step: ```python @step(enable_step_logs=False) def my_step() -> None: ... ``` - Disable logging for the entire pipeline: ```python @pipeline(enable_step_logs=False) def my_pipeline(): ... ``` 2. **Using Environment Variable**: Set the `ZENML_DISABLE_STEP_LOGS_STORAGE` environment variable to `true`. This setting takes precedence over the decorator parameters and must be configured at the orchestrator level: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_DISABLE_STEP_LOGS_STORAGE": "true"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure pipeline options my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) ``` This summary retains critical information regarding logging in ZenML, including how to enable, disable, and view logs, while providing concise code examples. ================================================== === File: docs/book/how-to/control-logging/disable-colorful-logging.md === ### Disabling Colorful Logging in ZenML ZenML uses colorful logging by default for better readability. To disable this feature, set the following environment variable: ```bash ZENML_LOGGING_COLORS_DISABLED=true ``` Setting this variable in the client environment (e.g., local machine) will disable colorful logging for remote pipeline runs as well. To disable it locally while keeping it enabled for remote runs, set the variable in the pipeline run environment: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_LOGGING_COLORS_DISABLED": "false"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure pipeline options my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) ``` This allows for flexible logging configurations based on the execution environment. ================================================== === File: docs/book/how-to/control-logging/set-logging-format.md === ### Summary: Setting the Logging Format in ZenML To change the default logging format in ZenML, set the environment variable `ZENML_LOGGING_FORMAT` using the following command: ```bash export ZENML_LOGGING_FORMAT='%(asctime)s %(message)s' ``` The logging format must adhere to the `%`-string formatting style. For available attributes, refer to the [Python logging documentation](https://docs.python.org/3/library/logging.html#logrecord-attributes). **Important Note:** Changing this variable in the client environment (e.g., local machine) does not affect remote pipeline runs. 
To configure logging format for remote runs, set the `ZENML_LOGGING_FORMAT` in the pipeline run's environment: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings(environment={"ZENML_LOGGING_FORMAT": "%(asctime)s %(message)s"}) @pipeline(settings={"docker": docker_settings}) def my_pipeline() -> None: my_step() # Alternatively, configure pipeline options my_pipeline = my_pipeline.with_options(settings={"docker": docker_settings}) ``` This setup ensures that the specified logging format is applied to both local and remote pipeline executions. ================================================== === File: docs/book/how-to/model-management-metrics/README.md === # Model Management and Metrics This section details managing models and tracking metrics in ZenML. ## Key Components 1. **Model Management**: - ZenML provides tools for versioning, storing, and deploying models. - Models can be registered, updated, and retrieved using a centralized model registry. 2. **Metrics Tracking**: - Metrics can be logged during training and evaluation phases. - ZenML supports integration with various tracking tools for visualization and analysis. 3. **Model Deployment**: - Models can be deployed to different environments (e.g., cloud, on-premises). - Deployment strategies include A/B testing, canary releases, and rolling updates. 4. **Version Control**: - Each model version is tracked, allowing rollback and comparison of performance metrics. 5. **Integration**: - ZenML integrates with popular ML frameworks and tools for seamless workflow management. ## Example Code Snippet ```python from zenml.model_registry import ModelRegistry # Initialize model registry registry = ModelRegistry() # Register a model registry.register(model_name="my_model", model_version="1.0", model_data=model) # Log metrics registry.log_metrics(model_name="my_model", metrics={"accuracy": 0.95}) ``` This summary encapsulates the essential aspects of model management and metrics tracking in ZenML, ensuring that critical information is preserved for further inquiries. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/grouping-metadata.md === ### Grouping Metadata in the Dashboard To organize metadata in the ZenML dashboard, you can pass a dictionary of dictionaries in the `metadata` parameter. This groups the metadata into cards, enhancing visualization and comprehension. #### Example Code: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize log_metadata( metadata={ "model_metrics": { "accuracy": 0.95, "precision": 0.92, "recall": 0.90 }, "data_details": { "dataset_size": StorageSize(1500000), "feature_columns": ["age", "income", "score"] } }, artifact_name="my_artifact", artifact_version="my_artifact_version", ) ``` In the ZenML dashboard, "model_metrics" and "data_details" will be displayed as separate cards, each with their respective key-value pairs. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/README.md === # Tracking and Comparing Metrics and Metadata in ZenML ## Overview ZenML provides the `log_metadata` function to log and manage metrics and metadata across models, artifacts, steps, and runs through a unified interface. Users can choose to automatically log the same metadata for related entities. 
## Logging Metadata ### Basic Usage To log metadata within a step: ```python from zenml import step, log_metadata @step def my_step() -> ...: log_metadata(metadata={"accuracy": 0.91}) ``` This logs `accuracy` for the step, its pipeline run, and optionally its model version. ### Real-World Example An example of logging various metadata in a machine learning pipeline: ```python from zenml import step, pipeline, log_metadata @step def process_engine_metrics() -> float: log_metadata({ "engine_temperature": 3650, # Kelvin "fuel_consumption_rate": 245, # kg/s "thrust_efficiency": 0.92, }) return 0.92 @step def analyze_flight_telemetry(efficiency: float) -> None: log_metadata({ "altitude": 220000, # meters "velocity": 7800, # m/s "fuel_remaining": 2150, # kg "mission_success_prob": 0.9985, }) @pipeline def telemetry_pipeline(): efficiency = process_engine_metrics() analyze_flight_telemetry(efficiency) ``` This data can be visualized in the ZenML Pro dashboard. ## Visualizing and Comparing Metadata (Pro) Use ZenML's Experiment Comparison tool to analyze and compare metrics across runs. This feature is available in the ZenML Pro dashboard. ### Comparison Views 1. **Table View**: Compare metadata across runs with automatic change tracking. 2. **Parallel Coordinates Plot**: Visualize relationships between different metrics. The tool supports comparisons of up to 20 pipeline runs and any numerical metadata logged. ### Additional Use-Cases The `log_metadata` function can target specific entities (model, artifact, step, or run). For more details, refer to: - [Log metadata to a step](attach-metadata-to-a-step.md) - [Log metadata to a run](attach-metadata-to-a-run.md) - [Log metadata to an artifact](attach-metadata-to-an-artifact.md) - [Log metadata to a model](attach-metadata-to-a-model.md) **Note**: Older methods like `log_model_metadata`, `log_artifact_metadata`, and `log_step_metadata` are deprecated. Use `log_metadata` for future implementations. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-run.md === ### Attach Metadata to a Run in ZenML In ZenML, you can log metadata to a pipeline run using the `log_metadata` function, which accepts a dictionary of key-value pairs. Values can be any JSON-serializable type, including ZenML custom types like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Run When logging metadata from within a pipeline step, use `log_metadata` to attach metadata with the key format `step_name::metadata_key`. This allows for consistent metadata keys across different steps during execution. ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ ClassifierMixin, ArtifactConfig(name="sklearn_classifier", is_model_artifact=True) ]: """Train a model and log run-level metadata.""" classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... 
log_metadata( metadata={ "run_metrics": { "accuracy": accuracy, "precision": precision, "recall": recall } } ) return classifier ``` #### Manually Logging Metadata You can also log metadata to a specific pipeline run without a step, using identifiers like the run ID: ```python from zenml import log_metadata log_metadata( metadata={"post_run_info": {"some_metric": 5.0}}, run_id_name_or_prefix="run_id_name_or_prefix" ) ``` #### Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client: ```python from zenml.client import Client client = Client() run = client.get_pipeline_run("run_id_name_or_prefix") print(run.run_metadata["metadata_key"]) ``` **Note:** The fetched value for a specific key will always reflect the latest entry. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-pipeline.md === ### Fetch Metadata During Pipeline Composition #### Pipeline Configuration Using `PipelineContext` To access pipeline configuration during composition, use the `zenml.get_pipeline_context()` function to retrieve the `PipelineContext`. **Example Code:** ```python from zenml import get_pipeline_context, pipeline @pipeline( extra={ "complex_parameter": [ ("sklearn.tree", "DecisionTreeClassifier"), ("sklearn.ensemble", "RandomForestClassifier"), ] } ) def my_pipeline(): context = get_pipeline_context() after = [] search_steps_prefix = "hp_tuning_search_" for i, model_search_configuration in enumerate(context.extra["complex_parameter"]): step_name = f"{search_steps_prefix}{i}" cross_validation( model_package=model_search_configuration[0], model_class=model_search_configuration[1], id=step_name ) after.append(step_name) select_best_model(search_steps_prefix=search_steps_prefix, after=after) ``` For more details on the attributes and methods available in `PipelineContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.pipelines.pipeline_context.PipelineContext). ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/fetch-metadata-within-steps.md === ### Accessing Meta Information in Real-Time within Your Pipeline #### Fetch Metadata within Steps To access information about the currently running pipeline or step, use the `zenml.get_step_context()` function to retrieve the `StepContext`: ```python from zenml import step, get_step_context @step def my_step(): step_context = get_step_context() pipeline_name = step_context.pipeline.name run_name = step_context.pipeline_run.name step_name = step_context.step_run.name ``` You can also determine where the outputs will be stored and which Materializer class will be used: ```python from zenml import step, get_step_context @step def my_step(): step_context = get_step_context() uri = step_context.get_output_artifact_uri() # Output storage URI materializer = step_context.get_output_materializer() # Materializer for output ``` For more details on the attributes and methods available in `StepContext`, refer to the [SDK Docs](https://sdkdocs.zenml.io/latest/core_code_docs/core-new/#zenml.steps.step_context.StepContext). 
================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-an-artifact.md === ### Summary: Attaching Metadata to Artifacts in ZenML In ZenML, metadata enhances artifacts by providing context and details like size and performance metrics, which are viewable in the ZenML dashboard. #### Logging Metadata for Artifacts Artifacts are outputs from pipeline steps (e.g., datasets, models). To log metadata, use the `log_metadata` function with the artifact name, version, or ID. Metadata can be any JSON-serializable value, including ZenML types like `Uri`, `Path`, `DType`, and `StorageSize`. **Example of Logging Metadata:** ```python import pandas as pd from zenml import step, log_metadata from zenml.metadata.metadata_types import StorageSize @step def process_data_step(dataframe: pd.DataFrame) -> pd.DataFrame: processed_dataframe = ... log_metadata( metadata={ "row_count": len(processed_dataframe), "columns": list(processed_dataframe.columns), "storage_size": StorageSize(processed_dataframe.memory_usage().sum()) }, infer_artifact=True, ) return processed_dataframe ``` #### Selecting the Artifact for Metadata Logging 1. **Using `infer_artifact`**: Automatically infers the output artifact of the step. 2. **Name and Version**: If both are provided, metadata attaches to the specific artifact version. 3. **Artifact Version ID**: Directly fetches and attaches to the specified artifact version. #### Fetching Logged Metadata Use the ZenML Client to fetch logged metadata: ```python from zenml.client import Client client = Client() artifact = client.get_artifact_version("my_artifact", "my_version") print(artifact.run_metadata["metadata_key"]) ``` *Note: The fetched value reflects the latest entry for the specified key.* #### Grouping Metadata in the Dashboard To group metadata into cards in the ZenML dashboard, pass a dictionary of dictionaries to the `metadata` parameter: ```python log_metadata( metadata={ "model_metrics": { "accuracy": 0.95, "precision": 0.92, "recall": 0.90 }, "data_details": { "dataset_size": StorageSize(1500000), "feature_columns": ["age", "income", "score"] } }, artifact_name="my_artifact", artifact_version="version", ) ``` In the dashboard, `model_metrics` and `data_details` will appear as separate cards. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-step.md === ### Summary: Attaching Metadata to a Step in ZenML In ZenML, you can log metadata to a specific step using the `log_metadata` function, which accepts a dictionary of key-value pairs. This metadata can include any JSON-serializable values, such as custom classes like `Uri`, `Path`, `DType`, and `StorageSize`. #### Logging Metadata Within a Step When `log_metadata` is called within a step, it attaches the metadata to the currently executing step and its pipeline run, making it suitable for logging metrics available during execution. **Example: Logging Metadata in a Step** ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... 
log_metadata(metadata={"evaluation_metrics": {"accuracy": accuracy, "precision": precision, "recall": recall}}) return classifier ``` **Note:** If a pipeline execution is cached, the cached step run will copy the original step's metadata, excluding any manually generated metadata post-execution. #### Manually Logging Metadata After Execution You can log metadata after a step's execution using identifiers for the pipeline, step, and run. **Example: Manual Metadata Logging** ```python from zenml import log_metadata log_metadata(metadata={"additional_info": {"a_number": 3}}, step_name="step_name", run_id_name_or_prefix="run_id_name_or_prefix") # or log_metadata(metadata={"additional_info": {"a_number": 3}}, step_id="step_id") ``` #### Fetching Logged Metadata To retrieve logged metadata, use the ZenML Client: **Example: Fetching Metadata** ```python from zenml.client import Client client = Client() step = client.get_pipeline_run("pipeline_id").steps["step_name"] print(step.run_metadata["metadata_key"]) ``` **Note:** Fetching metadata with a specific key will return the latest entry. ================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/attach-metadata-to-a-model.md === ### Summary: Attaching Metadata to a Model in ZenML ZenML allows logging metadata for models, providing context beyond individual artifact details. This metadata can include evaluation results, deployment information, or customer-specific details, aiding in model management and performance interpretation across versions. #### Logging Metadata for Models To log metadata, use the `log_metadata` function, which attaches key-value pairs, including metrics and JSON-serializable values like `Uri`, `Path`, and `StorageSize`. **Example:** ```python from typing import Annotated import pandas as pd from sklearn.base import ClassifierMixin from sklearn.ensemble import RandomForestClassifier from zenml import step, log_metadata, ArtifactConfig @step def train_model(dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig(name="sklearn_classifier")]: """Train a model and log metadata.""" classifier = RandomForestClassifier().fit(dataset) accuracy, precision, recall = ... log_metadata( metadata={ "evaluation_metrics": { "accuracy": accuracy, "precision": precision, "recall": recall } }, infer_model=True, ) return classifier ``` In this example, metadata is associated with the model, summarizing various pipeline steps and artifacts. #### Selecting Models with `log_metadata` ZenML provides options for attaching metadata to model versions: 1. **Using `infer_model`**: Automatically infers the model from the step context. 2. **Model Name and Version**: Specify both to attach metadata to a specific version. 3. **Model Version ID**: Directly provide it to fetch and attach metadata to that version. #### Fetching Logged Metadata To retrieve attached metadata, use the ZenML Client: ```python from zenml.client import Client client = Client() model = client.get_model_version("my_model", "my_version") print(model.run_metadata["metadata_key"]) ``` **Note**: Fetching metadata by a specific key returns the latest entry. This documentation provides a concise guide on logging and retrieving model metadata in ZenML, ensuring effective model management and performance tracking. 
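To complement the selection options above, here is a minimal sketch of attaching metadata to a specific model version from outside a step. It assumes `log_metadata` accepts `model_name` and `model_version` targeting parameters, mirroring the artifact- and step-targeting parameters shown elsewhere in this guide; the names and values are illustrative:

```python
from zenml import log_metadata

# Attach deployment details to an existing model version by name and version
# (parameter names assumed by analogy with the artifact/step targeting above)
log_metadata(
    metadata={"deployment_info": {"endpoint": "https://example.com/predict"}},
    model_name="my_model",
    model_version="my_version",
)
```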
================================================== === File: docs/book/how-to/model-management-metrics/track-metrics-metadata/logging-metadata.md === ### Summary: Tracking Your Metadata with ZenML ZenML supports special metadata types to capture specific information. Key types include: - **Uri**: Represents a dataset source URI. - **Path**: Specifies the filesystem path to a script. - **DType**: Describes data types of columns. - **StorageSize**: Indicates the size of processed data in bytes. #### Example Code: ```python from zenml import log_metadata from zenml.metadata.metadata_types import StorageSize, DType, Uri, Path log_metadata( metadata={ "dataset_source": Uri("gs://my-bucket/datasets/source.csv"), "preprocessing_script": Path("/scripts/preprocess.py"), "column_types": { "age": DType("int"), "income": DType("float"), "score": DType("int") }, "processed_data_size": StorageSize(2500000) }, ) ``` This example demonstrates how to log metadata using these types, ensuring consistency and interpretability in metadata format. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-artifacts-from-model.md === # Summary: Loading Artifacts from a Model This documentation outlines how to load artifacts from a model in a two-pipeline project, where the first pipeline handles training and the second performs batch inference using the trained model artifacts. ## Key Points - **Model Artifact Loading**: Artifacts can be passed between pipelines. The timing of loading these artifacts is crucial. - **Pipeline Context**: Use `get_pipeline_context().model` to access the model context during pipeline execution. This context is not evaluated during pipeline compilation, as the production model version may change. - **Delayed Materialization**: Calls like `model.get_model_artifact("trained_model")` are stored for delayed materialization, which occurs during the step execution. ### Example Code 1. **Using Pipeline Context**: ```python from typing_extensions import Annotated from zenml import get_pipeline_context, pipeline, Model from zenml.enums import ModelStages import pandas as pd from sklearn.base import ClassifierMixin @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: return pd.Series(model.predict(data)) @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): model = get_pipeline_context().model inference_data = load_data() predict(model=model.get_model_artifact("trained_model"), data=inference_data) if __name__ == "__main__": do_predictions() ``` 2. **Using Client Methods**: ```python from zenml.client import Client @pipeline def do_predictions(): model = Client().get_model_version("iris_classifier", ModelStages.PRODUCTION) inference_data = load_data() predict(model=model.get_model_artifact("trained_model"), data=inference_data) ``` In both examples, the model artifact is loaded only during the step execution, ensuring that the correct version is utilized. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/model-versions.md === # Model Versions Overview Model versions track different iterations of your training process, providing functionality to manage the ML lifecycle. You can associate model versions with stages (e.g., production, staging) and link them to non-technical artifacts like datasets or business data. 
Model versions are created automatically during training, but can also be explicitly named via the `version` argument in the `Model` object. ## Explicitly Naming Model Versions To explicitly name a model version: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="1.0.5") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training happens here ``` If the model version exists, it becomes active in the pipeline context. Be cautious when creating new pipelines or fetching existing ones. ## Using Name Templates for Model Versions For semantic versioning, use templated names in the `version` and/or `name` arguments: ```python from zenml import Model, step, pipeline model = Model(name="{team}_my_model", version="experiment_with_phi_3_{date}_{time}") @step(model=model) def llm_trainer(...) -> ...: ... @pipeline(model=model, substitutions={"team": "Team_A"}) def training_pipeline(...): # training happens here ``` This produces unique, readable model versions like `experiment_with_phi_3_2024_08_30_12_42_53`. Substitutions can be set in the `@pipeline` decorator, `pipeline.with_options`, `@step` decorator, or `step.with_options`. ### Standard Substitutions - `{date}`: Current date (e.g., `2024_11_27`) - `{time}`: Current time in UTC (e.g., `11_07_09_326492`) ## Fetching Model Versions by Stage Assign stages to model versions (e.g., `production`, `staging`) for semantic retrieval: ```shell zenml model version update MODEL_NAME --stage=STAGE ``` Fetch a model version by its stage: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="production") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training happens here ``` ## Autonumbering of Versions ZenML auto-generates version numbers if none is specified. For example: ```python from zenml import Model, step model = Model(name="my_model", version="even_better_version") @step(model=model) def svc_trainer(...) -> ...: ... ``` If `really_good_version` was the 5th version, `even_better_version` becomes the 6th: ```python from zenml import Model earlier_version = Model(name="my_model", version="really_good_version").number # == 5 updated_version = Model(name="my_model", version="even_better_version").number # == 6 ``` This ensures proper tracking of model iterations. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/README.md === # Use the Model Control Plane A `Model` in ZenML is an entity that consolidates pipelines, artifacts, metadata, and essential business data, encapsulating the business logic of ML products. It can be viewed as a "project" or "workspace." **Key Points:** - The technical model, which contains the model file(s) with weights and parameters, is a primary artifact associated with a ZenML Model. Other relevant artifacts include training data and production predictions. - Models are first-class citizens in ZenML, managed through a unified API and the ZenML Pro dashboard. - Models capture lineage information and support version staging, allowing for business rule-based promotion of model versions (e.g., from `Development` to `Production`). - The Model Control Plane provides a centralized interface for managing models, integrating pipelines, artifacts, and business data with the technical model. 
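To make these points concrete, here is a minimal sketch (the model name is illustrative) of attaching a pipeline to a Model and then promoting the resulting version by stage, using the APIs covered in the following sections:

```python
from zenml import Model, pipeline
from zenml.enums import ModelStages

@pipeline(model=Model(name="demo_classifier"))
def training_pipeline():
    ...

# After a satisfactory run, promote the latest version according to business rules
latest = Model(name="demo_classifier", version=ModelStages.LATEST)
latest.set_stage(stage=ModelStages.PRODUCTION)
```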
For a comprehensive example, refer to the [starter guide](../../../user-guide/starter-guide/track-ml-models.md). ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/register-a-model.md === # Model Registration in ZenML Models can be registered in ZenML through various methods: explicitly via CLI or Python SDK, or implicitly during a pipeline run. ZenML Pro users have access to a dashboard for model registration. ## Explicit Registration ### CLI Registration Use the following command to register a model via the CLI: ```bash zenml model register iris_logistic_regression --license=... --description=... ``` For more options, run `zenml model register --help`. You can also add tags using the `--tag` option. ### Dashboard Registration ZenML Pro users can register models directly from the cloud dashboard interface. ### Python SDK Registration Register a model using the Python SDK as follows: ```python from zenml import Model from zenml.client import Client Client().create_model( name="iris_logistic_regression", license="Copyright (c) ZenML GmbH 2023", description="Logistic regression model trained on the Iris dataset.", tags=["regression", "sklearn", "iris"], ) ``` ## Implicit Registration Models can also be registered implicitly during a pipeline run by specifying a `Model` object in the `@pipeline` decorator. Here’s an example of a training pipeline: ```python from zenml import pipeline from zenml import Model @pipeline( enable_cache=False, model=Model( name="demo", license="Apache", description="Show case Model Control Plane.", ), ) def train_and_promote_model(): ... ``` Running this pipeline creates a new model version and maintains connections to the associated artifacts. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/linking-model-binaries-data-to-models.md === # Linking Model Binaries/Data to Models in ZenML ZenML allows linking artifacts generated during pipeline runs to models, enabling lineage tracking and transparency in data and model usage for training, evaluation, and inference. ## Configuring the Model at the Pipeline Level You can link artifacts by configuring the `model` parameter in the `@pipeline` or `@step` decorator: ```python from zenml import Model, pipeline model = Model(name="my_model", version="1.0.0") @pipeline(model=model) def my_pipeline(): ... ``` This links all artifacts from the pipeline run to the specified model. ## Saving Intermediate Artifacts To save intermediate work, use the `save_artifact` utility. 
If the step has the Model context configured, it will automatically link to it: ```python from zenml import step, Model from zenml.artifacts.utils import save_artifact import pandas as pd from typing_extensions import Annotated from zenml.artifacts.artifact_config import ArtifactConfig @step(model=Model(name="MyModel", version="1.2.42")) def trainer(trn_dataset: pd.DataFrame) -> Annotated[ClassifierMixin, ArtifactConfig("trained_model")]: for epoch in epochs: checkpoint = model.train(epoch) save_artifact(data=checkpoint, name="training_checkpoint", version=f"1.2.42_{epoch}") return model ``` ## Explicitly Linking Artifacts To link an artifact to a model outside of a step, use the `link_artifact_to_model` function: ```python from zenml import step, Model, link_artifact_to_model, save_artifact from zenml.client import Client @step def f_() -> None: new_artifact = save_artifact(data="Hello, World!", name="manual_artifact") link_artifact_to_model(artifact_version_id=new_artifact.id, model=Model(name="MyModel", version="0.0.42")) existing_artifact = Client().get_artifact_version(name_id_or_prefix="existing_artifact") link_artifact_to_model(artifact_version_id=existing_artifact.id, model=Model(name="MyModel", version="0.2.42")) ``` This allows for flexible linking of artifacts to models, enhancing traceability and management of model versions. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/connecting-artifacts-via-a-model.md === ### Summary: Structuring an MLOps Project In MLOps, managing artifacts, models, and pipelines is crucial for project structure. An MLOps project typically consists of multiple pipelines, such as: - **Feature Engineering Pipeline**: Prepares raw data for training. - **Training Pipeline**: Trains models using data from the feature engineering pipeline. - **Inference Pipeline**: Runs predictions using the trained model. - **Deployment Pipeline**: Deploys the trained model to a production endpoint. The design of these pipelines can vary based on project requirements, and they often need to share information (artifacts, models, metadata) between them. #### Common Patterns for Artifact Exchange 1. **Artifact Exchange via `Client`**: - Use the ZenML Client to facilitate data transfer between pipelines. - Example code: ```python from zenml import pipeline from zenml.client import Client @pipeline def feature_engineering_pipeline(): train_data, test_data = prepare_data() @pipeline def training_pipeline(): client = Client() train_data = client.get_artifact_version(name="iris_training_dataset") test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") sklearn_classifier = model_trainer(train_data) model_evaluator(model, sklearn_classifier) ``` *Note: Artifacts are not materialized in memory but referenced in the artifact store.* 2. **Artifact Exchange via `Model`**: - Use ZenML Model as a reference point for artifacts. - The training pipeline (`train_and_promote`) produces models, which are promoted based on accuracy. The inference pipeline (`do_predictions`) uses the latest promoted model without needing artifact IDs. 
- Example code: ```python from zenml import step, get_step_context @step(enable_cache=False) def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: model = get_step_context().model.get_model_artifact("trained_model") predictions = pd.Series(model.predict(data)) return predictions ``` - Alternatively, resolve the artifact at the pipeline level: ```python from zenml import get_pipeline_context, pipeline, Model from zenml.enums import ModelStages @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: return pd.Series(model.predict(data)) @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): model = get_pipeline_context().model.get_model_artifact("trained_model") inference_data = load_data() predict(model=model, data=inference_data) if __name__ == "__main__": do_predictions() ``` Both artifact exchange methods are valid; the choice depends on user preference. For further details on project repository structure, refer to the best practices section. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/associate-a-pipeline-with-a-model.md === ### Summary of Documentation on Associating a Pipeline with a Model To associate a pipeline with a model in ZenML, use the following Python code: ```python from zenml import pipeline from zenml import Model from zenml.enums import ModelStages @pipeline( model=Model( name="ClassificationModel", # Unique model name tags=["MVP", "Tabular"], # Tags for filtering version=ModelStages.LATEST # Specify model version or stage ) ) def my_pipeline(): ... ``` This code associates the pipeline with the specified model. If the model already exists, a new version is created. To attach the pipeline to an existing model version, specify the version explicitly. Model configuration can also be moved to a configuration file, as shown below: ```yaml model: name: text_classifier description: A breast cancer classifier tags: ["classifier", "sgd"] ``` This allows for better organization and management of model settings. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/delete-a-model.md === ### Deleting Models in ZenML **Overview**: Deleting a model or its specific version removes all links to artifacts and pipeline runs, along with associated metadata. #### Deleting All Versions of a Model - **CLI Command**: ```shell zenml model delete ``` - **Python SDK**: ```python from zenml.client import Client Client().delete_model() ``` #### Deleting a Specific Version of a Model - **CLI Command**: ```shell zenml model version delete ``` - **Python SDK**: ```python from zenml.client import Client Client().delete_model_version() ``` This documentation provides the necessary commands and code snippets for deleting models and their versions using both CLI and Python SDK. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/promote-a-model.md === # Model Promotion in ZenML ## Stages ZenML Model versions can progress through various lifecycle stages, which serve as metadata to indicate their state. The stages include: - **staging**: Ready for production. - **production**: Actively running in production. - **latest**: Represents the most recent version; cannot be promoted to this stage. - **archived**: No longer relevant, typically after moving from another stage. 
### Promotion Methods Models can be promoted using the following methods: #### 1. CLI Promotion Use the CLI command: ```bash zenml model version update iris_logistic_regression --stage=... ``` #### 2. Cloud Dashboard Promotion via the ZenML Pro dashboard is forthcoming. #### 3. Python SDK Promotion The most common method. Example: ```python from zenml import Model from zenml.enums import ModelStages MODEL_NAME = "iris_logistic_regression" model = Model(name=MODEL_NAME, version="1.2.3") model.set_stage(stage=ModelStages.PRODUCTION) latest_model = Model(name=MODEL_NAME, version=ModelStages.LATEST) latest_model.set_stage(stage=ModelStages.STAGING) ``` In a pipeline context, retrieve the model from the step context: ```python from zenml import get_step_context, step, pipeline from zenml.enums import ModelStages @step def promote_to_staging(): model = get_step_context().model model.set_stage(ModelStages.STAGING, force=True) @pipeline(...) def train_and_promote_model(): ... promote_to_staging(after=["train_and_evaluate"]) ``` ## Fetching Model Versions by Stage To load the correct model version, specify the stage in the version parameter: ```python from zenml import Model, step, pipeline model = Model(name="my_model", version="production") @step(model=model) def svc_trainer(...) -> ...: ... @pipeline(model=model) def training_pipeline(...): # training logic here ``` This configuration ensures the specified model is used across steps and the pipeline. ================================================== === File: docs/book/how-to/model-management-metrics/model-control-plane/load-a-model-in-code.md === # Summary of ZenML Model Loading Documentation ## Loading a Model in Code ### 1. Load the Active Model in a Pipeline You can access the active model and its metadata in a pipeline using the following code: ```python from zenml import step, pipeline, get_step_context, Model @pipeline(model=Model(name="my_model")) def my_pipeline(): ... @step def my_step(): mv = get_step_context().model # Get model from active step context print(mv.run_metadata["metadata_key"].value) # Get metadata output = mv.get_artifact("my_dataset", "my_version") # Fetch artifact output.run_metadata["accuracy"].value ``` ### 2. Load Any Model via the Client You can also load models using the `Client`. Here’s how to get a staging model version: ```python from zenml import step from zenml.client import Client from zenml.enums import ModelStages @step def model_evaluator_step(): try: staging_zenml_model = Client().get_model_version( model_name_or_id="", model_version_name_or_number_or_id=ModelStages.STAGING, ) except KeyError: staging_zenml_model = None ``` This documentation provides methods to load models in ZenML, either through an active pipeline context or via the Client API. ================================================== === File: docs/book/how-to/advanced-topics/README.md === # Advanced Topics in ZenML This section addresses advanced features and configurations within ZenML, focusing on enhancing the functionality and customization of the framework. ### Key Features 1. **Custom Components**: Users can create custom components to extend ZenML's capabilities, allowing for tailored data processing and model training. 2. **Pipelines**: ZenML supports complex pipeline configurations, enabling the orchestration of multiple steps in machine learning workflows. 3. **Integrations**: ZenML integrates with various tools and platforms, facilitating seamless connections with cloud services, data storage, and ML frameworks. 4. 
**Metadata Management**: Users can manage metadata effectively, ensuring traceability and reproducibility of experiments. 5. **Versioning**: ZenML supports version control for pipelines and components, allowing users to track changes and revert to previous versions if necessary. ### Configuration - **Configuration Files**: Users can define configurations in YAML files, specifying parameters for pipelines, components, and integrations. - **Environment Variables**: ZenML allows the use of environment variables for sensitive information, enhancing security and flexibility. ### Example Code Snippet ```python from zenml.pipelines import pipeline @pipeline def my_pipeline(): # Define pipeline steps step1 = custom_component_1() step2 = custom_component_2(step1) # Run the pipeline my_pipeline.run() ``` ### Conclusion Advanced configurations in ZenML empower users to create robust, scalable, and customizable machine learning workflows, enhancing productivity and collaboration. ================================================== === File: docs/book/how-to/data-artifact-management/README.md === # Data and Artifact Management in ZenML This section details the management of data and artifacts within ZenML. Key points include: - **Data Management**: ZenML facilitates the handling of datasets throughout the machine learning lifecycle, ensuring reproducibility and version control. - **Artifact Management**: Artifacts, such as models and metrics, are tracked and stored for easy retrieval and analysis. ZenML supports various artifact stores. - **Versioning**: Both data and artifacts can be versioned, allowing users to revert to previous states and maintain consistency across experiments. - **Integration**: ZenML integrates with multiple data sources and storage solutions, enabling seamless data ingestion and artifact storage. - **Pipeline Support**: Data and artifacts are managed within pipelines, ensuring that each step has access to the necessary resources. - **Code Example**: ```python from zenml import pipeline @pipeline def my_pipeline(): # Data ingestion and processing steps pass ``` This summary encapsulates the essential aspects of data and artifact management in ZenML, focusing on functionality, integration, and practical usage. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/types-of-visualizations.md === ### Types of Visualizations in ZenML ZenML automatically saves and displays visualizations for various data types in the ZenML dashboard. These visualizations can also be viewed in Jupyter notebooks using the `artifact.visualize()` method. **Default Visualization Examples:** - **Statistical Representation:** Visualizes a Pandas DataFrame as a PNG image. - **Drift Detection Reports:** Generated by tools like Evidently, Great Expectations, and whylogs. - **Hugging Face Datasets Viewer:** Displays datasets in an embedded HTML iframe. 
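To inspect these outside the dashboard, a minimal sketch of rendering an artifact's visualization in a notebook (the artifact name is illustrative):

```python
from zenml.client import Client

# Fetch an artifact version and render its saved visualization inline
artifact = Client().get_artifact_version("my_dataset")
artifact.visualize()
```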
**Visualization Outputs:** - Dashboard view: ![ZenML Artifact Visualizations](../../../.gitbook/assets/artifact_visualization_dashboard.png) - Jupyter notebook output: ![output.visualize() Output](../../../.gitbook/assets/artifact_visualization_evidently.png) - Hugging Face datasets viewer: ![output.visualize() output for the Hugging Face datasets viewer](../../../.gitbook/assets/artifact_visualization_huggingface.gif) ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/README.md === --- icon: chart-scatter description: Configuring ZenML for data visualizations in the dashboard. --- # Visualize Artifacts ZenML allows easy association of visualizations with data and artifacts. ![ZenML Artifact Visualizations](../../../.gitbook/assets/artifact_visualization_dashboard.png)
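As a quick preview of what this looks like in code (the special return types are covered later in this section), a minimal sketch of a step whose output is rendered as HTML in the dashboard:

```python
from zenml import step
from zenml.types import HTMLString

@step
def render_report() -> HTMLString:
    # The returned HTML is stored with the artifact and shown in the dashboard
    return HTMLString("<h1>Data report</h1><p>All checks passed.</p>")
```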
================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/creating-custom-visualizations.md === # Creating Custom Visualizations in ZenML ZenML allows you to associate custom visualizations with artifacts if they are of supported types: - **HTML:** Embedded HTML visualizations (e.g., data validation reports) - **Image:** Visualizations of image data (e.g., Pillow images) - **CSV:** Tables (e.g., pandas DataFrame output) - **Markdown:** Markdown strings or pages - **JSON:** JSON strings or objects ## Adding Custom Visualizations You can add custom visualizations in three ways: 1. **Special Return Types:** If you have HTML, Markdown, CSV, or JSON data, cast them to a specific class in your step. 2. **Custom Materializers:** Define visualization logic for specific data types by building a custom materializer. 3. **Custom Return Type Class:** Create a custom return type class with a corresponding materializer. ### Visualization via Special Return Types Return data from your step by casting to one of the following types: - `zenml.types.HTMLString` - `zenml.types.MarkdownString` - `zenml.types.CSVString` - `zenml.types.JSONString` #### Example: ```python from zenml.types import CSVString @step def my_step() -> CSVString: return CSVString("a,b,c\n1,2,3") ``` ### Visualization via Materializers To visualize artifacts of a certain data type automatically, override the `save_visualizations()` method in a custom materializer. #### Example: Matplotlib Figure Visualization 1. **Custom Class:** ```python from pydantic import BaseModel class MatplotlibVisualization(BaseModel): figure: Any ``` 2. **Materializer:** ```python class MatplotlibMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (MatplotlibVisualization,) def save_visualizations(self, data: MatplotlibVisualization) -> Dict[str, VisualizationType]: visualization_path = os.path.join(self.uri, "visualization.png") with fileio.open(visualization_path, 'wb') as f: data.figure.savefig(f, format='png', bbox_inches='tight') return {visualization_path: VisualizationType.IMAGE} ``` 3. **Step:** ```python @step def create_matplotlib_visualization() -> MatplotlibVisualization: fig, ax = plt.subplots() ax.plot([1, 2, 3, 4], [1, 4, 2, 3]) ax.set_title('Sample Plot') return MatplotlibVisualization(figure=fig) ``` ### Workflow When using the step in a pipeline: 1. The step returns a `MatplotlibVisualization`. 2. ZenML invokes the `MatplotlibMaterializer` and calls `save_visualizations()`. 3. The figure is saved as a PNG in the artifact store. 4. The dashboard displays the PNG when viewing the artifact. For additional examples, refer to the Hugging Face datasets materializer for dataset visualizations. ================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/disabling-visualizations.md === ### Disabling Visualizations To disable artifact visualization, set `enable_artifact_visualization` to `False` at the pipeline or step level: ```python @step(enable_artifact_visualization=False) def my_step(): ... @pipeline(enable_artifact_visualization=False) def my_pipeline(): ... ``` This configuration prevents visualizations from being generated for the specified step or pipeline. 
================================================== === File: docs/book/how-to/data-artifact-management/visualize-artifacts/visualizations-in-dashboard.md === ### Displaying Visualizations in the Dashboard #### Accessing Visualizations To display visualizations on the ZenML dashboard, the ZenML server must access the artifact store where visualizations are stored. #### Configuring a Service Connector - Users must configure a **service connector** to allow the ZenML server to connect to the artifact store. - For detailed instructions, refer to the [service connector documentation](../../infrastructure-deployment/auth-management/README.md). - Example: See the [AWS S3 artifact store documentation](../../../component-guide/artifact-stores/s3.md). **Note:** When using the default/local artifact store with a deployed ZenML, the server cannot access local files, and visualizations will not appear. Use a service connector with a remote artifact store for visualization access. #### Configuring Artifact Stores If visualizations from a pipeline run are missing, ensure the ZenML server has the necessary dependencies and permissions for the artifact store. For more details, consult the [custom artifact store documentation](../../../component-guide/artifact-stores/custom.md#enabling-artifact-visualizations-with-custom-artifact-stores). ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/README.md === ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/unmaterialized-artifacts.md === ### Summary: Skipping Materialization of Artifacts in ZenML #### Overview In ZenML, a pipeline is data-centric, with steps defined by their inputs and outputs, which interact with an **artifact store**. **Materializers** manage the serialization and deserialization of artifacts as they are passed between steps. However, there are scenarios where you may want to skip materialization and use a reference to an artifact instead. **Warning:** Skipping materialization can have unintended consequences for downstream tasks. Only do this if necessary. #### Using Unmaterialized Artifacts An unmaterialized artifact is represented by `zenml.materializers.UnmaterializedArtifact`, which includes a `uri` property pointing to its storage location. To use an unmaterialized artifact in a step, specify `UnmaterializedArtifact` as the type. **Example Code:** ```python from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml import step @step def my_step(my_artifact: UnmaterializedArtifact): pass ``` #### Pipeline Example In the following pipeline, `s1` and `s2` produce identical artifacts. `s3` consumes materialized artifacts, while `s4` uses unmaterialized artifacts, allowing direct access to their URIs.
**Pipeline Structure:** ``` s1 -> s3 s2 -> s4 ``` **Example Code:** ```python from typing_extensions import Annotated from typing import Dict, List, Tuple from zenml.artifacts.unmaterialized_artifact import UnmaterializedArtifact from zenml import pipeline, step @step def step_1() -> Tuple[Annotated[Dict[str, str], "dict_"], Annotated[List[str], "list_"]]: return {"some": "data"}, [] @step def step_2() -> Tuple[Annotated[Dict[str, str], "dict_"], Annotated[List[str], "list_"]]: return {"some": "data"}, [] @step def step_3(dict_: Dict, list_: List) -> None: assert isinstance(dict_, dict) assert isinstance(list_, list) @step def step_4(dict_: UnmaterializedArtifact, list_: UnmaterializedArtifact) -> None: print(dict_.uri) print(list_.uri) @pipeline def example_pipeline(): step_3(*step_1()) step_4(*step_2()) example_pipeline() ``` For further details on using `UnmaterializedArtifact`, refer to additional examples in the documentation. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/registering-existing-data.md === # Summary of ZenML Artifact Registration Documentation ## Overview This documentation explains how to register external data as ZenML artifacts for future use, including folders and files, as well as managing model checkpoints during training with PyTorch Lightning. ## Registering Existing Data ### 1. Register Existing Folder as a ZenML Artifact You can register a folder containing data as a ZenML artifact. The following code demonstrates how to create a folder, add a file, and register it: ```python import os from uuid import uuid4 from pathlib import Path from zenml.client import Client from zenml import register_artifact prefix = Client().active_stack.artifact_store.path folder_path = os.path.join(prefix, f"my_test_folder_{uuid4()}") os.mkdir(folder_path) with open(os.path.join(folder_path, "test_file.txt"), "w") as f: f.write("test") register_artifact(folder_path, name="my_folder_artifact") # Load and verify the artifact temp_artifact_folder_path = Client().get_artifact_version("my_folder_artifact").load() assert isinstance(temp_artifact_folder_path, Path) assert os.path.isdir(temp_artifact_folder_path) with open(os.path.join(temp_artifact_folder_path, "test_file.txt"), "r") as f: assert f.read() == "test" ``` ### 2. 
Register Existing File as a ZenML Artifact Similarly, you can register a single file as an artifact: ```python import os from uuid import uuid4 from pathlib import Path from zenml.client import Client from zenml import register_artifact prefix = Client().active_stack.artifact_store.path file_path = os.path.join(prefix, f"my_test_file_{uuid4()}.txt") with open(file_path, "w") as f: f.write("test") register_artifact(file_path, name="my_file_artifact") # Load and verify the artifact temp_artifact_file_path = Client().get_artifact_version("my_file_artifact").load() assert isinstance(temp_artifact_file_path, Path) with open(temp_artifact_file_path, "r") as f: assert f.read() == "test" ``` ## Registering Checkpoints of a PyTorch Lightning Training Run ### Register All Checkpoints To register all checkpoints during a training run, use the following code: ```python from zenml.client import Client from zenml import register_artifact from pytorch_lightning import Trainer from pytorch_lightning.callbacks import ModelCheckpoint from uuid import uuid4 prefix = Client().active_stack.artifact_store.path default_root_dir = os.path.join(prefix, uuid4().hex) trainer = Trainer( default_root_dir=default_root_dir, callbacks=[ModelCheckpoint(every_n_epochs=1, save_top_k=-1)] ) trainer.fit(model) register_artifact(default_root_dir, name="all_my_model_checkpoints") ``` ### Register Checkpoints as Separate Artifact Versions To register each checkpoint as a separate artifact version, extend the `ModelCheckpoint` class: ```python from zenml.client import Client from zenml import register_artifact, get_step_context from pytorch_lightning.callbacks import ModelCheckpoint class ZenMLModelCheckpoint(ModelCheckpoint): def __init__(self, artifact_name: str, every_n_epochs: int = 1, save_top_k: int = -1): zenml_model = get_step_context().model self.artifact_name = artifact_name self.default_root_dir = os.path.join(Client().active_stack.artifact_store.path, str(zenml_model.version)) super().__init__(every_n_epochs=every_n_epochs, save_top_k=save_top_k) def on_train_epoch_end(self, trainer, pl_module): super().on_train_epoch_end(trainer, pl_module) register_artifact(os.path.join(self.dirpath, self.filename_format.format(epoch=trainer.current_epoch)), self.artifact_name) ``` ## Full Example of a PyTorch Lightning Training Pipeline The following code snippet illustrates a complete pipeline for training a model with checkpoints registered as artifacts: ```python from zenml import step, pipeline from pytorch_lightning import Trainer, LightningModule from torch.utils.data import DataLoader from torchvision.datasets import MNIST from torchvision.transforms import ToTensor @step def get_data() -> DataLoader: return DataLoader(MNIST(os.getcwd(), download=True, transform=ToTensor())) @step def get_model() -> LightningModule: # Define and return the model pass @step def train_model(model: LightningModule, train_loader: DataLoader): chkpt_cb = ZenMLModelCheckpoint(artifact_name="my_model_ckpts") trainer = Trainer(callbacks=[chkpt_cb]) trainer.fit(model, train_loader) @pipeline def train_pipeline(): train_loader = get_data() model = get_model() train_model(model, train_loader) if __name__ == "__main__": train_pipeline() ``` This documentation provides a comprehensive guide on registering artifacts in ZenML, including handling checkpoints during model training with PyTorch Lightning. 
================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/manage-big-data.md === # Scaling Strategies for Big Data in ZenML ## Overview This documentation outlines strategies for scaling ZenML pipelines to manage large datasets effectively. It categorizes datasets by size and provides specific techniques for optimization and processing. ## Dataset Size Thresholds 1. **Small datasets (up to a few GB)**: Handled in-memory with pandas. 2. **Medium datasets (up to tens of GB)**: Require chunking or out-of-core processing. 3. **Large datasets (hundreds of GB or more)**: Necessitate distributed processing frameworks. ## Strategies for Small Datasets 1. **Efficient Data Formats**: Use formats like Parquet instead of CSV. ```python import pyarrow.parquet as pq class ParquetDataset(Dataset): def __init__(self, data_path: str): self.data_path = data_path def read_data(self) -> pd.DataFrame: return pq.read_table(self.data_path).to_pandas() def write_data(self, df: pd.DataFrame): pq.write_table(pa.Table.from_pandas(df), self.data_path) ``` 2. **Data Sampling**: Implement sampling methods. ```python class SampleableDataset(Dataset): def sample_data(self, fraction: float = 0.1) -> pd.DataFrame: return self.read_data().sample(frac=fraction) @step def analyze_sample(dataset: SampleableDataset) -> Dict[str, float]: sample = dataset.sample_data() return {"mean": sample["value"].mean(), "std": sample["value"].std()} ``` 3. **Optimize Pandas Operations**: Use efficient operations. ```python @step def optimize_processing(df: pd.DataFrame) -> pd.DataFrame: df['new_column'] = df['column1'] + df['column2'] df['mean_normalized'] = df['value'] - np.mean(df['value']) return df ``` ## Strategies for Medium Datasets ### Chunking for CSV Datasets Implement chunking in Dataset classes. ```python class ChunkedCSVDataset(Dataset): def __init__(self, data_path: str, chunk_size: int = 10000): self.data_path = data_path self.chunk_size = chunk_size def read_data(self): for chunk in pd.read_csv(self.data_path, chunksize=self.chunk_size): yield chunk @step def process_chunked_csv(dataset: ChunkedCSVDataset) -> pd.DataFrame: return pd.concat([process_chunk(chunk) for chunk in dataset.read_data()]) ``` ### Leveraging Data Warehouses Use data warehouses like Google BigQuery for distributed processing. ```python @step def process_big_query_data(dataset: BigQueryDataset) -> BigQueryDataset: client = bigquery.Client() query = f"SELECT column1, AVG(column2) as avg_column2 FROM `{dataset.table_id}` GROUP BY column1" job_config = bigquery.QueryJobConfig(destination=f"{dataset.project}.{dataset.dataset}.processed_data") client.query(query, job_config=job_config).result() return BigQueryDataset(table_id=result_table_id) ``` ## Strategies for Very Large Datasets ### Using Distributed Computing Frameworks #### Apache Spark Initialize Spark within a ZenML step. ```python from pyspark.sql import SparkSession @step def process_with_spark(input_data: str) -> None: spark = SparkSession.builder.appName("ZenMLSparkStep").getOrCreate() df = spark.read.csv(input_data, header=True) result = df.groupBy("column1").agg({"column2": "mean"}) result.write.csv("output_path", header=True, mode="overwrite") spark.stop() ``` #### Ray Use Ray for distributed processing. 
```python import ray @step def process_with_ray(input_data: str) -> None: ray.init() @ray.remote def process_partition(partition): return processed_partition data = load_data(input_data) partitions = split_data(data) results = ray.get([process_partition.remote(part) for part in partitions]) combined_results = combine_results(results) save_results(combined_results, "output_path") ray.shutdown() ``` #### Dask Integrate Dask for parallel computing. ```python import dask.dataframe as dd @step def create_dask_dataframe(): return dd.from_pandas(pd.DataFrame({'A': range(1000), 'B': range(1000, 2000)}), npartitions=4) @step def process_dask_dataframe(df: dd.DataFrame) -> dd.DataFrame: return df.map_partitions(lambda x: x ** 2) @pipeline def dask_pipeline(): df = create_dask_dataframe() processed = process_dask_dataframe(df) ``` #### Numba Use Numba for JIT compilation to speed up computations. ```python from numba import jit @jit(nopython=True) def numba_function(x): return x * x + 2 * x - 1 @step def apply_numba_function(data: np.ndarray) -> np.ndarray: return numba_function(data) ``` ## Important Considerations 1. **Environment Setup**: Ensure necessary frameworks are installed. 2. **Resource Management**: Coordinate resource allocation between frameworks and ZenML. 3. **Error Handling**: Implement error handling for resource cleanup. 4. **Data I/O**: Use intermediate storage for large datasets. 5. **Scaling**: Ensure infrastructure supports the scale of computation. ## Choosing the Right Scaling Strategy Consider dataset size, processing complexity, infrastructure, update frequency, and team expertise when selecting a scaling strategy. Start simple and evolve as needed to maintain efficient workflows in ZenML. For more details on custom Dataset classes, refer to [custom dataset classes](datasets.md). ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/passing-artifacts-between-pipelines.md === ### Structuring an MLOps Project An MLOps project typically consists of multiple pipelines, including: - **Feature Engineering Pipeline**: Prepares raw data for training. - **Training Pipeline**: Trains models using data from the feature engineering pipeline. - **Inference Pipeline**: Runs batch predictions on the trained model. - **Deployment Pipeline**: Deploys the trained model to a production endpoint. The structure of these pipelines can vary based on project requirements, and sharing artifacts (models, metadata) between them is often necessary. #### Pattern 1: Artifact Exchange via `Client` In this pattern, the ZenML Client facilitates the exchange of artifacts between pipelines. For example, a feature engineering pipeline produces datasets that are then used by a training pipeline. **Code Example:** ```python from zenml import pipeline from zenml.client import Client @pipeline def feature_engineering_pipeline(): dataset = load_data() train_data, test_data = prepare_data() @pipeline def training_pipeline(): client = Client() train_data = client.get_artifact_version(name="iris_training_dataset") test_data = client.get_artifact_version(name="iris_testing_dataset", version="raw_2023") sklearn_classifier = model_trainer(train_data) model_evaluator(model, sklearn_classifier) ``` *Note: Artifacts are referenced, not materialized in memory during pipeline compilation.* #### Pattern 2: Artifact Exchange via `Model` This pattern uses the ZenML Model as a reference point for artifact exchange. 
For instance, a training pipeline (`train_and_promote`) produces models, and an inference pipeline (`do_predictions`) uses the latest promoted model without needing to know artifact IDs. **Code Example:** ```python from zenml import step, get_step_context @step(enable_cache=False) def predict(data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: model = get_step_context().model.get_model_artifact("trained_model") predictions = pd.Series(model.predict(data)) return predictions ``` *Note: Caching should be disabled to avoid unexpected results.* Alternatively, artifacts can be resolved at the pipeline level: **Code Example:** ```python from typing_extensions import Annotated from zenml import get_pipeline_context, pipeline, Model from zenml.enums import ModelStages import pandas as pd from sklearn.base import ClassifierMixin @step def predict(model: ClassifierMixin, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: return pd.Series(model.predict(data)) @pipeline(model=Model(name="iris_classifier", version=ModelStages.PRODUCTION)) def do_predictions(): model = get_pipeline_context().model inference_data = load_data() predict(model=model.get_model_artifact("trained_model"), data=inference_data) if __name__ == "__main__": do_predictions() ``` Both approaches are valid; the choice depends on user preference. ================================================== === File: docs/book/how-to/data-artifact-management/complex-usecases/datasets.md === # Custom Dataset Classes and Complex Data Flows in ZenML ## Overview This documentation covers the use of custom Dataset classes and Materializers in ZenML for managing complex data flows in machine learning projects. Custom Dataset classes encapsulate data loading, processing, and saving logic for various data sources. ### Custom Dataset Classes Custom Dataset classes are beneficial when: 1. Handling multiple data sources (e.g., CSV, databases). 2. Managing complex data structures. 3. Implementing custom data processing. ### Implementation Example A base `Dataset` class is defined, with implementations for CSV and BigQuery data sources: ```python from abc import ABC, abstractmethod import pandas as pd from google.cloud import bigquery from typing import Optional class Dataset(ABC): @abstractmethod def read_data(self) -> pd.DataFrame: pass class CSVDataset(Dataset): def __init__(self, data_path: str, df: Optional[pd.DataFrame] = None): self.data_path = data_path self.df = df def read_data(self) -> pd.DataFrame: if self.df is None: self.df = pd.read_csv(self.data_path) return self.df class BigQueryDataset(Dataset): def __init__(self, table_id: str, project: Optional[str] = None): self.table_id = table_id self.project = project self.client = bigquery.Client(project=self.project) def read_data(self) -> pd.DataFrame: query = f"SELECT * FROM `{self.table_id}`" return self.client.query(query).to_dataframe() def write_data(self) -> None: job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE") job = self.client.load_table_from_dataframe(self.df, self.table_id, job_config=job_config) job.result() ``` ### Custom Materializers Materializers handle serialization and deserialization of artifacts. 
Custom Materializers are created for the Dataset classes: ```python from zenml.materializers import BaseMaterializer from zenml.io import fileio from zenml.enums import ArtifactType import json import tempfile import pandas as pd class CSVDatasetMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (CSVDataset,) ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA def load(self, data_type: Type[CSVDataset]) -> CSVDataset: with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file: with fileio.open(os.path.join(self.uri, "data.csv"), "rb") as source_file: temp_file.write(source_file.read()) return CSVDataset(temp_file.name) def save(self, dataset: CSVDataset) -> None: df = dataset.read_data() with tempfile.NamedTemporaryFile(delete=False, suffix='.csv') as temp_file: df.to_csv(temp_file.name, index=False) with open(temp_file.name, "rb") as source_file: with fileio.open(os.path.join(self.uri, "data.csv"), "wb") as target_file: target_file.write(source_file.read()) class BigQueryDatasetMaterializer(BaseMaterializer): ASSOCIATED_TYPES = (BigQueryDataset,) ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA def load(self, data_type: Type[BigQueryDataset]) -> BigQueryDataset: with fileio.open(os.path.join(self.uri, "metadata.json"), "r") as f: metadata = json.load(f) return BigQueryDataset(table_id=metadata["table_id"], project=metadata["project"]) def save(self, bq_dataset: BigQueryDataset) -> None: metadata = {"table_id": bq_dataset.table_id, "project": bq_dataset.project} with fileio.open(os.path.join(self.uri, "metadata.json"), "w") as f: json.dump(metadata, f) if bq_dataset.df is not None: bq_dataset.write_data() ``` ### Pipeline Structure A flexible pipeline can be structured to handle both CSV and BigQuery datasets: ```python from zenml import step, pipeline @step(output_materializer=CSVDatasetMaterializer) def extract_data_local(data_path: str = "data/raw_data.csv") -> CSVDataset: return CSVDataset(data_path) @step(output_materializer=BigQueryDatasetMaterializer) def extract_data_remote(table_id: str) -> BigQueryDataset: return BigQueryDataset(table_id) @step def transform(dataset: Dataset) -> pd.DataFrame: df = dataset.read_data() return df.copy() # Apply transformations @pipeline def etl_pipeline(mode: str = "develop"): raw_data = extract_data_local() if mode == "develop" else extract_data_remote(table_id="project.dataset.raw_table") return transform(raw_data) ``` ### Best Practices 1. **Common Base Class**: Use the `Dataset` base class for consistent handling of data sources. 2. **Specialized Steps**: Create separate steps for loading different datasets while standardizing underlying steps. 3. **Flexible Pipelines**: Use configuration parameters to adapt pipelines to various data sources. 4. **Modular Step Design**: Design steps for specific tasks to promote code reuse and maintainability. By following these practices, ZenML pipelines can efficiently manage complex data flows and adapt to changing requirements, leveraging custom Dataset classes throughout machine learning workflows. For scaling strategies, refer to [scaling strategies for big data](manage-big-data.md). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/get-arbitrary-artifacts-in-a-step.md === ### Summary of Documentation Artifacts in ZenML can be accessed not only from direct upstream steps but also from other pipelines. This is facilitated through the ZenML client, allowing for the retrieval of previously created artifacts. 
**Key Code Example:** ```python from zenml.client import Client from zenml import step @step def my_step(): client = Client() output = client.get_artifact_version("my_dataset", "my_version") accuracy = output.run_metadata["accuracy"].value ``` This code snippet demonstrates how to fetch an artifact by its name and version, enabling the use of artifacts stored in the artifact store from various sources. ### Additional Resources - [Managing artifacts](../../../user-guide/starter-guide/manage-artifacts.md): Information on the `ExternalArtifact` type and artifact transfer between steps. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/artifacts-naming.md === ### ZenML Artifact Naming Overview In ZenML, naming artifacts in pipelines is crucial for tracking outputs, especially when reusing steps with different inputs. Artifacts can be named either statically or dynamically, leveraging type annotations and versioning. #### Naming Strategies 1. **Static Naming**: Defined as string literals. ```python @step def static_single() -> Annotated[str, "static_output_name"]: return "null" ``` 2. **Dynamic Naming**: Generated at runtime using string templates. - **Standard Placeholders**: - `{date}`: Current date (e.g., `2024_11_18`) - `{time}`: Current time (e.g., `11_07_09_326492`) ```python @step def dynamic_single_string() -> Annotated[str, "name_{date}_{time}"]: return "null" ``` - **Custom Placeholders**: Passed via the `substitutions` parameter. ```python @step(substitutions={"custom_placeholder": "some_substitute"}) def dynamic_single_string() -> Annotated[str, "name_{custom_placeholder}_{time}"]: return "null" ``` - **Using `with_options`**: ```python @step def extract_data(source: str) -> Annotated[str, "{stage}_dataset"]: return "my data" @pipeline def extraction_pipeline(): extract_data.with_options(substitutions={"stage": "train"})(source="s3://train") extract_data.with_options(substitutions={"stage": "test"})(source="s3://test") ``` **Substitution Scope**: - Defined in `@pipeline`, `pipeline.with_options`, `@step`, or `step.with_options`. 3. **Multiple Output Handling**: Combine naming strategies for multiple artifacts. ```python @step def mixed_tuple() -> Tuple[ Annotated[str, "static_output_name"], Annotated[str, "name_{date}_{time}"], ]: return "static_namer", "str_namer" ``` #### Caching Behavior When caching is enabled, the names of output artifacts remain consistent across runs, regardless of whether they are static or dynamic. ```python @step(substitutions={"custom_placeholder": "resolution"}) def demo() -> Tuple[ Annotated[int, "name_{date}_{time}"], Annotated[int, "name_{custom_placeholder}"], ]: return 42, 43 @pipeline def my_pipeline(): demo() if __name__ == "__main__": run_without_cache = my_pipeline.with_options(enable_cache=False)() run_with_cache = my_pipeline.with_options(enable_cache=True)() assert set(run_without_cache.steps["demo"].outputs.keys()) == set( run_with_cache.steps["demo"].outputs.keys() ) ``` This code demonstrates that both runs produce consistent output artifact names, ensuring reliable tracking and management of artifacts in ZenML pipelines. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/handle-custom-data-types.md === ### Summary of ZenML Materializers Documentation **Overview**: ZenML pipelines are data-centric, where steps are connected through their inputs and outputs. 
**Materializers** manage how artifacts are serialized and deserialized when stored and retrieved from the artifact store.

#### Built-In Materializers
ZenML includes several built-in materializers for common data types, which operate automatically:
- **BuiltInMaterializer**: Handles `bool`, `float`, `int`, `str`, `None` (storage: `.json`)
- **BytesMaterializer**: Handles `bytes` (storage: `.txt`)
- **BuiltInContainerMaterializer**: Handles `dict`, `list`, `set`, `tuple` (storage: Directory)
- **NumpyMaterializer**: Handles `np.ndarray` (storage: `.npy`)
- **PandasMaterializer**: Handles `pd.DataFrame`, `pd.Series` (storage: `.csv` or `.gzip` with `parquet`)
- **PydanticMaterializer**: Handles `pydantic.BaseModel` (storage: `.json`)
- **ServiceMaterializer**: Handles `zenml.services.service.BaseService` (storage: `.json`)
- **StructuredStringMaterializer**: Handles various string types (storage: `.csv`, `.html`, `.md`)

**Warning**: The `CloudpickleMaterializer` can handle any object but is not production-ready due to compatibility issues across Python versions and potential security risks.

#### Integration Materializers
ZenML provides integration-specific materializers that can be activated by installing the respective integration. Examples include:
- **BentoMaterializer**: Handles `bentoml.Bento` (storage: `.bento`)
- **DeepchecksResultMaterializer**: Handles `deepchecks.CheckResult`, `deepchecks.SuiteResult` (storage: `.json`)
- **LightGBMBoosterMaterializer**: Handles `lgbm.Booster` (storage: `.txt`)

**Note**: For Docker-based orchestrators, specify required integrations in the `DockerSettings`.

#### Custom Materializers
To create a custom materializer:
1. Define a class inheriting from `BaseMaterializer`.
2. Set `ASSOCIATED_TYPES` to the custom data type.
3. Implement `load()` and `save()` methods for serialization and deserialization.

**Example**:
```python
import os
from typing import Type

from zenml.enums import ArtifactType
from zenml.materializers import BaseMaterializer

class MyMaterializer(BaseMaterializer):
    ASSOCIATED_TYPES = (MyObj,)
    ASSOCIATED_ARTIFACT_TYPE = ArtifactType.DATA

    def load(self, data_type: Type[MyObj]) -> MyObj:
        with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'r') as f:
            return MyObj(f.read())

    def save(self, my_obj: MyObj) -> None:
        with self.artifact_store.open(os.path.join(self.uri, 'data.txt'), 'w') as f:
            f.write(my_obj.name)
```

#### Configuring Steps to Use Custom Materializers
You can specify the materializer at the decorator level or using the `configure()` method:
```python
@step(output_materializers=MyMaterializer)
def my_first_step() -> MyObj:
    return MyObj("my_object")
```
For multiple outputs, use a dictionary:
```python
@step(output_materializers={"1": MyMaterializer1, "2": MyMaterializer2})
def my_first_step() -> Tuple[Annotated[MyObj1, "1"], Annotated[MyObj2, "2"]]:
    return MyObj1(), MyObj2()
```

#### Global Materializer Configuration
To set a custom materializer globally, register it in the materializer registry:
```python
materializer_registry.register_and_overwrite_type(key=pd.DataFrame, type_=FastPandasMaterializer)
```

#### Developing a Custom Materializer
Implement the `BaseMaterializer` interface, defining:
- `ASSOCIATED_TYPES`: Data types handled.
- `ASSOCIATED_ARTIFACT_TYPE`: Type of artifact (e.g., `ArtifactType.DATA`).
- `load()` and `save()` methods for artifact handling.
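Both the custom materializer example above and the example pipeline below reference a `MyObj` class without defining it. A minimal sketch of what such a class could look like (an illustrative assumption, not the only possible shape) is:

```python
class MyObj:
    """Toy custom type that simply wraps a name string."""

    def __init__(self, name: str):
        self.name = name
```

With a type like this in place, `@step(output_materializers=MyMaterializer)`, or equivalently a later call to `my_first_step.configure(output_materializers=MyMaterializer)`, tells ZenML how to persist the step's `MyObj` output.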
#### Example Pipeline A simple pipeline using a custom object and materializer: ```python @step def my_first_step() -> MyObj: return MyObj("my_object") @step def my_second_step(my_obj: MyObj) -> None: logging.info(f"Object: {my_obj.name}") @pipeline def first_pipeline(): output_1 = my_first_step() my_second_step(output_1) first_pipeline() ``` **Pro-tip**: Use `self.artifact_store` for compatibility across different artifact stores. This concise overview captures the essential details about using materializers in ZenML, including built-in options, custom implementations, and configuration methods. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/delete-an-artifact.md === ### Delete an Artifact Artifacts cannot be deleted directly to avoid breaking the ZenML database due to dangling references. However, artifacts not referenced by any pipeline runs can be deleted using the following command: ```shell zenml artifact prune ``` By default, this command removes artifacts from the underlying artifact store and the database. You can modify this behavior with the `--only-artifact` and `--only-metadata` flags. If you encounter errors when pruning artifacts (often due to local storage issues), you can use the `--ignore-errors` flag to continue the process, although warnings will still appear in the terminal. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/return-multiple-outputs-from-a-step.md === ### Summary of Documentation on Using `Annotated` for Multiple Outputs The `Annotated` type in ZenML allows steps to return multiple outputs with specific names, enhancing artifact retrieval and dashboard readability. #### Code Example ```python from typing import Annotated, Tuple import pandas as pd from zenml import step from sklearn.model_selection import train_test_split @step def clean_data(data: pd.DataFrame) -> Tuple[ Annotated[pd.DataFrame, "x_train"], Annotated[pd.DataFrame, "x_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: x = data.drop("target", axis=1) y = data["target"] return train_test_split(x, y, test_size=0.2, random_state=42) ``` #### Key Points - **Functionality**: The `clean_data` function processes a pandas DataFrame, splitting it into training and testing sets for features (`x_train`, `x_test`) and target variables (`y_train`, `y_test`). - **Output Naming**: Each output is annotated for easy identification and retrieval in the pipeline. - **Dashboard Display**: Named outputs improve the clarity of the pipeline's dashboard. This concise approach facilitates better management of data artifacts in machine learning workflows. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/README.md === ### ZenML Step Outputs and Data Handling Step outputs in ZenML are stored in the artifact store, enabling caching, lineage, and auditability. Using type annotations enhances transparency, facilitates data passing between steps, and allows for serialization/deserialization (termed 'materialize' in ZenML). 
#### Code Example ```python @step def load_data(parameter: int) -> Dict[str, Any]: training_data = [[1, 2], [3, 4], [5, 6]] labels = [0, 1, 0] return {'features': training_data, 'labels': labels} @step def train_model(data: Dict[str, Any]) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. " f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(parameter: int): dataset = load_data(parameter) # Output from load_data train_model(dataset) # Input to train_model ``` ### Summary - **Steps**: `load_data` returns a dictionary of training data and labels; `train_model` processes this data to train a model. - **Pipeline**: `simple_ml_pipeline` connects the two steps, demonstrating data flow in ZenML. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/tagging.md === ### Organizing Data with Tags in ZenML ZenML allows users to organize machine learning artifacts and models using tags, enhancing workflow efficiency and asset discoverability. #### Assigning Tags to Artifacts To tag artifact versions in a step or pipeline, use the `tags` property of `ArtifactConfig`: **Python SDK:** ```python from zenml import step, ArtifactConfig @step def training_data_loader() -> ( Annotated[pd.DataFrame, ArtifactConfig(tags=["sklearn", "pre-training"])] ): ... ``` **CLI:** ```shell # Tag the artifact zenml artifacts update iris_dataset -t sklearn # Tag the artifact version zenml artifacts versions update iris_dataset raw_2023 -t sklearn ``` Tags like `sklearn` and `pre-training` will be assigned to all artifacts created by this step. ZenML Pro users can tag artifacts directly in the cloud dashboard. #### Assigning Tags to Models Models can also be tagged for semantic organization. Tags can be specified as key-value pairs during model version creation. **Python SDK:** ```python from zenml.models import Model # Define tags tags = ["experiment", "v1", "classification-task"] # Create a model version with tags model = Model( name="iris_classifier", version="1.0.0", tags=tags, ) @pipeline(model=model) def my_pipeline(...): ... ``` To create or register a new model with tags: ```python from zenml.client import Client Client().create_model( name="iris_logistic_regression", tags=["classification", "iris-dataset"], ) Client().create_model_version( model_name_or_id="iris_logistic_regression", name="2", tags=["version-1", "experiment-42"], ) ``` **CLI:** ```shell # Tag an existing model zenml model update iris_logistic_regression --tag "classification" # Tag a specific model version zenml model version update iris_logistic_regression 2 --tag "experiment3" ``` ### Important Notes - During a pipeline run, a model may be implicitly created without the tags from the `Model` class. Tags can be managed using the SDK or ZenML Pro UI. ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/artifact-versioning.md === ### ZenML Data Storage Overview ZenML integrates data versioning and lineage tracking into its core functionality. When a pipeline runs, it automatically tracks and manages artifacts, allowing users to view the lineage of artifact creation and interact with them via a dashboard. This functionality aids in gaining insights, streamlining experimentation, and ensuring reproducibility in machine learning workflows. 
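Besides the dashboard, the same lineage information can be inspected programmatically through the ZenML client. A minimal sketch (the artifact name `iris_dataset` and version `raw_2023` are just the example values used in the tagging section above):

```python
from zenml.client import Client

client = Client()

# Fetch a specific artifact version (omit the version to get the latest one)
artifact_version = client.get_artifact_version("iris_dataset", "raw_2023")

print(artifact_version.version)  # e.g. "raw_2023"
print(artifact_version.uri)      # location of the materialized data in the artifact store

# Re-materialize the stored data into memory
data = artifact_version.load()
```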
#### Artifact Creation and Caching - Each pipeline execution checks for changes in inputs, outputs, parameters, or configurations. - New or modified steps create a unique directory in the [Artifact Store](../../../component-guide/artifact-stores/artifact-stores.md) with a unique ID, while unchanged steps may be cached to save time and resources. - ZenML allows tracing artifacts back to their origins, providing transparency and traceability essential for reliable machine learning projects, especially in team environments. - For managing artifact names, versions, and properties, refer to the [artifact versioning documentation](../../../user-guide/starter-guide/manage-artifacts.md). #### Saving and Loading Artifacts with Materializers - [Materializers](handle-custom-data-types.md) handle serialization and deserialization of artifacts, ensuring consistent storage and retrieval from the artifact store. - Each materializer stores data in unique directories, and ZenML provides built-in materializers for common data types, using `cloudpickle` for those without a default. - Custom materializers can be created by extending the `BaseMaterializer` class for specific serialization needs. - **Warning**: The built-in `CloudpickleMaterializer` is not production-ready due to compatibility issues across Python versions and potential security risks from uploading arbitrary objects. Custom materializers are recommended for robust and secure artifact handling. - ZenML utilizes the `fileio` system for saving and loading artifacts, simplifying interactions with various data formats and enabling artifact caching and lineage tracking. An example of a default materializer is available [here](https://github.com/zenml-io/zenml/blob/main/src/zenml/materializers/numpy_materializer.py). ================================================== === File: docs/book/how-to/data-artifact-management/handle-data-artifacts/load-artifacts-into-memory.md === # Loading Artifacts into Memory ZenML pipeline steps typically consume artifacts produced by one another. However, when integrating external data (artifacts from non-ZenML sources), the `ExternalArtifact` class is recommended. For exchanging data between ZenML pipelines, late materialization is essential due to the compilation and execution phases of ZenML pipelines. This allows for the passing of artifacts and their metadata that may not exist at the time of compilation. ## Use Cases for Artifact Exchange 1. Grouping data products using ZenML Models. 2. Utilizing the ZenML Client to manage artifacts. **Recommendation:** Use models to group and access artifacts across pipelines. For loading artifacts from a ZenML Model, refer to the relevant documentation. ## Client Methods for Artifact Exchange If the Model Control Plane is not in use, artifacts can still be exchanged with late materialization. Below is a revised version of the `do_predictions` pipeline code: ```python from typing import Annotated from zenml import step, pipeline from zenml.client import Client import pandas as pd from sklearn.base import ClassifierMixin @step def predict(model1: ClassifierMixin, model2: ClassifierMixin, model1_metric: float, model2_metric: float, data: pd.DataFrame) -> Annotated[pd.Series, "predictions"]: predictions = pd.Series(model1.predict(data)) if model1_metric < model2_metric else pd.Series(model2.predict(data)) return predictions @step def load_data() -> pd.DataFrame: # Load inference data ... 
@pipeline def do_predictions(): model_42 = Client().get_artifact_version("trained_model", version="42") metric_42 = model_42.run_metadata["MSE"].value model_latest = Client().get_artifact_version("trained_model") metric_latest = model_latest.run_metadata["MSE"].value inference_data = load_data() predict(model1=model_42, model2=model_latest, model1_metric=metric_42, model2_metric=metric_latest, data=inference_data) if __name__ == "__main__": do_predictions() ``` In this code, the `predict` step compares models based on their MSE metrics to ensure predictions are made using the best model. The `load_data` step loads the necessary inference data. The calls to `Client().get_artifact_version` and accessing `run_metadata` are evaluated at execution time, ensuring that the latest artifact versions are used. ================================================== === File: docs/book/how-to/popular-integrations/README.md === ### ZenML Integrations Guide ZenML integrates seamlessly with popular tools in the data science and machine learning ecosystem. This guide outlines how to set up these integrations effectively. #### Key Points: - **Integration Purpose**: ZenML allows users to utilize their preferred tools within its framework. - **Ecosystem Compatibility**: Designed to work with various data science and machine learning tools. For visual reference, an image related to ZenML is included, but specific details about the image are not necessary for the integration process. This guide serves as a resource for users looking to enhance their workflows by leveraging ZenML's compatibility with existing tools. ================================================== === File: docs/book/how-to/popular-integrations/skypilot.md === ### Summary of ZenML SkyPilot VM Orchestrator Documentation **Overview:** The ZenML SkyPilot VM Orchestrator enables provisioning and management of VMs across cloud providers (AWS, GCP, Azure, Lambda Labs) for ML pipelines, enhancing cost efficiency and GPU availability. **Prerequisites:** - Install ZenML SkyPilot integration for your cloud provider: ```bash zenml integration install skypilot_ ``` - Docker must be installed and running. - A remote artifact store and container registry in your ZenML stack. - A remote ZenML deployment. - Permissions to provision VMs on the cloud provider. - A service connector configured for authentication (not required for Lambda Labs). **Configuration Steps:** *For AWS, GCP, Azure:* 1. Install SkyPilot integration and connectors. 2. Register a service connector with necessary credentials. 3. Register the orchestrator and connect it to the service connector. 4. Register and activate a stack with the orchestrator. ```bash zenml service-connector register -skypilot-vm -t --auto-configure zenml orchestrator register --flavor vm_ zenml orchestrator connect --connector -skypilot-vm zenml stack register -o ... --set ``` *For Lambda Labs:* 1. Install SkyPilot Lambda integration. 2. Register a secret with the Lambda Labs API key. 3. Register the orchestrator using the API key secret. 4. Register and activate a stack with the orchestrator. ```bash zenml secret create lambda_api_key --scope user --api_key= zenml orchestrator register --flavor vm_lambda --api_key={{lambda_api_key.api_key}} zenml stack register -o ... --set ``` **Running a Pipeline:** Once configured, run ZenML pipelines using the SkyPilot VM Orchestrator, where each step executes in a Docker container on a provisioned VM. 
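No SkyPilot-specific code is required in the pipeline itself; once a stack with the SkyPilot VM Orchestrator is active, any ZenML pipeline is submitted to it unchanged. A minimal sketch (step and pipeline names are illustrative):

```python
from zenml import pipeline, step

@step
def train() -> str:
    # With the SkyPilot stack active, this step runs inside a Docker container
    # on a VM provisioned by SkyPilot.
    return "model"

@pipeline
def skypilot_demo_pipeline():
    train()

if __name__ == "__main__":
    skypilot_demo_pipeline()
```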
**Additional Configuration:** You can customize the orchestrator using cloud-specific `Settings` objects: ```python from zenml.integrations.skypilot_.flavors.skypilot_orchestrator__vm_flavor import SkypilotOrchestratorSettings skypilot_settings = SkypilotOrchestratorSettings( cpus="2", memory="16", accelerators="V100:2", use_spot=True, region=, ) @pipeline(settings={"orchestrator": skypilot_settings}) ``` Resource settings can also be configured per step: ```python high_resource_settings = SkypilotOrchestratorSettings(...) @step(settings={"orchestrator": high_resource_settings}) def resource_intensive_step(): ... ``` For further details and advanced options, refer to the [full SkyPilot VM Orchestrator documentation](../../component-guide/orchestrators/skypilot-vm.md). ================================================== === File: docs/book/how-to/popular-integrations/kubeflow.md === ### Summary of Kubeflow Orchestrator Documentation The ZenML Kubeflow Orchestrator enables running ML pipelines on Kubeflow without needing to write Kubeflow code. #### Prerequisites To use the Kubeflow Orchestrator, ensure you have: - ZenML `kubeflow` integration: `zenml integration install kubeflow` - Docker installed and running - (Optional) `kubectl` installed - A Kubernetes cluster with Kubeflow Pipelines - A remote artifact store and container registry in your ZenML stack - A remote ZenML server deployed in the cloud - (Optional) Kubernetes context name for the remote cluster #### Configuring the Orchestrator You can configure the orchestrator in two ways: 1. **Using a Service Connector** (recommended for cloud-managed clusters): ```bash zenml orchestrator register --flavor kubeflow zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator connect --connector zenml stack update -o ``` 2. **Using `kubectl`**: ```bash zenml orchestrator register --flavor=kubeflow --kubernetes_context= zenml stack update -o ``` #### Running a Pipeline To run a ZenML pipeline: ```python python your_pipeline.py ``` This creates a Kubernetes pod for each pipeline step, viewable in the Kubeflow UI. #### Additional Configuration Configure the orchestrator with `KubeflowOrchestratorSettings`: ```python from zenml.integrations.kubeflow.flavors.kubeflow_orchestrator_flavor import KubeflowOrchestratorSettings kubeflow_settings = KubeflowOrchestratorSettings( client_args={}, user_namespace="my_namespace", pod_settings={ "affinity": {...}, "tolerations": [...] } ) @pipeline(settings={"orchestrator": kubeflow_settings}) ``` #### Multi-Tenancy Deployments For multi-tenant setups, specify `kubeflow_hostname`: ```bash zenml orchestrator register --flavor=kubeflow --kubeflow_hostname= ``` Provide credentials in the settings: ```python kubeflow_settings = KubeflowOrchestratorSettings( client_username="admin", client_password="abc123", user_namespace="namespace_name" ) @pipeline(settings={"orchestrator": kubeflow_settings}) ``` For more details, refer to the full [Kubeflow Orchestrator documentation](../../component-guide/orchestrators/kubeflow.md). ================================================== === File: docs/book/how-to/popular-integrations/azure-guide.md === # Azure Stack Setup for ZenML Pipelines ## Overview This guide provides steps to set up a minimal production stack on Azure for running ZenML pipelines. Key requirements include an active Azure account, ZenML installed, and the ZenML Azure integration. 
## Prerequisites - Active Azure account - ZenML installed - ZenML Azure integration: `zenml integration install azure` ## Steps to Set Up Azure Stack ### 1. Create Service Principal 1. Go to Azure Portal > App Registrations. 2. Click `+ New registration`, name it, and register. 3. Note the Application ID and Tenant ID. 4. Under `Certificates & secrets`, create a client secret and note the value. ### 2. Create Resource Group and AzureML Instance 1. Go to Azure Portal > Resource Groups > `+ Create`. 2. Create a new resource group. 3. In the resource group overview, click `+ Create`, select `Azure Machine Learning`, and create the workspace. ### 3. Create Role Assignments 1. In the resource group, go to `Access control (IAM)` > `+ Add` a role assignment. 2. Assign the following roles: - AzureML Compute Operator - AzureML Data Scientist - AzureML Registry User 3. Select your registered app by its ID for each role. ### 4. Create ZenML Azure Service Connector ```bash zenml service-connector register azure_connector --type azure \ --auth-method service-principal \ --client_secret= \ --tenant_id= \ --client_id= ``` ### 5. Create Stack Components #### Artifact Store (Azure Blob Storage) 1. Create a container in the AzureML workspace's storage account. ```bash zenml artifact-store register azure_artifact_store -f azure \ --path= \ --connector azure_connector ``` #### Orchestrator (AzureML) ```bash zenml orchestrator register azure_orchestrator -f azureml \ --subscription_id= \ --resource_group= \ --workspace= \ --connector azure_connector ``` #### Container Registry (Azure Container Registry) ```bash zenml container-registry register azure_container_registry -f azure \ --uri= \ --connector azure_connector ``` ### 6. Create ZenML Stack ```shell zenml stack register azure_stack \ -o azure_orchestrator \ -a azure_artifact_store \ -c azure_container_registry \ --set ``` ### 7. Run a ZenML Pipeline Define a simple pipeline: ```python from zenml import pipeline, step @step def hello_world() -> str: return "Hello from Azure!" @pipeline def azure_pipeline(): hello_world() if __name__ == "__main__": azure_pipeline() ``` Save as `run.py` and execute: ```shell python run.py ``` ## Next Steps - Explore ZenML's production guide for best practices. - Investigate ZenML integrations with other tools. - Join the ZenML community for support and networking. ================================================== === File: docs/book/how-to/popular-integrations/gcp-guide.md === # Minimal GCP Stack Setup Guide This guide provides steps to set up a minimal production stack on Google Cloud Platform (GCP) for ZenML. ### 1. Choose a GCP Project Select or create a Google Cloud project in the console. Ensure a billing account is attached. ```bash gcloud projects create --billing-project= ``` ### 2. Enable GCloud APIs Enable the following APIs in your GCP project: - Cloud Functions API - Cloud Run Admin API - Cloud Build API - Artifact Registry API - Cloud Logging API ### 3. Create a Dedicated Service Account Create a service account with the following roles: - AI Platform Service Agent - Storage Object Admin ### 4. Create a JSON Key for Your Service Account Generate a JSON key for the service account. ```bash export JSON_KEY_FILE_PATH= ``` ### 5. Create a Service Connector in ZenML Authenticate ZenML with GCP using the service account. 
```bash
zenml integration install gcp \
  && zenml service-connector register gcp_connector \
  --type gcp \
  --auth-method service-account \
  --service_account_json=@${JSON_KEY_FILE_PATH} \
  --project_id=
```

### 6. Create Stack Components

#### Artifact Store
Create a GCS bucket and register it as an artifact store.
```bash
export ARTIFACT_STORE_NAME=gcp_artifact_store
zenml artifact-store register ${ARTIFACT_STORE_NAME} --flavor gcp --path=gs://
zenml artifact-store connect ${ARTIFACT_STORE_NAME} -i
```

#### Orchestrator
Register Vertex AI as the orchestrator.
```bash
export ORCHESTRATOR_NAME=gcp_vertex_orchestrator
zenml orchestrator register ${ORCHESTRATOR_NAME} --flavor=vertex --project= --location=europe-west2
zenml orchestrator connect ${ORCHESTRATOR_NAME} -i
```

#### Container Registry
Register the GCP container registry.
```bash
export CONTAINER_REGISTRY_NAME=gcp_container_registry
zenml container-registry register ${CONTAINER_REGISTRY_NAME} --flavor=gcp --uri=
zenml container-registry connect ${CONTAINER_REGISTRY_NAME} -i
```

### 7. Create Stack
Register the stack with the created components.
```bash
export STACK_NAME=gcp_stack
zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set
```

### Cleanup
To delete the project and all associated resources:
```bash
gcloud projects delete
```

### Best Practices
- **IAM and Least Privilege**: Grant minimum permissions necessary for ZenML pipelines.
- **GCP Resource Labeling**: Use consistent labels for resource management.
```bash
gcloud storage buckets update gs://your-bucket-name --update-labels=project=zenml,environment=production
```
- **Cost Management**: Use GCP's Cost Management tools to monitor spending.
```bash
gcloud billing budgets create --billing-account=BILLING_ACCOUNT_ID --display-name="ZenML Monthly Budget" --budget-amount=1000 --threshold-rule=percent=90
```
- **Backup Strategy**: Regularly back up data and enable versioning on GCS.
```bash
gsutil versioning set on gs://your-bucket-name
```
By following these steps and best practices, you can efficiently set up and manage your GCP stack for ZenML projects.

==================================================

=== File: docs/book/how-to/popular-integrations/kubernetes.md ===

### ZenML Kubernetes Orchestrator Documentation Summary

The ZenML Kubernetes Orchestrator enables the deployment of ML pipelines on a Kubernetes cluster without the need to write Kubernetes code. It serves as a lightweight alternative to more complex orchestrators like Airflow or Kubeflow.

#### Prerequisites
To use the Kubernetes Orchestrator, ensure you have:
- ZenML `kubernetes` integration installed: `zenml integration install kubernetes`
- Docker installed and running
- `kubectl` installed
- A remote artifact store and container registry in your ZenML stack
- A deployed Kubernetes cluster
- (Optional) A configured `kubectl` context pointing to the cluster

#### Deploying the Orchestrator
A Kubernetes cluster is required to run the orchestrator. Various deployment methods exist depending on the cloud provider or custom infrastructure. Refer to the [cloud guide](../../user-guide/cloud-guide/cloud-guide.md) for more details.

#### Configuring the Orchestrator
You can configure the orchestrator in two ways:
1.
**Using a Service Connector** (recommended for cloud-managed clusters): ```bash zenml orchestrator register --flavor kubernetes zenml service-connector list-resources --resource-type kubernetes-cluster -e zenml orchestrator connect --connector zenml stack register -o ... --set ``` 2. **Using `kubectl` context**: ```bash zenml orchestrator register --flavor=kubernetes --kubernetes_context= zenml stack register -o ... --set ``` #### Running a Pipeline To run a ZenML pipeline using the Kubernetes Orchestrator, execute: ```bash python your_pipeline.py ``` This command will create a Kubernetes pod for each pipeline step. You can manage the pods using `kubectl` commands. For more advanced configurations and details, consult the [full Kubernetes Orchestrator documentation](../../component-guide/orchestrators/kubernetes.md). ================================================== === File: docs/book/how-to/popular-integrations/mlflow.md === # MLflow Experiment Tracker with ZenML ## Overview The ZenML MLflow Experiment Tracker integration allows logging and visualizing pipeline information using MLflow without additional code. ## Prerequisites - Install ZenML MLflow integration: ```bash zenml integration install mlflow -y ``` - MLflow deployment: local or remote with proxied artifact storage. ## Configuring the Experiment Tracker ### 1. Local Deployment - Suitable for local ZenML runs; no extra configuration needed. ```bash zenml experiment-tracker register mlflow_experiment_tracker --flavor=mlflow zenml stack register custom_stack -e mlflow_experiment_tracker ... --set ``` ### 2. Remote Deployment - Requires authentication configuration (recommended: ZenML secrets). ```bash zenml secret create mlflow_secret --username= --password= zenml experiment-tracker register mlflow \ --flavor=mlflow \ --tracking_username={{mlflow_secret.username}} \ --tracking_password={{mlflow_secret.password}} \ ... ``` ## Using the Experiment Tracker To log information in a pipeline step: 1. Enable the experiment tracker with the `@step` decorator. 2. Use MLflow logging or auto-logging. ```python import mlflow @step(experiment_tracker="") def train_step(...): mlflow.tensorflow.autolog() mlflow.log_param(...) mlflow.log_metric(...) mlflow.log_artifact(...) ``` ## Viewing Results To find the MLflow experiment URL for a ZenML run: ```python last_run = client.get_pipeline("").last_run trainer_step = last_run.get_step("") tracking_url = trainer_step.run_metadata["experiment_tracker_url"].value ``` ## Additional Configuration Further configure the experiment tracker using `MLFlowExperimentTrackerSettings`: ```python from zenml.integrations.mlflow.flavors.mlflow_experiment_tracker_flavor import MLFlowExperimentTrackerSettings mlflow_settings = MLFlowExperimentTrackerSettings( nested=True, tags={"key": "value"} ) @step( experiment_tracker="", settings={"experiment_tracker": mlflow_settings} ) ``` For more details, refer to the full [MLflow Experiment Tracker documentation](../../component-guide/experiment-trackers/mlflow.md). ================================================== === File: docs/book/how-to/popular-integrations/aws-guide.md === # AWS Stack Setup for ZenML Pipelines This guide provides a streamlined process to set up a minimal production stack on AWS for running ZenML pipelines. ## Prerequisites - Active AWS account with permissions for S3, SageMaker, ECR, and ECS. - ZenML installed. - AWS CLI installed and configured. ## Steps to Set Up ### 1. Set Up Credentials and Local Environment 1. 
**Choose AWS Region**: Select your deployment region (e.g., `us-east-1`). 2. **Create IAM Role**: - Get your AWS account ID: ```shell aws sts get-caller-identity --query Account --output text ``` - Create `assume-role-policy.json`: ```json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam:::root", "Service": "sagemaker.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } ``` - Create the IAM role: ```shell aws iam create-role --role-name zenml-role --assume-role-policy-document file://assume-role-policy.json ``` - Attach necessary policies: ```shell aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess aws iam attach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess ``` 3. **Install ZenML AWS Integration**: ```shell zenml integration install aws s3 -y ``` ### 2. Create a Service Connector in ZenML Register an AWS Service Connector: ```shell zenml service-connector register aws_connector \ --type aws \ --auth-method iam-role \ --role_arn= \ --region= \ --aws_access_key_id= \ --aws_secret_access_key= ``` ### 3. Create Stack Components #### Artifact Store (S3) 1. Create an S3 bucket: ```shell aws s3api create-bucket --bucket your-bucket-name ``` 2. Register the S3 Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f s3 --path=s3://your-bucket-name --connector aws_connector ``` #### Orchestrator (SageMaker Pipelines) 1. Create a SageMaker domain (if not already created). 2. Register the SageMaker Pipelines orchestrator: ```shell zenml orchestrator register sagemaker-orchestrator --flavor=sagemaker --region= --execution_role= ``` #### Container Registry (ECR) 1. Create an ECR repository: ```shell aws ecr create-repository --repository-name zenml --region ``` 2. Register the ECR container registry: ```shell zenml container-registry register ecr-registry --flavor=aws --uri=.dkr.ecr..amazonaws.com --connector aws_connector ``` ### 4. Create the Stack Register the stack: ```shell export STACK_NAME=aws_stack zenml stack register ${STACK_NAME} -o ${ORCHESTRATOR_NAME} \ -a ${ARTIFACT_STORE_NAME} -c ${CONTAINER_REGISTRY_NAME} --set ``` ### 5. Run a Pipeline Define and execute a ZenML pipeline: ```python from zenml import pipeline, step @step def hello_world() -> str: return "Hello from SageMaker!" @pipeline def aws_sagemaker_pipeline(): hello_world() if __name__ == "__main__": aws_sagemaker_pipeline() ``` Run the pipeline: ```shell python run.py ``` ## Cleanup To avoid charges, delete unused resources: ```shell aws s3 rm s3://your-bucket-name --recursive aws s3api delete-bucket --bucket your-bucket-name aws sagemaker delete-domain --domain-id aws ecr delete-repository --repository-name zenml --force aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess aws iam detach-role-policy --role-name zenml-role --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess aws iam delete-role --role-name zenml-role ``` ## Conclusion This guide outlines the essential steps to set up an AWS stack for ZenML, enabling scalable machine learning workflows. Key benefits include scalability, reproducibility, collaboration, and flexibility. 
For further exploration, consider ZenML's production guide and community resources. ## Best Practices - **Use IAM Roles**: Follow the least privilege principle. - **Resource Tagging**: Implement a consistent tagging strategy for billing and management. - **Cost Management**: Use AWS Cost Explorer and Budgets for monitoring. - **Warm Pools**: Enable warm pools in SageMaker for faster pipeline executions. - **Backup Strategy**: Regularly back up critical data and configurations. ================================================== === File: docs/book/getting-started/core-concepts.md === # ZenML Core Concepts Summary **ZenML** is an open-source MLOps framework designed for creating portable, production-ready MLOps pipelines, facilitating collaboration among data scientists, ML engineers, and MLOps developers. It categorizes its core concepts into three threads: **Development**, **Execution**, and **Management**. ## 1. Development ### Steps - Steps are functions marked with the `@step` decorator. - Example: ```python @step def step_1() -> str: return "world" ``` - Steps can accept typed inputs and produce outputs. ```python @step(enable_cache=False) def step_2(input_one: str, input_two: str) -> str: return f"{input_one} {input_two}" ``` ### Pipelines - A pipeline is a series of steps defined using decorators or classes. - Steps can use outputs from previous steps or direct values. ```python @pipeline def my_pipeline(): output_step_one = step_1() step_2(input_one="hello", input_two=output_step_one) ``` - Execute the pipeline with: ```python if __name__ == "__main__": my_pipeline() ``` ### Artifacts - Artifacts are data passed through steps, tracked and stored in an artifact store. - They are serialized/deserialized by **Materializers**. ### Models - Models represent training outputs and associated metadata, managed through the ZenML API. ### Materializers - Materializers handle the serialization/deserialization of artifacts, extending the `BaseMaterializer` class for custom types. ### Parameters & Settings - Steps accept parameters, which are tracked for reproducibility. Settings configure runtime aspects of pipelines. ### Model Versions - A model can have multiple versions, linking all entities to a centralized view. ## 2. Execution ### Stacks & Components - A **Stack** is a collection of components for executing pipelines (e.g., orchestrators, artifact stores). - Default local stacks include an orchestrator and an artifact store for easy experimentation. ### Orchestrator - Coordinates the execution of steps in a pipeline, managing dependencies. ### Artifact Store - Houses all artifacts, enabling versioning and caching for efficiency. ### Flavor - Base abstractions for stack components allow for tailored solutions, with built-in and custom flavors available. ### Stack Switching - Easily switch between local and remote stacks with a CLI command, facilitating production-grade solutions. ## 3. Management ### ZenML Server - Required for remote stack components, managing pipelines, steps, and models. ### Server Deployment - Deploy ZenML Server via the **ZenML Pro SaaS** or self-hosted environments. ### Metadata Tracking - The server tracks metadata for pipeline runs, aiding in troubleshooting. ### Secrets - The server securely stores sensitive data (e.g., credentials) using various backend options. ### Collaboration - ZenML Server enables team structures for sharing resources and enhancing collaboration among MLOps teams. 
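Many of these management capabilities, metadata tracking in particular, are also scriptable against the server through the Python client. A minimal sketch (the pipeline name is illustrative), following the same client patterns used elsewhere in this guide:

```python
from zenml.client import Client

client = Client()

# Inspect the most recent run of a pipeline tracked by the ZenML server
run = client.get_pipeline("my_pipeline").last_run
print(run.status)

# Drill into individual step runs for troubleshooting
for step_name, step_run in run.steps.items():
    print(step_name, step_run.status)
```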
### Dashboard - Provides a visual interface for managing pipelines and stacks, facilitating collaboration. ### VS Code Extension - Allows interaction with ZenML stacks and pipelines directly from VS Code, enhancing workflow efficiency. This summary captures the essential technical details and concepts of ZenML, enabling effective understanding and usage of the framework. ================================================== === File: docs/book/getting-started/installation.md === # ZenML Installation Guide ## Installation **ZenML** is a Python package installable via `pip`: ```shell pip install zenml ``` **Supported Python Versions:** 3.9, 3.10, 3.11, 3.12. ## Dashboard Installation To use the ZenML web dashboard locally, install optional server dependencies: ```shell pip install "zenml[server]" ``` **Recommendation:** Use a virtual environment (e.g., `virtualenvwrapper`, `pyenv-virtualenv`). ## MacOS Installation (Apple Silicon) Set the following environment variable for local server connections: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` This is not needed if using ZenML as a client. ## Nightly Builds Install nightly builds (unstable) with: ```shell pip install zenml-nightly ``` ## Verifying Installation Check installation success via: Bash: ```bash zenml version ``` Python: ```python import zenml print(zenml.__version__) ``` ## Docker Usage ZenML is available as a Docker image: Start ZenML in a bash environment: ```shell docker run -it zenmldocker/zenml /bin/bash ``` Run the ZenML server: ```shell docker run -it -d -p 8080:8080 zenmldocker/zenml-server ``` ## Deploying the Server To run ZenML with the dashboard locally: ```shell pip install "zenml[server]" zenml login --local # opens the dashboard locally ``` For advanced features, deploy a centrally-accessible ZenML server. Options include [self-hosting](deploying-zenml/README.md) or registering for a free [ZenML Pro](https://cloud.zenml.io/signup?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link) account. ================================================== === File: docs/book/getting-started/system-architectures.md === # ZenML System Architecture Overview This guide outlines the deployment options for ZenML, including ZenML OSS (self-hosted), ZenML Pro (SaaS or self-hosted), and their components. ## ZenML OSS (Self-hosted) - **ZenML OSS Server**: FastAPI app managing metadata for pipelines, artifacts, and stacks. In ZenML Pro, this is referred to as a "Tenant." - **OSS Metadata Store**: Stores all tenant metadata, including ML tracking and versioning information. - **OSS Dashboard**: ReactJS app displaying pipelines and runs. - **Secrets Store**: Secure storage for secrets and credentials, accessible by ZenML Pro API. ZenML OSS is free under the Apache 2.0 license. For deployment details, refer to the [deployment guide](./deploying-zenml/README.md). ## ZenML Pro (SaaS or Self-hosted) ### Components: - **ZenML Pro Control Plane**: Central management for all tenants. - **Pro Dashboard**: Enhanced dashboard built on the OSS dashboard. - **Pro Metadata Store**: PostgreSQL database for roles, permissions, and tenant management. - **Pro Add-ons**: Python modules for additional functionality. - **Identity Provider**: Supports flexible authentication, integrating with Auth0 for cloud deployments and custom OIDC for self-hosted. ZenML Pro enhances productivity and can be integrated with existing ZenML OSS deployments. 
### ZenML Pro SaaS Architecture - All services are hosted by ZenML, with customer secrets managed by the ZenML Pro Control Plane. ML metadata is stored on ZenML infrastructure, while actual ML data artifacts are stored on the customer's cloud. #### Hybrid SaaS Option - Customer secrets are stored on their side, connecting their secret store to the ZenML server. ### ZenML Pro Self-Hosted Architecture - All services, data, and secrets are hosted on the customer's cloud for maximum security. For more information on ZenML Pro, sign up for a [free trial](https://cloud.zenml.io/?utm_source=docs&utm_medium=referral_link&utm_campaign=cloud_promotion&utm_content=signup_link). ================================================== === File: docs/book/getting-started/deploying-zenml/README.md === # Deploying ZenML ## Overview Deploying ZenML to a production environment provides benefits such as: 1. **Scalability**: Handles large-scale workloads for faster results. 2. **Reliability**: High availability and fault tolerance minimize downtime. 3. **Collaboration**: Facilitates teamwork and model iteration. ## Components A ZenML deployment includes: - **FastAPI server** with SQLite or MySQL database - **Python Client** for server interaction - **ReactJS dashboard** (open-source companion) - (Optional) **ZenML Pro API + Database + Dashboard** For detailed architecture, refer to the [system architecture documentation](../system-architectures.md). ### ZenML Python Client The ZenML client is a Python package for server interaction, installable via `pip`. It provides: - Command-line interface (`zenml` CLI) for managing stacks and secrets. - Framework for authoring and deploying pipelines. - Access to metadata via the Python SDK for custom automation. Full documentation for the SDK and HTTP API is available [here](https://sdkdocs.zenml.io/latest/). ## Deployment Scenarios Initially, ZenML runs locally with an SQLite database for pipelines and configurations. Use `zenml login --local` to set up a local server. For production, deploy the ZenML server centrally to enable cloud stack components and team collaboration. ## Deployment Options 1. **Managed Deployment**: Utilize ZenML Pro for a managed server (tenant) with secure data handling and metadata tracking. 2. **Self-hosted Deployment**: Deploy ZenML in your environment using methods like Docker, Helm, or HuggingFace Spaces. Both options provide distinct advantages based on organizational needs. ### Deployment Guides Refer to the following guides for deployment strategies: - [Deploying ZenML using ZenML Pro](../zenml-pro/README.md) - [Deploy with Docker](./deploy-with-docker.md) - [Deploy with Helm](./deploy-with-helm.md) - [Deploy with HuggingFace Spaces](./deploy-using-huggingface-spaces.md) ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-using-huggingface-spaces.md === ### Deploying ZenML on Hugging Face Spaces **Overview**: Hugging Face Spaces allows quick deployment of ZenML for ML projects without infrastructure overhead. For production, enable [persistent storage](https://huggingface.co/docs/hub/en/spaces-storage) to avoid data loss. **Deployment Steps**: 1. **Create a Space**: Click [here](https://huggingface.co/new-space?template=zenml/zenml) to set up your ZenML app. Specify: - **Owner**: Your account or organization. - **Space Name**. - **Visibility**: Set to 'Public' for local connections. 2. **Select Machine**: Choose a higher-tier paid CPU instance to avoid auto-shutdown. 3. 
**Customize Appearance**: Modify the `README.md` file in "Files and Versions" for title, emojis, and colors. Refer to the [Hugging Face configuration guide](https://huggingface.co/docs/hub/spaces-config-reference) for details. 4. **Initialization**: After creation, wait for the 'Building' status to switch to 'Running'. If the ZenML login UI is not visible, refresh the page. Use the "Embed this Space" option to get the **Direct URL**: `https://-.hf.space`. 5. **Connect Locally**: Use the Direct URL with the following command (after installing ZenML): ```shell zenml login '' ``` This URL can also be used to access the ZenML dashboard in fullscreen. **Configuration Options**: - By default, ZenML uses an SQLite database. For persistence, modify the `Dockerfile` in the Space's root directory. Refer to [advanced configuration options](deploy-with-docker.md#advanced-server-configuration-options) for details. - For secrets management, use Hugging Face's 'Repository secrets' in your `Dockerfile`. If using a cloud secrets backend, update your ZenML server password via the Dashboard to secure access. **Troubleshooting**: Access logs via the "Open Logs" button for insights. For further assistance, contact support on the [Slack channel](https://zenml.io/slack/). **Upgrading ZenML**: The default space updates automatically to the latest ZenML version. To manually update, use 'Factory reboot' in the 'Settings' tab (note: this will wipe existing data unless using a MySQL persistent database). To revert to an earlier version, change the `FROM` statement in the `Dockerfile`. ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-with-helm.md === ### Summary: Deploying ZenML in a Kubernetes Cluster with Helm #### Overview ZenML can be deployed in a Kubernetes cluster using a Helm chart. The chart is available on the [ArtifactHub repository](https://artifacthub.io/packages/helm/zenml/zenml) and includes templates, default values, and installation instructions. #### Prerequisites - Kubernetes cluster - Recommended: MySQL-compatible database (version 8.0+) - Installed and configured [Kubernetes client](https://kubernetes.io/docs/tasks/tools/#kubectl) - Installed [Helm](https://helm.sh/docs/intro/install/) - Optional: External Secrets Manager (e.g., AWS Secrets Manager, GCP Secrets Manager) #### ZenML Helm Configuration - Review the [`values.yaml` file](https://artifacthub.io/packages/helm/zenml/zenml?modal=values) for customizable settings. - Collect database and secrets management information for Helm chart configuration. ##### Database Information If using an external MySQL database: - Hostname and port - Username and password (create a dedicated user with restricted privileges) - Database name (ZenML will create it if it doesn't exist) - Optional: SSL certificates for secure connections ##### Secrets Management Information For external secrets management: - **AWS**: Region, access key ID, secret access key - **GCP**: Project ID, service account with access - **Azure**: Key Vault name, tenant ID, client ID, client secret - **HashiCorp Vault**: Vault server URL, access token #### Optional Cluster Services - **Ingress**: Use `nginx-ingress` for HTTP exposure and TLS. - **cert-manager**: For automatic TLS certificate management. #### ZenML Helm Installation 1. 
**Configure the Helm Chart**:
```bash
helm pull oci://public.ecr.aws/zenml/zenml --version --untar
```
Create a `custom-values.yaml` from `values.yaml` and modify necessary configurations (e.g., database URL, SSL certificates, Ingress settings).

2. **Install the Helm Chart**:
```bash
helm -n install zenml-server . --create-namespace --values custom-values.yaml
```

3. **Activate ZenML Server**: Access the server URL in a browser to create an admin account and configure settings.

#### Deployment Scenarios
- **Minimal Deployment**: Uses SQLite and ClusterIP (not exposed to the internet).
```yaml
zenml:
  ingress:
    enabled: false
```
Access via port-forwarding:
```bash
kubectl -n zenml-server port-forward svc/zenml-server 8080:8080
zenml login http://localhost:8080
```
- **Basic Deployment with Local Database**: Uses Ingress with TLS. Install `cert-manager` and `nginx-ingress`:
```bash
helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress --create-namespace
```
Create a `ClusterIssuer` for Let's Encrypt:
```bash
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: <your-email>
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
EOF
```
Configure Helm values for Ingress:
```yaml
zenml:
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-staging"
    tls:
      enabled: true
```

#### Secrets Store Configuration
- Default is SQL database; can be changed to AWS, GCP, Azure, HashiCorp Vault, or custom implementations.
- Example for AWS:
```yaml
zenml:
  secretsStore:
    enabled: true
    type: aws
    aws:
      authMethod: secret-key
      authConfig:
        region: us-east-1
        aws_access_key_id:
        aws_secret_access_key:
```

#### Backup and Recovery
- Automated database backups occur before upgrades.
- Backup strategies include `disabled`, `in-memory`, `database`, and `dump-file`.

#### Custom CA Certificates and Proxy Configuration
- Custom CA certificates can be injected directly or referenced from Kubernetes secrets.
- Proxy settings can be configured for external connections.

This summary provides a concise overview of deploying ZenML in a Kubernetes cluster using Helm, covering prerequisites, configuration, installation, deployment scenarios, secrets management, and backup strategies.

==================================================

=== File: docs/book/getting-started/deploying-zenml/deploy-with-docker.md ===

### Summary: Deploying ZenML in a Docker Container

**ZenML Server Container Image**: The ZenML server can be deployed using the Docker image `zenmldocker/zenml-server`. It supports deployment via Docker, docker-compose, or serverless platforms like Cloud Run.

#### Local Deployment
To quickly deploy ZenML locally, use the ZenML CLI:
```bash
zenml login --local --docker
```
This command sets up a local ZenML server with a shared SQLite database.

#### Configuration Options
For custom deployments, configure the following environment variables:
- **ZENML_STORE_URL**: Connect to an SQLite or MySQL database.
  - SQLite: `sqlite:////path/to/zenml.db`
  - MySQL: `mysql://username:password@host:port/database`
- **SSL Variables** (for MySQL with SSL):
  - **ZENML_STORE_SSL_CA**
  - **ZENML_STORE_SSL_CERT**
  - **ZENML_STORE_SSL_KEY**
  - **ZENML_STORE_SSL_VERIFY_SERVER_CERT** (default: `False`)
- **Logging and Rate Limiting**:
- **ZENML_LOGGING_VERBOSITY**: Set log level (`NOTSET`, `ERROR`, `WARN`, `INFO`, `DEBUG`, `CRITICAL`).
- **ZENML_SERVER_RATE_LIMIT_ENABLED**: Enable rate limiting for the `LOGIN` endpoint. - **ZENML_SERVER_LOGIN_RATE_LIMIT_MINUTE**: Requests allowed per minute (default: `5`). - **ZENML_SERVER_LOGIN_RATE_LIMIT_DAY**: Requests allowed per day (default: `1000`). If no `ZENML_STORE_*` variables are set, an SQLite database is created at `/zenml/.zenconfig/local_stores/default_zen_store/zenml.db`. #### Secret Store Configuration The default secret store is the SQL database. To use an external service (AWS, GCP, Azure, HashiCorp Vault), set the following: - **ZENML_SECRETS_STORE_TYPE**: Specify the type (e.g., `aws`, `gcp`, `azure`, `hashicorp`, `custom`). - **ZENML_SECRETS_STORE_AUTH_METHOD**: Authentication method (e.g., `secret-key`, `service-account`). - **ZENML_SECRETS_STORE_AUTH_CONFIG**: JSON configuration for authentication. **Important**: For encryption, set **ZENML_SECRETS_STORE_ENCRYPTION_KEY**. #### Running the ZenML Server To run the ZenML server with Docker: ```bash docker run -it -d -p 8080:8080 --name zenml zenmldocker/zenml-server ``` This starts a server using a temporary SQLite database. **Persistent Database**: To persist the SQLite database: ```bash mkdir zenml-server docker run -it -d -p 8080:8080 --name zenml \ --mount type=bind,source=$PWD/zenml-server,target=/zenml/.zenconfig/local_stores/default_zen_store \ zenmldocker/zenml-server ``` **MySQL Database**: To use MySQL: ```bash docker run --name mysql -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password mysql:8.0 ``` Connect ZenML to MySQL: ```bash docker run -it -d -p 8080:8080 --name zenml \ --add-host host.docker.internal:host-gateway \ --env ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml \ zenmldocker/zenml-server ``` #### Docker Compose For multi-container setups, create a `docker-compose.yml`: ```yaml version: "3.9" services: mysql: image: mysql:8.0 environment: - MYSQL_ROOT_PASSWORD=password zenml: image: zenmldocker/zenml-server environment: - ZENML_STORE_URL=mysql://root:password@host.docker.internal/zenml depends_on: - mysql ``` Run with: ```bash docker compose -p zenml up -d ``` #### Backup and Recovery ZenML supports automated backups before migrations. Configure the backup strategy with `ZENML_STORE_BACKUP_STRATEGY` (e.g., `in-memory`, `database`, `dump-file`). #### Troubleshooting Check logs with: - For CLI deployments: `zenml logs -f` - For manual Docker deployments: `docker logs zenml -f` - For Docker Compose: `docker compose -p zenml logs -f` This guide provides essential details for deploying and managing a ZenML server in a Docker container, including configuration options, deployment methods, and backup strategies. ================================================== === File: docs/book/getting-started/deploying-zenml/secret-management.md === ### Secret Store Configuration and Management #### Centralized Secrets Store ZenML offers a centralized secrets management system for secure registration and management of secrets. Metadata (name, ID, owner, scope) is stored in the ZenML server database, while actual secret values are managed through the ZenML Secrets Store. In local deployments, secrets are stored in a local SQLite database. For remote servers, secrets are stored in the configured secrets management back-end, accessed via the ZenML server API. 
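From the user's perspective the chosen back-end is transparent: secrets are written and read through the ZenML client (or the `zenml secret` CLI shown elsewhere in this guide), and only their metadata lands in the ZenML database. A minimal sketch (the secret name and values are illustrative):

```python
from zenml.client import Client

client = Client()

# Register a secret; its values go to the configured secrets store back-end
client.create_secret(name="mlflow_secret", values={"username": "admin", "password": "abc123"})

# Read it back later, e.g. from a pipeline or an automation script
secret = client.get_secret("mlflow_secret")
print(secret.secret_values["username"])
```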
**Supported Secrets Store Back-Ends:** - Default SQL database - AWS Secrets Manager - GCP Secret Manager - Azure Key Vault - HashiCorp Vault - Custom implementations #### Configuration and Deployment Configuration occurs at deployment, requiring selection of a back-end and authentication mechanism. The ZenML secrets store uses the same authentication methods as the ZenML Service Connector. Adhere to the principle of least privilege for credentials. The secrets store can be updated and redeployed at any time, following a documented migration strategy to ensure minimal downtime. #### Backup Secrets Store A secondary Secrets Store can be configured for high availability, backup, and disaster recovery. The primary store is accessed first; if it fails, the backup is used. Use the `zenml secret backup` CLI command to back up secrets, and `zenml secret restore` to restore them. **Important Note:** Ensure the backup store is in a different location than the primary to avoid issues. #### Secrets Migration Strategy Changing the secrets storage provider/location requires manual migration of existing secrets. The migration process involves: 1. Configuring the ZenML server to use the new store as secondary. 2. Redeploying the server. 3. Using `zenml secret backup` to transfer secrets from the primary to the secondary store. 4. Configuring the new store as primary and removing the old one. 5. Redeploying the server. This strategy is unnecessary if only credentials or authentication methods change without altering the storage location. For more details on deployment and configuration, refer to the ZenML deployment guide. ================================================== === File: docs/book/getting-started/deploying-zenml/custom-secret-stores.md === ### Custom Secret Stores Overview The custom secrets store in ZenML is responsible for managing secret values while the metadata is stored in an SQL database. The interface for all secrets store back-ends is defined in the `zenml.zen_stores.secrets_stores.secrets_store_interface` module. #### SecretsStoreInterface The `SecretsStoreInterface` is an abstract base class that all ZenML secrets stores must implement. Key methods include: ```python class SecretsStoreInterface(ABC): @abstractmethod def _initialize(self) -> None: """Initialize the secrets store.""" @abstractmethod def store_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: """Store secret values for a new secret.""" @abstractmethod def get_secret_values(self, secret_id: UUID) -> Dict[str, str]: """Retrieve secret values for an existing secret.""" @abstractmethod def update_secret_values(self, secret_id: UUID, secret_values: Dict[str, str]) -> None: """Update secret values for an existing secret.""" @abstractmethod def delete_secret_values(self, secret_id: UUID) -> None: """Delete secret values for an existing secret.""" ``` #### Creating a Custom Secrets Store To implement a custom secrets store: 1. **Inherit from Base Class**: Create a class that extends `zenml.zen_stores.secrets_stores.base_secrets_store.BaseSecretsStore` and implements the required methods. Set `SecretsStoreType.CUSTOM` as the `TYPE`. 2. **Configuration Class**: If configuration is needed, create a class that inherits from `SecretsStoreConfiguration` to define parameters, and use it as the `CONFIG_TYPE`. 3. **Server Configuration**: Ensure your code is included in the ZenML server's container image. 
Configure the server to use your custom secrets store via environment variables or helm chart values, as detailed in the deployment guide. For complete details, refer to the [SDK documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-zen_stores/#zenml.zen_stores.secrets_stores.secrets_store_interface.SecretsStoreInterface). ================================================== === File: docs/book/getting-started/deploying-zenml/deploy-with-custom-image.md === ### Deploying ZenML with Custom Docker Images Deploying ZenML typically uses the default `zenmldocker/zenml-server` Docker image, but custom images may be necessary for: - Custom artifact stores requiring artifact visualizations or step logs. - Forked ZenML repositories with modified server/database logic. **Note:** Custom Docker image deployment is only supported for [Docker](deploy-with-docker.md) or [Helm](deploy-with-helm.md) deployments. ### Build and Push Custom ZenML Server Docker Image 1. **Set up a container registry** (e.g., Docker Hub). 2. **Clone ZenML** and checkout the desired branch: ```bash git checkout release/0.41.0 ``` 3. **Copy the base Dockerfile**: ```bash cp docker/base.Dockerfile docker/custom.Dockerfile ``` 4. **Modify the Dockerfile**: - Add dependencies: ```bash RUN pip install <YOUR_DEPENDENCIES> ``` - (For forks) Install local files: ```bash RUN pip install -e .[server,secrets-aws,secrets-gcp,secrets-azure,secrets-hashicorp,s3fs,gcsfs,adlfs,connectors-aws,connectors-gcp,connectors-azure] ``` 5. **Build and push the image**: ```bash docker build -f docker/custom.Dockerfile . -t <REGISTRY>/<IMAGE_NAME>:<TAG> --platform linux/amd64 docker push <REGISTRY>/<IMAGE_NAME>:<TAG> ``` **Tip:** Verify your custom image locally by following the [Deploy a custom ZenML image via Docker](deploy-with-custom-image.md#deploy-a-custom-zenml-image-via-docker) guide. ### Deploy ZenML with Your Custom Image #### Deploy via Docker Refer to the [ZenML Docker Deployment Guide](deploy-with-docker.md) and replace `zenmldocker/zenml-server` with your custom image: - To run the ZenML server: ```bash docker run -it -d -p 8080:8080 --name zenml <REGISTRY>/<IMAGE_NAME>:<TAG> ``` - Adjust `docker-compose.yml`: ```yaml services: zenml: image: <REGISTRY>/<IMAGE_NAME>:<TAG> ``` #### Deploy via Helm Refer to the [ZenML Helm Deployment Guide](deploy-with-helm.md) and modify the `values.yaml`: ```yaml zenml: image: repository: <REGISTRY>/<IMAGE_NAME> tag: <TAG> ``` ================================================== === File: docs/book/getting-started/zenml-pro/teams.md === ### Summary of ZenML Pro Teams Documentation **Overview**: ZenML Pro introduces Teams to manage user groups efficiently within organizations and tenants. Teams act as single entities, simplifying user management in MLOps workflows. #### Key Benefits of Teams: 1. **Group Management**: Manage permissions for multiple users simultaneously. 2. **Organizational Structure**: Reflect your company's structure or project teams. 3. **Simplified Access Control**: Assign roles to teams instead of individual users. #### Creating and Managing Teams: - **Creation Steps**: 1. Navigate to Organization settings. 2. Click on the "Teams" tab. 3. Use "Add team" to create a new team. **Required Information**: - Team name - Description (optional) - Initial team members #### Adding Users to Teams: 1. Go to the "Teams" tab in Organization settings. 2. Select the desired team. 3. Click "Add Members". 4. Choose users to add. #### Assigning Teams to Tenants: 1. Go to the tenant settings page. 2. Click on the "Members" tab, then the "Teams" tab. 3. Select "Add Team". 4. Choose the team and assign a role.
#### Team Roles and Permissions: - Roles assigned to a team within a tenant are inherited by all team members. Roles can be predefined (Admin, Editor, Viewer) or custom. #### Best Practices: 1. Create teams that reflect your organization. 2. Use custom roles for precise access control. 3. Conduct regular audits of team memberships and roles. 4. Document each team's purpose and associated projects or tenants. By utilizing Teams in ZenML Pro, organizations can enhance user management, streamline access control, and improve MLOps workflows. ================================================== === File: docs/book/getting-started/zenml-pro/README.md === # ZenML Pro Overview ZenML Pro enhances the Open Source version with several key features: - **Managed Deployment**: Deploy multiple ZenML servers (tenants). - **User Management**: Create organizations and teams for scalable user management. - **Role-Based Access Control**: Implement customizable roles for secure resource management. - **Model and Artifact Control**: Utilize the Model Control Plane and Artifact Control Plane for better tracking of ML assets. - **Triggers and Run Templates**: Create and run templates via the dashboard or API for quick iterations. - **Early-Access Features**: Access pro-specific features like triggers, filters, and usage reports. For more details, visit the [ZenML website](https://zenml.io/pro). ## Deployment Scenarios ZenML Pro can be deployed as a SaaS solution or fully self-hosted. The SaaS version simplifies server management, allowing focus on MLOps workflows. For self-hosted deployment, refer to the [self-hosted deployment guide](./self-hosted.md) or [book a demo](https://www.zenml.io/book-your-demo). ### Key Resources - [Tenants](./tenants.md) - [Organizations](./organization.md) - [Teams](./teams.md) - [Roles](./roles.md) - [Self-Hosted Deployments](./self-hosted.md) ================================================== === File: docs/book/getting-started/zenml-pro/self-hosted.md === # ZenML Pro Self-Hosted Deployment Guide Summary This document outlines the installation of ZenML Pro, including the Control Plane and Tenant servers, in a Kubernetes cluster. ## Overview ZenML Pro requires access to private container images and infrastructure components like a Kubernetes cluster, a database server, a load balancer, an Ingress controller, HTTPS certificates, and DNS rules. Note that Single Sign-On (SSO) and Run Templates features are not available in the on-prem version. ## Preparation and Prerequisites ### Software Artifacts - **Control Plane Artifacts**: - **API Server**: - AWS: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-api` - GCP: `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-api` - **Dashboard**: - AWS: `715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-dashboard` - GCP: `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-dashboard` - **Helm Chart**: `oci://public.ecr.aws/zenml/zenml-pro` - **Tenant Server Artifacts**: - **Tenant Server**: - AWS: `715803424590.dkr.ecr.eu-central-1.amazonaws.com/zenml-pro-server` - GCP: `europe-west3-docker.pkg.dev/zenml-cloud/zenml-pro/zenml-pro-server` - **Open-source Helm Chart**: `oci://public.ecr.aws/zenml/zenml` - **Client Artifacts**: - Public client image: `zenmldocker/zenml` (available on Docker Hub). ### Accessing Container Images - **AWS**: Set up an IAM user/role with `AmazonEC2ContainerRegistryReadOnly` policy. - **GCP**: Create a service account with access to the Artifact Registry. 
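For AWS, the read-only IAM principal described above can be used to authenticate Docker against the private registry before pulling the Control Plane images. A hedged sketch; the region and registry URL come from the artifact list above, and `<VERSION>` stands for whichever ZenML Pro release you are installing:

```bash
# Authenticate Docker against the private ZenML Pro registry on AWS
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin 715803424590.dkr.ecr.eu-west-1.amazonaws.com

# Pull the Control Plane API image for the release you are installing
docker pull 715803424590.dkr.ecr.eu-west-1.amazonaws.com/zenml-pro-api:<VERSION>
```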
### Air-Gapped Installation For environments without internet access, download all required artifacts and transfer them to the air-gapped environment. ### Infrastructure Requirements 1. **Kubernetes Cluster** 2. **Database Server**: MySQL or Postgres for Control Plane; only MySQL for Tenant servers. 3. **Ingress Controller**: e.g., NGINX or Traefik. 4. **Domain Name**: Fully Qualified Domain Name (FQDN) for Control Plane and tenants. 5. **SSL Certificate**: Required for HTTPS traffic. ## Stage 1: Install the ZenML Pro Control Plane ### Set up Credentials Create a Kubernetes secret for accessing the container registry. ### Configure the Helm Chart Customize the Helm chart values, focusing on: - Database credentials - Server URL - Ingress hostname ### Install the Helm Chart Run the following command: ```bash helm --namespace zenml-pro upgrade --install --create-namespace zenml-pro oci://public.ecr.aws/zenml/zenml-pro --version --values my-values.yaml ``` ## Stage 2: Enroll and Deploy ZenML Pro Tenants ### Enroll a Tenant Run the `enroll-tenant.py` script to create a tenant entry and generate a Helm values file. ### Configure the Tenant Helm Chart Use the generated YAML file as a template and fill in necessary values. ### Deploy the Tenant Server Install the tenant server using Helm: ```bash helm --namespace zenml-pro- upgrade --install --create-namespace zenml oci://public.ecr.aws/zenml/zenml --version --values zenml--values.yaml ``` ## Day 2 Operations: Upgrades and Updates 1. Upgrade the ZenML Pro Control Plane first, followed by tenant servers. 2. Use Helm commands to upgrade, ensuring to check for available versions and release notes. This guide provides a comprehensive overview of the installation and management of ZenML Pro in a self-hosted environment, ensuring a successful deployment in Kubernetes. ================================================== === File: docs/book/getting-started/zenml-pro/core-concepts.md === # ZenML Pro Core Concepts ZenML Pro features a distinct entity hierarchy compared to the open-source version. Key concepts include: - **Organization**: A collection of users, teams, and tenants. - **Tenant**: An isolated ZenML server deployment containing resources for a project or team. - **Teams**: Groups of users within an organization for resource management and access control. - **Users**: Individual accounts on a ZenML Pro instance. - **Roles**: Define user permissions within a tenant or organization. - **Templates**: Re-runnable pipeline configurations. For more details, refer to the linked pages: | Concept | Description | Link | |------------------|----------------------------------------------|---------------------| | Organizations | Managing organizations in ZenML Pro | [organization.md](./organization.md) | | Tenants | Working with tenants in ZenML Pro | [tenants.md](./tenants.md) | | Teams | Team management in ZenML Pro | [teams.md](./teams.md) | | Roles & Permissions | Role-based access control in ZenML Pro | [roles.md](./roles.md) | ================================================== === File: docs/book/getting-started/zenml-pro/roles.md === # ZenML Pro: Roles and Permissions ZenML Pro utilizes a role-based access control (RBAC) system to manage permissions within organizations and tenants. This guide outlines available roles, assignment processes, and custom role creation. ## Organization-Level Roles ZenML Pro offers three predefined organization roles: 1. **Org Admin**: Full control, can manage members, tenants, billing, and assign roles. 2. 
**Org Editor**: Manages tenants and teams, but cannot access billing or delete the organization. 3. **Org Viewer**: Read-only access to tenants. ### Assigning Organization Roles 1. Go to Organization settings. 2. Click the "Members" tab to update roles or use "Add members" to invite new users. **Notes**: - Admins can add themselves to any tenant role. - Editors and viewers cannot add themselves to tenants they are not part of. - Custom organization roles can be created via the [ZenML Pro API](https://cloudapi.zenml.io/). ## Tenant-Level Roles Tenant roles dictate user permissions within a specific tenant. Predefined roles include: 1. **Admin**: Full control over tenant resources. 2. **Editor**: Can create and share resources but cannot modify or delete them. 3. **Viewer**: Read-only access. ### Custom Roles To create a custom tenant role: 1. Access tenant settings. 2. Click "Roles" and select "Add Custom Role". 3. Enter a name, description, and choose a base role. 4. Edit permissions for resources like artifacts, models, pipelines, etc. ### Managing Role Permissions 1. Go to the Roles page in tenant settings. 2. Select the role to modify. 3. Click "Edit Permissions" and adjust as needed. ## Sharing Individual Resources Users can share specific resources via the dashboard. ## Best Practices 1. **Least Privilege**: Assign minimal permissions. 2. **Regular Audits**: Review role assignments periodically. 3. **Use Custom Roles**: Tailor roles for specific team needs. 4. **Document Roles**: Keep records of custom roles and their purposes. By effectively utilizing ZenML Pro's RBAC, you can maintain security and enhance collaboration in MLOps projects. ================================================== === File: docs/book/getting-started/zenml-pro/organization.md === # Organizations in ZenML Pro ZenML Pro organizes work around the concept of an **Organization**, the highest structure in the ZenML Cloud environment. An organization typically includes a group of users and one or more [tenants](./tenants.md). ## Inviting Team Members To invite users to your organization, click `Add Member` in the Organization settings and assign an initial Role. The user will receive an invitation email. Once part of the organization, users can log in to all accessible tenants. ## Managing Organization Settings Organization settings, including billing and user roles, are managed at the organization level. Access these settings by clicking your profile picture in the top right corner and selecting "Settings". ## API Operations Various operations related to Organizations can be performed via the API. More details are available at [ZenML Cloud API](https://cloudapi.zenml.io/). ================================================== === File: docs/book/getting-started/zenml-pro/pro-api.md === ### ZenML Pro API Overview The ZenML Pro API is a RESTful API compliant with OpenAPI 3.1.0, enabling interaction with ZenML resources for both SaaS and self-hosted instances. Key functionalities include: - Tenant management - Organization management - User management - Role-based access control (RBAC) - Authentication and authorization #### Authentication To authenticate requests, you can log into your ZenML Pro account or use API tokens for programmatic access. Tokens are valid for 1 hour and scoped to your user account. **Generating API Tokens:** 1. Go to organization settings in the ZenML Pro dashboard. 2. Select "API Tokens" from the sidebar. 3. Click "Create new token" and use it as a bearer token in HTTP requests. 
**Example Requests:** - **cURL:** ```bash curl -H "Authorization: Bearer YOUR_API_TOKEN" https://cloudapi.zenml.io/users/me ``` - **Wget:** ```bash wget -qO- --header="Authorization: Bearer YOUR_API_TOKEN" https://cloudapi.zenml.io/users/me ``` - **Python:** ```python import requests response = requests.get( "https://cloudapi.zenml.io/users/me", headers={"Authorization": f"Bearer YOUR_API_TOKEN"} ) print(response.json()) ``` **Important Notes:** - Tokens expire after 1 hour and cannot be retrieved post-generation. - Tokens inherit user permissions. #### Tenant Programmatic Access Access the ZenML Pro tenant API using: - Temporary API tokens - Service account API keys Refer to the documentation for detailed instructions. #### Key API Endpoints - **Tenant Management:** - List tenants: `GET /tenants` - Create tenant: `POST /tenants` - Get tenant details: `GET /tenants/{tenant_id}` - Update tenant: `PATCH /tenants/{tenant_id}` - **Organization Management:** - List organizations: `GET /organizations` - Create organization: `POST /organizations` - Get organization details: `GET /organizations/{organization_id}` - Update organization: `PATCH /organizations/{organization_id}` - **User Management:** - List users: `GET /users` - Get current user: `GET /users/me` - Update user: `PATCH /users/{user_id}` - **Role-Based Access Control:** - Create role: `POST /roles` - Assign role: `POST /roles/{role_id}/assignments` - Check permissions: `GET /permissions` #### Error Handling The API uses standard HTTP status codes for request outcomes. Error responses include a message and additional details. #### Rate Limiting The API may enforce rate limits, returning a 429 status code for excessive requests. Implement backoff and retry logic as needed. For comprehensive details on endpoints and features, refer to the full API documentation at [https://cloudapi.zenml.io](https://cloudapi.zenml.io). ================================================== === File: docs/book/getting-started/zenml-pro/tenants.md === ## ZenML Pro Tenants Overview ### What are Tenants? Tenants in ZenML Pro are isolated deployments of the ZenML server, each with its own users, roles, and resources. All operations, including pipelines, stacks, and runs, are scoped to a tenant. ZenML Pro enhances the open-source version with additional features. ### Creating a Tenant To create a tenant: 1. Go to your organization page. 2. Click "+ New Tenant." 3. Enter a name and click "Create Tenant." You can also create a tenant via the Cloud API using the `POST /organizations` endpoint at `https://cloudapi.zenml.io/`. ### Organizing Tenants Effective tenant organization is crucial for MLOps management. Consider these dimensions: #### 1. By Development Stage - **Staging Tenants**: For development, testing, and experimentation. - **Production Tenants**: Host live services with stricter access controls and performance optimization. #### 2. By Business Logic - **Project-based**: Create tenants for specific ML projects (e.g., Recommendation System). - **Team-based**: Align tenants with organizational teams (e.g., Data Science Team). - **Data Sensitivity**: Separate tenants based on data classification (e.g., Public, Internal). ### Best Practices for Tenant Organization - Use clear naming conventions. - Implement role-based access control. - Maintain documentation for each tenant. - Regularly review tenant structures. - Design for scalability. 
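Tenant setups like these can also be audited programmatically: the `GET /tenants` endpoint listed in the API section above returns the tenants your account can see. A small illustrative call, with token handling as described earlier:

```bash
# List the tenants visible to your ZenML Pro account
curl -H "Authorization: Bearer YOUR_API_TOKEN" https://cloudapi.zenml.io/tenants
```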
### Using Your Tenant Tenants enable running pipelines, experiments, and accessing Pro features such as: - Model Control Plane - Artifact Control Plane - Pipeline execution from the Dashboard ### Accessing Tenant Documentation Each tenant has a connection URL for the `zenml` client and to access the OpenAPI specification. Visit `/docs` for a list of available methods, including pipeline execution via the REST API. For more details, refer to the API documentation [here](../../reference/api-reference.md). ================================================== === File: docs/book/reference/api-reference.md === # ZenML API Reference Summary ## Overview The ZenML server is a FastAPI application, with OpenAPI-compliant documentation accessible at `/docs` or `/redoc`. For local instances, access the docs at `http://127.0.0.1:8237/docs`. ## Programmatic API Access ### Bearer Token Authentication To access the ZenML API programmatically, you can use an API token. #### Short-Lived API Token 1. Generate a short-lived API token (valid for 1 hour) from the ZenML dashboard. 2. Use the token as a bearer token in HTTP requests. **Example using curl:** ```bash curl -H "Authorization: Bearer YOUR_API_TOKEN" https://your-zenml-server/api/v1/current-user ``` **Example using Python:** ```python import requests response = requests.get( "https://your-zenml-server/api/v1/current-user", headers={"Authorization": f"Bearer YOUR_API_TOKEN"} ) print(response.json()) ``` **Important Notes:** - Tokens expire after 1 hour and cannot be retrieved post-generation. - Tokens are user-scoped and inherit permissions. - For long-term access, consider using a service account and API key. ### Service Account and API Key 1. Create a service account: ```bash zenml service-account create myserviceaccount ``` 2. Obtain an API token by sending a POST request to `/api/v1/login` using your API key. **Example using curl:** ```bash curl -X POST -d "password=" https://your-zenml-server/api/v1/login ``` **Example using Python:** ```python import requests response = requests.post( "https://your-zenml-server/api/v1/login", data={"password": ""}, headers={"Content-Type": "application/x-www-form-urlencoded"} ) print(response.json()) ``` 3. Use the obtained API token for authenticated requests similarly to the short-lived token method. **Important Notes:** - API tokens inherit permissions from the service account. - Tokens typically expire after 1 hour. - Handle tokens securely; rotate API keys if compromised. This summary captures essential details about accessing the ZenML API, including token generation and usage for both short-lived and service account-based authentication. ================================================== === File: docs/book/reference/global-settings.md === ### ZenML Global Settings Overview The **ZenML Global Config Directory** stores global settings for ZenML installations, typically located at: - **Linux:** `~/.config/zenml` - **Mac:** `~/Library/Application Support/zenml` - **Windows:** `C:\Users\%USERNAME%\AppData\Local\zenml` The location can be customized using the `ZENML_CONFIG_PATH` environment variable. To retrieve the current config directory, use: ```shell zenml status python -c 'from zenml.utils.io_utils import get_global_config_directory; print(get_global_config_directory())' ``` **Warning:** Avoid manually altering files in the global config directory. 
Use CLI commands for management: - `zenml analytics` - Manage analytics settings - `zenml clean` - Reset configuration to default - `zenml downgrade` - Downgrade ZenML version to match the installed package Upon first run, ZenML initializes the global config directory and creates a default stack: ``` Initializing the ZenML global configuration version to 0.13.2 Creating default user 'default' ... Creating default stack for user 'default'... ``` #### Global Config Directory Structure After initialization, the structure includes: ``` /home/stefan/.config/zenml ├── config.yaml # Global Configuration Settings └── local_stores # Local data storage for stack components ├── # Local Store paths for components └── default_zen_store └── zenml.db # SQLite database for ZenML data ``` **Key Configurations in `config.yaml`:** ```yaml active_stack_id: ... analytics_opt_in: true store: database: ... url: ... username: ... user_id: d980f13e-05d1-4765-92d2-1dc7eb7addb7 version: 0.13.2 ``` #### Usage Analytics ZenML collects anonymized usage statistics to improve the tool. Users can opt out with: ```bash zenml analytics opt-out ``` **Data Collection:** Utilizes [Segment](https://segment.com) for analytics, processed through a central ZenML server to optimize tracking. #### Version Mismatch Handling If a version mismatch occurs, indicated by: ```shell `The ZenML global configuration version (%s) is higher than the version of ZenML currently being used (%s).` ``` Use the following command to downgrade: ```shell zenml downgrade ``` **Warning:** Downgrading may cause unexpected behavior. To reset, run: ```shell zenml clean ``` This summary provides essential details about the ZenML global settings, directory structure, analytics, and version management while omitting redundancy. ================================================== === File: docs/book/reference/how-do-i.md === # ZenML Common Use Cases and Workflows **Last Updated**: December 13, 2023 ## Frequently Asked Questions - **Contributing to ZenML**: Refer to the [Contribution guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). For small changes, open a pull request; for larger features, discuss on [Slack](https://zenml.io/slack/) or create an issue. - **Adding Custom Components**: Start with the general documentation on [implementing custom stack components](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). For specific types, such as custom orchestrators, see the dedicated section [here](../component-guide/orchestrators/custom.md). - **Mitigating Dependency Clashes**: Check the documentation on [handling dependencies](../how-to/pipeline-development/configure-python-environments/handling-dependencies.md). - **Deploying Cloud Infrastructure/MLOps Stacks**: ZenML is stack-agnostic. Documentation for each stack component details deployment on popular cloud providers. - **Self-Hosted ZenML Deployments**: Refer to the documentation on [self-hosted deployments](../getting-started/deploying-zenml/README.md). - **Hyperparameter Tuning**: Learn more in our guide on [hyperparameter tuning](../how-to/pipeline-development/build-pipelines/hyper-parameter-tuning.md). - **Resetting ZenML Client**: Use `zenml clean` to reset your client and wipe the local metadata database. This is destructive; consult us on [Slack](https://zenml.io/slack/) if unsure. 
- **Dynamic Pipelines and Steps**: Read about composing steps and pipelines in the [starter guide](../user-guide/starter-guide/create-an-ml-pipeline.md) and check code examples in the hyperparameter tuning guide. - **Using Project Templates**: Project templates help you start quickly. The Starter template (`starter`) is recommended for most use cases. You can also create templates in a Git repository. - **Upgrading ZenML**: Upgrade the client with `pip install --upgrade zenml`. For server upgrades, refer to the [upgrade documentation](../how-to/manage-zenml-server/upgrade-zenml-server.md). - **Using Specific Stack Components**: For details on specific components, consult the [component guide](../component-guide/README.md). ![ZenML Scarf](https://static.scarf.sh/a.png?x-pxid=f0b4f458-0a54-4fcd-aa95-d5ee424815bc) ================================================== === File: docs/book/reference/environment-variables.md === # Environment Variables for ZenML ZenML allows configuration through several pre-defined environment variables. Below is a summary of key variables, their default values, and options: ## Logging Configuration - **Verbosity**: Control logging level. ```bash export ZENML_LOGGING_VERBOSITY=INFO # Options: INFO, WARN, ERROR, CRITICAL, DEBUG ``` - **Format**: Set logging format. ```bash export ZENML_LOGGING_FORMAT='%(asctime)s %(message)s' ``` ## Step Logs - **Disable Step Logs Storage**: Prevents storing step logs, improving performance. ```bash export ZENML_DISABLE_STEP_LOGS_STORAGE=false # Set to true to disable ``` ## Repository and Analytics - **Repository Path**: Specify ZenML repository location. ```bash export ZENML_REPOSITORY_PATH=/path/to/somewhere ``` - **Analytics Opt-Out**: Disable usage analytics. ```bash export ZENML_ANALYTICS_OPT_IN=false ``` ## Debugging and Execution Control - **Debug Mode**: Enable developer mode. ```bash export ZENML_DEBUG=true ``` - **Active Stack**: Set the active stack by UUID. ```bash export ZENML_ACTIVE_STACK_ID= ``` - **Prevent Pipeline Execution**: Stops pipeline execution when true. ```bash export ZENML_PREVENT_PIPELINE_EXECUTION=false # Set to true to prevent ``` ## Traceback and Logging Options - **Rich Traceback**: Enable or disable rich traceback. ```bash export ZENML_ENABLE_RICH_TRACEBACK=true # Set to false to disable ``` - **Colorful Logging**: Disable colorful logs. ```bash export ZENML_LOGGING_COLORS_DISABLED=true ``` ## Stack Validation and Code Repository - **Skip Stack Validation**: Disable stack validation. ```bash export ZENML_SKIP_STACK_VALIDATION=true ``` - **Ignore Untracked Files**: Allow untracked files in code repositories. ```bash export ZENML_CODE_REPOSITORY_IGNORE_UNTRACKED_FILES=true ``` ## Global Config and Server Connection - **Global Config Path**: Set path for global config file. ```bash export ZENML_CONFIG_PATH=/path/to/somewhere ``` - **Client Configuration**: Connect ZenML Client to a server. ```bash export ZENML_STORE_URL=https://... export ZENML_STORE_API_KEY= ``` For more details on server configuration, refer to the ZenML Server documentation. ================================================== === File: docs/book/reference/python-client.md === ### ZenML Python Client Overview The ZenML Python `Client` allows programmatic interaction with ZenML resources such as pipelines, runs, and stacks. Resources are stored and versioned in a database within your ZenML instance. 
#### Usage Example To fetch the last 10 pipeline runs for the current user on the active stack: ```python from zenml.client import Client client = Client() my_runs_on_current_stack = client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, sort_by="desc:start_time", size=10, ) for pipeline_run in my_runs_on_current_stack: print(pipeline_run.name) ``` #### Main ZenML Resources - **Pipelines**: Tracked pipelines. - **Pipeline Runs**: Information on executed pipeline runs. - **Run Templates**: Templates for running pipelines. - **Step Runs**: Steps of pipeline runs, useful for fetching specific steps. - **Artifacts**: Information about artifacts from pipeline runs. - **Schedules**: Metadata on scheduled pipeline runs. - **Builds**: Docker images for containerized pipelines. - **Code Repositories**: Connected git repositories. #### Stacks and Infrastructure - **Stack**: Registered stacks in ZenML. - **Stack Components**: Components like orchestrators and artifact stores. - **Flavors**: Available stack component flavors (built-in, integration-enabled, custom). - **User**: Registered users (default user in local runs). - **Secrets**: Authentication secrets in the ZenML Secret Store. - **Service Connectors**: Connections to infrastructure. #### Client Methods - **List Methods**: Retrieve lists of resources, e.g., `client.list_pipeline_runs(...)`. - **Get Methods**: Fetch specific resources by ID or name. - **Create, Update, Delete Methods**: Available for certain resources; check SDK documentation for specifics. #### Active User and Stack Access the current user and active stack: ```python my_runs_on_current_stack = client.list_pipeline_runs( stack_id=client.active_stack_model.id, user_id=client.active_user.id, ) ``` #### Resource Models Client methods return **Response Models** (Pydantic Models) that validate data attributes and types. For example, `client.list_pipeline_runs` returns `Page[PipelineRunResponseModel]`. **Request, Update, and Filter Models** are used for server API endpoints, not for Client methods. For details on resource models, refer to the [ZenML Models SDK Documentation](https://sdkdocs.zenml.io/latest/core_code_docs/core-models/#zenml.models). This concise overview retains critical technical details necessary for understanding and utilizing the ZenML Python Client effectively. ================================================== === File: docs/book/reference/community-and-content.md === ### Community & Content Overview The ZenML community offers various channels for engagement and learning about the framework: - **Slack Channel**: Join the [ZenML Slack channel](https://zenml.io/slack) for direct interaction with the core team and community discussions. It's a great resource for questions and sharing projects. - **Social Media**: Follow us on [LinkedIn](https://www.linkedin.com/company/zenml) and [Twitter](https://twitter.com/zenml_io) for updates on releases, events, and MLOps. Engagement through comments and shares is encouraged. - **YouTube Channel**: Access tutorials and workshops on our [YouTube channel](https://www.youtube.com/c/ZenML) for visual learning. - **Public Roadmap**: Contribute to our [public roadmap](https://zenml.io/roadmap) by sharing feature ideas or voting on existing ones, influencing ZenML's development. - **Blog**: Visit our [Blog](https://zenml.io/blog/) for articles on tool implementation, new features, and insights from our team. 
- **Podcast**: Listen to our [Podcast](https://podcast.zenml.io/) for interviews and discussions on machine learning and MLOps with industry leaders. - **Newsletter**: Subscribe to our [Newsletter](https://zenml.io/newsletter-signup) for updates on open-source tooling and ZenML news. ================================================== === File: docs/book/reference/llms-txt.md === ## Summary of llms.txt Documentation for ZenML ### About llms.txt The `llms.txt` file format, proposed by [llmstxt.org](https://llmstxt.org/), provides a standardized way to supply LLMs with information about a product or website. It includes background information, guidance, and links to detailed markdown files, formatted for both human and LLM readability. The ZenML `llms.txt` file summarizes the documentation to answer basic questions about ZenML, with the base version available at [zenml.io/llms.txt](https://zenml.io/llms.txt). ### Available llms.txt Files ZenML offers multiple `llms.txt` files for different documentation aspects, accessible via the [HuggingFace dataset](https://huggingface.co/datasets/zenml/llms.txt): | File | Tokens | Purpose | |------------------------|--------|---------------------------------------------------------| | [llms.txt](https://zenml.io/llms.txt) | 120k | Basic ZenML concepts and getting started information | | [component-guide.txt](https://zenml.io/component-guide.txt) | 180k | Details on ZenML integrations and stack components | | [how-to-guides.txt](https://zenml.io/how-to-guides.txt) | 75k | Summarized how-to guides for common ZenML workflows | | [llms-full.txt](https://zenml.io/llms-full.txt) | 600k | Complete ZenML documentation | ### File Details 1. **[llms.txt](https://zenml.io/llms.txt)**: Covers User Guides and Getting Started sections, ideal for basic inquiries. 2. **[component-guide.txt](https://zenml.io/component-guide.txt)**: Contains comprehensive details on stack components and integrations. 3. **[how-to-guides.txt](https://zenml.io/how-to-guides.txt)**: Summarizes how-to documentation for common workflows. 4. **[llms-full.txt](https://zenml.io/llms-full.txt)**: Unabridged ZenML documentation for precise answers. ### How to Use llms.txt Files - Select the relevant file based on your inquiry. - Each file's text is prefixed with its filename, aiding in source referencing. - You can combine files for enhanced accuracy if your context window allows. - Instruct the LLM to avoid unverified answers not sourced from the provided text. - Use models with large context windows, like Gemini models, due to high token counts. ================================================== === File: docs/book/reference/faq.md === ### ZenML FAQ Summary #### Purpose of ZenML ZenML was developed to address challenges faced in deploying machine learning models in production, providing a simple, production-ready solution for large-scale ML pipelines. #### ZenML vs. Orchestrators ZenML is not merely an orchestrator like Airflow or Kubeflow; it is a framework that allows users to run pipelines on any orchestrator. It supports standard orchestrators out-of-the-box and allows for custom orchestrator development for enhanced control. #### Tool Integration For information on integrating tools with ZenML, refer to the [documentation](https://docs.zenml.io) and the [component guide](../component-guide/README.md). The ZenML team continuously adds new integrations, and users can suggest features via the [roadmap](https://zenml.io/roadmap) and [discussion forum](https://zenml.io/discussion). 
ZenML is designed to be extensible for various ML tools. #### OS Support - **Windows**: Officially supported via WSL; limited functionality outside of WSL. - **Mac (Apple Silicon)**: Supported with the environment variable: ```bash export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ``` This is necessary for local server use but not needed for CLI connections to a deployed server. #### Custom Tool Integration For extending ZenML or integrating custom tools, refer to the guide on [implementing custom stack components](../how-to/infrastructure-deployment/stack-deployment/implement-a-custom-stack-component.md). #### Community Contribution To contribute, select issues labeled as [`good-first-issue`](https://github.com/zenml-io/zenml/labels/good%20first%20issue) and review the [Contributing Guide](https://github.com/zenml-io/zenml/blob/main/CONTRIBUTING.md). #### Community Engagement Join the [Slack group](https://zenml.io/slack/) for questions and discussions with the community and core team. #### Licensing ZenML is licensed under the Apache License Version 2.0. Contributions are also licensed under this license. Full license details are available in the [LICENSE.md](https://github.com/zenml-io/zenml/blob/main/LICENSE). ================================================== === File: docs/book/user-guide/starter-guide/README.md === # ZenML Starter Guide Summary The ZenML Starter Guide is designed for MLOps engineers and data scientists to build robust ML platforms using the ZenML framework. It provides foundational knowledge and tools to manage machine learning operations effectively. ## Key Topics Covered: - **Creating Your First ML Pipeline**: Learn to set up a basic ML pipeline. - **Understanding Caching**: Explore how to cache results between pipeline steps. - **Managing Data and Versioning**: Techniques for handling data and its versions. - **Tracking ML Models**: Methods for tracking and managing machine learning models. ## Prerequisites: Ensure you have a Python environment and `virtualenv` installed. ## Outcome: By following this guide, you will complete a starter project, marking your entry into MLOps with ZenML. ## Additional Resources: Refer to the [SDK Docs](https://sdkdocs.zenml.io/) for internal ZenML functions and classes for further assistance. Prepare your development environment and begin your journey into MLOps with ZenML! ================================================== === File: docs/book/user-guide/starter-guide/create-an-ml-pipeline.md === ### ZenML Pipeline Overview ZenML enables the creation of modular and scalable machine learning (ML) pipelines by decoupling stages like data ingestion, preprocessing, and model evaluation into **Steps** integrated into an end-to-end **Pipeline**. This approach enhances reproducibility and efficiency in ML workflows. #### Installation Before starting, ensure ZenML is installed: ```shell pip install "zenml[server]" zenml login --local # Launches the dashboard locally ``` Run `zenml init` in your project root to configure ZenML for remote pipeline execution. ### Simple ML Pipeline Example Here’s a basic example of a ZenML pipeline: ```python from zenml import pipeline, step @step def load_data() -> dict: training_data = [[1, 2], [3, 4], [5, 6]] labels = [0, 1, 0] return {'features': training_data, 'labels': labels} @step def train_model(data: dict) -> None: total_features = sum(map(sum, data['features'])) total_labels = sum(data['labels']) print(f"Trained model using {len(data['features'])} data points. 
" f"Feature sum is {total_features}, label sum is {total_labels}") @pipeline def simple_ml_pipeline(): dataset = load_data() train_model(dataset) if __name__ == "__main__": run = simple_ml_pipeline() ``` Run the script with: ```bash $ python run.py ``` ### Dashboard Exploration After execution, view results in the ZenML Dashboard by running `zenml login --local`. The dashboard is typically accessible at [http://127.0.0.1:8237/](http://127.0.0.1:8237/). Log in with the username **"default"** (no password required) to explore pipeline components and execution history. ### Understanding Steps and Artifacts Each function executed in the pipeline is a `step`, connected by `artifacts` (returned objects). ZenML automatically stores and versions these artifacts, capturing all configurations for reproducibility. ### Expanding to a Full ML Workflow To create a complete ML workflow using the Iris dataset, start with the necessary imports: ```python from typing_extensions import Annotated, Tuple import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step ``` Install required packages: ```bash pip install matplotlib zenml integration install sklearn -y ``` ### Data Loader with Multiple Outputs Define a data loading step that returns multiple outputs: ```python @step def training_data_loader() -> Tuple[ Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) ``` ### Parameterized Training Step Create a training step for the SVC classifier: ```python @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[ Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"], ]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) ``` ### Combine Steps into a Pipeline Integrate the steps into a pipeline: ```python @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline(gamma=0.0015) ``` ### YAML Configuration You can configure pipeline runs using a YAML file: ```python training_pipeline = training_pipeline.with_options(config_path='/local/path/to/config.yaml') training_pipeline() ``` Example YAML file: ```yaml parameters: gamma: 0.01 ``` Generate a template config file with: ```python training_pipeline.write_run_configuration_template(path='/local/path/to/config.yaml') ``` ### Full Code Example Here’s the complete code for the ML pipeline: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.base import ClassifierMixin from sklearn.svm import SVC from zenml import pipeline, step @step def training_data_loader() -> Tuple[ Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"], ]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float 
= 0.001) -> Tuple[ Annotated[ClassifierMixin, "trained_model"], Annotated[float, "training_acc"], ]: model = SVC(gamma=gamma) model.fit(X_train.to_numpy(), y_train.to_numpy()) return model, model.score(X_train.to_numpy(), y_train.to_numpy()) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() ``` This concise summary captures the essential details of creating and managing ML pipelines using ZenML. ================================================== === File: docs/book/user-guide/starter-guide/track-ml-models.md === # ZenML Model Control Plane Overview ## ZenML Model Definition A **ZenML Model** is an entity that consolidates pipelines, artifacts, metadata, and business data, representing the business logic of an ML product. Key artifacts associated with a model include the technical model (files with weights and parameters), training data, and production predictions. ## Model Management Models are managed through the **Model Control Plane (MCP)**, accessible via the ZenML API, client, or the ZenML Pro dashboard. ### Listing Models - **CLI**: Use `zenml model list` to list all models. - **Dashboard**: Visualize models in the ZenML Pro dashboard. ## Configuring Models in Pipelines Models can be linked to pipelines, ensuring all artifacts generated during runs are associated with the specified model. This provides lineage tracking. ### Example Code ```python from zenml import pipeline, Model model = Model(name="iris_classifier", version=None, license="Apache 2.0", description="A classification model for the iris dataset.") @pipeline(model=model) def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() ``` ## Fetching Models in Pipelines Models can be accessed within steps using `get_step_context()` or `get_pipeline_context()`. ### Example Code ```python from zenml import get_step_context, step, pipeline @step def svc_trainer(X_train, y_train, gamma=0.001): model = get_step_context().model @pipeline(model=Model(name="iris_classifier", version="production")) def training_pipeline(gamma: float = 0.002): model = get_pipeline_context().model ``` ## Logging Metadata Models can log metadata using the `log_model_metadata` method, capturing key-value pairs for model performance. ### Example Code ```python from zenml import get_step_context, step, log_model_metadata @step def svc_trainer(X_train, y_train, gamma=0.001): model = get_step_context().model log_model_metadata(model_name="iris_classifier", metadata={"accuracy": float(accuracy)}) ``` ## Retrieving Metadata Metadata can be retrieved using the ZenML client. ### Example Code ```python from zenml.client import Client model = Client().get_model_version('iris_classifier') print(model.run_metadata["accuracy"].value) ``` ## Model Stages Models can exist in various stages: - **staging**: Ready for production. - **production**: Active in production. - **latest**: Most recent version. - **archived**: No longer relevant. 
### Example Code for Stages ```python model = Model(name="iris_classifier", version="latest") model.set_stage(stage="production", force=True) ``` ### CLI Commands for Stages ```shell zenml model version list --stage staging zenml model version update -s production ``` ## Conclusion ZenML's Model Control Plane provides robust features for managing ML models and their versions, enhancing traceability and reproducibility in ML workflows. For further details, refer to the dedicated Model Management guide. ================================================== === File: docs/book/user-guide/starter-guide/starter-project.md === ### Starter Project Overview This documentation outlines a simple starter project to apply foundational MLOps concepts, including pipelines, artifacts, and models. #### Getting Started 1. **Set Up Environment**: Create a fresh virtual environment and install dependencies: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` 2. **Initialize Project with ZenML Template**: ```bash mkdir zenml_starter cd zenml_starter zenml init --template starter --template-with-defaults pip install -r requirements.txt ``` **Alternative Method**: If the above steps fail, clone the ZenML starter example: ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/mlops_starter pip install -r requirements.txt zenml init ``` #### Learning Outcomes By following the accompanying Jupyter notebook or README file, you will execute three key pipelines: - **Feature Engineering Pipeline**: Loads and prepares data for training. - **Training Pipeline**: Trains a model using the preprocessed dataset. - **Batch Inference Pipeline**: Runs predictions on new data using the trained model. #### Conclusion and Next Steps This project marks the beginning of your MLOps journey with ZenML. Experiment with ZenML to solidify your understanding, and when ready, proceed to the [production guide](../production-guide/) for advanced topics. ================================================== === File: docs/book/user-guide/starter-guide/manage-artifacts.md === ### ZenML Artifact Management Overview ZenML provides a framework for managing and versioning data artifacts in machine learning workflows, ensuring reproducibility and traceability. This guide covers how to name, organize, and utilize artifacts effectively. #### Artifact Management in ZenML - **Automatic Versioning**: Artifacts produced by ZenML pipelines are automatically versioned and stored. The naming convention for unspecified outputs is `{pipeline_name}::{step_name}::output`. - **Custom Naming**: Use the `Annotated` object to assign human-readable names to outputs for better discoverability. ```python from typing_extensions import Annotated import pandas as pd from sklearn.datasets import load_iris from zenml import pipeline, step @step def training_data_loader() -> Annotated[pd.DataFrame, "iris_dataset"]: iris = load_iris(as_frame=True) return iris.get("frame") @pipeline def feature_engineering_pipeline(): training_data_loader() if __name__ == "__main__": feature_engineering_pipeline() ``` - **Manual Versioning**: Specify custom versions using `ArtifactConfig` for critical runs. ```python from zenml import step, ArtifactConfig @step def training_data_loader() -> Annotated[pd.DataFrame, ArtifactConfig(name="iris_dataset", version="raw_2023")]: ... ``` - **Metadata and Tags**: Extend artifacts with additional metadata and tags. 
```python from zenml import step, get_step_context, ArtifactConfig from typing_extensions import Annotated @step def annotation_approach() -> Annotated[str, ArtifactConfig(name="artifact_name", run_metadata={"key": "value"}, tags=["tag"])]: return "string" ``` #### Comparing Metadata Across Runs (Pro Feature) The ZenML Pro dashboard includes an Experiment Comparison tool for visualizing and analyzing metadata across runs. It offers: - **Table View**: Structured comparison of metadata. - **Parallel Coordinates View**: Identifies relationships between metadata parameters. #### Artifact Types Assigning types to artifacts enhances filtering and visualization in the dashboard. ```python from zenml import ArtifactConfig, step from zenml.enums import ArtifactType @step def trainer() -> Annotated[MyCustomModel, ArtifactConfig(artifact_type=ArtifactType.MODEL)]: return MyCustomModel(...) ``` #### Consuming External Artifacts Use `ExternalArtifact` to initialize artifacts from external data sources. ```python from zenml import ExternalArtifact, pipeline, step import numpy as np @step def print_data(data: np.ndarray): print(data) @pipeline def printing_pipeline(): data = ExternalArtifact(value=np.array([0])) print_data(data=data) if __name__ == "__main__": printing_pipeline() ``` #### Managing Non-ZenML Artifacts You can manage artifacts produced outside ZenML, such as predictions from deployed models. ```python from zenml.client import Client from zenml import save_artifact model = ... prediction = model.predict([[1, 1, 1, 1]]) save_artifact(prediction, name="iris_predictions") ``` #### Linking Existing Data Link existing data as ZenML artifacts to avoid unnecessary data movement. ```python import os from zenml.client import Client from zenml import register_artifact from pytorch_lightning import Trainer from uuid import uuid4 prefix = Client().active_stack.artifact_store.path default_root_dir = os.path.join(prefix, uuid4().hex) trainer = Trainer(default_root_dir=default_root_dir) trainer.fit(model) register_artifact(default_root_dir, name="all_my_model_checkpoints") ``` #### Logging Metadata Associate metadata with artifacts for better understanding and tracking. 
```python from zenml import step, log_artifact_metadata @step def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model")]: model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model ``` ### Example Code A complete example demonstrating the above concepts: ```python from typing import Optional, Tuple from typing_extensions import Annotated import numpy as np from sklearn.base import ClassifierMixin from sklearn.datasets import load_digits from sklearn.svm import SVC from zenml import ArtifactConfig, pipeline, step, log_artifact_metadata, save_artifact, load_artifact from zenml.client import Client @step def versioned_data_loader_step() -> Annotated[Tuple[np.ndarray, np.ndarray], ArtifactConfig(name="my_dataset")]: digits = load_digits() return (digits.images.reshape((len(digits.images), -1)), digits.target) @step def model_finetuner_step(model: ClassifierMixin, dataset: Tuple[np.ndarray, np.ndarray]) -> Annotated[ClassifierMixin, ArtifactConfig(name="my_model")]: model.fit(dataset[0], dataset[1]) accuracy = model.score(dataset[0], dataset[1]) log_artifact_metadata(metadata={"accuracy": float(accuracy)}) return model @pipeline def model_finetuning_pipeline(dataset_version: Optional[str] = None): client = Client() dataset = client.get_artifact_version(name_id_or_prefix="my_dataset", version=dataset_version) if dataset_version else versioned_data_loader_step() model = client.get_artifact_version(name_id_or_prefix="my_model") model_finetuner_step(model=model, dataset=dataset) def main(): untrained_model = SVC(gamma=0.001) save_artifact(untrained_model, name="my_model", version="1") model_finetuning_pipeline() model_finetuning_pipeline(dataset_version="1") if __name__ == "__main__": main() ``` This code demonstrates artifact management, versioning, and metadata logging within ZenML. For more details, refer to the ZenML documentation. ================================================== === File: docs/book/user-guide/starter-guide/cache-previous-executions.md === ### Summary of ZenML Caching Documentation **Overview of Caching in ZenML** ZenML facilitates iterative development of machine learning pipelines through caching. When a pipeline is rerun, ZenML uses cached outputs from previous runs if there are no changes in inputs, parameters, or code. Caching is enabled by default and helps save time and resources, especially when running pipelines remotely. **Key Points:** - **Caching Behavior:** Caching is automatic; outputs are stored in the artifact store. Steps are not re-executed if there are no changes. - **Client-Side Caching:** If running without a schedule, cached steps are computed on the client machine. To prevent client-side caching, set `ZENML_PREVENT_CLIENT_SIDE_CACHING=True`. - **Manual Caching Control:** Caching does not detect external changes automatically. Use `enable_cache=False` for steps that depend on external inputs. **Configuring Caching:** 1. **Pipeline Level:** Set caching in the `@pipeline` decorator. ```python @pipeline(enable_cache=False) def first_pipeline(...): """Pipeline with cache disabled""" ``` 2. **Runtime Control:** Override caching settings at runtime. ```python first_pipeline = first_pipeline.with_options(enable_cache=False) ``` 3. **Step Level:** Control caching for individual steps using the `@step` decorator. 
```python @step(enable_cache=False) def import_data_from_api(...): ... ``` **Code Example:** The following code illustrates caching behavior in a simple pipeline: ```python from typing_extensions import Tuple, Annotated import pandas as pd from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.svm import SVC from zenml import pipeline, step from zenml.logger import get_logger logger = get_logger(__name__) @step def training_data_loader() -> Tuple[Annotated[pd.DataFrame, "X_train"], Annotated[pd.DataFrame, "X_test"], Annotated[pd.Series, "y_train"], Annotated[pd.Series, "y_test"]]: iris = load_iris(as_frame=True) return train_test_split(iris.data, iris.target, test_size=0.2, random_state=42) @step def svc_trainer(X_train: pd.DataFrame, y_train: pd.Series, gamma: float = 0.001) -> Tuple[Annotated[SVC, "trained_model"], Annotated[float, "training_acc"]]: model = SVC(gamma=gamma) model.fit(X_train, y_train) return model, model.score(X_train, y_train) @pipeline def training_pipeline(gamma: float = 0.002): X_train, X_test, y_train, y_test = training_data_loader() svc_trainer(gamma=gamma, X_train=X_train, y_train=y_train) if __name__ == "__main__": training_pipeline() logger.info("First step cached, second not due to parameter change") training_pipeline(gamma=0.0001) svc_trainer = svc_trainer.with_options(enable_cache=False) logger.info("First step cached, second not due to settings") training_pipeline() logger.info("Caching disabled for the entire pipeline") training_pipeline.with_options(enable_cache=False)() ``` This example demonstrates how caching works, including scenarios where caching is disabled for specific steps or the entire pipeline. ================================================== === File: docs/book/user-guide/production-guide/configure-pipeline.md === ### Summary of ZenML Pipeline Configuration Documentation #### Overview This documentation explains how to configure a ZenML pipeline to add compute resources and manage dependencies through a YAML configuration file. #### Pipeline Configuration To configure the pipeline, the `run.py` script is executed, which includes: ```python pipeline_args["config_path"] = os.path.join(config_folder, "training_rf.yaml") training_pipeline_configured = training_pipeline.with_options(**pipeline_args) training_pipeline_configured() ``` The configuration file `training_rf.yaml` defines the pipeline's settings. #### YAML Configuration Breakdown 1. **Docker Settings**: ```yaml settings: docker: required_integrations: - sklearn requirements: - pyarrow ``` This section specifies Docker settings, including required integrations and pip requirements. 2. **Model Association**: ```yaml model: name: breast_cancer_classifier version: rf license: Apache 2.0 description: A breast cancer classifier tags: ["breast_cancer", "classifier"] ``` This section associates a ZenML model with the pipeline. 3. **Parameters**: ```yaml parameters: model_type: "rf" # Choose between rf/sgd ``` This defines parameters expected by the pipeline, such as `model_type`. #### Scaling Compute Resources To scale compute resources, modify the `training_rf.yaml` file: ```yaml settings: orchestrator: memory: 32 # in GB steps: model_trainer: settings: orchestrator: cpus: 8 ``` This configuration allocates 32 GB of memory for the pipeline and 8 CPU cores for the model trainer step. 
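Depending on the orchestrator, resources can also be requested in code rather than YAML through ZenML's generic `ResourceSettings`; a minimal sketch (the values are illustrative, and as noted below not every orchestrator honors these settings) might look like this:

```python
from zenml import pipeline, step
from zenml.config import ResourceSettings


# Request 8 CPUs and 8 GB of memory for this specific step.
@step(settings={"resources": ResourceSettings(cpu_count=8, memory="8GB")})
def model_trainer() -> None:
    print("training the model...")


# Request 32 GB of memory as the default for all steps in the pipeline.
@pipeline(settings={"resources": ResourceSettings(memory="32GB")})
def training_pipeline():
    model_trainer()
```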
##### Azure Users For Azure users using Kubernetes, the configuration should be: ```yaml settings: resources: memory: "32GB" steps: model_trainer: settings: resources: memory: "8GB" ``` #### Running the Pipeline To execute the pipeline with the new configuration, run: ```bash python run.py --training-pipeline ``` #### Additional Information - Not all orchestrators support `ResourceSettings`. - For more details on settings and GPU attachment, refer to the ZenML documentation on runtime configuration and GPU training. This concise overview captures the essential details needed for configuring and scaling a ZenML pipeline while maintaining clarity on the YAML structure and usage. ================================================== === File: docs/book/user-guide/production-guide/remote-storage.md === # Transitioning to Remote Artifact Storage ## Connecting Remote Storage Remote storage allows for cloud-based artifact storage, enhancing accessibility, collaboration, and scalability for production workloads. Artifacts are materialized in a central storage location, facilitating team collaboration. ### Provisioning and Registering a Remote Artifact Store ZenML supports various artifact store flavors. Below are instructions for major cloud providers: #### AWS 1. Install AWS CLI: [AWS CLI Documentation](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html). 2. Install ZenML S3 integration: ```shell zenml integration install s3 -y ``` 3. Register S3 Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f s3 --path=s3://bucket-name ``` #### GCP 1. Install Google Cloud CLI: [Google Cloud Documentation](https://cloud.google.com/sdk/docs/install-sdk). 2. Install ZenML GCP integration: ```shell zenml integration install gcp -y ``` 3. Register GCS Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f gcp --path=gs://bucket-name ``` #### Azure 1. Install Azure CLI: [Azure Documentation](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli). 2. Install ZenML Azure integration: ```shell zenml integration install azure -y ``` 3. Register Azure Artifact Store: ```shell zenml artifact-store register cloud_artifact_store -f azure --path=az://container-name ``` #### Other For other environments, consider using cloud-agnostic storage solutions like Minio or creating a custom stack component. ## Configuring Permissions with Service Connectors Service connectors manage credentials for stack components to access cloud infrastructure securely. ### Creating Service Connectors #### AWS ```shell AWS_PROFILE= zenml service-connector register cloud_connector --type aws --auto-configure ``` #### GCP ```shell zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@ --project_id= --generate_temporary_tokens=False ``` #### Azure ```shell zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` ### Connecting Service Connector to Artifact Store ```shell zenml artifact-store connect cloud_artifact_store --connector cloud_connector ``` ## Running a Pipeline on a Cloud Stack 1. Register a new stack: ```shell zenml stack register local_with_remote_storage -o default -a cloud_artifact_store ``` 2. Set the stack active: ```shell zenml stack set local_with_remote_storage ``` 3.
Run the training pipeline: ```shell python run.py --training-pipeline ``` Artifacts will be stored in remote storage, accessible for future runs. Colleagues can connect to the same ZenML server to access cached pipeline artifacts. ### Listing Artifact Versions ```shell zenml artifact version list --created="gte:$(date -v-15M '+%Y-%m-%d %H:%M:%S')" ``` By transitioning to remote storage, you enhance your MLOps workflow, making artifacts part of a cloud-based ecosystem ready for collaboration. ================================================== === File: docs/book/user-guide/production-guide/README.md === # Production Guide Summary The ZenML production guide is designed for ML practitioners looking to implement MLOps in a workplace setting, building on the concepts from the Starter guide. It focuses on transitioning from local pipeline execution to deploying pipelines in the cloud. ## Key Topics Covered: - **Deploying ZenML**: Instructions for setting up ZenML in a production environment. - **Understanding Stacks**: Overview of ZenML stacks and their components. - **Connecting Remote Storage**: Guidance on integrating cloud storage solutions. - **Orchestrating on the Cloud**: Techniques for managing workflows in cloud environments. - **Configuring the Pipeline for Scalability**: Best practices for scaling compute resources. - **Connecting a Code Repository**: Steps to link your codebase with ZenML. ## Prerequisites: - A Python environment with `virtualenv` installed. - A major cloud provider (AWS, GCP, Azure) with the respective CLI tools installed and authorized. By following this guide, you will complete an end-to-end MLOps project, serving as a model for your own implementations. **Note**: For internal ZenML functions and classes, refer to the [SDK Docs](https://sdkdocs.zenml.io/) for additional support. ================================================== === File: docs/book/user-guide/production-guide/cloud-orchestration.md === ### Summary: Orchestrate on the Cloud with ZenML ZenML enables the execution of MLOps pipelines in a cloud environment, leveraging cloud scalability and robustness. Key components include: - **Orchestrator**: Manages workflow and execution of pipelines. - **Container Registry**: Stores Docker container images. - **Remote Storage**: Complements the cloud stack. #### Basic Cloud Stack Setup The recommended starting orchestrator is **Skypilot**, which provisions a VM on a public cloud to execute pipelines. ZenML uses **Docker** to package code and dependencies into images, which are pushed to the container registry for the orchestrator to pull during execution. **Pipeline Execution Sequence**: 1. User runs a pipeline via `run.py`, which defines the steps. 2. The client fetches stack info and configuration. 3. An image is built and pushed to the container registry. 4. The orchestrator creates a VM to run the pipeline. 5. The orchestrator pulls the image from the container registry. 6. Artifacts are stored in a cloud-based artifact store. 7. Status updates are sent back to the ZenML server. #### Provisioning and Registering Components **AWS Setup**: 1. Install AWS and Skypilot integrations: ```shell zenml integration install aws skypilot_aws -y ``` 2. Register a service connector: ```shell AWS_PROFILE= zenml service-connector register cloud_connector --type aws --auto-configure ``` 3. Register the Skypilot orchestrator: ```shell zenml orchestrator register cloud_orchestrator -f vm_aws zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` 4. 
Register an AWS container registry: ```shell zenml container-registry register cloud_container_registry -f aws --uri=.dkr.ecr..amazonaws.com zenml container-registry connect cloud_container_registry --connector cloud_connector ``` **GCP Setup**: 1. Install GCP and Skypilot integrations: ```shell zenml integration install gcp skypilot_gcp -y ``` 2. Register a service connector: ```shell zenml service-connector register cloud_connector --type gcp --auth-method service-account --service_account_json=@ --project_id= --generate_temporary_tokens=False ``` 3. Register the Skypilot orchestrator: ```shell zenml orchestrator register cloud_orchestrator -f vm_gcp zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` 4. Register a GCP container registry: ```shell zenml container-registry register cloud_container_registry -f gcp --uri=gcr.io/ zenml container-registry connect cloud_container_registry --connector cloud_connector ``` **Azure Setup** (using Kubernetes): 1. Install Azure and Kubernetes integrations: ```shell zenml integration install azure kubernetes -y ``` 2. Register a service connector: ```shell zenml service-connector register cloud_connector --type azure --auth-method service-principal --tenant_id= --client_id= --client_secret= ``` 3. Register a Kubernetes orchestrator: ```shell zenml orchestrator register cloud_orchestrator --flavor kubernetes zenml orchestrator connect cloud_orchestrator --connector cloud_connector ``` 4. Register an Azure container registry: ```shell zenml container-registry register cloud_container_registry -f azure --uri=.azurecr.io zenml container-registry connect cloud_container_registry --connector cloud_connector ``` #### Running a Pipeline on Cloud Stack After registering components, register a new stack: ```shell zenml stack register minimal_cloud_stack -o cloud_orchestrator -a cloud_artifact_store -c cloud_container_registry ``` Set the stack active: ```shell zenml stack set minimal_cloud_stack ``` Run the training pipeline: ```shell python run.py --training-pipeline ``` The pipeline will execute in the cloud, with logs streamed back to the user. For more stack components and configurations, refer to the [Component Guide](../../component-guide/README.md). ================================================== === File: docs/book/user-guide/production-guide/understand-stacks.md === # Summary of Switching Infrastructure Backend in ZenML ## Understanding Stacks - A **stack** is the configuration of tools and infrastructure for running machine learning pipelines in ZenML. - Pipelines run on a **default** stack if no configuration is specified. - ZenML separates code from configuration, allowing easy switching of environments without code changes. ## Default Stack - Use `zenml stack describe` to view details of the active stack: ```bash zenml stack describe ``` - Use `zenml stack list` to see all registered stacks: ```bash zenml stack list ``` ## Stack Components - A stack consists of at least an **orchestrator** and an **artifact store**. - **Orchestrator**: Executes pipeline code (default is a local Python thread). ```bash zenml orchestrator list ``` - **Artifact Store**: Persists step outputs (default is local).
```bash zenml artifact-store list ``` ## Registering a Stack ### Create an Artifact Store - Register a new local artifact store: ```bash zenml artifact-store register my_artifact_store --flavor=local ``` ### Create a Local Stack - Register a new stack using the created artifact store: ```bash zenml stack register a_new_local_stack -o default -a my_artifact_store ``` ## Switching Stacks - Use the VS Code extension to view and switch stacks easily. ## Running a Pipeline on the New Local Stack 1. Set the new stack as active: ```bash zenml stack set a_new_local_stack ``` 2. Run the pipeline: ```bash python run.py --training-pipeline ``` ## Additional Notes - For more details on ZenML functions, refer to the [SDK Docs](https://sdkdocs.zenml.io/). ================================================== === File: docs/book/user-guide/production-guide/deploying-zenml.md === # Deploying ZenML Deploying ZenML is essential for moving to production. Initially, ZenML runs locally using an SQLite database to store metadata (pipelines, models, artifacts). For production, you need to deploy the server centrally to facilitate collaboration and interaction among infrastructure components. ## Deployment Options ### Option 1: ZenML Pro Trial - Sign up for a free trial of [ZenML Pro](https://zenml.io/pro), a managed SaaS solution with one-click deployment. - If the ZenML Python client is installed, connect to the trial instance using: ```bash zenml login --pro ``` - ZenML Pro includes additional features and a dashboard, with the option to revert to self-hosting later. ### Option 2: Self-host on Cloud Provider - ZenML is open source and can be self-hosted in a Kubernetes cluster. Create a cluster using documentation from your cloud provider: - [AWS](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html) - [Azure](https://learn.microsoft.com/en-us/azure/aks/learn/quick-kubernetes-deploy-portal?tabs=azure-cli) - [GCP](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster#before_you_begin) ## Connecting to Deployed ZenML To connect your local ZenML client to the ZenML Server, use: ```bash zenml login ``` This command initiates a validation process in your browser. Once connected, all metadata will be centrally tracked. To revert to local ZenML, use: ```bash zenml logout ``` ## Further Resources - [Deploying ZenML](../../getting-started/deploying-zenml/README.md): Overview of deployment options and architecture. - [Full how-to guides](../../getting-started/deploying-zenml/README.md): Guides for deploying ZenML on various platforms. ================================================== === File: docs/book/user-guide/production-guide/end-to-end.md === ### End-to-End MLOps Project with ZenML This documentation outlines the steps to create an end-to-end MLOps project using ZenML, integrating various advanced concepts learned throughout the course. #### Key Concepts Covered: - Deploying ZenML - Abstracting infrastructure with stacks - Connecting to remote storage - Cloud orchestration - Configuring scalable pipelines - Integrating with a Git repository #### Getting Started 1. **Set Up Environment**: Create a new virtual environment and install dependencies: ```bash pip install "zenml[templates,server]" notebook zenml integration install sklearn -y ``` 2. 
**Initialize Project**: Use ZenML templates to set up the project: ```bash mkdir zenml_batch_e2e cd zenml_batch_e2e zenml init --template e2e_batch --template-with-defaults pip install -r requirements.txt ``` **Alternative Method**: If the above doesn't work, clone the ZenML example: ```bash git clone --depth 1 git@github.com:zenml-io/zenml.git cd zenml/examples/e2e pip install -r requirements.txt zenml init ``` #### Learning Outcomes The e2e project serves as a comprehensive template for major ZenML use cases, featuring: - A collection of steps and pipelines - A simple CLI - Core concepts for supervised ML with batch predictions As you work through the template, consider running pipelines on a remote cloud stack and using a tracked Git repository to reinforce your learning. #### Conclusion By completing this guide, you are equipped to develop your own pipelines and stacks with ZenML. For further learning, explore the [how-to section](../../how-to/pipeline-development/build-pipelines/README.md). Good luck with your MLOps journey! ================================================== === File: docs/book/user-guide/production-guide/ci-cd.md === ### Managing the Lifecycle of a ZenML Pipeline with CI/CD #### Overview This documentation outlines the setup of Continuous Integration and Delivery (CI/CD) for ZenML pipelines, transitioning from local execution to a centralized workflow engine integrated with CI. This allows for automated testing and deployment of code changes after peer review. #### CI/CD Setup with GitHub Actions To implement CI/CD, we will use GitHub Actions. The [ZenML Gitflow Repository](https://github.com/zenml-io/zenml-gitflow/) serves as a template for automating CI/CD, continuous model training, and deployment. #### API Key Configuration Create an API key in ZenML for machine-to-machine connections: ```bash zenml service-account create github_action_api_key ``` This command will return a unique API key, which must be stored securely as it will not be shown again. #### Setting Up GitHub Secrets Store the `ZENML_API_KEY` as a GitHub secret for use in GitHub Actions. Additional environment variables can also be set as needed. #### Optional: Staging and Production Stacks You may want different stacks for staging and production. This involves using remote orchestration and artifact storage. You can parameterize pipelines for different data sources and use distinct configuration files for each environment. #### Triggering a Pipeline on Pull Requests To ensure only tested code is deployed, configure a GitHub Action to run the pipeline on pull requests. 
Use the following YAML snippet to trigger actions on specific branches: ```yaml on: pull_request: branches: [ staging, main ] ``` #### Workflow Configuration Set important environment variables in your GitHub Actions workflow: ```yaml jobs: run-staging-workflow: runs-on: run-zenml-pipeline env: ZENML_STORE_URL: ${{ secrets.ZENML_HOST }} ZENML_STORE_API_KEY: ${{ secrets.ZENML_API_KEY }} ZENML_STACK: stack_name ZENML_GITHUB_SHA: ${{ github.event.pull_request.head.sha }} ZENML_GITHUB_URL_PR: ${{ github.event.pull_request._links.html.href }} ``` #### Pipeline Execution Steps Include the following steps in your GitHub Action to check out code, set up Python, install requirements, connect to ZenML, and run the pipeline: ```yaml steps: - name: Check out repository code uses: actions/checkout@v3 - uses: actions/setup-python@v4 with: python-version: '3.9' - name: Install requirements run: pip3 install -r requirements.txt - name: Confirm ZenML client is connected run: zenml status - name: Set stack run: zenml stack set ${{ env.ZENML_STACK }} - name: Run pipeline run: python run.py --pipeline end-to-end --dataset production --version ${{ env.ZENML_GITHUB_SHA }} --github-pr-url ${{ env.ZENML_GITHUB_URL_PR }} ``` This configuration will automatically trigger the action when changes are pushed to a pull request branch. #### Optional: Commenting Metrics on Pull Requests You can enhance your workflow by adding functionality to comment on pull requests with pipeline metrics. Refer to the [template](https://github.com/zenml-io/zenml-gitflow/blob/main/.github/workflows/pipeline_run.yaml#L87-L99) for implementation. This summary provides a concise overview of setting up CI/CD for ZenML pipelines, ensuring key technical details are retained for effective implementation. ================================================== === File: docs/book/user-guide/production-guide/connect-code-repository.md === ### Summary of Connecting a Git Repository to ZenML **Overview**: Connecting a Git repository to ZenML enhances MLOps by optimizing Docker builds, managing repository changes, and facilitating collaboration. #### Pipeline Execution Flow: 1. Trigger a pipeline run locally; ZenML parses the `@pipeline` function. 2. Local client requests stack info from ZenML server. 3. Client detects the Git repository and requests its info. 4. Instead of a new Docker image, the client checks for an existing image based on the Git commit hash. 5. The orchestrator sets up the execution environment in the cloud. 6. Code is downloaded from the Git repository, using the existing Docker image. 7. Pipeline steps execute, storing artifacts in the cloud. 8. Execution status and metadata are reported back to the ZenML server. #### Benefits: - Avoids redundant builds. - Enables simultaneous team collaboration. - ZenML tracks versions to ensure the correct code is used for each run. ### Creating a GitHub Repository: 1. Sign in to GitHub. 2. Click "+" and select "New repository." 3. Name the repository, set visibility, and optionally add a README or .gitignore. 4. Click "Create repository." **Push Local Code to GitHub**: ```sh git init git add . git commit -m "Initial commit" git remote add origin https://github.com/YOUR_USERNAME/YOUR_REPOSITORY_NAME.git git push -u origin master ``` *Replace `YOUR_USERNAME` and `YOUR_REPOSITORY_NAME` accordingly.* ### Linking to ZenML: 1. Obtain a GitHub Personal Access Token (PAT): - Go to GitHub settings > Developer settings > Personal access tokens > Generate new token. 
- Name the token, select specific repository access with `contents` read-only, and generate it. 2. Install GitHub integration and register the repository: ```sh zenml integration install github zenml code-repository register --type=github \ --owner= --repository= \ --token= ``` *Fill in the repository owner, repository name, and access token accordingly.* ### Running the Training Pipeline: ```sh # First run builds the Docker image python run.py --training-pipeline # Subsequent runs skip Docker building python run.py --training-pipeline ``` For more details, refer to the ZenML Git Integration documentation. ================================================== === File: docs/book/user-guide/llmops-guide/README.md === # ZenML LLMOps Guide Summary The ZenML LLMOps Guide provides a framework for integrating Large Language Models (LLMs) into MLOps workflows. It is aimed at ML practitioners and MLOps engineers seeking to enhance their pipelines with LLM capabilities while ensuring robustness and scalability. ## Key Topics Covered: - **RAG with ZenML**: Introduction to Retrieval-Augmented Generation (RAG) and its implementation. - **Data Handling**: - Data ingestion and preprocessing. - Generation of embeddings and storing them in a vector database. - **Inference Pipeline**: Basic RAG inference setup. - **Evaluation Metrics**: - Evaluation strategies for retrieval and generation. - Practical evaluation techniques. - **Reranking**: - Understanding and implementing reranking to improve retrieval performance. - Evaluating reranking effectiveness. - **Embedding Fine-tuning**: - Techniques for improving retrieval through fine-tuning embeddings. - Synthetic data generation and evaluation of fine-tuned embeddings. - **LLM Fine-tuning**: - Strategies for fine-tuning LLMs, including using 🤗 Accelerate. - Deployment of fine-tuned models and evaluation metrics. ## Practical Application: The guide includes a use case focused on creating a question-answering system for ZenML, illustrating the transition from a basic RAG pipeline to more complex setups involving fine-tuning and reranking. ## Prerequisites: Users should have a Python environment with ZenML installed and familiarity with concepts from the Starter and Production Guides. By following this guide, users will gain a comprehensive understanding of leveraging LLMs in MLOps workflows, enabling the development of scalable and maintainable applications. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/embeddings-generation.md === ### Generating Embeddings for Retrieval This section details how to generate embeddings to enhance retrieval performance in a Retrieval-Augmented Generation (RAG) pipeline. Embeddings are vector representations capturing the semantic meaning of data in a high-dimensional space, allowing for efficient retrieval of relevant data chunks based on similarity to user queries. #### Key Points: - **Embeddings**: High-dimensional vectors that represent data semantically, generated using models like `sentence-transformers`. - **Purpose**: To improve retrieval accuracy beyond simple keyword searches, especially for complex queries. - **Model Used**: `sentence-transformers/all-MiniLM-L12-v2`, which generates 384-dimensional embeddings.
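To make the idea of similarity-based retrieval concrete before looking at the pipeline step itself, the short sketch below (the example strings are invented for illustration) embeds a query and two chunks with the same model and scores them with cosine similarity; the chunk about embeddings should receive the higher score:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

chunks = [
    "ZenML stacks combine an orchestrator with an artifact store.",
    "Embeddings are vector representations that capture the semantic meaning of text.",
]
query = "How is the meaning of a document represented for retrieval?"

# Encode both the corpus chunks and the query into 384-dimensional vectors.
chunk_embeddings = model.encode(chunks)
query_embedding = model.encode(query)

# Cosine similarity between the query and each chunk; higher means more relevant.
print(util.cos_sim(query_embedding, chunk_embeddings))
```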
#### Code for Generating Embeddings ```python from typing import Annotated, List import numpy as np from sentence_transformers import SentenceTransformer from structures import Document from zenml import ArtifactConfig, log_artifact_metadata, step @step def generate_embeddings(split_documents: List[Document]) -> Annotated[List[Document], ArtifactConfig(name="documents_with_embeddings")]: model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2") log_artifact_metadata(artifact_name="embeddings", metadata={"embedding_type": "sentence-transformers/all-MiniLM-L12-v2", "embedding_dimensionality": 384}) embeddings = model.encode([doc.page_content for doc in split_documents]) for doc, embedding in zip(split_documents, embeddings): doc.embedding = embedding return split_documents ``` #### Visualization of Embeddings To visualize the embeddings, dimensionality reduction techniques like t-SNE and UMAP can be employed. This helps in understanding how similar chunks are clustered based on their semantic meaning. ##### Code for Visualization ```python import matplotlib.pyplot as plt import numpy as np from sklearn.manifold import TSNE import umap from zenml.client import Client def visualize_embeddings(embeddings, parent_sections, method='tsne'): if method == 'tsne': embeddings_2d = TSNE(n_components=2).fit_transform(embeddings) else: embeddings_2d = umap.UMAP(n_components=2).fit_transform(embeddings) plt.figure(figsize=(8, 8)) for section in set(parent_sections): mask = [section == ps for ps in parent_sections] plt.scatter(embeddings_2d[mask, 0], embeddings_2d[mask, 1], label=section) plt.title(f"{method.upper()} Visualization") plt.legend() plt.show() ``` #### Conclusion Embeddings are generated and stored as artifacts for modularity, allowing future flexibility in the retrieval process. The next steps will involve storing these embeddings in a vector database for efficient retrieval. For complete code examples, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/README.md === ### RAG Pipelines with ZenML Retrieval-Augmented Generation (RAG) combines retrieval-based and generation-based models, enhancing the capabilities of Large Language Models (LLMs). This guide covers setting up RAG pipelines with ZenML, focusing on: - **Purpose of RAG**: Addresses limitations of LLMs, which can produce incorrect responses, especially with ambiguous prompts. Most LLMs handle limited text, while advanced models like Google's Gemini 1.5 Pro can manage up to 1 million tokens. - **Key Components**: 1. **Data Ingestion and Preprocessing**: Preparing data for the RAG pipeline. 2. **Embeddings**: Representing data for retrieval mechanisms. 3. **Vector Database**: Storing embeddings for efficient access. 4. **Artifact Tracking**: Using ZenML to track RAG-related artifacts. The guide culminates in a demonstration of these components working together for basic RAG inference. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/understanding-rag.md === ### Summary of Retrieval-Augmented Generation (RAG) **Overview of RAG**: Retrieval-Augmented Generation (RAG) enhances the capabilities of Large Language Models (LLMs) by integrating a retrieval mechanism that fetches relevant documents from a large corpus to inform response generation. 
This technique addresses LLM limitations, such as generating incorrect responses and handling large text inputs. **RAG Pipeline Process**: 1. **Retriever**: Identifies relevant documents from a corpus. 2. **Generator**: Produces a response based on these documents. This method is effective for tasks requiring contextual understanding, such as question answering, summarization, and dialogue generation. It reduces the risk of incorrect responses and optimizes token usage by focusing on pertinent documents. **Benefits of RAG**: - **Cost-Effectiveness**: More efficient than pure generation models, especially in resource-constrained environments. - **Contextual Relevance**: Ensures responses are grounded in relevant information. **When to Use RAG**: RAG is ideal for generating long-form responses that require context and when a large corpus of relevant documents is available. It is a practical starting point for exploring LLMs due to its manageable data and resource requirements. **Integration with ZenML**: ZenML facilitates the creation of RAG pipelines, combining retrieval and generation models. Key features include: - **Data Ingestion**: Tools for managing document indexing and RAG artifacts. - **Scalability**: Ability to handle larger datasets via cloud deployment and scalable vector stores. - **Artifact Tracking**: Monitor hyperparameters, model weights, and performance metrics through the ZenML dashboard. - **Reproducibility**: Easily rerun pipelines to update documents or parameters while preserving previous artifact versions. - **Maintainability**: Modular pipeline design allows for easy updates and experimentation. - **Collaboration**: Share pipelines and insights with team members. ZenML supports transitioning to more complex RAG setups, including fine-tuning embeddings and LLMs, while maintaining a clear structure for pipeline management. ### Key Advantages of ZenML: - **Reproducibility**: Update and compare different pipeline versions. - **Scalability**: Handle larger corpora efficiently. - **Artifact Tracking**: Monitor and associate metadata with generated artifacts. - **Maintainability**: Modular design for easy updates. - **Collaboration**: Facilitate teamwork and sharing of insights. This summary provides a concise understanding of RAG and its implementation within the ZenML ecosystem, setting the stage for further exploration into advanced RAG techniques. ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/storing-embeddings-in-a-vector-database.md === ### Summary: Storing Embeddings in a Vector Database This documentation outlines the process of storing embeddings in a vector database, specifically PostgreSQL, for efficient retrieval based on similarity to queries. Storing embeddings avoids the need to regenerate them each time a document is retrieved. #### Key Points: - **Vector Database Choice**: PostgreSQL is used due to its scalability and efficiency for high-dimensional vectors. Other vector databases can also be considered (see [comparison site](https://superlinked.com/vector-db-comparison/)). - **Setup Instructions**: For setting up PostgreSQL, refer to the [repository instructions](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). - **Connection and Interaction**: Use the `psycopg2` package to connect and execute SQL commands. 
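The `index_generator` step shown below relies on a `get_db_conn()` helper from the project repository; a minimal sketch of such a helper using `psycopg2` (the environment-variable names are assumptions, not the project's actual configuration) could look like:

```python
import os

import psycopg2


def get_db_conn() -> psycopg2.extensions.connection:
    """Open a connection to the PostgreSQL instance holding the embeddings."""
    return psycopg2.connect(
        host=os.environ["PG_HOST"],
        port=os.environ.get("PG_PORT", "5432"),
        user=os.environ["PG_USER"],
        password=os.environ["PG_PASSWORD"],
        dbname=os.environ.get("PG_DATABASE", "rag"),
    )
```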
#### Code Overview: ```python import math from typing import List from structures import Document from zenml import step from zenml.logger import get_logger logger = get_logger(__name__) @step def index_generator(documents: List[Document]) -> None: conn = None try: conn = get_db_conn() with conn.cursor() as cur: cur.execute("CREATE EXTENSION IF NOT EXISTS vector") conn.commit() cur.execute(f""" CREATE TABLE IF NOT EXISTS embeddings ( id SERIAL PRIMARY KEY, content TEXT, token_count INTEGER, embedding VECTOR({EMBEDDING_DIMENSIONALITY}), filename TEXT, parent_section TEXT, url TEXT ); """) conn.commit() for doc in documents: cur.execute("SELECT COUNT(*) FROM embeddings WHERE content = %s", (doc.page_content,)) if cur.fetchone()[0] == 0: cur.execute(""" INSERT INTO embeddings (content, token_count, embedding, filename, parent_section, url) VALUES (%s, %s, %s, %s, %s, %s)""", (doc.page_content, doc.token_count, doc.embedding.tolist(), doc.filename, doc.parent_section, doc.url)) conn.commit() cur.execute("SELECT COUNT(*) FROM embeddings;") num_records = cur.fetchone()[0] num_lists = int(max(num_records / 1000, 10)) if num_records <= 1000000 else int(math.sqrt(num_records)) cur.execute(f"CREATE INDEX IF NOT EXISTS embeddings_idx ON embeddings USING ivfflat (embedding vector_cosine_ops) WITH (lists = {num_lists});") conn.commit() except Exception as e: logger.error(f"Error in index_generator: {e}") raise finally: if conn: conn.close() ``` #### Functionality: - Connects to the database and creates the `vector` extension. - Creates an `embeddings` table if it doesn't exist. - Inserts documents and their embeddings, checking for duplicates. - Calculates index parameters for optimal performance and creates an index using the `ivfflat` method with `vector_cosine_ops`. #### Considerations: - Decide when to update embeddings based on data changes; the current implementation only adds new embeddings. - For large datasets, consider running the step on a GPU for improved performance. #### Next Steps: After storing embeddings, the next step is to retrieve relevant documents based on queries, facilitating a powerful question-answering system. For the complete code and additional details, visit the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/basic-rag-inference-pipeline.md === ### Simple RAG Inference This documentation outlines how to use RAG (Retrieval-Augmented Generation) components to generate responses from an index store of documents. The process involves querying the index store and utilizing an LLM (Large Language Model) to generate answers based on the retrieved documents. #### Running the Inference Query To execute a query, use the following command: ```bash python run.py --rag-query "how do I use a custom materializer inside my own zenml steps? i.e. how do I set it? inside the @step decorator?" --model=gpt4 ``` #### Inference Pipeline Code The inference process is encapsulated in the `process_input_with_retrieval` function: ```python def process_input_with_retrieval(input: str, model: str = OPENAI_MODEL, n_items_retrieved: int = 5) -> str: delimiter = "```" related_docs = get_topn_similar_docs(get_embeddings(input), get_db_conn(), n=n_items_retrieved) system_message = f"""You are a friendly chatbot. You can answer questions about ZenML, its features, and use cases. Respond concisely and technically. Use only ZenML documentation for context.
If unsure, say so.""" messages = [ {"role": "system", "content": system_message}, {"role": "user", "content": f"{delimiter}{input}{delimiter}"}, {"role": "assistant", "content": "Relevant ZenML documentation:\n" + "\n".join(doc[0] for doc in related_docs)}, ] logger.debug("CONTEXT USED\n\n", messages[2]["content"], "\n\n") return get_completion_from_messages(messages, model=model) ``` #### Document Retrieval The `get_topn_similar_docs` function retrieves similar documents based on the query embedding: ```python def get_topn_similar_docs(query_embedding: List[float], conn: psycopg2.extensions.connection, n: int = 5) -> List[Tuple]: embedding_array = np.array(query_embedding) register_vector(conn) cur = conn.cursor() cur.execute(f"SELECT content FROM embeddings ORDER BY embedding <=> %s LIMIT {n}", (embedding_array,)) return cur.fetchall() ``` This function leverages `pgvector` for efficient document retrieval by similarity. #### Generating Responses The `get_completion_from_messages` function generates a completion response using `litellm`, which supports multiple LLMs: ```python def get_completion_from_messages(messages, model=OPENAI_MODEL, temperature=0.4, max_tokens=1000): model = MODEL_NAME_MAP.get(model, model) completion_response = litellm.completion(model=model, messages=messages, temperature=temperature, max_tokens=max_tokens) return completion_response.choices[0].message.content ``` Using `litellm` allows for flexibility in experimenting with different LLMs without rewriting code. ### Conclusion This basic RAG inference pipeline retrieves relevant text chunks based on a query, providing a foundation for more complex RAG setups. Future improvements will focus on fine-tuning embeddings to enhance retrieval performance, especially with diverse document sets. For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and specifically the [`llm_utils.py` file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/utils/llm_utils.py). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/data-ingestion.md === ### Summary of Data Ingestion and Preprocessing for RAG Pipelines with ZenML This documentation outlines the process of ingesting and preprocessing data for Retrieval-Augmented Generation (RAG) pipelines using ZenML. The initial step involves gathering a corpus of documents and relevant metadata to train retriever and generator models. ZenML facilitates integration with various tools for managing data ingestion, preprocessing, and indexing. #### URL Scraping Step A ZenML step can be created to scrape URLs from ZenML documentation. The following code demonstrates how to implement a URL scraper: ```python from typing import List from typing_extensions import Annotated from zenml import log_artifact_metadata, step from steps.url_scraping_utils import get_all_pages @step def url_scraper( docs_url: str = "https://docs.zenml.io", repo_url: str = "https://github.com/zenml-io/zenml", website_url: str = "https://zenml.io", ) -> Annotated[List[str], "urls"]: """Generates a list of relevant URLs to scrape.""" docs_urls = get_all_pages(docs_url) log_artifact_metadata({"count": len(docs_urls)}) return docs_urls ``` The `get_all_pages` function crawls the documentation site to retrieve unique URLs, focusing on the latest releases to ensure relevance. 
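The real `get_all_pages` implementation lives in the project's `url_scraping_utils` module; a simplified sketch of what such a crawler could look like (a breadth-first walk restricted to the docs domain, with an arbitrary page limit, not the project's actual code) is shown below:

```python
from typing import List, Set
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def get_all_pages(base_url: str, max_pages: int = 500) -> List[str]:
    """Collect unique documentation URLs reachable from base_url."""
    seen: Set[str] = set()
    queue = [base_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        for link in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            target = urljoin(url, link["href"]).split("#")[0]
            # Only follow links that stay on the documentation domain.
            if urlparse(target).netloc == urlparse(base_url).netloc and target not in seen:
                queue.append(target)
    return sorted(seen)
```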
#### Document Loading Once URLs are obtained, the `unstructured` library can be used to load and parse the pages: ```python from typing import List from unstructured.partition.html import partition_html from zenml import step @step def web_url_loader(urls: List[str]) -> List[str]: """Loads documents from a list of URLs.""" return ["\n\n".join(map(str, partition_html(url))) for url in urls] ``` This step simplifies the extraction of text content from HTML, ensuring efficiency in processing. #### Data Preprocessing After loading documents, they must be preprocessed into manageable chunks suitable for RAG pipelines. The following code illustrates how to split documents into chunks: ```python import logging from typing import Annotated, List from utils.llm_utils import split_documents from zenml import ArtifactConfig, log_artifact_metadata, step logging.basicConfig(level=logging.INFO) @step(enable_cache=False) def preprocess_documents(documents: List[str]) -> Annotated[List[str], ArtifactConfig(name="split_chunks")]: """Preprocesses documents by splitting them into chunks.""" log_artifact_metadata({"chunk_size": 500, "chunk_overlap": 50}) return split_documents(documents, chunk_size=500, chunk_overlap=50) ``` The chunk size is set to 500 with a 50-character overlap to maintain context. This approach balances the need for efficient processing and retrieval. #### Additional Considerations Users can further refine their preprocessing by cleaning text, managing code snippets, or extracting metadata based on the specific requirements of their data and use case. For complete code and additional details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and the associated [steps code](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide/steps/). ================================================== === File: docs/book/user-guide/llmops-guide/rag-with-zenml/rag-85-loc.md === ### Summary of RAG Pipeline Implementation Documentation This documentation provides a concise implementation of a Retrieval-Augmented Generation (RAG) pipeline in 85 lines of Python code. The pipeline performs the following tasks: 1. **Data Loading**: Utilizes a fictional dataset about "ZenML World" as the corpus. 2. **Text Processing**: Splits the text into chunks and tokenizes it (i.e., splits into words). 3. **Query Handling**: Accepts a query and retrieves the most relevant text chunks from the corpus. 4. **Answer Generation**: Uses OpenAI's GPT-3.5 model to generate answers based on the retrieved chunks. ### Key Functions - **`preprocess_text(text)`**: - Converts text to lowercase, removes punctuation, and trims whitespace. - **`tokenize(text)`**: - Tokenizes preprocessed text into a list of words. - **`retrieve_relevant_chunks(query, corpus, top_n=2)`**: - Computes Jaccard similarity between the query and corpus chunks to find the top `n` relevant chunks. - **`answer_question(query, corpus, top_n=2)`**: - Retrieves relevant chunks and generates an answer using the OpenAI API. ### Example Usage A sample corpus is defined, and three questions are posed to demonstrate the functionality: ```python corpus = [ "The luminescent forests of ZenML World are inhabited by glowing Zenbots...", "In the neon skies of ZenML World, Cosmic Butterflies flutter gracefully...", # Additional sentences... ] question1 = "What are Plasma Phoenixes?" answer1 = answer_question(question1, corpus) question2 = "What kinds of creatures live on the prismatic shores of ZenML World?" 
answer2 = answer_question(question2, corpus) irrelevant_question_3 = "What is the capital of Panglossia?" answer3 = answer_question(irrelevant_question_3, corpus) ``` ### Output The program outputs answers based on the provided context, including handling irrelevant questions gracefully. ### Implementation Notes - The similarity measure used is the Jaccard similarity coefficient, which is a simple and naive approach. - The implementation is not optimized for performance but serves as a foundational example for understanding RAG pipelines. - Future documentation will explore more advanced techniques for similarity measurement and performance improvements using ZenML. This summary encapsulates the essential technical details and functionality of the RAG pipeline implementation while maintaining clarity and conciseness. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-practice.md === ### Summary: Evaluating RAG System Performance This documentation outlines the evaluation process for a Retrieval-Augmented Generation (RAG) system, emphasizing the separation of embedding generation and evaluation. The evaluation is structured as a separate pipeline that can run after the main embedding pipeline, allowing for focused assessment of performance. #### Key Points: 1. **Evaluation Pipeline**: - The evaluation can be run independently, which is a best practice. - It may also serve as a gating mechanism to ensure embeddings meet production standards. 2. **Local vs. Cloud LLM Judge**: - For faster iterations, consider using a local LLM judge during development. - For comprehensive evaluations, utilize cloud LLMs like Anthropic's Claude or OpenAI's GPT-3.5/4. 3. **Human Oversight**: - Automated evaluations save time but do not eliminate the need for human review. - Results from the LLM judge are costly and time-consuming, necessitating careful human analysis. 4. **Evaluation Frequency**: - The frequency and depth of evaluations depend on project constraints. - Balance quick, inexpensive tests (e.g., retrieval system) with more costly evaluations (e.g., LLM judge). - Structure tests to run some frequently and others less often. 5. **Next Steps**: - The documentation suggests improving retrieval performance by adding a reranker, which can enhance results without retraining embeddings. #### Practical Implementation: To run the evaluation pipeline: 1. Clone the project repository: ```bash git clone https://github.com/zenml-io/zenml-projects.git ``` 2. Navigate to the `llm-complete-guide` directory and follow the `README.md` instructions to generate embeddings first. 3. Execute the evaluation pipeline: ```bash python run.py --evaluation ``` Results will be output to the console, and progress can be monitored via the dashboard. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/README.md === ### Evaluation and Metrics for RAG Pipeline This section discusses evaluating the performance of a Retrieval-Augmented Generation (RAG) pipeline using metrics and visualizations. Evaluating the RAG pipeline is essential for understanding its performance and identifying improvement areas. Traditional metrics like accuracy, precision, and recall are often inadequate for language models due to their subjective nature. #### Key Evaluation Areas: 1. **Retrieval Evaluation**: Assessing the relevance of retrieved documents or document chunks to the query. 2. 
**Generation Evaluation**: Evaluating the coherence and helpfulness of the generated text for the specific use case. #### Considerations for Evaluation: - In a production setting, establish a baseline by evaluating a raw language model (without retrieval components) to compare against the RAG pipeline's performance. - The evaluation metrics will depend on the specific use case. For example, in a user-facing chatbot, consider: - Relevance of retrieved documents. - Coherence and helpfulness of generated answers. - Presence of toxic language in responses. #### Evaluation Methodology: - Generation evaluation serves as an end-to-end assessment of the RAG pipeline, allowing for subjective metrics since it evaluates the entire system output. - A high-level code example illustrates the two main evaluation areas, with further sections providing detailed guidance on evaluation practices and result interpretation. This summary encapsulates the critical components of evaluating a RAG pipeline, emphasizing the importance of both retrieval and generation evaluations while noting the flexibility needed based on specific use cases. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/retrieval.md === ### Retrieval Evaluation in RAG Pipeline The retrieval component in a Retrieval-Augmented Generation (RAG) pipeline identifies relevant documents to support the generation component. Evaluating its performance focuses on the accuracy of semantic search and the relevance of retrieved documents. #### Manual Evaluation with Handcrafted Queries Manual evaluation involves creating specific queries to check if the retrieval component can accurately retrieve known documents. This process, while time-consuming, helps identify edge cases and areas for improvement. Example queries include: | Question | URL Ending | |----------|------------| | How do I get going with the Label Studio integration? What are the first steps? | stacks-and-components/component-guide/annotators/label-studio | | How can I write my own custom materializer? | user-guide/advanced-guide/data-management/handle-custom-data-types | | How do I generate embeddings as part of a RAG pipeline when using ZenML? | user-guide/llmops-guide/rag-with-zenml/embeddings-generation | The retrieval process involves encoding the query into a vector and querying a PostgreSQL database for similar vectors. **Code Example:** ```python def query_similar_docs(question: str, url_ending: str) -> tuple: embedded_question = get_embeddings(question) db_conn = get_db_conn() top_similar_docs_urls = get_topn_similar_docs(embedded_question, db_conn, n=5, only_urls=True) urls = [url[0] for url in top_similar_docs_urls] return (question, url_ending, urls) def test_retrieved_docs_retrieve_best_url(question_doc_pairs: list) -> float: total_tests = len(question_doc_pairs) failures = sum(1 for pair in question_doc_pairs if pair["url_ending"] not in query_similar_docs(pair["question"], pair["url_ending"])[2]) return round((failures / total_tests) * 100, 2) ``` #### Automated Evaluation with Synthetic Queries For broader evaluation, synthetic queries can be generated using an LLM. This involves passing document chunks to the LLM to create relevant questions, which are then used to test the retrieval component. 
**Code Example:** ```python def generate_question(chunk: str, local: bool = False) -> str: model = "ollama/mixtral" if local else "gpt-3.5-turbo" response = completion(model=model, messages=[{"content": f"Generate a question about: `{chunk}`", "role": "user"}]) return response.choices[0].message.content @step def generate_questions_from_chunks(docs_with_embeddings: List[Document], local: bool = False) -> List[Document]: for doc in docs_with_embeddings: doc.generated_questions = [generate_question(doc.page_content, local)] return docs_with_embeddings ``` Once questions are generated, they can be evaluated against the retrieval component to check if the original document's URL appears in the top results. **Code Example:** ```python @step def retrieval_evaluation_full(sample_size: int = 50) -> Annotated[float, "full_failure_rate_retrieval"]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) failures = sum(1 for item in dataset if item["filename"].split("/")[-1] not in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1])[2]) return round((failures / len(dataset)) * 100, 2) ``` #### Improvement Strategies 1. **Diverse Question Generation:** Experiment with different prompts to create varied question types. 2. **Semantic Similarity Metrics:** Use metrics like cosine similarity for nuanced performance evaluation. 3. **Comparative Evaluation:** Test different retrieval approaches to identify strengths and weaknesses. 4. **Error Analysis:** Investigate failure cases to understand patterns and improve the retrieval component. The evaluation process, combining manual and automated methods, provides insight into the retrieval component's performance, highlighting areas for improvement. Future evaluations will also consider the generation component to assess the overall effectiveness of the RAG pipeline. For complete code and further details, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) and specifically the [eval_retrieval.py file](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py). ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/evaluation-in-65-loc.md === ### Evaluation in 65 Lines of Code This section explains how to evaluate the performance of a Retrieval-Augmented Generation (RAG) pipeline using 65 lines of code. The full code can be found in the project repository [here](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/most_basic_eval.py). The evaluation code builds on a previously established RAG pipeline. #### Evaluation Data The evaluation data consists of questions and their expected answers: ```python eval_data = [ {"question": "What creatures inhabit the luminescent forests of ZenML World?", "expected_answer": "The luminescent forests of ZenML World are inhabited by glowing Zenbots."}, {"question": "What do Fractal Fungi do in the melodic caverns of ZenML World?", "expected_answer": "Fractal Fungi emit pulsating tones that resonate through the crystalline structures, creating a symphony of otherworldly sounds in the melodic caverns of ZenML World."}, {"question": "Where do Gravitational Geckos live in ZenML World?", "expected_answer": "Gravitational Geckos traverse the inverted cliffs of ZenML World."}, ] ``` #### Evaluation Functions Two functions are defined for evaluation: 1. 
**Retrieval Evaluation**: - Checks if any words from the expected answer are present in the retrieved chunks. ```python def evaluate_retrieval(question, expected_answer, corpus, top_n=2): relevant_chunks = retrieve_relevant_chunks(question, corpus, top_n) return any(any(word in chunk for word in tokenize(expected_answer)) for chunk in relevant_chunks) ``` 2. **Generation Evaluation**: - Uses OpenAI's API to determine if the generated answer is relevant and accurate. ```python def evaluate_generation(question, expected_answer, generated_answer): client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY")) chat_completion = client.chat.completions.create( messages=[ {"role": "system", "content": "You are an evaluation judge. Given a question, an expected answer, and a generated answer, your task is to determine if the generated answer is relevant and accurate. Respond with 'YES' or 'NO'."}, {"role": "user", "content": f"Question: {question}\nExpected Answer: {expected_answer}\nGenerated Answer: {generated_answer}\nIs the generated answer relevant and accurate?"} ], model="gpt-3.5-turbo", ) return chat_completion.choices[0].message.content.strip().lower() == "yes" ``` #### Evaluation Process The evaluation process involves iterating through the evaluation data, scoring both retrieval and generation: ```python retrieval_scores = [] generation_scores = [] for item in eval_data: retrieval_scores.append(evaluate_retrieval(item["question"], item["expected_answer"], corpus)) generated_answer = answer_question(item["question"], corpus) generation_scores.append(evaluate_generation(item["question"], item["expected_answer"], generated_answer)) retrieval_accuracy = sum(retrieval_scores) / len(retrieval_scores) generation_accuracy = sum(generation_scores) / len(generation_scores) print(f"Retrieval Accuracy: {retrieval_accuracy:.2f}") print(f"Generation Accuracy: {generation_accuracy:.2f}") ``` #### Conclusion The example demonstrates a straightforward approach to evaluating RAG performance, achieving 100% accuracy for both retrieval and generation in this case. Future sections will explore more sophisticated evaluation methods. ================================================== === File: docs/book/user-guide/llmops-guide/evaluation/generation.md === ### Summary of Generation Evaluation in RAG Pipeline #### Overview The generation component of a Retrieval-Augmented Generation (RAG) pipeline is evaluated to assess the quality of the answers generated based on retrieved context. This evaluation is more subjective than retrieval evaluation, making it challenging to establish precise metrics. #### Handcrafted Evaluation Tests - Create examples to check if generated outputs contain or exclude specific terms based on known correct and incorrect outputs. - For instance, verify that answers about supported orchestrators include "Airflow" and "Kubeflow" but exclude "Flyte" and "Prefect." - Start with simple tests and expand as needed, avoiding complex frameworks initially. **Example Tables:** - **Bad Answers Table:** | Question | Bad Words | |----------|-----------| | What orchestrators does ZenML support? | AWS Step Functions, Flyte, Prefect, Dagster | - **Good Responses Table:** | Question | Good Words | |----------|------------| | What are the supported orchestrators in ZenML? | Kubeflow, Airflow | #### Testing Code Snippets 1. 
**Test for Bad Words:** ```python class TestResult(BaseModel): success: bool question: str keyword: str = "" response: str def test_content_for_bad_words(item: dict, n_items_retrieved: int = 5) -> TestResult: question = item["question"] bad_words = item["bad_words"] response = process_input_with_retrieval(question, n_items_retrieved=n_items_retrieved) for word in bad_words: if word in response: return TestResult(success=False, question=question, keyword=word, response=response) return TestResult(success=True, question=question, response=response) ``` 2. **Run Tests:** ```python def run_tests(test_data: list, test_function: Callable) -> float: failures = sum(1 for item in test_data if not test_function(item).success) failure_rate = (failures / len(test_data)) * 100 return round(failure_rate, 2) ``` 3. **End-to-End Evaluation:** ```python @step def e2e_evaluation() -> Tuple[float, float]: failure_rate_bad_answers = run_tests(bad_answers, test_content_for_bad_words) failure_rate_good_responses = run_tests(good_responses, test_content_contains_good_words) return failure_rate_bad_answers, failure_rate_good_responses ``` #### Automated Evaluation Using Another LLM - Use a second LLM to evaluate the output of the first LLM on a scale of 1 to 5 for categories like toxicity, faithfulness, helpfulness, and relevance. - Set up a Pydantic model to validate the scores. **Pydantic Model:** ```python class LLMJudgedTestResult(BaseModel): toxicity: conint(ge=1, le=5) faithfulness: conint(ge=1, le=5) helpfulness: conint(ge=1, le=5) relevance: conint(ge=1, le=5) ``` **LLM Judged Test Function:** ```python def llm_judged_test_e2e(question: str, context: str, n_items_retrieved: int = 5) -> LLMJudgedTestResult: response = process_input_with_retrieval(question, n_items_retrieved=n_items_retrieved) prompt = f"Analyze the following text and provide scores for toxicity, faithfulness, helpfulness, and relevance. **Text:** {response} **Context:** {context} **Output format:** {{\"toxicity\": int, \"faithfulness\": int, \"helpfulness\": int, \"relevance\": int}}" response = completion(model="gpt-4-turbo", messages=[{"content": prompt, "role": "user"}]) return LLMJudgedTestResult(**json.loads(response["choices"][0]["message"]["content"].strip())) ``` **Run LLM Judged Tests:** ```python def run_llm_judged_tests(test_function: Callable, sample_size: int = 50) -> Tuple[float, float, float, float]: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train").shuffle(seed=42).select(range(sample_size)) scores = [test_function(item["generated_questions"][0], item["page_content"]) for item in dataset] per_metric_scores = zip(*[(s.toxicity, s.faithfulness, s.helpfulness, s.relevance) for s in scores]) return tuple(round(sum(metric) / len(scores), 3) for metric in per_metric_scores) ``` #### Considerations for Improvement - Implement retries for JSON output errors (a sketch combining retries with JSON mode appears at the end of this section). - Utilize OpenAI's JSON mode for consistent output formatting. - Explore batch processing for efficiency. - Increase sample size for better evaluation accuracy. - Consider using multiple evaluators for more reliable scoring. #### Conclusion This evaluation framework allows tracking improvements in the RAG pipeline's retrieval and generation components, providing a foundation for further enhancements and integrations with other sophisticated evaluation frameworks. For complete code, refer to the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_e2e.py). 
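**Retry Sketch (illustrative):** The first two improvement ideas above can be combined: ask the judge model for a JSON object and retry whenever parsing or validation fails. This is a minimal sketch rather than the guide's implementation; it reuses the `LLMJudgedTestResult` model, calls the OpenAI client directly, and the retry count is an arbitrary choice.

```python
import json

from openai import OpenAI
from pydantic import ValidationError


def judged_scores_with_retries(prompt: str, max_retries: int = 3) -> LLMJudgedTestResult:
    """Ask the judge model for JSON scores, retrying on malformed output."""
    client = OpenAI()
    last_error = None
    for _ in range(max_retries):
        chat_completion = client.chat.completions.create(
            model="gpt-4-turbo",
            # JSON mode; note the prompt itself must mention JSON for the API to accept this.
            response_format={"type": "json_object"},
            messages=[{"role": "user", "content": prompt}],
        )
        raw = chat_completion.choices[0].message.content
        try:
            return LLMJudgedTestResult(**json.loads(raw))
        except (json.JSONDecodeError, ValidationError) as err:
            last_error = err  # malformed JSON or out-of-range scores: try again
    raise RuntimeError(f"Judge returned unusable output after {max_retries} attempts") from last_error
```

Swapping a helper like this into `llm_judged_test_e2e` in place of the plain `completion` call would cover the retry and JSON-mode considerations without changing the rest of the evaluation flow.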
================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/starter-choices-for-finetuning-llms.md === ### Summary of Finetuning LLMs Documentation This guide provides a structured approach to finetuning large language models (LLMs) for specific tasks. Key steps include selecting a use case, gathering data, choosing a base model, and evaluating success. #### Quick Assessment Questions Before starting, consider: 1. **Define Success**: Use measurable metrics (e.g., "95% accuracy in extracting order IDs"). 2. **Data Readiness**: Ensure data is prepared (e.g., "1000 labeled support tickets"). 3. **Task Consistency**: Choose tasks with clear outputs (e.g., "Convert email to 5 specific fields"). 4. **Human Verification**: Ensure correctness can be checked (e.g., "Verify extracted date matches document"). #### Picking a Use Case Select a small, manageable use case that is not easily solvable without LLMs. Examples include: - **Good Use Cases**: Structured data extraction, domain-specific classification, standardized response generation. - **Challenging Use Cases**: Open-ended chat, creative writing, general knowledge QA. #### Picking Data Data should closely align with your use case to minimize annotation effort. Aim for hundreds to thousands of examples. Examples of good data sources include: - Customer support email responses. - Manually extracted metadata. #### Base Model Selection Choose a base model based on your task: - **Llama 3.1-8B**: Best for structured data extraction. - **Llama 3.1-70B**: Suitable for complex reasoning. - **Mistral 7B**: Good for general text generation. - **Phi-2**: Ideal for lightweight tasks. #### Model Selection Matrix ```mermaid graph TD A[Choose Your Task] --> B{Structured Output?} B -->|Yes| C[Llama-8B Base] B -->|No| D{Complex Reasoning?} D -->|Yes| E[Llama-70B Base] D -->|No| F{Resource Constrained?} F -->|Yes| G[Phi-2] F -->|No| H[Mistral-7B] ``` #### Evaluating Success Define clear metrics for success, especially for structured tasks. Metrics may include: - Accuracy of extracted fields. - Precision and recall for specific field types. - Processing time per document. #### Next Steps With a clear understanding of scoping, data selection, and evaluation, proceed to implement finetuning using the Accelerate library, as detailed in the next section. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-with-accelerate.md === # Finetuning an LLM with Accelerate and PEFT This documentation outlines the process of finetuning language models using the Viggo dataset, which consists of over 5,000 pairs of structured meaning representations and corresponding natural language descriptions for video game dialogues. The goal is to train models to generate fluent responses from structured inputs. ## Finetuning Pipeline The finetuning pipeline includes the following steps: 1. **prepare_data**: Load and preprocess the Viggo dataset. 2. **finetune**: Finetune the model on the dataset. 3. **evaluate_base**: Evaluate the base model before finetuning. 4. **evaluate_finetuned**: Evaluate the finetuned model. 5. **promote**: Promote the best model to "staging" in the Model Control Plane. For initial experiments, it is recommended to use smaller models (e.g., Llama 3.1 at ~8B parameters) to facilitate rapid iterations. ## Implementation Details The `prepare_data` step tokenizes the dataset using the model's tokenizer. 
Care should be taken with input data formatting, especially for instruction-tuned models. Logging inputs and outputs is advised for verification. ### Finetuning Code The finetuning process utilizes the `accelerate` library for multi-GPU support. Below is a concise version of the finetuning code: ```python model = load_base_model(base_model_id, use_accelerate=use_accelerate) trainer = transformers.Trainer( model=model, train_dataset=tokenized_train_dataset, eval_dataset=tokenized_val_dataset, args=transformers.TrainingArguments( output_dir=output_dir, warmup_steps=warmup_steps, per_device_train_batch_size=per_device_train_batch_size, max_steps=max_steps, learning_rate=lr, logging_dir="./logs", save_strategy="steps", evaluation_strategy="steps", do_eval=True, label_names=["input_ids"], ), data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), callbacks=[ZenMLCallback(accelerator=accelerator)], ) ``` ### Evaluation Metrics Evaluation uses the `evaluate` library to compute ROUGE scores, which measure the overlap and quality of generated text against reference texts through various metrics (ROUGE-N, ROUGE-L, ROUGE-W, ROUGE-S). ### ZenML Accelerate Decorator ZenML offers a `@run_with_accelerate` decorator for cleaner distributed training setup: ```python from zenml.integrations.huggingface.steps import run_with_accelerate @run_with_accelerate(num_processes=4, multi_gpu=True, mixed_precision='bf16') @step def finetune_step(tokenized_train_dataset, tokenized_val_dataset, base_model_id: str, output_dir: str): model = load_base_model(base_model_id, use_accelerate=True) trainer = transformers.Trainer(...) trainer.train() return trainer.model ``` ### Docker Configuration Ensure the Docker environment is configured for CUDA support: ```python from zenml import pipeline from zenml.config import DockerSettings docker_settings = DockerSettings( parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime", requirements=["accelerate", "torchvision"] ) @pipeline(settings={"docker": docker_settings}) def finetuning_pipeline(...): # Pipeline steps ``` ## Dataset Iteration Careful attention to input data formatting is crucial. Poorly formatted data can lead to suboptimal model performance. Inspect data at all stages, and consider augmenting the dataset if necessary. Evaluations should be established early to measure model performance and guide further refinements. ### Future Considerations As you refine your model, consider: - Improved evaluation methods - Model deployment strategies - Integration within existing production architectures - The feasibility of smaller models for specific use cases This structured approach will help in optimizing the finetuning process and achieving desired outcomes. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-100-loc.md === # Summary: Fine-tuning an LLM in 100 Lines of Code This documentation provides a concise implementation guide for fine-tuning a language model (LLM) using TinyLlama (1.1B parameters). It covers the following key steps: ### Installation Install necessary packages: ```bash pip install datasets transformers torch accelerate>=0.26.0 ``` ### Code Overview The implementation consists of several functions: 1. **Dataset Preparation**: Creates a small instruction-tuning dataset. 
```python def prepare_dataset() -> Dataset: data = [ {"instruction": "Describe a Zenbot.", "response": "A Zenbot is a luminescent robotic entity..."}, {"instruction": "What are Cosmic Butterflies?", "response": "Cosmic Butterflies are ethereal creatures..."}, {"instruction": "Tell me about the Telepathic Treants.", "response": "Telepathic Treants are ancient, sentient trees..."} ] return Dataset.from_list(data) ``` 2. **Formatting and Tokenization**: Prepares input for the model. ```python def format_instruction(example: Dict[str, str]) -> str: return f"### Instruction: {example['instruction']}\n### Response: {example['response']}" def tokenize_data(example: Dict[str, str], tokenizer: AutoTokenizer) -> Dict[str, torch.Tensor]: formatted_text = format_instruction(example) return tokenizer(formatted_text, truncation=True, padding="max_length", max_length=128) ``` 3. **Model Fine-tuning**: Initializes the model and trains it. ```python def fine_tune_model(base_model: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0") -> Tuple[AutoModelForCausalLM, AutoTokenizer]: tokenizer = AutoTokenizer.from_pretrained(base_model) tokenizer.pad_token = tokenizer.eos_token model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16, device_map="auto") dataset = prepare_dataset() tokenized_dataset = dataset.map(lambda x: tokenize_data(x, tokenizer), remove_columns=dataset.column_names) training_args = TrainingArguments( output_dir="./zenml-world-model", num_train_epochs=3, per_device_train_batch_size=1, gradient_accumulation_steps=4, learning_rate=2e-4, bf16=True, logging_steps=10, save_total_limit=2, ) trainer = Trainer(model=model, args=training_args, train_dataset=tokenized_dataset, data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)) trainer.train() return model, tokenizer ``` 4. **Response Generation**: Generates responses based on user prompts. ```python def generate_response(prompt: str, model: AutoModelForCausalLM, tokenizer: AutoTokenizer, max_length: int = 128) -> str: formatted_prompt = f"### Instruction: {prompt}\n### Response:" inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_length=max_length, temperature=0.7, num_return_sequences=1) return tokenizer.decode(outputs[0], skip_special_tokens=True) ``` ### Execution The main execution block fine-tunes the model and tests it with sample prompts: ```python if __name__ == "__main__": model, tokenizer = fine_tune_model() test_prompts = ["What is a Zenbot?", "Describe the Cosmic Butterflies.", "Tell me about an unknown creature."] for prompt in test_prompts: response = generate_response(prompt, model, tokenizer) print(f"\nPrompt: {prompt}\nResponse: {response}") ``` ### Key Components - **Dataset**: Small instruction-response pairs. - **Model**: TinyLlama, chosen for its size and pre-training. - **Training Configuration**: 3 epochs, batch size of 1, learning rate of 2e-4, and mixed precision training. - **Response Generation**: Uses the same instruction format as training with controlled randomness. ### Limitations - Small dataset size (only 3 examples). - Limited training epochs and simple learning rate. - No evaluation metrics included. ### Next Steps Future enhancements include: - Using larger models and datasets. - Implementing evaluation metrics. - Exploring parameter-efficient fine-tuning techniques. - Experiment tracking and model management. This guide serves as a foundation for building more robust fine-tuning pipelines. 
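**Parameter-Efficient Variant (illustrative):** One of the listed next steps is parameter-efficient fine-tuning. The sketch below shows how the same base model could be wrapped with LoRA adapters via the `peft` library before being handed to the `Trainer`; the rank, alpha, and target modules are illustrative defaults, not values from this guide.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM


def add_lora_adapter(base_model: str = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"):
    """Wrap the base model with LoRA adapters so only a small set of weights is trained."""
    model = AutoModelForCausalLM.from_pretrained(
        base_model, torch_dtype=torch.bfloat16, device_map="auto"
    )
    lora_config = LoraConfig(
        r=8,                                  # adapter rank (illustrative)
        lora_alpha=16,                        # scaling factor
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # prints the (small) fraction of trainable weights
    return model
```

The rest of the setup (tokenization, `TrainingArguments`, `Trainer`) would stay unchanged; only the wrapped model is passed to the trainer, so far fewer weights are updated per step.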
================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/finetuning-llms.md === ### Summary of LLM Finetuning Documentation **Objective**: This documentation focuses on finetuning Large Language Models (LLMs) for specific tasks to enhance performance and reduce costs. **Context**: Previous sections covered using RAG with ZenML, evaluating RAG systems, improving retrieval with reranking, and finetuning embeddings. This section delves into LLM finetuning, particularly when using APIs like OpenAI and Anthropic may not suffice. **Reasons to Finetune LLMs**: - Improve response generation in specific formats. - Enhance understanding of domain-specific terminology. - Reduce prompt length for consistent outputs. - Follow specific patterns or protocols. - Optimize for latency by minimizing context window. **Guide Structure**: 1. [Finetuning in 100 lines of code](finetuning-100-loc.md) 2. [Why and when to finetune LLMs](why-and-when-to-finetune-llms.md) 3. [Starter choices with finetuning](starter-choices-for-finetuning-llms.md) 4. [Finetuning with 🤗 Accelerate](finetuning-with-accelerate.md) 5. [Evaluation for finetuning](evaluation-for-finetuning.md) 6. [Deploying finetuned models](deploying-finetuned-models.md) 7. [Next steps](next-steps.md) **Key Points**: - The steps for finetuning LLMs are straightforward, but understanding when and how to finetune is crucial. - Evaluation of performance and data selection are important considerations. - For practical examples, refer to the [`llm-lora-finetuning` repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-lora-finetuning) for code that can be run locally (with GPU) or on cloud compute. This guide emphasizes the importance of strategic finetuning to maximize the effectiveness of LLMs in various applications. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/evaluation-for-finetuning.md === # Summary of LLM Finetuning Evaluations Documentation ## Overview Evaluations (evals) for Large Language Model (LLM) finetuning are essential for assessing model performance, reliability, and safety, similar to unit tests in software development. They help catch issues early, track progress, and ensure models behave as expected. An incremental approach to building evaluations is recommended to avoid paralysis and facilitate early implementation. ## Motivation and Benefits Key motivations for thorough evals include: 1. **Prevent Regressions**: Ensure changes do not negatively impact existing functionality. 2. **Track Improvements**: Quantify model enhancements over iterations. 3. **Ensure Safety and Robustness**: Identify and mitigate risks, biases, or unexpected behaviors. A robust evaluation strategy leads to more reliable and performant finetuned LLMs. ## Types of Evaluations While generic evaluation frameworks exist, custom evaluations tailored to specific use cases are crucial. These can be categorized into: 1. **Success Modes**: Focus on desired outputs (e.g., correct formatting, appropriate responses). 2. **Failure Modes**: Target undesired outputs (e.g., hallucinations, incorrect formats). 
### Custom Evals Example A simple implementation for testing success and failure modes: ```python from my_library import query_llm good_responses = { "what are the best salads available at the food court?": ["caesar", "italian"], "how late is the shopping center open until?": ["10pm", "22:00", "ten"] } for question, answers in good_responses.items(): llm_response = query_llm(question) assert any(answer in llm_response for answer in answers) bad_responses = { "who is the manager of the shopping center?": ["tom hanks", "spiderman"] } for question, answers in bad_responses.items(): llm_response = query_llm(question) assert not any(answer in llm_response for answer in answers) ``` ## Generalized Evals and Frameworks Generalized evals provide structured evaluation approaches, including standardized metrics and insights into model performance. However, they should complement custom evaluations. Notable frameworks include: - [prodigy-evaluate](https://github.com/explosion/prodigy-evaluate) - [ragas](https://docs.ragas.io/en/stable/getstarted/monitoring.html) - [giskard](https://docs.giskard.ai/en/stable/getting_started/quickstart/quickstart_llm.html) - [langcheck](https://github.com/citadel-ai/langcheck) - [nervaluate](https://github.com/MantisAI/nervaluate) ## Data and Tracking Regular analysis of inference data is vital for identifying patterns and guiding improvements. Implement comprehensive logging early in development and consider using LLM evaluation frameworks for structured data collection. Recommended options include: - [weave](https://github.com/wandb/weave) - [openllmetry](https://github.com/traceloop/openllmetry) - [langsmith](https://smith.langchain.com/) - [langfuse](https://langfuse.com/) - [braintrust](https://www.braintrust.dev/) Creating simple dashboards to visualize core metrics can help monitor model performance and track improvements over time. Prioritize simplicity to ensure consistent evaluation and monitoring. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/why-and-when-to-finetune-llms.md === # Finetuning LLMs: When and Why ## Overview This guide provides a practical approach to finetuning language models (LLMs) on custom data. Key points to consider include: - **Not a Universal Solution**: Finetuning may not solve every problem and can introduce technical debt. - **Diverse Applications**: LLMs can be used beyond chatbots, often with lower failure rates in non-chatbot interfaces. - **Final Step in Experimentation**: Finetuning should follow other methods like smaller models or Retrieval-Augmented Generation (RAG). ## When to Finetune an LLM Finetuning is beneficial in the following scenarios: 1. **Domain-Specific Knowledge**: For specialized fields (e.g., medical, legal) where the base model lacks depth. 2. **Consistent Style/Format**: When outputs need to conform to specific styles (e.g., code generation). 3. **Task Accuracy**: To enhance accuracy for critical tasks. 4. **Proprietary Information**: When handling sensitive data that cannot be sent to external APIs. 5. **Custom Instructions**: To embed frequently used prompts directly into the model. 6. **Efficiency**: To improve performance with shorter prompts, reducing costs and latency. ### Decision Flowchart ```mermaid flowchart TD A[Should I finetune an LLM?] --> B{Is prompt engineering
sufficient?} B -->|Yes| C[Use prompt engineering
No finetuning needed] B -->|No| D{Is it primarily a
knowledge retrieval
problem?} D -->|Yes| E{Is real-time data
access needed?} E -->|Yes| F[Use RAG
No finetuning needed] E -->|No| G{Is data volume
very large?} G -->|Yes| H[Consider hybrid:
RAG + Finetuning] G -->|No| F D -->|No| I{Is it a narrow,
specific task?} I -->|Yes| J{Can a smaller
specialized model
handle it?} J -->|Yes| K[Use smaller model
No finetuning needed] J -->|No| L[Consider finetuning] I -->|No| M{Do you need
consistent style
or format?} M -->|Yes| L M -->|No| N{Is deep domain
expertise required?} N -->|Yes| O{Is the domain
well-represented in
base model?} O -->|Yes| P[Use base model
No finetuning needed] O -->|No| L N -->|No| Q{Is data
proprietary/sensitive?} Q -->|Yes| R{Can you use
API solutions?} R -->|Yes| S[Use API solutions
No finetuning needed] R -->|No| L Q -->|No| S ``` ## Alternatives to Finetuning Consider these options before finetuning: - **Prompt Engineering**: Effective prompts may yield satisfactory results without finetuning. - **RAG**: Often more effective for specific knowledge bases than finetuning. - **Smaller Models**: For narrow tasks, specialized smaller models might perform better. - **API Solutions**: If sensitive data handling isn't required, API solutions can be simpler and cost-effective. Finetuning can be powerful but should be approached cautiously. Start with simpler solutions and only consider finetuning after exploring other options. The next section will cover practical considerations for finetuning LLMs. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/next-steps.md === # Next Steps After iterating on your finetuned model, you should have insights into: - Factors that improve or worsen model performance - Minimum viable model size - Alignment with company processes (iteration time and hardware limitations) - Effectiveness in solving the business use case These insights will guide you in the next stages of finetuning, which may include: - Scaling for more users or real-time scenarios - Meeting critical accuracy requirements, possibly necessitating a larger model - Integrating LLM finetuning into your business systems, including monitoring, logging, and ongoing evaluation While it may be tempting to switch to larger models, focus on enhancing your data quality first, especially if you started with a limited dataset. Expanding your dataset through a flywheel approach or synthetic data generation is often more beneficial than simply upgrading to a more powerful model. ## Resources Recommended resources for further learning on LLM finetuning: - [Mastering LLMs Course](https://parlance-labs.com/education/) - Video course by Hamel Husain and Dan Becker - [Phil Schmid's blog](https://www.philschmid.de/) - Examples of LLM finetuning techniques - [Sam Witteveen's YouTube channel](https://www.youtube.com/@samwitteveenai) - Videos on finetuning and prompt engineering ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-llms/deploying-finetuned-models.md === # Deployment Options for Finetuned LLMs Deploying a finetuned LLM is essential for integrating it into real-world applications. This process requires careful planning regarding performance, reliability, and cost. ## Deployment Considerations Key factors influencing deployment include: - **Resource Requirements**: LLMs demand substantial RAM, processing power, and specialized hardware. Balancing performance and cost is crucial. - **Real-Time Needs**: Applications requiring immediate responses must plan for failover scenarios, conduct benchmarks, and model user load. - **Streaming vs. Non-Streaming**: Choose between these approaches based on latency and resource utilization. - **Optimization Techniques**: Methods like quantization can reduce resource usage but require evaluation to avoid performance loss. ## Deployment Options and Trade-offs 1. **Roll Your Own**: Manage your own infrastructure for maximum control, typically using Docker (e.g., FastAPI). 2. **Serverless Options**: Cost-efficient, pay-per-use models but may experience latency due to "cold starts." 3. **Always-On Options**: Constantly running models minimize latency but incur higher costs during idle times. 4. 
**Fully Managed Solutions**: Simplify deployment but may limit flexibility and increase costs. Consider your team's expertise, budget, load patterns, and specific requirements when selecting an option. ## Deployment with vLLM and ZenML [vLLM](https://github.com/vllm-project/vllm) is a library for high-throughput, low-latency LLM deployment. ZenML integrates with vLLM for easy deployment. ### Example Code: ```python from zenml import pipeline from steps.vllm_deployer import vllm_model_deployer_step from zenml.integrations.vllm.services.vllm_deployment import VLLMDeploymentService @pipeline() def deploy_vllm_pipeline(model: str, timeout: int = 1200) -> VLLMDeploymentService: service = vllm_model_deployer_step(model=model, timeout=timeout) return service ``` The `model` argument can be a local path or a Hugging Face model ID, deploying it locally with vLLM for batch inference. ## Cloud-Specific Deployment Options - **AWS**: Amazon SageMaker offers managed LLM deployment with real-time inference and scaling. For serverless, use AWS Lambda with API Gateway, noting potential cold start issues. Amazon ECS or EKS with Fargate provides container orchestration. - **GCP**: Google Cloud AI Platform parallels SageMaker, offering managed services. For serverless, use Cloud Run; for more control, consider Google Kubernetes Engine (GKE). ## Architectures for Real-Time Engagement Deploy models behind a load balancer with auto-scaling for responsiveness. Implement caching (e.g., Redis) to enhance performance and use asynchronous architectures (e.g., message queues) for complex queries. For global deployments, consider edge computing services like AWS Lambda@Edge or CloudFront Functions to reduce latency. ## Reducing Latency and Increasing Throughput Optimize for low latency and high throughput through: - **Model Optimization**: Techniques like quantization and distillation can enhance performance. - **Hardware Acceleration**: Use GPU instances for faster inference. - **Request Batching**: Process multiple inputs simultaneously to improve throughput. - **Monitoring and Profiling**: Continuously measure and optimize your deployment to identify bottlenecks. ## Monitoring and Maintenance Post-deployment, focus on: 1. **Evaluation Failures**: Regularly assess model performance. 2. **Latency Metrics**: Ensure response times meet requirements. 3. **Load Patterns**: Monitor user interactions for scaling insights. 4. **Data Analysis**: Analyze inputs/outputs for trends and biases. Ensure compliance with data protection regulations in your logging practices. By considering these deployment options and maintaining vigilant monitoring, you can optimize your finetuned LLM to meet user needs effectively. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/evaluating-reranking-performance.md === ### Evaluating Reranking Performance with ZenML This documentation outlines how to evaluate the performance of a reranking model using ZenML. The evaluation process involves comparing retrieval performance before and after reranking using established metrics. #### Key Steps in Evaluation 1. **Retrieval Evaluation Function**: The core function `perform_retrieval_evaluation` assesses retrieval performance based on generated questions and relevant documents. It checks if the expected URL is present in the retrieved URLs and calculates the failure rate. 
```python def perform_retrieval_evaluation(sample_size: int, use_reranking: bool) -> float: dataset = load_dataset("zenml/rag_qa_embedding_questions", split="train") sampled_dataset = dataset.shuffle(seed=42).select(range(sample_size)) failures = sum( 1 for item in sampled_dataset if not any( item["filename"].split("/")[-1] in url for url in query_similar_docs(item["generated_questions"][0], item["filename"].split("/")[-1], use_reranking)[2] ) ) return round((failures / len(sampled_dataset)) * 100, 2) ``` 2. **Evaluation Steps**: Two steps are defined to evaluate the retrieval system with and without reranking, returning the respective failure rates. ```python @step def retrieval_evaluation_full(sample_size: int = 100) -> float: return perform_retrieval_evaluation(sample_size, use_reranking=False) @step def retrieval_evaluation_full_with_reranking(sample_size: int = 100) -> float: return perform_retrieval_evaluation(sample_size, use_reranking=True) ``` 3. **Logging and Analysis**: The logs provide insights into specific failures, helping identify anomalies in generated questions that may affect performance. #### Visualizing Results ZenML allows visualization of evaluation results. A bar chart can be created to compare failure rates with and without reranking. ```python @step(enable_cache=False) def visualize_evaluation_results(...): scores = [...] # Normalized scores labels = [...] # Corresponding labels fig, ax = plt.subplots(figsize=(10, 6)) ax.barh(np.arange(len(labels)), scores, align="center") ax.set_yticks(np.arange(len(labels))) ax.set_yticklabels(labels) ax.invert_yaxis() plt.tight_layout() buf = io.BytesIO() plt.savefig(buf, format="png") return Image.open(buf) ``` #### Running the Evaluation Pipeline To run the evaluation pipeline, clone the project repository and execute the evaluation command after running the main pipeline: ```bash git clone https://github.com/zenml-io/zenml-projects.git cd llm-complete-guide python run.py --evaluation ``` This will execute the evaluation pipeline and display results in the ZenML dashboard, allowing for inspection of progress and logs. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/understanding-reranking.md === ## Reranking Overview ### Definition Reranking refines the initial ranking of documents retrieved by a system, enhancing the relevance and quality of documents used in Retrieval-Augmented Generation (RAG). The initial retrieval typically employs sparse methods like BM25 or TF-IDF, which focus on lexical matching but may miss semantic context. Rerankers reorder documents by incorporating features like semantic similarity and relevance scores to prioritize the most informative documents for generating accurate responses. ### Types of Rerankers 1. **Cross-Encoders**: - Input: Concatenated query and document. - Output: Relevance score. - Example: BERT-based models for passage ranking. - Pros: Effective interaction capture. - Cons: Computationally expensive. 2. **Bi-Encoders**: - Separate encoders for query and document. - Generate independent embeddings and compute similarity. - Pros: More efficient. - Cons: Weaker interaction capture. 3. **Lightweight Models**: - Distilled models or small transformers. - Balance effectiveness and efficiency. - Suitable for real-time applications. ### Benefits of Reranking in RAG 1. **Improved Relevance**: Identifies the most relevant documents, enhancing context for LLM responses. 2. 
**Semantic Understanding**: Captures semantic meaning, retrieving documents that may not match keywords but are contextually relevant. 3. **Domain Adaptation**: Fine-tuned on specific data to leverage domain knowledge. 4. **Personalization**: Tailors document retrieval based on user preferences and interactions. ### Next Steps The following section will cover the implementation of reranking in ZenML and its integration into the RAG inference pipeline. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/README.md === ### Summary of Reranking in RAG Inference with ZenML **Overview**: Rerankers enhance retrieval systems using LLMs by improving the quality of retrieved documents through reordering based on additional features or scores. This section details how to integrate a reranker into your RAG inference pipeline in ZenML. **Key Points**: - Reranking is an optional enhancement to the existing workflow, which includes data ingestion, preprocessing, embeddings generation, and retrieval. - The addition of a reranker can improve the relevance and quality of retrieved documents, leading to better responses from the LLM. - Basic evaluation metrics should be established to assess retrieval performance before implementing reranking. **Visual Aid**: A workflow diagram illustrates the reranking process within the overall retrieval system. This concise integration of a reranker can significantly boost the performance of your retrieval system. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/implementing-reranking.md === ### Implementing Reranking in ZenML This documentation outlines the process of integrating a reranking mechanism into an existing RAG (Retrieval-Augmented Generation) pipeline using the `rerankers` package. The reranker reorders retrieved documents based on their relevance to a given query. #### Adding Reranking 1. **Dependency**: Use the `rerankers` package, which provides a `Reranker` abstract class for custom implementations or ready-to-use models. 2. **Functionality**: The reranker takes a query and a list of documents, returning a reordered list based on reranking scores. **Example Code**: ```python from rerankers import Reranker ranker = Reranker('cross-encoder') texts = [ "I like to play soccer", "I like to play football", "War and Peace is a great book", "I love dogs", "Ginger cats aren't very smart", "I like to play basketball", ] results = ranker.rank(query="What's your favorite sport?", docs=texts) ``` **Sample Output**: ``` RankedResults( results=[ Result(doc_id=5, text='I like to play basketball', score=-0.465, rank=1), Result(doc_id=0, text='I like to play soccer', score=-0.735, rank=2), ... ], query="What's your favorite sport?", has_scores=True ) ``` #### Reranking Function A helper function can be created to rerank documents based on a query: ```python def rerank_documents(query: str, documents: List[Tuple], reranker_model: str = "flashrank") -> List[Tuple[str, str]]: ranker = Reranker(reranker_model) docs_texts = [f"{doc[0]} PARENT SECTION: {doc[2]}" for doc in documents] results = ranker.rank(query=query, docs=docs_texts) return [(results.results[i].text, documents[results.results[i].doc_id][1]) for i in range(len(results.results))] ``` This function returns a list of tuples containing reranked document texts and their original URLs. 
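**Usage sketch (hypothetical inputs):** The helper assumes each document tuple is laid out as `(text, url, parent_section)`, matching the indexing above; the documents and URLs below are made up purely for illustration.

```python
# Hypothetical document tuples: (text, url, parent_section)
docs = [
    ("Orchestrators manage pipeline runs.", "https://docs.zenml.io/orchestrators", "Component guide"),
    ("Artifact stores keep the outputs of pipeline steps.", "https://docs.zenml.io/artifact-stores", "Component guide"),
]

reranked = rerank_documents("How are pipeline runs scheduled?", docs)
for text, url in reranked:
    print(url, "->", text)
```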
#### Querying Similar Documents The reranking function can be integrated into a document querying function: ```python def query_similar_docs(question: str, url_ending: str, use_reranking: bool = False, returned_sample_size: int = 5) -> Tuple[str, str, List[str]]: embedded_question = get_embeddings(question) db_conn = get_db_conn() num_docs = 20 if use_reranking else returned_sample_size top_similar_docs = get_topn_similar_docs(embedded_question, db_conn, n=num_docs, include_metadata=True) urls = [doc[1] for doc in (rerank_documents(question, top_similar_docs)[:returned_sample_size] if use_reranking else top_similar_docs)] return (question, url_ending, urls) ``` This function retrieves and optionally reranks documents based on the question, returning the top URLs. #### Evaluation To evaluate the performance of the reranker, refer to the complete code in the [Complete Guide](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/) repository, specifically the [`eval_retrieval.py`](https://github.com/zenml-io/zenml-projects/blob/main/llm-complete-guide/steps/eval_retrieval.py) file. ================================================== === File: docs/book/user-guide/llmops-guide/reranking/reranking.md === **Summary of Reranking in RAG Inference for ZenML** Rerankers enhance retrieval systems using LLMs by reordering retrieved documents based on additional features or scores, improving their quality. This section outlines how to integrate a reranker into your RAG inference pipeline in ZenML. Key Points: - Reranking is optional but can significantly improve the relevance and quality of retrieved documents, leading to better LLM responses. - The overall workflow includes data ingestion, preprocessing, embeddings generation, retrieval, and evaluation metrics. - Reranking is an additional step that optimizes the existing setup. This concise addition can enhance system performance without being strictly necessary. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/synthetic-data-generation.md === ### Summary of Documentation for Synthetic Data Generation with Distilabel This documentation outlines the process of generating synthetic data using `distilabel` to fine-tune embeddings based on a pre-existing dataset of technical documentation available on Hugging Face. The dataset consists of `page_content` and corresponding source URLs. The goal is to pair `page_content` with generated questions, leveraging LLMs to automate question creation. #### Pipeline Overview 1. Load the Hugging Face dataset. 2. Use `distilabel` to generate synthetic data. 3. Push the generated data to a new Hugging Face dataset and an Argilla instance for annotation. #### Synthetic Data Generation `distilabel` facilitates scalable knowledge distillation from LLMs. In this case, it generates queries related to documentation chunks using `gpt-4o`. The pipeline setup includes: - **Load Data**: Load the dataset and map `page_content` to `anchor`. - **Generate Queries**: Use `GenerateSentencePair` to create both positive and negative queries. 
**Key Code Snippet:** ```python @step def generate_synthetic_queries(train_dataset: Dataset, test_dataset: Dataset) -> Tuple[Annotated[Dataset, "train_with_queries"], Annotated[Dataset, "test_with_queries"]]: llm = OpenAILLM(model=OPENAI_MODEL_GEN, api_key=os.getenv("OPENAI_API_KEY")) with distilabel.pipeline.Pipeline(name="generate_embedding_queries") as pipeline: load_dataset = LoadDataFromHub(output_mappings={"page_content": "anchor"}) generate_sentence_pair = GenerateSentencePair(triplet=True, action="query", llm=llm, input_batch_size=10, context=synthetic_generation_context) load_dataset >> generate_sentence_pair train_distiset = pipeline.run(parameters={load_dataset.name: {"repo_id": DATASET_NAME_DEFAULT, "split": "train"}, generate_sentence_pair.name: {"llm": {"generation_kwargs": OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS}}}) test_distiset = pipeline.run(parameters={load_dataset.name: {"repo_id": DATASET_NAME_DEFAULT, "split": "test"}, generate_sentence_pair.name: {"llm": {"generation_kwargs": OPENAI_MODEL_GEN_KWARGS_EMBEDDINGS}}}) return train_distiset["default"]["train"], test_distiset["default"]["train"] ``` #### Data Annotation with Argilla After generating synthetic data, it is pushed to an Argilla instance for inspection and annotation. Metadata added includes: - `parent_section`: Source section of the documentation. - `token_count`: Number of tokens in the chunk. - Similarity metrics between queries. **Key Code Snippet for Formatting Data:** ```python def format_data(batch): model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") def get_embeddings(batch_column): return [vector.tolist() for vector in model.encode(batch_column)] batch["anchor-vector"] = get_embeddings(batch["anchor"]) # Similarity calculations omitted for brevity return batch ``` #### Next Steps After data inspection and potential cleaning, the next phase involves fine-tuning the embeddings using the generated dataset, even if annotation is not performed, assuming the quality is satisfactory. This concise summary captures the essential technical details and key points from the original documentation, ensuring clarity and completeness for further inquiries. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings-with-sentence-transformers.md === ### Summary: Finetuning Embeddings with Sentence Transformers This documentation outlines the process for finetuning embeddings using the Sentence Transformers library. The steps include loading data, finetuning the model, evaluating performance, and visualizing results. #### Key Steps in the Pipeline: 1. **Data Loading**: - Load data from Hugging Face or Argilla using the command: ```bash python run.py --embeddings --argilla ``` - Ensure an Argilla annotator is set up in the stack. 2. **Finetuning with Sentence Transformers**: - **Model Loading**: Load the base model using Sentence Transformers with efficient training via SDPA and Flash Attention 2. - **Loss Function**: Use `MatryoshkaLoss`, a wrapper around `MultipleNegativesRankingLoss`, to train with varying embedding dimensions. - **Dataset Preparation**: Load the training dataset from a specified path and save it as a temporary JSON file. - **Evaluator**: Create an evaluator using `get_evaluator` to assess model performance. - **Training Arguments**: Set hyperparameters like epochs, batch size, learning rate, and precision using `SentenceTransformerTrainingArguments`. 
- **Trainer Initialization**: Use `SentenceTransformerTrainer` with the model, training arguments, dataset, and loss function. Start training with `trainer.train()`. - **Model Saving**: Push the finetuned model to Hugging Face Hub with: ```python trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) ``` - **Metadata Logging**: Log training parameters and hardware info. - **Model Rehydration**: Save and reload the trained model to handle materialization errors. #### Simplified Code Snippet: ```python # Load the base model model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE) # Define the loss function train_loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model)) # Prepare the training dataset train_dataset = load_dataset("json", data_files=train_dataset_path) # Set up the training arguments args = SentenceTransformerTrainingArguments(...) # Create the trainer trainer = SentenceTransformerTrainer(model, args, train_dataset, train_loss) # Start training trainer.train() # Save the finetuned model trainer.model.push_to_hub(EMBEDDINGS_MODEL_ID_FINE_TUNED) ``` The finetuning process enhances model performance across different embedding sizes and ensures the model is versioned and tracked within ZenML for observability. The pipeline concludes with evaluating and visualizing the base and finetuned embeddings. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/finetuning-embeddings.md === **Summary: Finetuning Embeddings on Custom Synthetic Data** This documentation outlines the process of enhancing retrieval performance by finetuning embeddings on custom synthetic data. It builds on previous knowledge of using RAG (Retrieval-Augmented Generation) with ZenML to create a production-ready pipeline. The main steps involved in optimizing embedding models are: 1. **Generating Synthetic Data**: Utilize `distilabel` for synthetic data generation. 2. **Finetuning Embeddings**: Use Sentence Transformers to finetune embeddings on domain-specific data. 3. **Evaluating Embeddings**: Assess the performance of finetuned embeddings and leverage ZenML's model control plane for systematic evaluation. The process aims to improve the retrieval step in the RAG pipeline, which retrieves relevant documents from a vector database before generating responses with a language model. **Key Libraries**: - **`distilabel`**: Generates synthetic data and provides AI feedback, focusing on knowledge distillation from LLMs. - **`argilla`**: Facilitates collaboration between AI engineers and domain experts through an interactive UI for data organization and exploration. Both libraries can be used independently but are more effective when combined. The full implementation details can be found in the [llm-complete-guide repository](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide), and the finetuning process can be executed locally or on cloud compute. ================================================== === File: docs/book/user-guide/llmops-guide/finetuning-embeddings/evaluating-finetuned-embeddings.md === ### Summary of Documentation on Evaluating Finetuned Embeddings **Objective**: Evaluate finetuned embeddings and compare them to original base embeddings using the MatryoshkaLoss function. **Evaluation Steps**: 1. **Base Model Evaluation**: - Load the base model using `SentenceTransformer`. - Evaluate the model on the dataset and log results as model metadata. 
**Code Snippet**: ```python from zenml import log_model_metadata, step def evaluate_model(dataset: DatasetDict, model: SentenceTransformer) -> Dict[str, float]: evaluator = get_evaluator(dataset=dataset, model=model) return evaluator(model) @step def evaluate_base_model(dataset: DatasetDict) -> Annotated[Dict[str, float], "base_model_evaluation_results"]: model = SentenceTransformer(EMBEDDINGS_MODEL_ID_BASELINE, device="cuda" if torch.cuda.is_available() else "cpu") results = evaluate_model(dataset=dataset, model=model) base_model_eval = {f"dim_{dim}_cosine_ndcg@10": float(results[f"dim_{dim}_cosine_ndcg@10"]) for dim in EMBEDDINGS_MODEL_MATRYOSHKA_DIMS} log_model_metadata(metadata={"base_model_eval": base_model_eval}) return results ``` **Logging and Visualization**: - Results are logged in ZenML for inspection via the Model Control Plane. - Visualization can be done using `PIL.Image` and `matplotlib` to compare base and finetuned model evaluations. **Results Interpretation**: - Finetuned embeddings improved recall but still require better training data. Consider refining the dataset used for training. **Model Control Plane**: - A unified interface to inspect artifacts, models, logged metadata, and pipeline runs. - Available in ZenML Pro. **Next Steps**: - Integrate finetuned embeddings into the original RAG pipeline for further evaluations. - Upcoming sections will cover LLM finetuning and deployment. **Resources**: - For LLM finetuning, refer to the LoRA project and the blog post on finetuning Llama 3.1 with ZenML. - Follow instructions in the project repository README for practical implementation. ================================================== === File: docs/book/user-guide/cloud-guide/cloud-guide.md === ### Cloud Guide Overview This section provides straightforward instructions for connecting major public clouds to your ZenML deployment by configuring a **stack**. A stack is the set of tools and infrastructure that your pipelines utilize. ZenML executes different actions based on the stack used when running a pipeline. **Key Points:** - **Stack Registration:** This guide focuses on registering a stack, assuming the necessary resources for pipeline execution are already provisioned. - **Provisioning Options:** You can provision infrastructure through: - Manual setup - In-browser stack deployment wizard - Stack registration wizard - ZenML Terraform modules ![ZenML Stack](../../.gitbook/assets/vpc_zenml.png) *ZenML acts as a translation layer for code execution across different stacks.* ==================================================