# ☸️ Kubernetes Integration
## What you'll learn

How to deploy FlowyML pipelines to Kubernetes clusters for massive scale, turning your K8s cluster into a powerful ML engine. You'll orchestrate pipelines with per-step resource allocation, GPU support, and Kubernetes-native secrets management.
## Why Kubernetes?
| Feature | Benefit |
|---|---|
| Scale | Run thousands of steps in parallel |
| Resource Management | CPU/GPU quotas and limits per step |
| Resilience | K8s automatically restarts failed pods |
| Portability | Same configs work on any K8s cluster |
## ☸️ Running on Kubernetes
FlowyML submits each step as a Kubernetes Pod:
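A minimal sketch of the kind of Pod manifest such a submission produces; the names, labels, and image below are illustrative, not FlowyML's actual output:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flowyml-train-step        # illustrative pod name
  namespace: default
  labels:
    app: flowyml                  # illustrative label
spec:
  restartPolicy: Never            # failed steps are retried by the orchestrator, not the kubelet
  containers:
    - name: step
      image: myregistry/flowyml-pipeline:latest   # your pipeline image
      imagePullPolicy: Always
```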
### Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `namespace` | `str` | `"default"` | Kubernetes namespace for pods |
| `image` | `str` | required | Container image for steps |
| `image_pull_policy` | `str` | `"Always"` | `Always`, `IfNotPresent`, or `Never` |
| `service_account` | `str` | `None` | K8s service account name |
| `env_vars` | `dict` | `{}` | Environment variables and secrets |
| `node_selector` | `dict` | `{}` | Node selection labels |
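To make the defaults concrete, here is a plain-Python sketch that mirrors the table above. The `k8s_settings` helper is hypothetical, not FlowyML's actual API; it only shows how the parameters and their defaults fit together:

```python
def k8s_settings(image, namespace="default", image_pull_policy="Always",
                 service_account=None, env_vars=None, node_selector=None):
    """Illustrative helper mirroring the configuration table (not FlowyML API)."""
    if not image:
        raise ValueError("image is required")
    return {
        "namespace": namespace,
        "image": image,
        "image_pull_policy": image_pull_policy,
        "service_account": service_account,
        "env_vars": env_vars or {},          # empty dict by default
        "node_selector": node_selector or {},
    }

# Only `image` is required; everything else falls back to the documented default.
settings = k8s_settings(image="myregistry/pipeline:latest")
```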
## ⚙️ Per-Step Resources
Customize CPU, memory, and GPU for specific steps:
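On the Kubernetes side, per-step settings map to a standard `resources` block on the step's container. The values below are examples, not defaults:

```yaml
resources:
  requests:
    cpu: "2"            # guaranteed share used for scheduling
    memory: 4Gi
  limits:
    cpu: "4"            # hard cap; exceeding memory limits causes OOM kills
    memory: 8Gi
    nvidia.com/gpu: "1" # requires the NVIDIA device plugin on the node
```

GPUs are only specified under `limits`; Kubernetes treats the GPU request as equal to its limit.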
## 🔐 Secrets & Environment Variables
Inject Kubernetes secrets safely into your pods:
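Under the hood this resolves to Kubernetes' standard `secretKeyRef` pattern, so the secret value never appears in your pipeline code. The Secret name and key below are examples and must already exist in the namespace:

```yaml
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials   # an existing Secret in the pod's namespace
        key: password
```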
## Best Practices
**Use node selectors for GPU steps**
Label your GPU nodes and set `node_selector={"gpu": "true"}` to ensure GPU steps land on the right nodes.
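For example, label the node once with `kubectl` and the step's Pod spec gains a matching `nodeSelector` (the node name is illustrative):

```yaml
# After running: kubectl label nodes gpu-node-1 gpu=true
nodeSelector:
  gpu: "true"
```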
**Resource requests vs. limits**
Set requests and limits to match expected usage: over-requesting wastes cluster capacity, while under-requesting causes OOM kills.
**Image pull secrets**
If you pull from a private registry, configure `imagePullSecrets` in your namespace so pods can authenticate.
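One common setup is to create the registry Secret once and attach it to the namespace's service account, so every pod in the namespace can pull without per-pod configuration. The names below (`regcred`, the registry placeholders) are examples:

```yaml
# Created beforehand with:
#   kubectl create secret docker-registry regcred \
#     --docker-server=<registry> --docker-username=<user> --docker-password=<password>
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
  - name: regcred
```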