Kubernetes in Production: Enterprise Multi-Cluster Best Practices

TL;DR: This article shares real-world experience scaling Kubernetes infrastructure from a single development cluster to a robust multi-cluster production environment. Learn the best practices, challenges, and solutions for enterprise Kubernetes, from security hardening to cost optimization.

🚀 The Challenge: Scaling Beyond Development

In my experience as a DevOps Engineer, I've seen many organizations start with everything running on a single Kubernetes cluster. While this works fine for development and testing, growing user bases and business requirements demand a more robust, scalable approach.

The challenges we faced were typical but critical: hardening security across environments, maintaining observability over multiple clusters, keeping deployments consistent, and controlling infrastructure costs.

🏗️ Multi-Cluster Architecture Design

After extensive research and planning, we designed a multi-cluster architecture that addresses our scaling needs while maintaining operational simplicity.

Cluster Separation Strategy

We implemented a three-cluster approach, with separate clusters for development, staging, and production. The production cluster's namespace and resource quota configuration looked like this:

# Production Cluster Configuration
apiVersion: v1
kind: Namespace
metadata:
  name: enterprise-prod
  labels:
    environment: production
    company: enterprise
    tier: critical
  annotations:
    kubernetes.io/managed-by: "devops-team"
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: prod-resource-quota
  namespace: enterprise-prod
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
    services: "10"

🔒 Security Best Practices We Implemented

Security was our top priority when designing the production setup. Here are the key security measures we implemented:

1. Role-Based Access Control (RBAC)

We implemented strict RBAC policies following the principle of least privilege:

# DevOps Team Role for Production
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: enterprise-prod
  name: devops-engineer
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "create", "update", "patch", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devops-team-binding
  namespace: enterprise-prod
subjects:
- kind: User
  name: devops.engineer@company.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: devops-engineer
  apiGroup: rbac.authorization.k8s.io

2. Network Policies

We implemented network segmentation to control traffic between namespaces and external access:

# Production Network Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: enterprise-prod-network-policy
  namespace: enterprise-prod
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          environment: production
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to: []
    ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 80
    # Allow DNS lookups so pods can resolve the external hosts they reach over 80/443
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

3. Pod Security Standards

We enforced pod security standards to prevent privilege escalation and ensure container security; a minimal sketch follows the list below.

Key Security Measures:

  • All containers run as non-root users
  • Read-only root filesystems where possible
  • Restricted security contexts
  • Regular security scanning of container images
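
As a minimal sketch of how these measures can be enforced (assuming Pod Security Admission is available in your cluster version; the pod name and image are placeholders), the production namespace is labeled for the restricted profile and each workload gets an explicit security context:

# Add to the existing enterprise-prod Namespace: enforce the "restricted" Pod Security Standard
apiVersion: v1
kind: Namespace
metadata:
  name: enterprise-prod
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
---
# Illustrative pod spec that satisfies the restricted profile
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  namespace: enterprise-prod
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: registry.example.com/app:1.0
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]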

📊 Monitoring & Observability Strategy

With multiple clusters, observability became even more critical. We implemented a comprehensive monitoring stack:

Prometheus Multi-Cluster Setup

We deployed Prometheus in each cluster and federated the per-cluster instances into a central Prometheus. Each cluster's configuration attaches identifying external labels:

# Prometheus Federation Configuration
global:
  scrape_interval: 15s
  external_labels:
    cluster: 'enterprise-prod'
    environment: 'production'

scrape_configs:
  - job_name: 'kubernetes-nodes'
    kubernetes_sd_configs:
    - role: node
    relabel_configs:
    - source_labels: [__address__]
      regex: '(.*):10250'
      target_label: __address__
      replacement: '${1}:9100'

  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
    - role: pod
    relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
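
On the central Prometheus, a /federate scrape job pulls the labeled series from each cluster. A minimal sketch, assuming the per-cluster Prometheus endpoints shown (placeholders) are reachable from the central instance:

# Central Prometheus: federate series from each cluster
scrape_configs:
  - job_name: 'federate'
    honor_labels: true            # preserve the cluster/environment external labels
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~"kubernetes-.*"}'
    static_configs:
      - targets:
        - 'prometheus.prod-cluster.enterprise.com:9090'      # placeholder endpoints
        - 'prometheus.staging-cluster.enterprise.com:9090'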

Custom Alerting Rules

We created specific alerting rules for production workloads:

# Production Alerting Rules
groups:
- name: enterprise-production
  rules:
  - alert: HighPodMemoryUsage
    expr: container_memory_working_set_bytes{namespace="enterprise-prod", container!=""} / container_spec_memory_limit_bytes{namespace="enterprise-prod", container!=""} > 0.9
    for: 5m
    labels:
      severity: warning
      team: devops
    annotations:
      summary: "High memory usage in production pod"
      description: "Pod {{ $labels.pod }} in namespace {{ $labels.namespace }} is using {{ $value | humanizePercentage }} of its memory limit."

  - alert: PodCrashLooping
    expr: rate(kube_pod_container_status_restarts_total{namespace="enterprise-prod"}[5m]) > 0
    for: 5m
    labels:
      severity: critical
      team: devops
    annotations:
      summary: "Pod is crash looping in production"
      description: "Pod {{ $labels.pod }} in namespace {{ $labels.namespace }} is crash looping."
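
These rules attach a team label, which Alertmanager can use to route notifications. A minimal routing sketch (the receiver name and webhook URL are placeholders, not part of our actual setup):

# Alertmanager routing sketch: send alerts labeled team=devops to a dedicated receiver
route:
  receiver: default
  group_by: ['alertname', 'namespace']
  routes:
  - matchers:
    - team="devops"
    receiver: devops-alerts
receivers:
- name: default
- name: devops-alerts
  webhook_configs:
  - url: 'https://hooks.example.com/devops-alerts'   # placeholder webhook endpoint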

🔄 CI/CD Integration with GitOps

We implemented a GitOps workflow using ArgoCD for consistent and reliable deployments:

ArgoCD Application Configuration

# ArgoCD Application for Production
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: enterprise-prod-app
  namespace: argocd
spec:
  project: production
  source:
    repoURL: https://github.com/enterprise/k8s-manifests
    targetRevision: main
    path: environments/production
  destination:
    server: https://prod-cluster.enterprise.com
    namespace: enterprise-prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
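
The Application above references a production project, which scopes what can be deployed and where. A minimal sketch of that AppProject, mirroring the repository and cluster from the Application (adjust to your setup):

# AppProject referenced by the Application above
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production
  namespace: argocd
spec:
  description: Production workloads
  sourceRepos:
  - https://github.com/enterprise/k8s-manifests
  destinations:
  - server: https://prod-cluster.enterprise.com
    namespace: enterprise-prod
  clusterResourceWhitelist:
  - group: ""
    kind: Namespace    # allows the CreateNamespace=true sync option to create the namespace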

💰 Cost Optimization Techniques

Running multiple clusters can be expensive. Here are the cost optimization strategies we implemented:

1. Resource Quotas and Limits

We set appropriate resource quotas and limits to prevent resource waste:

# Resource Limits for Production Pods
apiVersion: v1
kind: LimitRange
metadata:
  name: prod-limit-range
  namespace: enterprise-prod
spec:
  limits:
  - type: Container
    default:
      cpu: 200m
      memory: 256Mi
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    max:
      cpu: 2
      memory: 4Gi
    min:
      cpu: 50m
      memory: 64Mi

2. Cluster Autoscaling

We implemented horizontal and vertical pod autoscaling along with cluster autoscaling; a sample HPA manifest follows the list below:

Autoscaling Strategy:

  • HPA: Scale pods based on CPU and memory metrics
  • VPA: Right-size pod resource requests
  • Cluster Autoscaler: Add/remove nodes based on demand
  • Spot Instances: Use spot instances for non-critical workloads
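
As a sketch of the HPA piece (the api-server Deployment name is a placeholder), a HorizontalPodAutoscaler scaling on CPU and memory utilization looks like this:

# Example HPA for a production workload
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-server-hpa
  namespace: enterprise-prod
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80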

📚 Lessons Learned & Key Takeaways

What Worked Well

  • GitOps with ArgoCD kept deployments consistent and auditable across clusters
  • Prometheus federation gave us a single view of all environments
  • Resource quotas and limit ranges kept resource usage and costs predictable

Challenges We Overcame

  • Enforcing consistent security controls (RBAC, network policies, pod security) across clusters
  • Maintaining observability as the number of clusters and workloads grew
  • Controlling infrastructure costs while scaling out

Future Improvements

Based on our experience, we're continuing to refine this setup as our workloads and teams grow.

🎯 Practical Tips for Your Implementation

Before You Start:

  1. Plan Your Architecture: Define clear boundaries and responsibilities
  2. Start Small: Begin with development and staging before production
  3. Invest in Monitoring: Set up observability from day one
  4. Automate Everything: Manual processes don't scale
  5. Document Decisions: Future you will thank present you

🤝 Connect & Learn More

If you're implementing Kubernetes in production or have questions about enterprise setup strategies, I'd love to connect and share experiences. The DevOps community thrives on knowledge sharing, and I'm always eager to learn from others' experiences too.

About the Author: David M is a DevOps & Observability Engineer at Finstein, specializing in Kubernetes, AWS cloud infrastructure, and CI/CD automation. With 2+ years of experience building enterprise solutions and 50+ projects completed, he helps teams deliver reliable, scalable solutions through automation and DevOps best practices.