Kubernetes

Container orchestration system for automating deployment, scaling, and management

Open Source · Cloud Native · Container Orchestration · Production Ready

Overview

Kubernetes (K8s) is an open-source platform for automating the deployment, scaling, and management of containerized applications. Originally developed by Google and based on its experience with internal systems such as Borg and Omega, Kubernetes has become the industry standard for container orchestration.

Kubernetes provides a framework for running distributed systems resiliently, taking care of scaling, failover, deployment patterns, and much more for your applications.

Key Features

  • Automatic container orchestration
  • Horizontal and vertical auto-scaling
  • Service discovery and load balancing
  • Rolling updates and rollbacks
  • Self-healing (restart, replace, kill)
  • Configuration and secrets management
  • Storage orchestration
  • Batch execution
  • Multi-cloud and hybrid cloud
  • Extensibility via APIs

Kubernetes Architecture

Kubernetes uses a control plane and worker node architecture with the following components:

Control Plane (Master)

  • kube-apiserver: API server that exposes the Kubernetes API
  • etcd: Key-value store for cluster data
  • kube-scheduler: Schedules pods onto nodes
  • kube-controller-manager: Runs the built-in controllers
  • cloud-controller-manager: Integrates with cloud provider APIs

Worker Nodes

  • kubelet: Agent that runs on each node
  • kube-proxy: Network proxy for services
  • Container Runtime: containerd, CRI-O, or Docker Engine via cri-dockerd (dockershim was removed in Kubernetes 1.24)
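
These components can be inspected directly with kubectl once a cluster is reachable from your kubeconfig; a short sketch (on most distributions the control plane and system add-ons run in the kube-system namespace):

# Show the API server endpoint and core cluster services
kubectl cluster-info

# List nodes with their container runtime and kubelet versions
kubectl get nodes -o wide

# Inspect control plane and system components running as pods
kubectl get pods -n kube-system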

Basic Example - Deployment

Example of a simple deployment of a web application:

deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

kubectl Commands

# Apply the configuration
kubectl apply -f deployment.yaml

# Check deployment status
kubectl get deployments

# Check pods
kubectl get pods

# Check services
kubectl get services

# Scale the deployment
kubectl scale deployment web-app --replicas=5

# View logs
kubectl logs -l app=web-app

# Run a command inside a pod
kubectl exec -it <pod-name> -- /bin/bash
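
The rolling updates and rollbacks mentioned earlier are driven through the same deployment; a minimal sketch (the nginx:1.22 tag is only illustrative):

# Change the container image to trigger a rolling update
kubectl set image deployment/web-app web-app=nginx:1.22

# Follow the rollout until it completes
kubectl rollout status deployment/web-app

# Inspect the revision history
kubectl rollout history deployment/web-app

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/web-app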

Kubernetes for Big Data

Example of deploying Apache Spark on Kubernetes:

spark-cluster.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spark-master
  template:
    metadata:
      labels:
        app: spark-master
    spec:
      containers:
      - name: spark-master
        image: bitnami/spark:3.5
        env:
        - name: SPARK_MODE
          value: "master"
        - name: SPARK_RPC_AUTHENTICATION_ENABLED
          value: "no"
        - name: SPARK_RPC_ENCRYPTION_ENABLED
          value: "no"
        ports:
        - containerPort: 8080
        - containerPort: 7077
        resources:
          requests:
            memory: "1Gi"
            cpu: "500m"
          limits:
            memory: "2Gi"
            cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
  name: spark-master-service
spec:
  selector:
    app: spark-master
  ports:
  - name: web-ui
    port: 8080
    targetPort: 8080
  - name: spark
    port: 7077
    targetPort: 7077
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spark-worker
  template:
    metadata:
      labels:
        app: spark-worker
    spec:
      containers:
      - name: spark-worker
        image: bitnami/spark:3.5
        env:
        - name: SPARK_MODE
          value: "worker"
        - name: SPARK_MASTER_URL
          value: "spark://spark-master-service:7077"
        - name: SPARK_WORKER_MEMORY
          value: "2G"
        - name: SPARK_WORKER_CORES
          value: "2"
        resources:
          requests:
            memory: "2Gi"
            cpu: "1000m"
          limits:
            memory: "4Gi"
            cpu: "2000m"

ConfigMaps and Secrets

Managing configuration and sensitive data:

configmap-secret.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database.host: "postgres-service"
  database.port: "5432"
  database.name: "myapp"
  log.level: "INFO"
  app.properties: |
    server.port=8080
    spring.datasource.url=jdbc:postgresql://postgres-service:5432/myapp
    logging.level.root=INFO
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  database.username: dXNlcm5hbWU=  # base64 encoded
  database.password: cGFzc3dvcmQ=  # base64 encoded
  api.key: YWJjZGVmZ2hpams=        # base64 encoded
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-config
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app-with-config
  template:
    metadata:
      labels:
        app: app-with-config
    spec:
      containers:
      - name: app
        image: myapp:latest
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: database.host
        - name: DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database.username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database.password
        volumeMounts:
        - name: config-volume
          mountPath: /app/config
        - name: secret-volume
          mountPath: /app/secrets
          readOnly: true
      volumes:
      - name: config-volume
        configMap:
          name: app-config
      - name: secret-volume
        secret:
          secretName: app-secrets
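
Instead of hand-encoding base64 values, kubectl can create the same objects and encode the data for you; a sketch using the illustrative credentials from the manifest above:

# Create the Secret from literal values (kubectl handles the base64 encoding)
kubectl create secret generic app-secrets \
  --from-literal=database.username=username \
  --from-literal=database.password=password \
  --from-literal=api.key=abcdefghijk

# Create the ConfigMap from literals and a local app.properties file (assumed to exist)
kubectl create configmap app-config \
  --from-literal=database.host=postgres-service \
  --from-file=app.properties

# Read a key back, decoding the base64 value
kubectl get secret app-secrets -o jsonpath='{.data.database\.username}' | base64 -d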

Persistent Volumes

Managing persistent storage:

persistent-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast-ssd
  hostPath:
    path: /data/volumes/pv1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast-ssd
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
spec:
  serviceName: database-service
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: postgres
        image: postgres:15
        env:
        - name: POSTGRES_DB
          value: "mydb"
        - name: POSTGRES_USER
          value: "user"
        - name: POSTGRES_PASSWORD
          value: "password"
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: data-volume
          mountPath: /var/lib/postgresql/data
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
      volumes:
      - name: data-volume
        persistentVolumeClaim:
          claimName: data-pvc
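
For StatefulSets, the more idiomatic pattern is volumeClaimTemplates, which gives each replica its own PersistentVolumeClaim instead of sharing the single claim above; a minimal sketch of what would replace the volumes/claimName block in the StatefulSet spec:

  # Under the StatefulSet spec: one PVC is created per replica
  volumeClaimTemplates:
  - metadata:
      name: data-volume
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-ssd
      resources:
        requests:
          storage: 5Gi

Binding can then be verified with kubectl get pv,pvc.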

Monitoring and Observability

Monitoring stack with Prometheus and Grafana:

monitoring-stack.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:latest
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-config
          mountPath: /etc/prometheus
        - name: prometheus-data
          mountPath: /prometheus
        args:
          - '--config.file=/etc/prometheus/prometheus.yml'
          - '--storage.tsdb.path=/prometheus'
          - '--web.console.libraries=/etc/prometheus/console_libraries'
          - '--web.console.templates=/etc/prometheus/consoles'
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
      - name: prometheus-data
        emptyDir: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:latest
        ports:
        - containerPort: 3000
        env:
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: "admin"
        volumeMounts:
        - name: grafana-data
          mountPath: /var/lib/grafana
      volumes:
      - name: grafana-data
        emptyDir: {}
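
The scrape configuration above only keeps pods that opt in through the prometheus.io/scrape annotation, so application pods must be annotated accordingly; the fragment below sketches that opt-in for the web-app deployment (port/path relabel rules and the RBAC that Kubernetes service discovery needs, such as a ServiceAccount allowed to list pods and nodes, are omitted here):

  # In the Deployment's pod template (spec.template.metadata)
  template:
    metadata:
      labels:
        app: web-app
      annotations:
        prometheus.io/scrape: "true"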

Big Data Use Cases

Spark Clusters

Deployment and automatic scaling of Apache Spark clusters for distributed processing.

Kafka Streaming

Orchestration of Kafka clusters for real-time stream processing.

ML Pipelines

Running Machine Learning pipelines with Kubeflow and MLflow.

Data Lakes

Managing Data Lake components such as MinIO, Trino, and Hive.

Best Practices

Security

  • Use RBAC (Role-Based Access Control)
  • Configure Network Policies (see the sketch after this list)
  • Scan images for vulnerabilities
  • Use Pod Security Standards
  • Implement secrets management
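
A minimal NetworkPolicy sketch for the "Configure Network Policies" item above, admitting ingress to the web-app pods only from pods labeled role=frontend (the label is illustrative, and enforcement requires a CNI plugin that supports NetworkPolicies):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-app-allow-frontend
spec:
  # Applies to the web-app pods from the deployment example
  podSelector:
    matchLabels:
      app: web-app
  policyTypes:
  - Ingress
  ingress:
  # Only pods labeled role=frontend may reach web-app on TCP/80
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80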

Performance

  • Configure resource requests and limits
  • Use the Horizontal Pod Autoscaler (see the sketch after this list)
  • Implement health checks
  • Optimize container images
  • Configure node affinity
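
A minimal HorizontalPodAutoscaler sketch for the web-app deployment above, referenced in the list, scaling on CPU utilization (it assumes metrics-server is installed in the cluster):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  # Keep between 3 and 10 replicas, targeting 70% average CPU utilization
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70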

Quick Facts

  • Type: Container Orchestration
  • License: Apache 2.0
  • Language: Go
  • First Release: 2014
  • Maintainer: CNCF (originally developed by Google)