The Container Attack Surface
Kubernetes now runs production workloads at more than 84% of organizations, yet a 2025 Red Hat survey found that 67% of enterprises delayed or slowed deployments because of container security concerns. The default Kubernetes configuration is optimized for ease of use, not security—making hardening a non-negotiable step before any production deployment.
Why Kubernetes Is the New Perimeter
The traditional network perimeter is gone. In a cloud-native world, Kubernetes is the platform that defines what runs, where it runs, and who can access it. A misconfigured cluster doesn't just expose one application—it exposes every workload sharing that cluster, the underlying node infrastructure, and potentially the entire cloud account via service account credential chains.
For defense and regulated enterprises, this means treating Kubernetes security with the same rigor applied to network firewalls and physical access controls. The good news: Kubernetes has a mature, layered security model. The challenge is understanding and correctly implementing each layer.
The 4C's of Cloud-Native Security
Kubernetes security follows a defense-in-depth model often described as the "4C's":
- Cloud: The underlying infrastructure (CSP IAM, network configuration, node hardening).
- Cluster: The Kubernetes control plane (API server, etcd encryption, admission controllers).
- Container: The container runtime and images (vulnerability scanning, image signing, read-only root filesystems).
- Code: The application itself (dependency scanning, secrets management, secure coding).
A vulnerability at any layer can compromise the layers above it. Securing only the "Code" layer is meaningless if the cluster's API server is publicly exposed with anonymous authentication enabled.
Pod Security Standards: Your First Line of Defense
The Pod Security Standards (PSS), enforced via the built-in Pod Security Admission controller, define three profiles that progressively restrict what a pod is allowed to do:
- Privileged: Unrestricted. Used only for system-level workloads like CNI plugins and log collectors.
- Baseline: Prevents known privilege escalations. Blocks hostNetwork, hostPID, and privileged containers.
- Restricted: The hardened standard. Enforces running as non-root, drops all capabilities (only NET_BIND_SERVICE may be added back), disallows privilege escalation, and mandates a seccompProfile.
For defense workloads, the Restricted profile should be the default for every namespace, with explicit exceptions granted only to infrastructure-level DaemonSets via namespace-level labels:
apiVersion: v1
kind: Namespace
metadata:
  name: mission-apps
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
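Conversely, an infrastructure namespace that must host privileged DaemonSets (a CNI plugin, for example) gets an explicit, documented exception. A minimal sketch—the namespace name here is illustrative:

```yaml
# Exception namespace for privileged infrastructure workloads only
apiVersion: v1
kind: Namespace
metadata:
  name: cni-system          # illustrative name
  labels:
    pod-security.kubernetes.io/enforce: privileged
```

Keeping exceptions to a short, auditable list of namespaces makes it easy to verify that no application workload ever lands in a privileged namespace.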
RBAC: The Principle of Least Privilege
Role-Based Access Control is the gatekeeper to the Kubernetes API. A single overly broad
ClusterRoleBinding can turn a compromised service account into a cluster admin. The key
principles:
- Avoid cluster-admin: Bind granular Roles per namespace. Never bind cluster-admin to a service account used by an application.
- Namespace isolation: Each team or mission gets its own namespace with dedicated Roles and RoleBindings. Use ResourceQuotas to prevent noisy neighbors.
- Audit ServiceAccount tokens: Every pod gets a service account. Disable automounting where the pod doesn't need API access: automountServiceAccountToken: false.
- Short-lived tokens: Use bound service account tokens (projected volumes), which have a configurable expiry, instead of the legacy non-expiring token Secrets.
# Minimal Role: read-only access to pods in a single namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: sensor-data
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
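A Role does nothing until it is bound. A matching RoleBinding grants the Role to a single service account in the same namespace—the service account name below is illustrative:

```yaml
# Grant pod-reader to one service account, scoped to sensor-data only
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: sensor-data
subjects:
- kind: ServiceAccount
  name: log-shipper        # illustrative name
  namespace: sensor-data
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because both the Role and the binding are namespaced, a compromise of this service account exposes read access to pods in sensor-data and nothing else.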
Network Policies: Zero Trust at the Pod Level
By default, every pod in a Kubernetes cluster can communicate with every other pod—across any namespace. This is the antithesis of Zero Trust. Network Policies act as the firewall layer for pod-to-pod traffic.
A critical first step is deploying a default-deny policy in every namespace, then explicitly allowing only the traffic each workload requires:
# Default deny all ingress and egress in a namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: mission-apps
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
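With default-deny in place, each required flow is added back explicitly. A sketch of an allow policy—the labels and port are illustrative:

```yaml
# Allow ingress to API pods only from frontend pods, on one port
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: mission-apps
spec:
  podSelector:
    matchLabels:
      app: mission-api       # illustrative label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: mission-frontend   # illustrative label
    ports:
    - protocol: TCP
      port: 8443
```

Remember that default-deny egress also blocks DNS, so most namespaces will additionally need an egress rule permitting UDP/TCP 53 to the cluster DNS service.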
For defense environments requiring encryption between services, a service mesh like Istio or Linkerd provides automatic mutual TLS (mTLS) between all pods—ensuring both encryption and cryptographic identity verification for every connection.
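In Istio, for example, strict mTLS can be enforced mesh-wide with a single resource; applying it in the mesh root namespace (istio-system by default) covers every workload:

```yaml
# Require mTLS for all workload-to-workload traffic in the mesh
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

STRICT mode rejects any plaintext connection, so roll it out after confirming all workloads carry sidecars, or stage it namespace by namespace.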
Runtime Security: Detecting the Unknown
Static hardening isn't enough. Attackers who bypass admission controls and network policies will attempt runtime exploitation: reverse shells, credential theft from instance metadata, or cryptominer injection. This is where runtime security tools powered by eBPF become critical.
- Falco: The CNCF-graduated runtime security project. Monitors syscalls in real time and alerts on suspicious behavior (e.g., a shell spawned inside a container, or reads of sensitive files such as /etc/shadow).
- Tetragon (Cilium): eBPF-based observability and enforcement. Can block malicious actions at the kernel level before they complete—not just alert on them.
- KubeArmor: A runtime security enforcement system that restricts container behavior based on security policies, including file access, process execution, and networking.
For classified or defense workloads, runtime telemetry should be forwarded to a SIEM or SOAR platform for automated incident response.
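As a sketch of what detection looks like in practice, a custom Falco rule flagging interactive shells in containers might read as follows (this assumes the container and spawned_process macros from Falco's stock ruleset):

```yaml
# Custom Falco rule: alert when a shell starts inside any container
- rule: Shell Spawned in Container
  desc: An interactive shell was started inside a container
  condition: spawned_process and container and proc.name in (bash, sh, zsh)
  output: >
    Shell in container (user=%user.name container=%container.name
    command=%proc.cmdline)
  priority: WARNING
  tags: [container, shell]
```

In production, rules like this should be tuned against legitimate debugging workflows before alerts are wired into automated response.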
Image Security & Supply Chain Integrity
A container image packages the application and its entire dependency tree into a single artifact, and every layer of it is part of your supply chain. Securing it means:
- Minimal base images: Use distroless or scratch-based images to minimize the attack surface. Each additional package is a potential CVE.
- Vulnerability scanning: Integrate Trivy, Grype, or Snyk into CI/CD to block images with critical CVEs from reaching production.
- Image signing: Use Cosign (part of the Sigstore project) to cryptographically sign images at build time. Enforce signature verification at admission using Kyverno or OPA Gatekeeper.
- SBOM generation: Generate a Software Bill of Materials for every image and store it alongside the image in the OCI registry.
# Sign an image with Cosign
cosign sign --key cosign.key registry.example.com/mission-app:v2.1.0
# Verify before deployment
cosign verify --key cosign.pub registry.example.com/mission-app:v2.1.0
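Signature verification belongs at admission, not just in CI. A sketch of a Kyverno policy enforcing Cosign signatures—registry pattern and policy name are illustrative, and publicKeys takes the PEM contents of cosign.pub:

```yaml
# Reject any pod whose image lacks a valid Cosign signature
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
  - name: require-cosign-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - imageReferences:
      - "registry.example.com/*"     # illustrative registry
      attestors:
      - entries:
        - keys:
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              ...
              -----END PUBLIC KEY-----
```

With Enforce in effect, an unsigned or tampered image is rejected before a pod is ever scheduled.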
Secrets Management: Beyond Kubernetes Secrets
Default Kubernetes Secrets are base64-encoded, not encrypted. Anyone with get
access to Secrets in a namespace can read them. For defense-grade deployments:
- Enable encryption at rest: Configure the API server's EncryptionConfiguration to encrypt Secret data in etcd using AES-GCM or AES-CBC.
- External secrets stores: Use the Secrets Store CSI Driver to mount secrets from HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault directly into pods as volumes.
- Rotate automatically: Ensure secrets have TTLs and are rotated automatically. Vault's dynamic secrets are ideal for database credentials.
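Encryption at rest is configured in a file passed to the API server via --encryption-provider-config. A minimal sketch—the key name is illustrative and the key material must be a freshly generated 32-byte random value:

```yaml
# Encrypt Secret objects in etcd with AES-GCM; identity is the fallback
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aesgcm:
      keys:
      - name: key1                  # illustrative key name
        secret: <base64-encoded 32-byte random key>
  - identity: {}
```

Note that only Secrets written after this takes effect are encrypted; existing Secrets must be rewritten (for example with kubectl get secrets -A -o json | kubectl replace -f -) to pick up encryption.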
CIS Kubernetes Benchmark & DISA STIG
The CIS Kubernetes Benchmark provides over 200 specific security recommendations for
cluster configuration. Tools like kube-bench automate compliance checks against the
benchmark:
# Run CIS benchmark checks on your cluster
kube-bench run --targets master,node,policies
For U.S. Department of Defense workloads, the DISA Kubernetes STIG provides mandatory configuration controls that must be implemented and continuously validated. Key STIG controls include:
- API server audit logging must be enabled and forwarded to a centralized log system.
- etcd must use TLS for peer and client communication.
- Anonymous authentication must be disabled on the API server.
- All namespaces must enforce Pod Security Standards at the Restricted level.
- The Node authorizer and the NodeRestriction admission plugin must be enabled.
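Several of these controls map directly to kube-apiserver flags. A sketch of a hardened invocation—the flag names are real, while the file paths are examples:

```shell
# Illustrative kube-apiserver flags satisfying several STIG controls
kube-apiserver \
  --anonymous-auth=false \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction,PodSecurity \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kubernetes/audit.log \
  --audit-log-maxage=30
```

On kubeadm clusters these flags live in the static pod manifest at /etc/kubernetes/manifests/kube-apiserver.yaml; managed services expose equivalents through their own cluster configuration.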
Putting It All Together: A Hardening Checklist
Here is a prioritized checklist for teams beginning their Kubernetes hardening journey:
- Enable RBAC and remove any legacy ABAC policies. Audit all ClusterRoleBindings.
- Enforce Pod Security Standards (Restricted) via namespace labels.
- Deploy default-deny NetworkPolicies in every namespace.
- Enable audit logging on the API server and forward to your SIEM.
- Encrypt etcd data at rest and ensure TLS for all etcd communication.
- Scan images in CI/CD and enforce image signing in admission.
- Deploy runtime security (Falco or Tetragon) for behavioral monitoring.
- Enable mTLS between services via a service mesh or Cilium encryption.
- Run kube-bench weekly and track remediation of findings.
- Externalize secrets to a dedicated secrets manager with automatic rotation.
Alterra Solutions' Perspective
At Alterra, we build and harden Kubernetes platforms for clients who can't afford a breach—from defense agencies running classified workloads in air-gapped environments to fintech firms processing millions of transactions per second. Our approach combines the CIS Benchmark, DISA STIG controls, and runtime behavioral analysis into a single, continuously validated security posture.
If your team is deploying mission-critical workloads on Kubernetes and needs a security baseline that goes beyond the defaults, we'd love to talk.