Kubernetes for Security: Orchestrating Containers at Scale
The problem: You have a compliance scanning tool running in a Docker container. It works great. But what if you need to scan 50 AWS accounts? Or run scans every hour? Or handle failures automatically? Managing containers manually doesn’t scale. Kubernetes (K8s) is your solution. It’s like having a smart system that manages your containers for you - automatically.
What is Kubernetes, Really?
Think of Kubernetes like this:
Without Kubernetes:
- You manually start containers
- You manually restart them when they crash
- You manually scale up/down
- You manually handle networking
- Lots of manual work
With Kubernetes:
- You declare what you want (“I want 3 instances of my scanner”)
- Kubernetes makes it happen
- It restarts crashed containers automatically
- It scales based on demand
- It handles networking automatically
- It’s self-healing and self-managing
Real-world analogy: Kubernetes is like a smart warehouse manager. You say “I need 10 scanners running,” and the manager:
- Finds available workers (nodes)
- Assigns them tasks (pods)
- Monitors their health
- Replaces them if they fail
- Balances the workload
Core Kubernetes Concepts
Cluster: The Big Picture
A cluster is your entire Kubernetes setup. It consists of:
- Control Plane (master) - The brain that makes decisions
- Nodes (workers) - The machines that run your containers
Think of it as a company:
- Control Plane = Management
- Nodes = Employees doing the work
Pods: The Smallest Unit
A pod is the smallest deployable unit in Kubernetes. It’s usually one container, but can contain multiple related containers.
Key point: Pods are ephemeral (temporary). They can be created, destroyed, and recreated. Don’t store important data in pods!
Deployments: Managing Pods
A Deployment manages a set of pods. It ensures a specified number of pods are running.
Real-world example: You want 3 instances of your compliance scanner running. You create a Deployment that says “keep 3 pods running.” If one crashes, Kubernetes automatically creates a new one.
Services: Exposing Pods
A Service provides a stable network endpoint for pods. Even if pods are recreated with new IPs, the service provides a consistent address.
Real-world example: Your scanner needs to be accessible. You create a Service that gives it a stable IP address and DNS name, even when pods restart.
Your First Kubernetes Deployment
Let’s deploy the compliance scanner to Kubernetes:
Step 1: Create a Deployment
deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: compliance-scanner
  labels:
    app: compliance-scanner
spec:
  replicas: 2  # Run 2 instances
  selector:
    matchLabels:
      app: compliance-scanner
  template:
    metadata:
      labels:
        app: compliance-scanner
    spec:
      containers:
      - name: scanner
        image: compliance-tool:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-credentials
              key: access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-credentials
              key: secret-access-key
        volumeMounts:
        - name: reports
          mountPath: /app/reports
      volumes:
      - name: reports
        persistentVolumeClaim:
          claimName: reports-pvc
```
Breaking it down:
- `replicas: 2`: run 2 instances
- `image: compliance-tool:latest`: the container image to run
- `env`: environment variables (pulled from Secrets)
- `volumes`: persistent storage for reports
Step 2: Create a Service
service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: compliance-scanner-service
spec:
  selector:
    app: compliance-scanner
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP  # Internal access only
```
Step 3: Create a PersistentVolumeClaim (for reports)
pvc.yaml:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: reports-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
Step 4: Create Secrets (for AWS credentials)
```bash
# Create secret from literal values
kubectl create secret generic aws-credentials \
  --from-literal=access-key-id='YOUR_ACCESS_KEY' \
  --from-literal=secret-access-key='YOUR_SECRET_KEY'
```
⚠️ Security Note: Never commit secrets to Git! Use Kubernetes secrets or external secret management.
Step 5: Deploy Everything
```bash
# Apply all configurations (PVC first, since the Deployment mounts it)
kubectl apply -f pvc.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Check status
kubectl get pods
kubectl get services
kubectl get pvc
```
Expected output:
```
NAME                                 READY   STATUS    RESTARTS   AGE
compliance-scanner-7d8f9b4c5-abc12   1/1     Running   0          10s
compliance-scanner-7d8f9b4c5-xyz34   1/1     Running   0          10s
```
Common Kubernetes Commands
```bash
# Get pods
kubectl get pods

# Get all resources
kubectl get all

# Describe a pod (detailed info)
kubectl describe pod compliance-scanner-abc12

# View logs
kubectl logs compliance-scanner-abc12

# Follow logs (like tail -f)
kubectl logs -f compliance-scanner-abc12

# Execute command in pod
kubectl exec -it compliance-scanner-abc12 -- /bin/bash

# Delete a pod (will be recreated by Deployment)
kubectl delete pod compliance-scanner-abc12

# Scale deployment
kubectl scale deployment compliance-scanner --replicas=5

# Update image
kubectl set image deployment/compliance-scanner scanner=compliance-tool:v2.0

# Rollback update
kubectl rollout undo deployment/compliance-scanner
```
Scheduled Jobs: CronJobs
Want to run your scanner on a schedule? Use a CronJob:
cronjob.yaml:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: compliance-scanner-cron
spec:
  schedule: "0 2 * * *"  # Run daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: scanner
            image: compliance-tool:latest
            env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: aws-credentials
                  key: access-key-id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: aws-credentials
                  key: secret-access-key
          restartPolicy: OnFailure
```
Schedule syntax uses the standard five cron fields: `"minute hour day-of-month month day-of-week"`
- `"0 2 * * *"`: daily at 2 AM
- `"0 */6 * * *"`: every 6 hours
- `"0 9 * * 1"`: every Monday at 9 AM
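Cron strings are easy to get subtly wrong, and Kubernetes only reports an invalid schedule when you apply the manifest. As a quick sanity check, here's a minimal stdlib-only Python sketch that verifies a schedule has five fields and that each field uses only characters valid in standard cron syntax (it deliberately skips numeric range checks, so it's a rough filter, not a full validator):

```python
import re

# One pattern per cron field: bare "*", numbers, ranges (1-5),
# steps (*/6), and comma-separated lists (1,3,5).
FIELD = re.compile(r"^(\*|\d+)(-\d+)?(/\d+)?(,(\*|\d+)(-\d+)?(/\d+)?)*$")

def is_valid_cron(schedule: str) -> bool:
    """Return True if the schedule looks like a five-field cron expression."""
    fields = schedule.split()
    if len(fields) != 5:
        return False
    return all(FIELD.match(f) for f in fields)

print(is_valid_cron("0 2 * * *"))    # daily at 2 AM -> True
print(is_valid_cron("0 */6 * * *"))  # every 6 hours -> True
print(is_valid_cron("0 2 * *"))      # only four fields -> False
```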
ConfigMaps: Configuration Management
Store configuration separately from code:
configmap.yaml:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scanner-config
data:
  baseline_checks.json: |
    {
      "iam_policy": {
        "disallow_wildcard_action": true
      },
      "s3_bucket": {
        "require_server_side_encryption": true
      }
    }
  scan_interval: "3600"
```
Use in deployment:
```yaml
containers:
- name: scanner
  image: compliance-tool:latest
  envFrom:  # keys that aren't valid env-var names (like baseline_checks.json) are skipped
  - configMapRef:
      name: scanner-config
  volumeMounts:
  - name: config
    mountPath: /app/config
volumes:
- name: config
  configMap:
    name: scanner-config
```
Security Best Practices
1. Use Non-Root Users
```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000
```
2. Limit Resources
```yaml
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
```
This prevents one pod from consuming all resources. Requests are guaranteed minimums the scheduler uses for placement; limits are hard caps. `250m` means 250 millicores (a quarter of a CPU core), and `Mi` is mebibytes.
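The quantity suffixes trip people up: `m` is millicores, while `Ki`/`Mi`/`Gi` are binary (1024-based) units, not decimal KB/MB/GB. A tiny Python sketch of the conversions, covering only the suffixes used in this post:

```python
def parse_cpu(value: str) -> float:
    """Convert a Kubernetes CPU quantity to cores ('250m' -> 0.25)."""
    if value.endswith("m"):
        return int(value[:-1]) / 1000
    return float(value)

def parse_memory_mib(value: str) -> float:
    """Convert a memory quantity to MiB ('256Mi' -> 256.0, '1Gi' -> 1024.0)."""
    units = {"Ki": 1 / 1024, "Mi": 1, "Gi": 1024}
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return float(value[: -len(suffix)]) * factor
    return float(value) / (1024 ** 2)  # bare number means bytes

print(parse_cpu("250m"))          # 0.25
print(parse_memory_mib("512Mi"))  # 512.0
```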
3. Use Network Policies
Control network traffic between pods:
network-policy.yaml:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: scanner-policy
spec:
  podSelector:
    matchLabels:
      app: compliance-scanner
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: nginx
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: elasticsearch
    ports:
    - protocol: TCP
      port: 9200
```
4. Use Secrets, Not Hardcoded Values
Bad:
```yaml
env:
- name: AWS_SECRET_ACCESS_KEY
  value: "AKIA..."  # DON'T DO THIS!
```
Good:
```yaml
env:
- name: AWS_SECRET_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: aws-credentials
      key: secret-access-key
```
Complete Production Setup
Here’s a production-ready configuration:
Namespace
namespace.yaml:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: compliance-system
```
Deployment (with all best practices)
deployment-production.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: compliance-scanner
  namespace: compliance-system
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: compliance-scanner
  template:
    metadata:
      labels:
        app: compliance-scanner
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: scanner
        image: compliance-tool:1.0.0
        imagePullPolicy: Always
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: aws-credentials
              key: access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: aws-credentials
              key: secret-access-key
        volumeMounts:
        - name: reports
          mountPath: /app/reports
        - name: tmp
          mountPath: /tmp
        # Placeholder probes that always succeed; replace with real health checks
        livenessProbe:
          exec:
            command:
            - python
            - -c
            - "import sys; sys.exit(0)"
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          exec:
            command:
            - python
            - -c
            - "import sys; sys.exit(0)"
          initialDelaySeconds: 5
          periodSeconds: 5
      volumes:
      - name: reports
        persistentVolumeClaim:
          claimName: reports-pvc
      - name: tmp
        emptyDir: {}
```
Monitoring and Observability
View Pod Logs
```bash
# All pods
kubectl logs -l app=compliance-scanner

# Specific pod
kubectl logs compliance-scanner-abc12

# Previous container (if crashed)
kubectl logs compliance-scanner-abc12 --previous
```
Check Resource Usage
```bash
# Top pods by CPU/memory
kubectl top pods

# Top nodes
kubectl top nodes
```
Describe Resources
```bash
# Get detailed info
kubectl describe pod compliance-scanner-abc12
kubectl describe deployment compliance-scanner
kubectl describe service compliance-scanner-service
```
Key Takeaways
- Kubernetes = Container Orchestration - Manages containers at scale
- Pods = Running Containers - The actual workloads
- Deployments = Pod Managers - Ensure pods stay running
- Services = Network Endpoints - Expose pods to network
- Secrets = Secure Credentials - Never hardcode secrets
- ConfigMaps = Configuration - Separate config from code
- CronJobs = Scheduled Tasks - Run jobs on schedule
- Always use non-root - Security best practice
- Limit resources - Prevent resource exhaustion
- Use namespaces - Organize resources
Practice Exercise
Try this yourself:
- Create a simple deployment with 2 replicas
- Create a service to expose it
- Scale it to 5 replicas
- Create a CronJob that runs hourly
- Check logs and status
What’s Next?
Now that you understand Kubernetes, you’re ready to:
- Deploy to cloud Kubernetes (EKS, GKE, AKS)
- Set up CI/CD pipelines
- Monitor and scale applications
- Build production-ready systems
Remember: Kubernetes is powerful but complex. Start simple, learn the basics, then gradually add complexity!
💡 Pro Tip: Use `kubectl explain` to understand any Kubernetes resource. For example, `kubectl explain deployment.spec` shows you all available options for Deployments!
Ready to secure your deployments? Check out our next post on CI/CD Security, where we’ll learn how to build secure deployment pipelines!