Docker for Security Professionals: Containerization Made Simple

Here’s the problem: You write a Python script on your Mac. It works perfectly. You send it to your colleague who uses Windows. It doesn’t work. Different Python versions, missing libraries, path issues - the classic “works on my machine” problem. Docker solves this. Your script runs the same way everywhere. Let me show you how.

What is Docker, Really?

Think of Docker like this:

Without Docker:

  • Your app needs Python 3.11
  • Your colleague has Python 3.9
  • Your server has Python 3.10
  • Everyone has different library versions
  • Chaos ensues

With Docker:

  • You package your app with Python 3.11
  • It runs the same on Mac, Windows, Linux, AWS, anywhere
  • No “works on my machine” problems
  • Consistent, reproducible environments

The Core Concepts

Images: The Blueprint

An image is like a recipe. It contains:

  • The operating system (usually Linux)
  • Your application code
  • All dependencies
  • Configuration files

Think of it as a snapshot of everything your app needs to run.

Containers: The Running Instance

A container is a running instance of an image. Just as a house is built from a blueprint, a container is created from an image.

Key point: You can run multiple containers from the same image. Each one is isolated and independent.
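
For instance, here is a quick sketch using the nginx:alpine image from Docker Hub (it also appears later in this post): two containers started from the same image run side by side, each with its own name, filesystem, and port mapping.

# Two independent containers from the same image
docker run -d --name web1 -p 8081:80 nginx:alpine
docker run -d --name web2 -p 8082:80 nginx:alpine

# Both show up, running side by side
docker ps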

Dockerfile: The Recipe

A Dockerfile contains the instructions for building an image. It’s like a recipe card:

FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]

This says:

  1. Start with Python 3.11
  2. Set working directory to /app
  3. Copy requirements file
  4. Install dependencies
  5. Copy application code
  6. Run the application

Your First Docker Container

Let’s containerize a simple Python script:

Step 1: Create a Simple Python Script

hello_security.py:

#!/usr/bin/env python3
import sys
import platform

print("=" * 50)
print("Security Compliance Scanner")
print("=" * 50)
print(f"Running on: {platform.system()}")
print(f"Python version: {sys.version}")
print("✅ Scanner initialized successfully!")

Step 2: Create a Dockerfile

Dockerfile:

# Use Python 3.11 as base image
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Copy the script
COPY hello_security.py .

# Run the script
CMD ["python", "hello_security.py"]

Step 3: Build the Image

docker build -t security-scanner:latest .

Breaking it down:

  • docker build - Build command
  • -t security-scanner:latest - Tag (name:version)
  • . - Build context (current directory)

Step 4: Run the Container

docker run security-scanner:latest

Output:

==================================================
Security Compliance Scanner
==================================================
Running on: Linux
Python version: 3.11.0 (default, ...)
✅ Scanner initialized successfully!

Real-world example: You’ve just created a containerized version of your script. It will run the same way on your laptop, your colleague’s Windows machine, and your AWS EC2 instance!
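
To actually get that image onto your colleague’s machine or an EC2 instance, you typically push it to a registry and pull it on the other side. A minimal sketch, assuming a hypothetical Docker Hub account named youruser:

# Log in, tag the image for your registry, and push it
docker login
docker tag security-scanner:latest youruser/security-scanner:latest
docker push youruser/security-scanner:latest

# On any other machine with Docker installed
docker pull youruser/security-scanner:latest
docker run youruser/security-scanner:latest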

Containerizing the Compliance Tool

Let’s containerize a real security tool - our compliance scanner:

Step 1: Project Structure

compliance-tool/
├── Dockerfile
├── requirements.txt
├── main.py
├── config/
│   └── baseline_checks.json
└── reports/

Step 2: requirements.txt

boto3>=1.28.0
pandas>=2.0.0
openpyxl>=3.1.0
elasticsearch>=8.8.0
requests>=2.31.0

Step 3: Dockerfile

# Use Python 3.11 slim image (smaller, faster)
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Install system dependencies (if needed)
RUN apt-get update && apt-get install -y \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements first (for better caching)
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create reports directory
RUN mkdir -p /app/reports

# Set environment variables
ENV PYTHONUNBUFFERED=1

# Run the application
CMD ["python", "main.py"]

Step 4: Build and Run

# Build the image
docker build -t compliance-tool:1.0 .

# Run the container
docker run --rm \
  -v $(pwd)/reports:/app/reports \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  compliance-tool:1.0

What this does:

  • --rm - Remove container after it stops
  • -v $(pwd)/reports:/app/reports - Mount reports directory (persist data)
  • -e - Pass environment variables (AWS credentials)
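
If you’d rather not put each credential on the command line, docker run also accepts an --env-file. A sketch, assuming a local aws.env file that is listed in .dockerignore and never committed or baked into the image:

# aws.env, one VAR=value per line
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...

# Run with the env file instead of individual -e flags
docker run --rm \
  -v $(pwd)/reports:/app/reports \
  --env-file aws.env \
  compliance-tool:1.0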

Docker Compose: Multi-Container Applications

Docker Compose lets you run multiple containers together. Perfect for complex applications!

docker-compose.yml

version: '3.8'

services:
  # Compliance scanner
  scanner:
    build: .
    image: compliance-tool:latest
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=us-east-1
    volumes:
      - ./reports:/app/reports
    depends_on:
      - nginx
  
  # Web server to view reports
  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./reports:/usr/share/nginx/html/reports:ro
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    restart: unless-stopped
  
  # Scheduled scanner (the restart policy keeps it running; main.py itself should sleep between daily runs)
  scheduler:
    image: compliance-tool:latest
    command: python main.py
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    volumes:
      - ./reports:/app/reports
    restart: unless-stopped

Run it:

docker-compose up -d

This starts:

  • Your compliance scanner
  • An Nginx web server (view reports at http://localhost:8080)
  • A scheduled scanner

Real-world example: This is exactly how you’d deploy the compliance tool in production. One command starts everything!
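
Once the stack is up, a few everyday Compose commands cover most routine work:

# See what the Compose project is running
docker-compose ps

# Follow logs from just the scanner service
docker-compose logs -f scanner

# Stop and remove everything the file created
docker-compose down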

Security Best Practices

1. Don’t Run as Root

Bad:

FROM python:3.11
# Runs as root by default - DANGEROUS!

Good:

FROM python:3.11-slim

# Create non-root user
RUN useradd -m -u 1000 appuser

# Switch to non-root user
USER appuser

WORKDIR /app
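
You can quickly confirm the switch worked. A sketch, where my-app stands in for whatever tag you give this image:

# Check which user the container actually runs as
docker run --rm my-app whoami   # prints: appuser
docker run --rm my-app id -u    # prints: 1000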

2. Use .dockerignore

Create a .dockerignore file (like .gitignore):

__pycache__
*.pyc
.git
.env
*.log
reports/*
.DS_Store

This prevents sensitive files from being copied into the image.

3. Don’t Store Secrets in Images

Bad:

ENV AWS_ACCESS_KEY_ID="AKIA..."
ENV AWS_SECRET_ACCESS_KEY="secret..."

Good:

# Pass as environment variables at runtime
docker run -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID ...

Or use Docker secrets (for Docker Swarm) or mounted files.
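
For AWS specifically, a common alternative is to mount your existing credentials file into the container read-only instead of passing keys at all. A sketch, assuming the image follows the non-root pattern above (user appuser, home directory /home/appuser); for a container running as root you would mount to /root/.aws instead:

# boto3 reads ~/.aws/credentials inside the container; :ro keeps the mount read-only
docker run --rm \
  -v ~/.aws:/home/appuser/.aws:ro \
  compliance-tool:1.0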

4. Use Multi-Stage Builds

Keep images small:

# Stage 1: Build
FROM python:3.11 as builder
WORKDIR /build
COPY requirements.txt .
RUN pip install --user -r requirements.txt

# Stage 2: Runtime
FROM python:3.11-slim
WORKDIR /app
# Copy only installed packages from builder
COPY --from=builder /root/.local /root/.local
COPY . .
ENV PATH=/root/.local/bin:$PATH
CMD ["python", "main.py"]

This creates a smaller final image (no build tools).
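
You can see the difference by building the single-stage and multi-stage versions under separate tags and comparing sizes (the Dockerfile names here are illustrative):

# Build both variants under different tags
docker build -t compliance-tool:single -f Dockerfile.single .
docker build -t compliance-tool:multi -f Dockerfile.multi .

# Compare the SIZE column
docker images compliance-tool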

5. Scan Images for Vulnerabilities

# Use Trivy to scan images
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy image compliance-tool:latest
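
In a CI pipeline you usually want the scan to fail the build when serious issues turn up; Trivy supports this with severity filtering and a non-zero exit code:

# Fail (exit code 1) if HIGH or CRITICAL vulnerabilities are found
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy image --severity HIGH,CRITICAL --exit-code 1 \
  compliance-tool:latest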

Common Docker Commands

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# List images
docker images

# Stop a container
docker stop container-name

# Remove a container
docker rm container-name

# Remove an image
docker rmi image-name

# View logs
docker logs container-name

# Execute command in running container
docker exec -it container-name /bin/bash

# Build image
docker build -t my-image:tag .

# Run container
docker run -d --name my-container my-image:tag
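
Two more commands worth knowing when old images and stopped containers start eating disk space:

# Remove stopped containers, unused networks, and dangling images
docker system prune

# Also remove all unused images (more aggressive, frees more space)
docker system prune -a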

Real-World Example: Complete Compliance Tool Setup

Here’s a production-ready setup:

Dockerfile

FROM python:3.11-slim

# Create non-root user
RUN useradd -m -u 1000 scanner && \
    mkdir -p /app/reports && \
    chown -R scanner:scanner /app

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt && \
    rm -rf /root/.cache

# Copy application
COPY --chown=scanner:scanner . .

# Switch to non-root user
USER scanner

# Health check
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import sys; sys.exit(0)"

CMD ["python", "main.py"]

docker-compose.yml

version: '3.8'

services:
  scanner:
    build:
      context: .
      dockerfile: Dockerfile
    image: compliance-tool:latest
    container_name: compliance-scanner
    environment:
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION:-us-east-1}
    volumes:
      - ./reports:/app/reports
    restart: unless-stopped
    networks:
      - compliance-network

networks:
  compliance-network:
    driver: bridge
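
Docker Compose substitutes variables like ${AWS_ACCESS_KEY_ID} from your shell environment or from a .env file sitting next to docker-compose.yml, so a typical layout looks like this (keep the file out of git and out of the image):

# .env, same directory as docker-compose.yml
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_DEFAULT_REGION=us-east-1

# Compose picks up .env automatically
docker-compose up -d --build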

Debugging Containers

View Logs

# Follow logs
docker logs -f container-name

# Last 100 lines
docker logs --tail 100 container-name

Enter Container

# Get a shell inside the container
docker exec -it container-name /bin/bash

# Check what's running
docker exec container-name ps aux

Inspect Container

# View container details
docker inspect container-name

# View container stats
docker stats container-name
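
If a report was written inside a container that isn’t using a volume mount, you can still copy it out with docker cp (the report filename here is just an example; compliance-scanner is the container name from the Compose file above):

# Copy a file from the container's filesystem to the host
docker cp compliance-scanner:/app/reports/latest_report.xlsx ./latest_report.xlsx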

Key Takeaways

  1. Docker = Consistency - Same environment everywhere
  2. Images = Blueprints - Define what your app needs
  3. Containers = Running Instances - Actual running applications
  4. Dockerfile = Recipe - Instructions for building images
  5. Don’t run as root - Security best practice
  6. Use .dockerignore - Don’t copy unnecessary files
  7. Multi-stage builds - Keep images small
  8. Scan for vulnerabilities - Use Trivy or similar tools

Practice Exercise

Try this yourself:

  1. Create a simple Python script
  2. Write a Dockerfile for it
  3. Build the image
  4. Run the container
  5. Modify the script and rebuild
  6. Use Docker Compose to run multiple containers

What’s Next?

Now that you understand Docker, you’re ready to:

  • Deploy containers to Kubernetes (our next post!)
  • Build CI/CD pipelines with Docker
  • Create production-ready containerized applications

Remember: Docker is about consistency and portability. Master it, and deployment becomes much easier!

💡 Pro Tip: Start with simple containers, then gradually add complexity. Don’t try to containerize everything at once. Learn the basics first, then build up!


Ready to orchestrate containers at scale? Check out our next post on Kubernetes, where we’ll learn how to manage containerized applications in production!
