
The Complete Docker Guide

Master Docker — the industry-standard containerization platform that lets you build, ship, and run applications consistently across any environment.

Why Learn Docker?

Docker was launched in 2013 and revolutionized how software is developed and deployed. Before Docker, the classic problem was "it works on my machine" — environments differed between development, staging, and production. Docker solves this by packaging your application and all its dependencies into a portable container.

Containers are lightweight, isolated environments that share the host OS kernel but run in their own filesystem and process space. Unlike virtual machines, they start in milliseconds and use far fewer resources. Docker has become fundamental to modern DevOps, CI/CD pipelines, and cloud-native development.

Docker integrates with Kubernetes for orchestration, enabling deployment of containerized applications at massive scale.
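
As a taste of that integration, here is a minimal sketch of a Kubernetes Deployment that runs a container image at scale. The image name, port, and replica count are placeholders; a real manifest would add resource limits and probes.

```yaml
# deployment.yaml — run 3 replicas of a containerized app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: username/my-app:1.0   # any image pushed to a registry
          ports:
            - containerPort: 3000
```

Apply it with `kubectl apply -f deployment.yaml`; Kubernetes then pulls the image and keeps three containers running.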

Key insight: a container image bundles the application together with its dependencies, so if it runs in Docker locally, it runs the same way in production.

1. Core Concepts & Architecture

Images, Containers, and Registry

Docker Images are read-only templates built in layers. Containers are running instances of images. Docker Hub is the default public registry for sharing images.

# Basic Docker commands

# Pull an image from Docker Hub
docker pull nginx:latest
docker pull node:20-alpine
docker pull postgres:16

# List downloaded images
docker images

# Run a container
docker run nginx                          # foreground
docker run -d nginx                       # detached (background)
docker run -d -p 8080:80 nginx            # map port 8080 -> 80
docker run -d --name my-nginx nginx       # named container

# List running containers
docker ps
docker ps -a                              # include stopped

# Container lifecycle
docker stop my-nginx                      # graceful stop
docker start my-nginx                     # restart
docker restart my-nginx
docker rm my-nginx                        # remove container
docker rm -f my-nginx                     # force remove (running)

# Remove images
docker rmi nginx
docker image prune                        # remove unused images
docker system prune -a                    # remove all unused containers, networks, images

# View logs
docker logs my-nginx
docker logs -f my-nginx                   # follow (live)
docker logs --tail 50 my-nginx            # last 50 lines

2. Images & Dockerfile

Writing a Dockerfile

A Dockerfile defines how to build your image. Each instruction creates a new layer. Ordering instructions from least to most frequently changing maximizes build-cache reuse.

# Dockerfile for a Node.js app
# Use official Node.js slim image
FROM node:20-alpine

# Set working directory inside container
WORKDIR /app

# Copy package files first (cache optimization)
COPY package*.json ./

# Install dependencies
RUN npm ci --omit=dev

# Copy rest of source code
COPY . .

# Create non-root user for security
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -qO- http://localhost:3000/health || exit 1

# Start command
CMD ["node", "server.js"]

# -------------------------------------------
# Multi-stage build (smaller production image)
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/server.js"]

Building and Tagging Images

Build images with docker build and tag them for pushing to a registry. Use .dockerignore to exclude files from the build context.

# Build image from Dockerfile in current directory
docker build -t my-app:1.0 .
docker build -t my-app:latest .
docker build -f Dockerfile.prod -t my-app:prod .

# Tag an existing image
docker tag my-app:latest username/my-app:1.0

# Push to Docker Hub
docker login
docker push username/my-app:1.0

# .dockerignore file
# node_modules
# .git
# .env
# *.log
# dist
# coverage

# Inspect image layers
docker history my-app:latest
docker inspect my-app:latest

3. Running Containers

Container Options and Exec

docker run supports many flags for resource limits, environment variables, restart policies, and more. docker exec lets you run commands inside a running container.

# Environment variables
docker run -d \
  -e DATABASE_URL=postgres://user:pass@db:5432/mydb \
  -e NODE_ENV=production \
  -e PORT=3000 \
  my-app:latest

# Load env from file
docker run -d --env-file .env my-app:latest

# Resource limits
docker run -d \
  --memory="512m" \
  --cpus="0.5" \
  my-app:latest

# Restart policies
docker run -d --restart=always nginx         # always restart
docker run -d --restart=unless-stopped nginx # restart unless manually stopped
docker run -d --restart=on-failure:3 nginx   # restart on failure, max 3 times

# Execute commands inside running container
docker exec -it my-container bash            # interactive shell
docker exec my-container ls /app
docker exec my-container cat /app/.env

# Copy files to/from container
docker cp ./config.json my-container:/app/config.json
docker cp my-container:/app/logs/error.log ./error.log

# Container stats
docker stats my-container
docker top my-container                      # running processes

4. Volumes & Networking

Persisting Data and Container Networking

Containers are ephemeral — data is lost when removed. Volumes persist data. Docker networks allow containers to communicate securely.

# Named volumes (managed by Docker)
docker volume create my-data
docker run -d -v my-data:/var/lib/postgresql/data postgres:16

# Bind mounts (host path mapped to container)
docker run -d \
  -v $(pwd)/app:/app \
  -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx

# Volume management
docker volume ls
docker volume inspect my-data
docker volume rm my-data
docker volume prune              # remove unused volumes

# Networking
docker network create my-network          # bridge network
docker network ls
docker network inspect my-network

# Connect containers to a network
docker run -d --name db --network my-network postgres:16
# 'api' can reach 'db' by its container name on the same network
docker run -d --name api --network my-network \
  -e DB_HOST=db \
  my-api:latest

# Port mapping
docker run -d -p 80:80 nginx              # host:container
docker run -d -p 127.0.0.1:3000:3000 app # bind to localhost only
docker run -d -P nginx                    # publish all exposed ports to random host ports

# Inspect container networking
docker inspect --format='{{.NetworkSettings.IPAddress}}' my-container

5. Docker Compose

Defining Multi-Container Applications

Docker Compose defines multi-container applications in a YAML file. With one command you can start an entire stack — web server, database, cache, and more.

# docker-compose.yml
services:
  # Web application
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:secret@db:5432/mydb
      - REDIS_URL=redis://cache:6379
    volumes:
      - .:/app
      - /app/node_modules
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    restart: unless-stopped

  # PostgreSQL database
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: mydb
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis cache
  cache:
    image: redis:7-alpine
    volumes:
      - redis-data:/data

  # Nginx reverse proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - app

volumes:
  postgres-data:
  redis-data:

# Common commands
# docker compose up -d        # start all services
# docker compose down         # stop and remove containers
# docker compose logs -f app  # follow app logs
# docker compose exec app sh  # shell into app container
# docker compose restart app  # restart one service
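
The compose file above mounts an `nginx.conf` into the reverse proxy. A minimal sketch of that config, assuming the `app` service listens on port 3000:

```nginx
# nginx.conf — proxy all traffic to the 'app' service
events {}

http {
  server {
    listen 80;

    location / {
      # 'app' resolves via Docker's embedded DNS on the compose network
      proxy_pass http://app:3000;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}
```

Because both containers share the compose network, nginx addresses the app by service name rather than by IP.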

6. Production Best Practices

Security, Optimization, and CI/CD

Production Docker images should be minimal, run as non-root, use multi-stage builds, and have health checks. Integrate Docker into CI/CD pipelines for automated builds and deployments.

# Security best practices Dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app

# Don't run as root
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

# Copy only what's needed from builder
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package*.json ./

USER nextjs
EXPOSE 3000
ENV NODE_ENV=production

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s \
  CMD wget -qO- http://localhost:3000/api/health || exit 1

CMD ["node", "dist/server.js"]

# GitHub Actions CI/CD example
# .github/workflows/docker.yml
# on: [push]
# jobs:
#   build:
#     runs-on: ubuntu-latest
#     steps:
#       - uses: actions/checkout@v4
#       - uses: docker/login-action@v3
#         with:
#           username: ${{ secrets.DOCKER_USER }}
#           password: ${{ secrets.DOCKER_TOKEN }}
#       - uses: docker/build-push-action@v5
#         with:
#           push: true
#           tags: user/app:latest
#           cache-from: type=gha
#           cache-to: type=gha,mode=max

Ship with Confidence Using Docker!

Docker is an essential tool in every developer's toolkit. Master it and you'll deploy faster, eliminate environment issues, and scale your applications with ease.

Happy Containerizing!