What You'll Learn: This guide covers Docker best practices, including Dockerfile optimization, security, multi-stage builds, networking, and deployment strategies.
Dockerfile Best Practices
1. Optimize Docker Images
Creating efficient Docker images is crucial for faster deployments and reduced storage costs:
Inefficient Dockerfile (Bad)
# BAD: Uses heavy base image and inefficient layering
FROM ubuntu:latest
# Each RUN creates a new layer
RUN apt-get update
RUN apt-get install -y nodejs
RUN apt-get install -y npm
RUN apt-get install -y python3
RUN apt-get install -y git
WORKDIR /app
COPY . .
# Install all dependencies including dev dependencies
RUN npm install
# Creates unnecessary layers
RUN npm run build
RUN rm -rf node_modules
RUN npm install --production
EXPOSE 3000
CMD ["node", "server.js"]
Optimized Dockerfile (Good)
# GOOD: Multi-stage build with Alpine base
FROM node:18-alpine AS builder
WORKDIR /app
# Copy only package files first (better caching)
COPY package*.json ./
# Install all dependencies here; the build step needs devDependencies
RUN npm ci
# Copy source code
COPY . .
# Build the application
RUN npm run build
# Remove devDependencies so only runtime packages reach the final stage
RUN npm prune --omit=dev && npm cache clean --force
# Production stage
FROM node:18-alpine AS production
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001
WORKDIR /app
# Copy only necessary files from builder
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
COPY --from=builder --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nextjs:nodejs /app/package.json ./package.json
# Switch to non-root user
USER nextjs
EXPOSE 3000
# Use exec form for better signal handling
CMD ["node", "dist/server.js"]
Use multi-stage builds to reduce final image size by excluding build dependencies and intermediate files.
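During development you can also build and inspect an intermediate stage on its own; a quick sketch using the stage names from the Dockerfile above (the my-app tag is illustrative):
# Build only the builder stage, e.g. to debug a failing npm run build
docker build --target builder -t my-app:builder .
# Build the final production stage
docker build --target production -t my-app:latest .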
2. Layer Optimization
Optimize Docker layers for better caching and smaller images:
Layer Optimization Strategies
# Combine RUN instructions to reduce layers
RUN apt-get update && apt-get install -y \
    nodejs \
    npm \
    python3 \
    git \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
# Use .dockerignore to exclude unnecessary files
# .dockerignore
node_modules
npm-debug.log
.git
.gitignore
README.md
Dockerfile
.dockerignore
coverage/
.nyc_output
.env.local
.env.*.local
# Order layers by change frequency (least to most likely to change)
FROM node:18-alpine
# 1. System dependencies (rarely change)
RUN apk add --no-cache dumb-init
# 2. Application dependencies (change occasionally)
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force
# 3. Application code (changes frequently)
COPY . .
# Use specific tags, not 'latest'
FROM node:18.17.0-alpine
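If dependency installation dominates your build times, BuildKit cache mounts keep the package manager's download cache across builds; a minimal sketch, assuming Docker with BuildKit enabled (the default in recent releases):
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# npm's cache persists between builds, so only changed packages are re-downloaded
RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]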
Security Best Practices
1. Non-Root User
Always run containers as a non-root user to limit the blast radius of a container compromise:
Security-First Container Setup
FROM node:18-alpine
# Create a non-root user
RUN addgroup -g 1001 -S appgroup && \
    adduser -S appuser -u 1001 -G appgroup
# Create app directory with proper permissions
RUN mkdir -p /app && chown -R appuser:appgroup /app
WORKDIR /app
# Install dependencies as root
COPY package*.json ./
RUN npm ci --omit=dev && \
    npm cache clean --force && \
    chown -R appuser:appgroup /app
# Switch to non-root user before copying app code
USER appuser
COPY --chown=appuser:appgroup . .
EXPOSE 3000
# Use exec form and run as non-root
CMD ["node", "server.js"]
2. Secrets Management
Secure Secrets Handling
# BAD: Never put secrets in Dockerfile
# ENV API_KEY=secret_key_here
# RUN echo "password123" > /app/config.txt
# GOOD: Use Docker secrets or environment variables
FROM node:18-alpine
WORKDIR /app
# Use build args for build-time variables (not secrets)
ARG NODE_ENV=production
ENV NODE_ENV=$NODE_ENV
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Use Docker secrets in swarm mode
# docker service create --secret my_secret my_app
# Or mount secrets as files
# docker run -v /host/secrets:/run/secrets:ro my_app
# Access secrets from environment or mounted files
CMD ["node", "server.js"]
Never include secrets, passwords, or API keys directly in Dockerfiles or images. Use environment variables, Docker secrets, or mounted volumes instead.
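In practice, that means injecting secrets at run time, or using BuildKit secret mounts at build time so they never land in an image layer; a sketch with illustrative names and paths:
# Runtime: read the secret from the environment or a mounted file
docker run -d -e API_KEY="$(cat ./secrets/api_key)" my-app
# Build time: the secret is visible only during the RUN step that mounts it
#   Dockerfile: RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci
docker build --secret id=npmrc,src=$HOME/.npmrc -t my-app .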
3. Image Scanning
Security Scanning Setup
# Use official base images from trusted sources
FROM node:18-alpine
# Install security updates
RUN apk upgrade --no-cache
# Use specific versions to avoid supply chain attacks
COPY package*.json ./
RUN npm ci --omit=dev
# Scan image for vulnerabilities
# docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
# -v $HOME/Library/Caches:/tmp/.cache/ \
# aquasec/trivy image my-app:latest
# Use minimal base images when possible
FROM scratch
# or
FROM gcr.io/distroless/nodejs18-debian11
# Example CI/CD security scan
# name: Security Scan
# on: [push, pull_request]
# jobs:
# scan:
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v3
# - name: Build image
# run: docker build -t my-app .
# - name: Scan image
# uses: aquasecurity/trivy-action@master
# with:
# image-ref: 'my-app:latest'
# format: 'sarif'
# output: 'trivy-results.sarif'
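Outside CI, the Trivy CLI can scan a locally built image directly; assuming Trivy is installed and the image is tagged my-app:
# Exit non-zero only on serious findings, suitable for gating a build
trivy image --severity HIGH,CRITICAL --exit-code 1 my-app:latest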
Multi-Stage Builds
1. React Application Example
Multi-Stage React Build
# Build stage
FROM node:18-alpine AS build
WORKDIR /app
# Install dependencies
COPY package*.json ./
RUN npm ci
# Copy source and build
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine AS production
# Copy built assets from build stage
COPY --from=build /app/dist /usr/share/nginx/html
# Copy custom nginx config
COPY nginx.conf /etc/nginx/nginx.conf
# The stock nginx image starts as root to bind port 80, then drops
# privileges for its worker processes. To run fully non-root, swap the
# base image above for nginxinc/nginx-unprivileged:alpine (listens on 8080).
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
2. Go Application Example
Multi-Stage Go Build
# Build stage
FROM golang:1.21-alpine AS builder
WORKDIR /app
# Copy go mod files
COPY go.mod go.sum ./
RUN go mod download
# Copy source code
COPY . .
# Build a statically linked binary (CGO disabled so it runs on scratch)
RUN CGO_ENABLED=0 GOOS=linux go build -o main .
# Production stage
FROM scratch
# Copy CA certificates for HTTPS
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy the binary
COPY --from=builder /app/main /main
# Expose port
EXPOSE 8080
# Run the binary
ENTRYPOINT ["/main"]
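Because the final stage is scratch, the image contains little beyond the binary; you can confirm this after building (the go-app tag is illustrative):
docker build -t go-app .
docker images go-app
# A scratch-based Go image is typically only a few megabytes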
Docker Networking
1. Network Configuration
Docker Network Setup
# Create custom network
docker network create --driver bridge my-app-network
# Run containers on the same network
docker run -d --name database \
  --network my-app-network \
  -e POSTGRES_DB=myapp \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=password \
  postgres:15-alpine
docker run -d --name api \
  --network my-app-network \
  -p 3000:3000 \
  -e DATABASE_URL=postgresql://user:password@database:5432/myapp \
  my-api:latest
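# Verify the network and its attached containers; user-defined bridge
# networks provide automatic DNS, so the api reaches postgres at database:5432
docker network inspect my-app-network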
# Use docker-compose for complex networking
version: '3.8'
services:
  database:
    image: postgres:15-alpine
    networks:
      - app-network
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - db-data:/var/lib/postgresql/data
  api:
    build: .
    networks:
      - app-network
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://user:password@database:5432/myapp
    depends_on:
      - database
networks:
  app-network:
    driver: bridge
volumes:
  db-data:
Docker Compose Best Practices
1. Production-Ready Compose File
Production Docker Compose
version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.prod
    restart: unless-stopped
    # No host port mapping: nginx proxies to the app over the internal network
    expose:
      - "3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:${DB_PASSWORD}@db:5432/myapp
    depends_on:
      db:
        condition: service_healthy
    networks:
      - app-network
    volumes:
      - ./logs:/app/logs
    healthcheck:
      # curl must exist in the image (install it, or use wget on Alpine)
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
  db:
    image: postgres:15-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - app-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
    depends_on:
      - app
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  db-data:
    driver: local
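To bring the stack up, provide DB_PASSWORD via the shell or an .env file next to the compose file; a sketch assuming the file is saved as docker-compose.prod.yml:
# .env (read automatically by Compose): DB_PASSWORD=change-me
docker-compose -f docker-compose.prod.yml up -d
docker-compose -f docker-compose.prod.yml ps
docker-compose -f docker-compose.prod.yml logs -f app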
Performance Optimization
1. Resource Limits
Resource Management
# Set resource limits in docker-compose.yml
version: '3.8'
services:
  app:
    image: my-app:latest
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
# Or use docker run with limits
docker run -d \
  --name my-app \
  --memory=512m \
  --cpus=0.5 \
  --restart=unless-stopped \
  my-app:latest
# Monitor resource usage
docker stats
# Use healthchecks to ensure container health
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:3000/health || exit 1
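Limits can also be adjusted on a running container without recreating it:
# Raise the CPU quota in place; memory changes may also require --memory-swap
docker update --cpus 1.0 my-app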
2. Image Size Optimization
Size Optimization Techniques
# Use alpine-based images
FROM node:18-alpine
# Use distroless for even smaller images
FROM gcr.io/distroless/nodejs18-debian11
# Remove unnecessary packages after installation
RUN apt-get update && apt-get install -y \
    package1 \
    package2 \
    && apt-get autoremove -y \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Use .dockerignore effectively
# .dockerignore
**/.git
**/node_modules
**/npm-debug.log
**/.coverage
**/.nyc_output
**/.cache
**/.env
**/dist
**/build
# Minimize layers and use multi-stage builds
FROM node:18-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
FROM node:18-alpine AS runner
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
CMD ["node", "server.js"]
# Check image size
docker images
docker history my-app:latest
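For layer-by-layer inspection of what is taking up space, the third-party dive tool is a common companion to docker history:
# Interactively browse each layer's files and estimated wasted space
dive my-app:latest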
Production Deployment
1. CI/CD Pipeline
GitHub Actions Docker Pipeline
name: Build and Deploy
on:
  push:
    branches: [main]
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Log in to Container Registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=sha,prefix={{branch}}-
      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
      - name: Deploy to production
        run: |
          # Update docker-compose.yml with new image tag
          sed -i "s|image: .*|image: ${{ steps.meta.outputs.tags }}|" docker-compose.prod.yml
          # Deploy using docker-compose
          docker-compose -f docker-compose.prod.yml up -d
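The pipeline's build-and-push step can be reproduced locally for debugging; names are illustrative, and GHCR accepts a token on stdin:
echo "$GH_TOKEN" | docker login ghcr.io -u your-username --password-stdin
docker build -t ghcr.io/your-username/my-app:latest .
docker push ghcr.io/your-username/my-app:latest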
Monitoring and Logging
1. Centralized Logging
Docker Logging Setup
# Configure logging driver
version: '3.8'
services:
  app:
    image: my-app:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
# Or use syslog instead:
#   logging:
#     driver: syslog
#     options:
#       syslog-address: "tcp://log-server:514"
# Use structured logging in application
const winston = require('winston');
const logger = winston.createLogger({
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: '/app/logs/error.log', level: 'error' }),
    new winston.transports.File({ filename: '/app/logs/combined.log' })
  ]
});
# Use ELK stack for log aggregation
version: '3.8'
services:
  app:
    image: my-app:latest
    logging:
      driver: "gelf"
      options:
        gelf-address: "udp://logstash:12201"
  logstash:
    image: logstash:8.5.0
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
  elasticsearch:
    image: elasticsearch:8.5.0
    environment:
      - discovery.type=single-node
      # Elasticsearch 8.x enables TLS/auth by default; disable only for local dev
      - xpack.security.enabled=false
  kibana:
    image: kibana:8.5.0
    ports:
      - "5601:5601"
Best Practices Summary
- Use official base images: Start with official, minimal base images from trusted sources
- Implement multi-stage builds: Reduce final image size by excluding build dependencies
- Run as non-root user: Always create and use non-root users for security
- Optimize layers: Combine RUN commands and order by change frequency
- Use .dockerignore: Exclude unnecessary files from build context
- Set resource limits: Define CPU and memory limits to prevent resource exhaustion
- Implement health checks: Use health checks to ensure container health
- Use specific image tags: Avoid 'latest' tag in production
- Scan for vulnerabilities: Regularly scan images for security vulnerabilities
- Monitor and log: Implement proper logging and monitoring solutions
Conclusion
Following Docker best practices ensures that your containers are secure, efficient, and production-ready. These practices will help you build maintainable and scalable containerized applications while avoiding common pitfalls.
Next Steps: Implement these practices in your current Docker setup, set up automated security scanning, and consider using orchestration tools like Kubernetes for complex deployments.