Docker has fundamentally transformed how we build, ship, and run applications. What started as a simple containerization tool has evolved into a complete ecosystem for modern application development and deployment.
This is the fourth and most comprehensive article in my Docker series for .NET developers. If you're new to Docker, you might want to start with:
In this deep dive, we'll build on those foundations to explore:
NOTE: This is part of my experiments with AI (and a way to spend $1000 of Claude Code Web credits). I've fed it a BUNCH of papers, my own understanding, and the questions I had, to generate this article. It's fun, and it fills a gap I haven't seen filled anywhere else.
Whether you're deploying a simple web app or orchestrating a complex microservices architecture with GPU-accelerated machine learning models, this guide takes you from Docker basics to production-ready containerized applications - with real examples from running mostlylucid.net.
At its core, Docker is a containerization platform that packages your application and all its dependencies into a standardized unit called a container. Unlike virtual machines that virtualize entire operating systems, containers share the host OS kernel while maintaining isolated user spaces.
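You can see the shared kernel directly from the CLI (on a Linux host; Docker Desktop runs a lightweight VM, so there you'll see that VM's kernel instead):
# A container reports the host's kernel version because it shares that kernel
uname -r
docker run --rm ubuntu:24.04 uname -r   # prints the same version as the host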
Think of it this way:
# The classic developer problem
"It works on my machine!"
# The container solution
"Ship your machine!"
Containers solve several critical problems:
# An image is a template (like a class in OOP)
docker pull mcr.microsoft.com/dotnet/aspnet:9.0
# A container is a running instance (like an object)
docker run -d -p 8080:8080 myapp:latest
Images are immutable, layered filesystems. Each instruction in a Dockerfile creates a new layer:
FROM mcr.microsoft.com/dotnet/aspnet:9.0 # Layer 1: Base OS + .NET runtime
WORKDIR /app # Layer 2: Directory structure
COPY *.dll ./ # Layer 3: Application files
ENTRYPOINT ["dotnet", "MyApp.dll"] # Layer 4: Startup command
This layering is powerful:
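One quick way to see the layers - and why pulls are fast when layers are shared - is from the CLI:
# Each row corresponds to an instruction in the image's build history
docker history mcr.microsoft.com/dotnet/aspnet:9.0

# Pulling a related image re-uses any layers the two have in common
docker pull mcr.microsoft.com/dotnet/sdk:9.0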
A common source of confusion for Docker beginners: Commands in a Dockerfile don't run on your machine - they run inside the build container's OS.
Here's what actually happens:
Your Machine (Windows/Mac/Linux)
↓ (reads Dockerfile)
Build Image (usually Linux)
↓ (executes RUN commands here)
Output Image (contains results)
Why this matters:
# You're on Windows, writing this Dockerfile
FROM ubuntu:24.04
# This RUN command executes in Ubuntu, NOT on your Windows machine!
RUN apt-get update && apt-get install -y curl
# This copies FROM your Windows filesystem
COPY myapp.exe /app/
# This executes IN the Ubuntu container
RUN chmod +x /app/myapp.exe
Key insights:
- `COPY` and `ADD` commands read from your machine
- `RUN` commands execute in the container's OS (not your machine)

This is why you can write `apt-get` commands in a Dockerfile on Windows - they're not running on Windows! They're running inside the Linux-based build container.
Example confusion scenario:
FROM mcr.microsoft.com/dotnet/sdk:9.0 # This is Linux-based
# You might think: "But I'm on Windows, how can I use these Linux commands?"
RUN apt-get update # ← Executes in the Linux build container, not your Windows machine
RUN dotnet restore # ← Executes in the Linux build container
The Docker daemon handles the translation between your local filesystem and the containerized build environment.
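A practical consequence: COPY can only read from the build context you pass to docker build, and the whole context is sent to the daemon before the build starts - so keep it small. A minimal sketch:
# Everything under "." (the build context) is sent to the daemon first;
# COPY/ADD can only see files inside it
docker build -t myapp:dev .

# A .dockerignore keeps the context small (and secrets out of images)
printf '%s\n' bin/ obj/ .git/ .env > .dockerignore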
Here's a production-ready .NET Dockerfile with commentary:
# Multi-stage build: separates build environment from runtime
# Stage 1: Build
FROM mcr.microsoft.com/dotnet/sdk:9.0 AS build
WORKDIR /src
# Copy only csproj files first (better layer caching)
COPY ["MyApp/MyApp.csproj", "MyApp/"]
COPY ["MyApp.Core/MyApp.Core.csproj", "MyApp.Core/"]
# Restore dependencies (cached unless csproj changes)
RUN dotnet restore "MyApp/MyApp.csproj"
# Copy everything else
COPY . .
# Build the application
WORKDIR "/src/MyApp"
RUN dotnet build "MyApp.csproj" -c Release -o /app/build
# Stage 2: Publish
FROM build AS publish
RUN dotnet publish "MyApp.csproj" -c Release -o /app/publish /p:UseAppHost=false
# Stage 3: Final runtime image
FROM mcr.microsoft.com/dotnet/aspnet:9.0 AS final
WORKDIR /app
# Create non-root user for security
RUN addgroup --gid 1001 appuser && \
adduser --uid 1001 --gid 1001 --disabled-password --gecos "" appuser
# Copy published output from publish stage
COPY --from=publish /app/publish .
# Switch to non-root user
USER appuser
# Expose port (documentation only, doesn't actually publish)
EXPOSE 8080
# Set environment variables
ENV ASPNETCORE_URLS=http://+:8080
ENV ASPNETCORE_ENVIRONMENT=Production
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1
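# NOTE: the aspnet base image doesn't ship curl - install it in this stage
# (e.g. RUN apt-get update && apt-get install -y curl) or this check will always fail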
ENTRYPOINT ["dotnet", "MyApp.dll"]
Here's what actually happens during a multi-stage Docker build - where files come from and where they end up:
graph TB
subgraph "Your Machine"
A[Source Code<br/>*.cs, *.csproj]
B[Frontend Assets<br/>*.js, *.css]
C[Configuration<br/>appsettings.json]
D[Static Files<br/>wwwroot/]
end
subgraph "Stage 1: Build Container (SDK Image)"
E[dotnet/sdk:9.0<br/>~1.5GB]
F[Copy .csproj files]
G[dotnet restore<br/>Download NuGet packages]
H[Copy source code]
I[dotnet build<br/>Compile to DLLs]
J[Build artifacts<br/>/app/build/]
end
subgraph "Stage 2: Publish Container"
K[dotnet publish<br/>Optimize & trim]
L[Published output<br/>/app/publish/]
end
subgraph "Stage 3: Final Runtime Container (ASPNET Image)"
M[dotnet/aspnet:9.0<br/>~220MB]
N[Create app user<br/>Security]
O[Copy published files<br/>ONLY production artifacts]
P[Final image<br/>~250MB total]
end
subgraph "Frontend Build Pipeline (Parallel)"
Q[npm install<br/>node_modules/]
R[Webpack bundling<br/>*.js → dist/]
S[TailwindCSS + PostCSS<br/>*.css → dist/]
T[Optimized assets<br/>wwwroot/js/dist/<br/>wwwroot/css/dist/]
end
A --> F
A --> H
F --> G
G --> H
H --> I
I --> J
J --> K
K --> L
B --> Q
Q --> R
Q --> S
R --> T
S --> T
L --> O
T --> O
C --> O
D --> O
M --> N
N --> O
O --> P
Key insights from this flow:
- Copying `.csproj` files before source code means dependency restore is cached unless dependencies change
- Frontend assets are built in a parallel pipeline (`npm run build`) and copied into the final image
- `appsettings.json` and `wwwroot/` content are copied from your machine, not built

Key principles illustrated:
# Build the image
docker build -t myapp:1.0.0 -t myapp:latest .
# Run with common options
docker run -d \
--name myapp \
-p 8080:8080 \
-e ConnectionStrings__DefaultConnection="Server=db;Database=myapp" \
-v /data/logs:/app/logs \
--restart unless-stopped \
myapp:latest
# View logs
docker logs -f myapp
# Execute commands inside running container
docker exec -it myapp /bin/bash
# Stop and remove
docker stop myapp
docker rm myapp
Docker Compose allows you to define and run multi-container applications. Instead of managing containers individually, you describe your entire application stack in a YAML file.
Consider a typical .NET web application:
Managing these individually with docker run commands becomes unwieldy. Docker Compose solves this.
Here's the actual production docker-compose.yml that runs this very site (mostlylucid.net):
services:
# Main ASP.NET Core application
mostlylucid:
image: scottgal/mostlylucid:latest
restart: always
healthcheck:
test: [ "CMD", "curl", "-f", "-k", "https://mostlylucid:7240/healthy" ]  # each flag is its own array element; -k allows the self-signed cert
interval: 30s
timeout: 10s
retries: 5
labels:
- "com.centurylinklabs.watchtower.enable=true"
env_file:
- .env
environment:
- Auth__GoogleClientId=${AUTH_GOOGLECLIENTID}
- Auth__GoogleClientSecret=${AUTH_GOOGLECLIENTSECRET}
- Auth__AdminUserGoogleId=${AUTH_ADMINUSERGOOGLEID}
- SmtpSettings__UserName=${SMTPSETTINGS_USERNAME}
- SmtpSettings__Password=${SMTPSETTINGS_PASSWORD}
- Analytics__UmamiPath=${ANALYTICS_UMAMIPATH}
- Analytics__WebsiteId=${ANALYTICS_WEBSITEID}
- ConnectionStrings__DefaultConnection=${POSTGRES_CONNECTIONSTRING}
- TranslateService__ServiceIPs=${EASYNMT_IPS}
- Serilog__WriteTo__0__Args__apiKey=${SEQ_API_KEY}
- Markdown__MarkdownPath=${MARKDOWN_MARKDOWNPATH}
volumes:
- /mnt/imagecache:/app/wwwroot/cache
- /mnt/logs:/app/logs
- /mnt/markdown:/app/markdown
- ./mostlylucid.pfx:/app/mostlylucid.pfx
- /mnt/articleimages:/app/wwwroot/articleimages
- /mnt/mostlylucid/uploads:/app/wwwroot/uploads
networks:
- app_network
depends_on:
- db
# PostgreSQL database
db:
image: postgres:16-alpine
ports:
- 5266:5432 # Custom external port to avoid conflicts
env_file:
- .env
networks:
- app_network
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
interval: 5s
timeout: 5s
retries: 5
volumes:
- /mnt/umami/postgres:/var/lib/postgresql/data
restart: always
# Cloudflare tunnel for secure external access
cloudflared:
image: cloudflare/cloudflared:latest
command: tunnel --no-autoupdate run --token ${CLOUDFLARED_TOKEN}
env_file:
- .env
restart: always
networks:
- app_network
# Umami analytics
umami:
image: ghcr.io/umami-software/umami:postgresql-latest
env_file: .env
environment:
DATABASE_URL: ${DATABASE_URL}
DATABASE_TYPE: ${DATABASE_TYPE}
HASH_SALT: ${HASH_SALT}
APP_SECRET: ${APP_SECRET}
TRACKER_SCRIPT_NAME: getinfo
API_COLLECT_ENDPOINT: all
depends_on:
- db
labels:
- "com.centurylinklabs.watchtower.enable=true"
networks:
- app_network
restart: always
# Translation service (CPU-limited for resource management)
easynmt:
image: easynmt/api:2.0.2-cpu
volumes:
- /mnt/easynmt:/cache/
deploy:
resources:
limits:
cpus: "4.0" # Prevent translation service from consuming all CPU
networks:
- app_network
# Caddy reverse proxy with automatic HTTPS
caddy:
image: caddy:latest
ports:
- 80:80
- 443:443
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- caddy_data:/data
- caddy_config:/config
networks:
- app_network
restart: always
# Seq centralized logging
seq:
image: datalust/seq
container_name: seq
restart: unless-stopped
environment:
ACCEPT_EULA: "Y"
SEQ_FIRSTRUN_ADMINPASSWORDHASH: ${SEQ_DEFAULT_HASH}
volumes:
- /mnt/seq:/data
networks:
- app_network
# Prometheus metrics collection
prometheus:
image: prom/prometheus:latest
container_name: prometheus
volumes:
- prometheus-data:/prometheus
- ./prometheus.yml:/etc/prometheus/prometheus.yml
command:
- '--config.file=/etc/prometheus/prometheus.yml'
labels:
- "com.centurylinklabs.watchtower.enable=true"
networks:
- app_network
# Grafana visualization
grafana:
image: grafana/grafana:latest
container_name: grafana
labels:
- "com.centurylinklabs.watchtower.enable=true"
volumes:
- grafana-data:/var/lib/grafana
networks:
- app_network
environment:
- GF_SECURITY_ADMIN_USER=admin
- GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
# Host metrics exporter
node_exporter:
image: quay.io/prometheus/node-exporter:latest
container_name: node_exporter
command:
- '--path.rootfs=/host'
networks:
- app_network
restart: unless-stopped
volumes:
- '/:/host:ro,rslave'
# Automatic container updates
watchtower:
image: containrrr/watchtower
container_name: watchtower
restart: always
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- WATCHTOWER_CLEANUP=true
- WATCHTOWER_LABEL_ENABLE=true
command: --interval 300 # Check every 5 minutes
volumes:
grafana-data:
caddy_data:
caddy_config:
prometheus-data:
networks:
app_network:
driver: bridge
Key Production Patterns:
- A single `app_network` bridge network for simplicity
- Bind mounts (`/mnt/*`) for persistent data you need to back up or access directly
- Secrets in a `.env` file (never committed to git)

services:
web:
depends_on:
db:
condition: service_healthy # Wait for health check
redis:
condition: service_started # Just wait for start
Docker Compose orchestrates startup order. The condition: service_healthy requires the database health check to pass before starting the web app.
# .env file (never commit to git!)
DB_PASSWORD=super_secret_password
SMTP_PASSWORD=another_secret
services:
web:
environment:
- DB_PASSWORD=${DB_PASSWORD} # From .env file
- STATIC_VALUE=production # Hardcoded
env_file:
- .env # Load entire file
For production secrets, use Docker Secrets or external secret managers (AWS Secrets Manager, Azure Key Vault, HashiCorp Vault).
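As a minimal sketch of the Docker Secrets route (secrets require Swarm mode; the service and secret names here are examples):
# Initialise swarm mode, create a secret, and mount it into a service
docker swarm init
printf 'super_secret_password' | docker secret create db_password -
docker service create --name web --secret db_password myapp:latest
# Inside the container the secret appears read-only at /run/secrets/db_password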
services:
db:
volumes:
# Named volume (managed by Docker)
- postgres_data:/var/lib/postgresql/data
web:
volumes:
# Bind mount (maps host directory to container)
- ./data/markdown:/app/Markdown
- ./logs:/app/logs
Named volumes:
Bind mounts:
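The difference is easy to see from the CLI:
# Named volume: created and managed by Docker
docker volume create postgres_data
docker volume inspect postgres_data        # shows where Docker stores it

# Bind mount: maps an existing host path into the container
docker run -d -v "$(pwd)/logs:/app/logs" myapp:latest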
networks:
frontend:
driver: bridge
backend:
driver: bridge
services:
web:
networks:
- frontend
- backend
db:
networks:
- backend # Not exposed to frontend
Networks provide isolation. Here, the database is only accessible to backend services, not directly exposed.
services:
db:
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
Health checks allow Docker to: mark containers as healthy or unhealthy, gate `depends_on: condition: service_healthy` startup ordering, and give orchestrators a signal to restart or replace failing containers.
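You can query the current health state directly:
# Prints healthy, unhealthy, or starting
docker inspect --format '{{.State.Health.Status}}' db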
# Start all services (detached)
docker-compose up -d
# Start specific services
docker-compose up -d web db
# View logs (all services)
docker-compose logs -f
# View logs (specific service)
docker-compose logs -f web
# Stop services (containers remain)
docker-compose stop
# Stop and remove containers
docker-compose down
# Stop, remove containers, and remove volumes
docker-compose down -v
# Rebuild and restart
docker-compose up -d --build
# Scale a service
docker-compose up -d --scale worker=3
# Execute command in running service
docker-compose exec web /bin/bash
# Run one-off command
docker-compose run --rm web dotnet ef database update
Separate concerns with multiple compose files:
docker-compose.yml (base):
services:
web:
build: .
environment:
- ASPNETCORE_ENVIRONMENT=Development
docker-compose.override.yml (development - auto-merged):
services:
web:
volumes:
- .:/app # Live code reloading
ports:
- "5000:8080"
docker-compose.prod.yml (production):
services:
web:
image: registry.example.com/myapp:${VERSION}
restart: always
deploy:
replicas: 3
resources:
limits:
cpus: '2'
memory: 2G
# Development (base + override)
docker-compose up -d
# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Machine learning, scientific computing, and video processing applications often require GPU acceleration. Docker supports NVIDIA GPUs through the NVIDIA Container Toolkit.
# Install NVIDIA Container Toolkit (Ubuntu/Debian)
# (apt-key is deprecated on current Ubuntu/Debian - use a signed-by keyring instead)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
# Configure Docker daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
# Test GPU access
docker run --rm --gpus all nvidia/cuda:12.6.0-base-ubuntu24.04 nvidia-smi
# Use NVIDIA CUDA base image
FROM nvidia/cuda:12.6.0-cudnn-runtime-ubuntu24.04
# Install Python
RUN apt-get update && apt-get install -y \
python3.12 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Install PyTorch with CUDA support
COPY requirements.txt .
# --break-system-packages: Ubuntu 24.04's system Python is externally managed (PEP 668)
RUN pip3 install --no-cache-dir --break-system-packages torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
# Copy application
COPY . .
# Sanity-check the PyTorch install at build time (GPUs aren't available during builds,
# so CUDA reports unavailable here - do the real GPU check at runtime)
RUN python3 -c "import torch; print(f'CUDA available: {torch.cuda.is_available()}'); print(f'GPU: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else \"None\"}')"
ENTRYPOINT ["python3", "train.py"]
# Run with all GPUs
docker run --gpus all myapp:gpu
# Run with specific GPUs
docker run --gpus '"device=0,2"' myapp:gpu
# Run with GPU memory limits
docker run --gpus all --memory=16g myapp:gpu
services:
ml-trainer:
build:
context: .
dockerfile: Dockerfile.gpu
image: myapp:gpu
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all # or specific count: 1, 2, etc.
capabilities: [gpu]
volumes:
- ./models:/app/models
- ./data:/app/data
environment:
- NVIDIA_VISIBLE_DEVICES=all
- CUDA_VISIBLE_DEVICES=0,1 # Use GPUs 0 and 1
Here's a real production example from mostlylucid-nmt, a neural machine translation service I built that powers auto-translation on this blog.
The project demonstrates:
services:
translation:
image: scottgal/mostlylucid-nmt:gpu
container_name: translation-gpu
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
environment:
- MODEL_FAMILY=opus-mt
- FALLBACK_MODELS=mbart50,m2m100
- CUDA_VISIBLE_DEVICES=0
- LOG_LEVEL=info
volumes:
- model_cache:/app/cache # Persistent model storage
ports:
- "8888:8888"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8888/health"]
interval: 30s
timeout: 10s
retries: 3
volumes:
model_cache:
For environments without GPUs, the same service runs on CPU:
services:
translation:
image: scottgal/mostlylucid-nmt:cpu
container_name: translation-cpu
environment:
- MODEL_FAMILY=opus-mt
- FALLBACK_MODELS=mbart50,m2m100
volumes:
- model_cache:/app/cache
ports:
- "8888:8888"
restart: unless-stopped
volumes:
model_cache:
Available image variants:
- `scottgal/mostlylucid-nmt:gpu` - CUDA 12.6 with PyTorch GPU support (~5GB)
- `scottgal/mostlylucid-nmt:cpu` - CPU-only, smaller footprint (~2.5GB)
- `scottgal/mostlylucid-nmt:gpu-min` - Minimal GPU build, no preloaded models (~4GB)
- `scottgal/mostlylucid-nmt:cpu-min` - Minimal CPU build (~1.5GB)

Key features:
- Health endpoints (`/health` and `/ready`) for orchestrators

See the full project on GitHub for Dockerfile examples, build scripts, and API documentation.
Modern applications need to run on multiple architectures: x86_64 (AMD64) for servers, ARM64 for Raspberry Pi and Apple Silicon Macs, sometimes even ARM32 for embedded devices.
# Problem: Image built on M1 Mac won't run on Linux server
docker build -t myapp:latest . # Builds for ARM64
docker push myapp:latest
# Server tries to run it... error: "exec format error"
# Solution: Build for multiple platforms
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:latest --push .
Docker Buildx is included in Docker Desktop. For Linux:
# Verify buildx is available
docker buildx version
# Create a new builder instance
docker buildx create --name multiarch --driver docker-container --use
# Inspect and bootstrap the builder
docker buildx inspect --bootstrap
# List available platforms
docker buildx inspect | grep Platforms
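On a plain Linux host, cross-building also needs QEMU emulators registered (Docker Desktop ships these already). One common way:
# Register QEMU binfmt handlers so buildx can emulate other architectures
docker run --privileged --rm tonistiigi/binfmt --install all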
Most Dockerfiles work without changes, but here are some tips:
# Use official multi-arch base images
FROM mcr.microsoft.com/dotnet/aspnet:9.0 AS base
# For platform-specific operations, use build arguments
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "Building on $BUILDPLATFORM for $TARGETPLATFORM"
# Install architecture-specific dependencies
RUN if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
apt-get update && apt-get install -y some-arm64-package; \
elif [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
apt-get update && apt-get install -y some-amd64-package; \
fi
# Build and push for AMD64 and ARM64
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t myregistry/myapp:latest \
-t myregistry/myapp:1.0.0 \
--push \
.
# Build without pushing (loads into local Docker)
# Note: Can only load one platform at a time
docker buildx build \
--platform linux/amd64 \
-t myapp:latest \
--load \
.
# Build and export to tar files
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t myapp:latest \
-o type=tar,dest=./myapp.tar \
.
Docker Compose's multi-platform support is limited: newer versions accept a `platforms:` list under `build`, but a multi-arch image can't be loaded into the local daemon, so in practice you still pre-build and push. Workarounds:
Option 1: Pre-build images
# Build multi-arch images first
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:latest --push .
# Then use in compose
services:
web:
image: myapp:latest # Already built for multiple architectures
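You can confirm a pushed tag really contains both platforms:
# Lists the per-platform entries in the multi-arch manifest
docker buildx imagetools inspect myapp:latest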
Option 2: Build script
#!/bin/bash
# build-multiarch.sh
docker buildx build --platform linux/amd64,linux/arm64 \
-t myregistry/web:latest \
-f web/Dockerfile \
--push \
web/
docker buildx build --platform linux/amd64,linux/arm64 \
-t myregistry/worker:latest \
-f worker/Dockerfile \
--push \
worker/
docker-compose pull # Pull the multi-arch images
docker-compose up -d
GitHub Actions workflow for multi-architecture builds:
name: Build and Push Multi-Arch Images
on:
push:
branches: [ main ]
tags: [ 'v*' ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: myregistry/myapp
tags: |
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=sha,prefix={{branch}}-
- name: Build and push
uses: docker/build-push-action@v5
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=registry,ref=myregistry/myapp:buildcache
cache-to: type=registry,ref=myregistry/myapp:buildcache,mode=max
This workflow:
The mostlylucid-nmt translation service demonstrates sophisticated Docker build strategies that optimize for different deployment scenarios. This project showcases how to build and distribute multiple image variants from a single codebase.
The project produces four main image variants (the CPU tags are also published for both AMD64 and ARM64) to cover different use cases:
| Image Tag | Architecture | Accelerator | Model Preloading | Size | Use Case |
|---|---|---|---|---|---|
| `:cpu` (`:latest`) | AMD64, ARM64 | CPU | Yes (~100 models) | ~2.5GB | Production CPU deployments |
| `:cpu-min` | AMD64, ARM64 | CPU | No | ~1.5GB | Dynamic model loading |
| `:gpu` | AMD64 only | NVIDIA CUDA | Yes (~100 models) | ~5GB | Production GPU deployments |
| `:gpu-min` | AMD64 only | NVIDIA CUDA | No | ~4GB | GPU with dynamic models |
Why multiple variants?
The Dockerfiles use strategic layer ordering to maximize build cache reuse:
Standard CPU Dockerfile structure:
FROM python:3.12-slim
# Layer 1: System dependencies (rarely changes)
RUN apt-get update && apt-get install -y --no-install-recommends \
git curl \
&& rm -rf /var/lib/apt/lists/*
# Layer 2: Python dependencies (changes when requirements.txt updates)
WORKDIR /app
COPY requirements-prod.txt .
RUN pip install --no-cache-dir -r requirements-prod.txt
# Layer 3: Model preloading (slowest layer, changes when model list changes)
ARG PRELOAD_LANGS="de,es,fr,it,nl,pl,pt,ru,zh,ja,ar,uk"
ARG PRELOAD_PAIRS=""
RUN python -c "from easynmt import EasyNMT; \
import os; \
langs = os.environ.get('PRELOAD_LANGS', '').split(','); \
pairs = os.environ.get('PRELOAD_PAIRS', '').split(',') if os.environ.get('PRELOAD_PAIRS') else []; \
model = EasyNMT('opus-mt'); \
[model.translate('warmup', source_lang='en', target_lang=lang) for lang in langs if lang]; \
[model.translate('warmup', source_lang=src, target_lang=tgt) for pair in pairs for src,tgt in [pair.split('->')] if pair]"
# Layer 4: Application code (changes frequently, rebuilds in seconds)
COPY app.py .
COPY src/ ./src/
# Runtime configuration
EXPOSE 8000
ENV PYTHONUNBUFFERED=1 \
MODEL_CACHE_DIR=/app/models \
WEB_CONCURRENCY=4
CMD ["gunicorn", "app:app", "--bind", "0.0.0.0:8000", \
"--worker-class", "uvicorn.workers.UvicornWorker", \
"--timeout", "300", "--graceful-timeout", "5"]
Why this ordering matters:
| Layer | Rebuild Frequency | Build Time | Cache Hit Rate |
|---|---|---|---|
| Base image | Never (unless Python version changes) | 0s | 99% |
| System deps | Rarely | ~30s | 95% |
| Python deps | Occasionally (dependency updates) | ~60s | 80% |
| Model preload | Rarely (model list changes) | 5-10min | 90% |
| App code | Every commit | ~3s | 10% |
Result: Most builds complete in ~3 seconds using cached layers!
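You can watch this happen on a rebuild - after touching only app.py, the expensive layers report as cached (output abbreviated and illustrative):
docker build -t mostlylucid-nmt:dev .
# => [2/6] RUN apt-get update ...          CACHED
# => [3/6] RUN pip install ...             CACHED
# => [4/6] RUN python -c "from easynmt ... CACHED
# => [5/6] COPY app.py .                   0.1s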
CPU Dockerfile:
FROM python:3.12-slim
# CPU-optimized PyTorch (smaller, no CUDA)
RUN pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
# CPU performance tuning
ENV OMP_NUM_THREADS=4 \
MKL_NUM_THREADS=4 \
EASYNMT_BATCH_SIZE=16
GPU Dockerfile (Dockerfile.gpu):
FROM nvidia/cuda:12.6.2-cudnn-runtime-ubuntu24.04
# Install Python 3.12
RUN apt-get update && apt-get install -y \
python3.12 python3-pip \
&& rm -rf /var/lib/apt/lists/*
# CUDA-enabled PyTorch matching CUDA 12.6
# --break-system-packages: Ubuntu 24.04's system Python is externally managed (PEP 668)
RUN pip3 install --break-system-packages torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
# GPU performance tuning
ENV CUDA_VISIBLE_DEVICES=0 \
EASYNMT_BATCH_SIZE=64 \
EASYNMT_MODEL_ARGS='{"torch_dtype":"fp16"}' \
MAX_INFLIGHT_TRANSLATIONS=1
# Runtime needs GPU access
# docker run --gpus all scottgal/mostlylucid-nmt:gpu
Key differences:
- Base image: `python:3.12-slim` (~300MB) vs `nvidia/cuda:12.6.2-cudnn-runtime` (~1.8GB)

`Dockerfile.arm64` includes ARM-specific tuning:
FROM python:3.11-slim-bookworm # Debian Bookworm has better ARM support
# Minimal dependencies (ARM devices often storage-constrained)
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc g++ git curl \
&& rm -rf /var/lib/apt/lists/*
# ARM performance tuning (Raspberry Pi 4/5 with 4GB RAM)
ENV OMP_NUM_THREADS=3 \
MKL_NUM_THREADS=3 \
OPENBLAS_NUM_THREADS=3 \
EASYNMT_BATCH_SIZE=4 \
MAX_CACHED_MODELS=1 \
MEMORY_CRITICAL_THRESHOLD=70.0
# Single worker (memory constrained)
CMD ["gunicorn", "app:app", "-w", "1", \
"--bind", "0.0.0.0:8000", \
"--timeout", "300"]
ARM-specific optimizations:
Dockerfile.min approach:
FROM python:3.12-slim
# Dependencies only, NO model preloading
COPY requirements-prod.txt .
RUN pip install --no-cache-dir -r requirements-prod.txt
# Application code
COPY app.py .
COPY src/ ./src/
# Models loaded on-demand from volume
ENV MODEL_CACHE_DIR=/models
VOLUME ["/models"]
Usage with volume mount:
# Create persistent model cache
docker volume create translation-models
# First run: downloads models on-demand
docker run -d \
-v translation-models:/models \
-p 8888:8000 \
scottgal/mostlylucid-nmt:cpu-min
# Subsequent runs: reuse cached models
# Same volume mount, instant startup
Minimal vs Full trade-offs:
| Aspect | Full Image | Minimal Image |
|---|---|---|
| Image size | 2.5GB | 1.5GB |
| Build time | 5-10 minutes | 30 seconds |
| First start | Instant translation | 1-2 min model download |
| Subsequent starts | Instant | Instant (if volume persists) |
| Disk usage | 2.5GB per container | 1.5GB + shared volume |
| Development | Slow iteration | Fast iteration |
| Production | No network dependency | Needs volume persistence |
build-all.sh builds all variants with consistent versioning:
#!/bin/bash
# Generate version tags
VERSION=$(date +%Y%m%d.%H%M%S)
GIT_SHA=$(git rev-parse --short HEAD)
# Enable buildx for multi-platform
docker buildx create --use --name multiarch 2>/dev/null || true
echo "Building all variants for mostlylucid-nmt:${VERSION}"
# CPU Full (multi-platform)
docker buildx build \
--platform linux/amd64,linux/arm64 \
-f Dockerfile \
-t scottgal/mostlylucid-nmt:cpu \
-t scottgal/mostlylucid-nmt:latest \
-t scottgal/mostlylucid-nmt:${VERSION} \
--build-arg VERSION=${VERSION} \
--build-arg VCS_REF=${GIT_SHA} \
--push \
.
# CPU Minimal (multi-platform)
docker buildx build \
--platform linux/amd64,linux/arm64 \
-f Dockerfile.min \
-t scottgal/mostlylucid-nmt:cpu-min \
-t scottgal/mostlylucid-nmt:${VERSION}-min \
--build-arg VERSION=${VERSION} \
--push \
.
# GPU Full (AMD64 only)
docker build \
-f Dockerfile.gpu \
-t scottgal/mostlylucid-nmt:gpu \
-t scottgal/mostlylucid-nmt:${VERSION}-gpu \
--build-arg VERSION=${VERSION} \
--build-arg PRELOAD_LANGS="de,es,fr,it,nl,pl,pt,ru,zh,ja,ar,uk" \
--push \
.
# GPU Minimal (AMD64 only)
docker build \
-f Dockerfile.gpu.min \
-t scottgal/mostlylucid-nmt:gpu-min \
--push \
.
echo "All builds complete!"
echo "Images tagged with version: ${VERSION}"
Build automation benefits:
Scenario 1: Development (fast iteration)
# Use minimal image with volume mount
docker run -d \
-v ./models:/models \
-v ./src:/app/src \
-p 8888:8000 \
scottgal/mostlylucid-nmt:cpu-min
# Edit code locally, container picks up changes
# Models cached in volume, no re-download
Scenario 2: Production CPU (high availability)
# Use full image, no volume needed
docker run -d \
--restart always \
--memory=2g \
--cpus=2 \
-p 8888:8000 \
scottgal/mostlylucid-nmt:cpu
# Models preloaded, instant translation
# No external dependencies
Scenario 3: Production GPU (high throughput)
# docker-compose.yml
services:
translation:
image: scottgal/mostlylucid-nmt:gpu
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
environment:
- CUDA_VISIBLE_DEVICES=0
- EASYNMT_BATCH_SIZE=64
Scenario 4: ARM64 edge deployment (Raspberry Pi)
# Automatic platform selection
docker run -d \
--restart always \
--memory=3g \
-p 8888:8000 \
scottgal/mostlylucid-nmt:cpu
# Docker pulls ARM64 variant automatically
# Optimized for ARM performance
These patterns from mostlylucid-nmt demonstrate production-ready Docker practices applicable to any complex application with multiple deployment scenarios.
.NET 9 introduces significant improvements to container support, making it easier than ever to containerize .NET applications without even writing a Dockerfile.
With .NET 9, you can publish a containerized application directly:
# Publish as a container image (no Dockerfile needed!)
dotnet publish --os linux --arch x64 -p:PublishProfile=DefaultContainer
# Specify image name and tag
dotnet publish \
--os linux \
--arch x64 \
-p:PublishProfile=DefaultContainer \
-p:ContainerImageName=myapp \
-p:ContainerImageTag=1.0.0
# Multi-architecture
dotnet publish --os linux --arch arm64 -p:PublishProfile=DefaultContainer
Add to your .csproj:
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <!-- Container Configuration (ContainerImageName was renamed to ContainerRepository in .NET 8+) -->
    <ContainerRepository>myapp</ContainerRepository>
    <ContainerImageTag>$(Version)</ContainerImageTag>
    <ContainerRegistry>myregistry.azurecr.io</ContainerRegistry>
    <!-- Base image (defaults to mcr.microsoft.com/dotnet/aspnet:9.0) -->
    <ContainerBaseImage>mcr.microsoft.com/dotnet/aspnet:9.0-alpine</ContainerBaseImage>
    <!-- Container runtime configuration -->
    <ContainerWorkingDirectory>/app</ContainerWorkingDirectory>
    <!-- User (security best practice) -->
    <ContainerUser>app</ContainerUser>
  </PropertyGroup>
  <ItemGroup>
    <!-- Ports, environment variables, and labels are MSBuild items, so they live in an ItemGroup -->
    <ContainerPort Include="8080" Type="tcp" />
    <ContainerEnvironmentVariable Include="ASPNETCORE_ENVIRONMENT" Value="Production" />
    <!-- Labels -->
    <ContainerLabel Include="org.opencontainers.image.description" Value="My awesome app" />
    <ContainerLabel Include="org.opencontainers.image.source" Value="https://github.com/me/myapp" />
  </ItemGroup>
</Project>
Then publish:
dotnet publish -p:PublishProfile=DefaultContainer
<PropertyGroup>
<!-- Enable trimming to remove unused code -->
<PublishTrimmed>true</PublishTrimmed>
<TrimMode>full</TrimMode>
<!-- OR use Native AOT for even smaller, faster images -->
<PublishAot>true</PublishAot>
</PropertyGroup>
Results:
Traditional Dockerfile approach:
FROM mcr.microsoft.com/dotnet/sdk:9.0 AS build
WORKDIR /src
COPY ["MyApp.csproj", "."]
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /app/publish
FROM mcr.microsoft.com/dotnet/aspnet:9.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]
Built-in container approach:
# Just this!
dotnet publish -p:PublishProfile=DefaultContainer
Both produce similar images, but the built-in approach is simpler and more maintainable.
Use Dockerfile when:
Use built-in when:
.NET 9 supports "chiseled" Ubuntu images - ultra-minimal base images:
<PropertyGroup>
<ContainerBaseImage>mcr.microsoft.com/dotnet/aspnet:9.0-noble-chiseled</ContainerBaseImage>
</PropertyGroup>
Benefits:
Trade-offs:
Perfect for production where security and size matter most.
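You can verify the reduced attack surface yourself - chiseled images ship no shell at all (exact error text may vary):
# There's no /bin/sh to exec into, by design
docker run --rm -it mcr.microsoft.com/dotnet/aspnet:9.0-noble-chiseled /bin/sh
# => exec: "/bin/sh": stat /bin/sh: no such file or directory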
.NET Aspire is an opinionated, cloud-ready stack for building distributed applications. Think of it as Docker Compose on steroids, specifically designed for .NET microservices.
Aspire provides:
An Aspire solution has three project types:
1. Create Aspire project:
dotnet new aspire-starter -n MyDistributedApp
cd MyDistributedApp
2. App Host (MyDistributedApp.AppHost/Program.cs):
var builder = DistributedApplication.CreateBuilder(args);
// Add Redis cache
var cache = builder.AddRedis("cache");
// Add PostgreSQL database
var db = builder.AddPostgres("postgres")
.AddDatabase("mydb");
// Add API service (references cache and db)
var api = builder.AddProject<Projects.MyApi>("api")
.WithReference(cache)
.WithReference(db);
// Add frontend (references API)
builder.AddProject<Projects.MyWeb>("web")
.WithReference(api);
builder.Build().Run();
3. Use in your service:
// MyApi/Program.cs
var builder = WebApplication.CreateBuilder(args);
// Aspire automatically configures these based on AppHost
builder.AddServiceDefaults();
builder.AddRedisClient("cache");
builder.AddNpgsqlDbContext<MyDbContext>("mydb");
var app = builder.Build();
app.MapDefaultEndpoints(); // Health, metrics, etc.
4. Run everything:
dotnet run --project MyDistributedApp.AppHost
Aspire launches:
Docker Compose:
Aspire:
Use both: Aspire for local development, generates Docker Compose/Kubernetes for production.
Generate deployment manifests:
# Generate Docker Compose
dotnet run --project MyDistributedApp.AppHost -- \
--publisher compose \
--output-path ../deploy
# Generate Kubernetes manifests
dotnet run --project MyDistributedApp.AppHost -- \
--publisher manifest \
--output-path ../deploy/k8s
Then deploy:
# Docker Compose
docker-compose -f deploy/docker-compose.yml up -d
# Kubernetes
kubectl apply -f deploy/k8s/
Pre-built integrations make adding services trivial:
// Add various backing services
var redis = builder.AddRedis("cache");
var postgres = builder.AddPostgres("db").AddDatabase("mydb");
var rabbitmq = builder.AddRabbitMQ("messaging");
var mongodb = builder.AddMongoDB("mongo").AddDatabase("docs");
var sql = builder.AddSqlServer("sql").AddDatabase("business");
// Add Azure services
var storage = builder.AddAzureStorage("storage");
var cosmos = builder.AddAzureCosmosDB("cosmos");
var servicebus = builder.AddAzureServiceBus("messaging");
// Use in services
builder.AddProject<Projects.MyService>("service")
.WithReference(redis)
.WithReference(postgres)
.WithReference(rabbitmq);
Running a full observability stack like the one above requires significant resources. If you're self-hosting on a VPS with 4GB RAM or an old laptop, here are practical strategies to reduce resource consumption while maintaining functionality.
For local development, you don't need the full production stack. The devdeps-docker-compose.yml approach runs only what you need:
services:
smtp4dev:
image: rnwood/smtp4dev
ports:
- "3002:80"
- "2525:25"
volumes:
- e:/smtp4dev-data:/smtp4dev
restart: always
postgres:
image: postgres:16-alpine
container_name: postgres
ports:
- "5432:5432"
env_file:
- .env
volumes:
- e:/data:/var/lib/postgresql/data
restart: always
Why this works for development:
See the full development dependencies guide for setup instructions.
For a budget VPS (2-4GB RAM), prioritize essential services:
services:
# Core application
mostlylucid:
image: scottgal/mostlylucid:latest
restart: always
env_file: .env
volumes:
- ./markdown:/app/markdown
- ./logs:/app/logs
networks:
- app_network
deploy:
resources:
limits:
memory: 512M
cpus: '1.0'
reservations:
memory: 256M
# Database only
db:
image: postgres:16-alpine
env_file: .env
volumes:
- db_data:/var/lib/postgresql/data
networks:
- app_network
deploy:
resources:
limits:
memory: 512M
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
interval: 30s
timeout: 5s
retries: 3
# Caddy for HTTPS
caddy:
image: caddy:latest
ports:
- 80:80
- 443:443
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
- caddy_data:/data
networks:
- app_network
deploy:
resources:
limits:
memory: 128M
volumes:
db_data:
caddy_data:
networks:
app_network:
What's removed and alternatives:
- Seq centralized logging: use `docker logs`, or the Seq Cloud free tier

Start minimal, add services as needed:
Stage 1: Core (512MB-1GB VPS)
# Just app + database + reverse proxy
docker-compose up -d mostlylucid db caddy
Stage 2: Add Observability (2GB VPS)
# Add lightweight monitoring
docker-compose up -d mostlylucid db caddy seq
# Use Seq free 10GB/month
Stage 3: Full Stack (4GB+ VPS)
# Add everything
docker-compose up -d
services:
  db:
    image: postgres:16-alpine   # ~50% smaller than postgres:16
    # vs the full Debian base:
    # image: postgres:16
Savings: Alpine images are 50-70% smaller
Instead of one database per service, use one PostgreSQL instance with multiple databases:
db:
image: postgres:16-alpine
# One instance, multiple databases
# Umami, Mostlylucid, etc. all share this PostgreSQL
Savings: 400MB RAM per additional database you consolidate
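One way to create those extra databases is an init script mounted into the official postgres image - it runs once, on first initialization (the database names here are examples):
#!/bin/bash
# init-extra-dbs.sh - mount at /docker-entrypoint-initdb.d/init-extra-dbs.sh
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
    CREATE DATABASE umami;
    CREATE DATABASE mostlylucid;
EOSQL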
easynmt:
deploy:
resources:
limits:
cpus: "2.0" # Don't let translation consume all CPU
reservations:
cpus: "0.5" # Guarantee minimum
This prevents background jobs from starving your web application.
As covered in ImageSharp with Docker, mounting cache directories prevents unnecessary reprocessing:
mostlylucid:
volumes:
- /mnt/imagecache:/app/wwwroot/cache # ImageSharp cache persists across restarts
Why it matters: Without this, every container restart regenerates all thumbnails/processed images.
| Service | Self-Hosted RAM | External Alternative |
|---|---|---|
| Seq | ~500MB | Seq Cloud (10GB/month free) |
| Prometheus + Grafana | ~600MB | Grafana Cloud (free tier) |
| Umami | ~200MB | Plausible (paid) or self-host elsewhere |
Strategy: Offload observability to free tiers, keep core application on your VPS.
For infrequently used services like translation:
# Create a separate compose file
# translation-compose.yml
services:
easynmt:
image: easynmt/api:2.0.2-cpu
ports:
- "8888:8888"
volumes:
- /mnt/easynmt:/cache/
# Only run when needed
docker-compose -f translation-compose.yml up -d
# Translate your content
# ...
# Shut down when done
docker-compose -f translation-compose.yml down
Savings: 1-2GB RAM when translation service isn't running
Here's what runs on a $6/month Hetzner VPS (2 vCPU, 4GB RAM):
# Minimal production compose
services:
mostlylucid:
image: scottgal/mostlylucid:latest
restart: always
env_file: .env
volumes:
- /mnt/markdown:/app/markdown
- /mnt/logs:/app/logs
- /mnt/imagecache:/app/wwwroot/cache
depends_on:
- db
db:
image: postgres:16-alpine
env_file: .env
volumes:
- /mnt/postgres:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 30s
cloudflared:
image: cloudflare/cloudflared:latest
command: tunnel run --token ${CLOUDFLARED_TOKEN}
restart: always
watchtower:
image: containrrr/watchtower
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
WATCHTOWER_CLEANUP: "true"
WATCHTOWER_LABEL_ENABLE: "true"
command: --interval 3600 # Check once per hour, not every 5 minutes
Total resource usage:
What's different:
- Minimal monitoring (`docker logs` + occasional `grep`)

Without Prometheus/Grafana, use these free alternatives:
Health monitoring:
#!/bin/bash
# Simple health check script
while true; do
curl -f http://localhost/healthz || echo "Health check failed!" | mail -s "Alert" you@example.com
sleep 300
done
Log monitoring:
# Watch for errors
docker-compose logs -f --tail=100 | grep -i error
# Email on critical errors
docker-compose logs -f | grep -i "critical" | while read line; do
echo "$line" | mail -s "Critical Error" you@example.com
done
Resource usage:
# Quick resource check
docker stats --no-stream
# Pretty output
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
Now let's reimagine the entire stack using .NET Aspire. This gives you all the orchestration benefits with better .NET integration and an amazing developer experience.
First, create the Aspire App Host:
# Add Aspire to your solution
dotnet new aspire-apphost -n Mostlylucid.AppHost
cd Mostlylucid.AppHost
Mostlylucid.AppHost/Program.cs:
var builder = DistributedApplication.CreateBuilder(args);
// PostgreSQL with persistent data
var postgres = builder.AddPostgres("postgres")
.WithDataVolume() // Persistent storage
.WithPgAdmin(); // Optional: PgAdmin for database management
var mostlylucidDb = postgres.AddDatabase("mostlylucid");
var umamiDb = postgres.AddDatabase("umami");
// Seq for centralized logging
var seq = builder.AddSeq("seq")
.WithDataVolume();
// Redis for caching (if needed)
var redis = builder.AddRedis("cache")
.WithDataVolume()
.WithRedisCommander(); // Optional: Redis Commander UI
// Main blog application
var mostlylucid = builder.AddProject<Projects.Mostlylucid>("web")
.WithReference(mostlylucidDb)
.WithReference(seq)
.WithReference(redis)
.WithEnvironment("TranslateService__Enabled", "false") // Disable for dev
.WithHttpsEndpoint(port: 7240, name: "https");
// Umami analytics
var umami = builder.AddContainer("umami", "ghcr.io/umami-software/umami", "postgresql-latest")
.WithReference(umamiDb)
.WithEnvironment("DATABASE_TYPE", "postgresql")
.WithEnvironment("TRACKER_SCRIPT_NAME", "getinfo")
.WithEnvironment("API_COLLECT_ENDPOINT", "all")
.WithHttpEndpoint(port: 3000, name: "http");
// Translation service (CPU version, with resource limits)
var translation = builder.AddContainer("easynmt", "easynmt/api", "2.0.2-cpu")
.WithDataVolume("/cache")
.WithHttpEndpoint(port: 8888, name: "http")
.WithEnvironment("MODEL_FAMILY", "opus-mt");
// Scheduler service (Hangfire background jobs)
var scheduler = builder.AddProject<Projects.Mostlylucid_SchedulerService>("scheduler")
.WithReference(mostlylucidDb)
.WithReference(seq);
// Prometheus for metrics
var prometheus = builder.AddContainer("prometheus", "prom/prometheus", "latest")
.WithDataVolume()
.WithBindMount("./prometheus.yml", "/etc/prometheus/prometheus.yml")
.WithHttpEndpoint(port: 9090);
// Grafana for visualization
var grafana = builder.AddContainer("grafana", "grafana/grafana", "latest")
.WithDataVolume()
.WithHttpEndpoint(port: 3001)
.WithEnvironment("GF_SECURITY_ADMIN_PASSWORD", builder.Configuration["Grafana:AdminPassword"] ?? "admin");
builder.Build().Run();
Create Mostlylucid.ServiceDefaults project:
// Extensions.cs
public static class Extensions
{
public static IHostApplicationBuilder AddServiceDefaults(this IHostApplicationBuilder builder)
{
// OpenTelemetry
builder.Services.AddOpenTelemetry()
.WithMetrics(metrics =>
{
metrics.AddAspNetCoreInstrumentation()
.AddHttpClientInstrumentation()
.AddRuntimeInstrumentation();
})
.WithTracing(tracing =>
{
if (builder.Environment.IsDevelopment())
{
tracing.SetSampler(new AlwaysOnSampler());
}
tracing.AddAspNetCoreInstrumentation()
.AddHttpClientInstrumentation()
.AddEntityFrameworkCoreInstrumentation();
});
// Health checks
builder.Services.AddHealthChecks()
.AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "live" });
return builder;
}
public static IApplicationBuilder MapDefaultEndpoints(this WebApplication app)
{
app.MapHealthChecks("/healthz");
app.MapHealthChecks("/ready", new HealthCheckOptions
{
Predicate = check => check.Tags.Contains("ready")
});
return app;
}
}
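Then wire the defaults into the main application's Program.cs: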
var builder = WebApplication.CreateBuilder(args);
// Add Aspire service defaults (telemetry, health checks)
builder.AddServiceDefaults();
// Add services
builder.AddNpgsqlDbContext<MostlylucidDbContext>("mostlylucid");
builder.AddRedisClient("cache");
// Existing service registrations...
// builder.Services.AddControllersWithViews();
// etc...
var app = builder.Build();
// Map Aspire default endpoints
app.MapDefaultEndpoints();
// Existing middleware...
app.Run();
Development Experience:
dotnet run --project Mostlylucid.AppHost starts everything.env jugglingProduction Deployment:
Generate deployment manifests from Aspire:
# Generate Docker Compose
dotnet run --project Mostlylucid.AppHost -- \
--publisher compose \
--output-path ./deploy
# This creates a production-ready docker-compose.yml
cd deploy
docker-compose up -d
Generated docker-compose.yml (simplified):
services:
postgres:
image: postgres:16
environment:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
volumes:
- postgres-data:/var/lib/postgresql/data
mostlylucid-db:
image: postgres:16
# Database initialization
seq:
image: datalust/seq:latest
environment:
ACCEPT_EULA: Y
volumes:
- seq-data:/data
web:
image: scottgal/mostlylucid:latest
environment:
ConnectionStrings__mostlylucid: Host=postgres;Database=mostlylucid;Username=postgres;Password=${POSTGRES_PASSWORD}
ConnectionStrings__cache: cache:6379
depends_on:
- postgres
- cache
- seq
cache:
image: redis:7-alpine
volumes:
- redis-data:/data
# ... other services
| Feature | Docker Compose | .NET Aspire |
|---|---|---|
| Setup | Manual YAML | C# code with IntelliSense |
| Service Discovery | Manual env vars | Automatic |
| Observability | Bring your own | Built-in (OpenTelemetry) |
| Development | `docker-compose up` | `dotnet run` + Dashboard |
| Debugging | Attach to container | F5 in Visual Studio |
| Logs | `docker logs` | Centralized dashboard |
| Tracing | Setup manually | Automatic distributed tracing |
| Production | Use YAML directly | Generate manifests |
| Learning Curve | YAML syntax | C# you already know |
Use Aspire when:
Use Docker Compose when:
Use Both:
This blog's Docker journey shows typical progression:
Lessons learned:
- Pin image versions rather than relying on `latest`
- Use `.dockerignore` to exclude unnecessary files
- Keep secrets in `.env` (never commit it)
- Use `depends_on` with health conditions
- Apply `.dockerignore` generously

# Check logs
docker logs container-name
# Common issues:
# 1. Port already in use
docker ps | grep 8080 # Find conflicting container
docker stop conflicting-container
# 2. Missing environment variables
docker inspect container-name | grep Env
# 3. Failed health check
docker inspect container-name | grep Health -A 20
# Enable BuildKit for faster builds
export DOCKER_BUILDKIT=1
# Use build cache
docker build --cache-from myapp:latest -t myapp:latest .
# Check what's taking time
docker build --progress=plain -t myapp:latest .
# Containers can't communicate
# Solution: Ensure they're on the same network
docker network ls
docker network inspect network-name
# DNS not working
# Container names are DNS names within Docker networks
docker exec web ping db # Should work if both on same network
# Permission denied on volume
# Solution: Match user IDs
FROM ubuntu
RUN useradd -u 1000 appuser # Match host user ID
USER appuser
Docker has evolved from a simple containerization tool to a comprehensive platform for building, shipping, and running modern applications. This guide has taken you from fundamentals to production-ready deployments, with real-world examples from running mostlylucid.net.
For Beginners:
For Self-Hosters on Budget VPS:
For Production Deployments:
For .NET Developers:
For Advanced Use Cases:
This blog's Docker evolution mirrors typical progression:
| Stage | Services | RAM Usage | Complexity |
|---|---|---|---|
| Stage 1 (July 2024) | App + Tunnel | ~300MB | Simple |
| Stage 2 (Aug 2024) | + Dev dependencies | ~500MB | Learning |
| Stage 3 (Aug 2024) | + Volume fixes | ~500MB | Debugging |
| Stage 4 (Nov 2024) | Full observability | ~3.5GB | Production |
| Stage 5 (Today) | + Aspire option | Variable | Modern |
The pattern: Start simple, solve problems as they arise, add complexity only when needed.
Depending on your path:
Official Documentation:
This Blog's Docker Series:
Related Projects:
Remember: The best architecture is the one that meets your needs without unnecessary complexity. Start simple, measure, optimize when you hit actual bottlenecks - not imagined ones.
Happy containerizing! 🐳