How to Build a Docker Image You Can Actually Run in Production (Not Just on Your Laptop)

A practical guide covering ten essential practices for building production-ready Docker images, from choosing the right base image and multi-stage builds to security hardening and health checks.

If you write a Dockerfile, chances are it works. But the real question isn't whether it works. The real question is: will it still work a week from now, on a different server, in CI/CD? Will it work a year later if the base image gets updated?

This article is a practical guide. No abstract theory — just concrete recommendations that will make your Docker image predictable, resilient, and secure.

1. Choose the Right Base Image

The problem: Many developers start with something like python:3.12 or node:20 without thinking about what's actually inside. These "full" images are based on Debian or Ubuntu and include a massive set of packages you'll never need: compilers, man pages, debugging tools, and much more.

What's the harm?

  • Image size can easily exceed 1 GB
  • A larger attack surface — more packages means more potential vulnerabilities
  • Slower builds and deploys

The solution:

  • Use -slim variants (e.g. python:3.12-slim) — these contain only the minimum required for running the language runtime
  • Use -alpine images when you need the smallest possible footprint, with caveats (more on that in section 12)
  • Apply multi-stage builds (section 3) to separate build-time and runtime dependencies

A simple rule of thumb: the fewer things in the image, the fewer things can break.
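
A quick way to see the difference for yourself (a sketch; this needs a local Docker daemon, and exact sizes vary by version):

```shell
# Pull both variants and compare their on-disk sizes
docker pull python:3.12
docker pull python:3.12-slim
docker images python --format "{{.Tag}}\t{{.Size}}"
```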

2. Never Use the latest Tag

latest is a trap. It seems convenient — you always get the "newest" version. But in practice:

  • Today python:latest points to 3.12, tomorrow it might point to 3.13
  • The same tag on different machines can resolve to different images
  • Reproducibility goes out the window — you can't recreate the exact same build

What to do:

  • Always specify a concrete version: python:3.12.1-slim
  • For maximum predictability, pin the SHA256 digest: python@sha256:abc123...
  • In CI/CD, tag your images explicitly: myapp:1.2.3, not myapp:latest

Version pinning is one of the simplest things you can do, and it eliminates an entire class of "it worked yesterday" problems.
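
To find the digest of an image you already have, so you can pin it in your FROM line (a sketch; requires a local Docker daemon):

```shell
# Print the repo digest of a pulled image
docker pull python:3.12-slim
docker inspect --format '{{index .RepoDigests 0}}' python:3.12-slim
```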

3. Use Multi-Stage Builds

Multi-stage builds are one of Docker's most powerful features. The idea is simple: you use one stage to build your application and its dependencies, then copy only the artifacts you need into a clean, minimal final image.

Here's an example for a Python application:

FROM python:3.12.1-slim AS builder
WORKDIR /install
RUN apt-get update && apt-get install -y --no-install-recommends build-essential
COPY requirements.txt .
RUN pip install --upgrade pip && \
    pip wheel --wheel-dir /wheels -r requirements.txt

FROM python:3.12.1-slim
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1
WORKDIR /app
COPY --from=builder /wheels /wheels
COPY requirements.txt .
RUN pip install --no-cache-dir --no-index --find-links=/wheels -r requirements.txt && \
    rm -rf /wheels
COPY . .
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

What's happening here:

  • The first stage (builder) installs build tools and compiles dependencies into wheel files
  • The second stage starts fresh from a clean slim image
  • Only the pre-built wheels are copied over — no compilers, no build tools, no junk

Benefits:

  • The final image is dramatically smaller
  • No unnecessary packages in production
  • Faster builds thanks to better layer caching

4. Use a .dockerignore File

This is a must-have that many people forget about. Without .dockerignore, Docker copies everything from your build context into the image — including things that have no business being there.

A basic .dockerignore file:

.git
__pycache__/
*.pyc
.env
.vscode/
node_modules/
*.log
tests/
.pytest_cache/

Why this matters:

  • Smaller build context = faster builds
  • Secrets (.env) don't accidentally end up in the image
  • Cache invalidation works correctly — unrelated file changes won't bust the cache

Think of .dockerignore as .gitignore for your Docker builds. If you wouldn't commit it, you probably shouldn't ship it in a container either.
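
An alternative, stricter approach is an allowlist: exclude everything by default, then re-include only what the build actually needs. A sketch, assuming your code lives in app/:

```
# Exclude everything...
*
# ...then re-include only what the image needs
!app/
!requirements.txt
```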

5. Be Careful with Volume Mounts

A common gotcha: you mount a host directory into a container with -v $(pwd):/app, and suddenly files that were supposed to be in the image are gone — overwritten by whatever's on the host.

How this causes problems:

  • In development, your host directory might have different files than what was built into the image
  • Permissions can get mangled between host and container filesystems
  • Data written by the container can disappear when the container is removed

Best practices:

  • Use named volumes for persistent data:
docker volume create mydata
docker run -v mydata:/data myapp
  • Set proper ownership and permissions in the Dockerfile before the volume is mounted
  • In development, use bind mounts consciously and understand what you're overriding

6. Understand CMD vs. ENTRYPOINT

This is one of the most commonly misunderstood areas of Dockerfile design.

  • CMD specifies the default command, but it gets replaced entirely if you pass arguments to docker run
  • ENTRYPOINT defines the main executable, and arguments from docker run get appended to it

The recommended pattern:

ENTRYPOINT ["python", "app.py"]
CMD ["--debug"]

This way:

  • docker run myapp runs python app.py --debug
  • docker run myapp --verbose runs python app.py --verbose

The key insight: ENTRYPOINT gives you predictability. The container always runs the same program. CMD gives you flexibility for default arguments that can be overridden.

If you only use CMD, anyone can accidentally replace your entire startup command with something unexpected.
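
One more thing worth knowing: ENTRYPOINT can still be overridden explicitly, which is handy for debugging (a sketch; myapp is a placeholder image name):

```shell
# Replace the entrypoint with a shell to poke around inside the image
docker run --rm -it --entrypoint /bin/sh myapp
```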

7. Don't Run as Root

By default, processes inside a Docker container run as root. This is a serious security concern: a compromised process has root inside the container, and if the attacker then manages to escape the container, that typically means root access on the host as well.

The fix is simple:

RUN useradd -m myuser
USER myuser

Add this near the end of your Dockerfile, after installing packages and copying files. From that point on, all commands run as the unprivileged user.

Things to watch out for:

  • Make sure the application directory is owned by the new user
  • If you need to bind to ports below 1024, you'll need additional capabilities or a reverse proxy
  • Some base images already include a non-root user — check before creating another one

Running as a non-root user is one of the easiest security wins you can get. There's almost never a good reason not to do it.
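
Putting the ownership advice together, a minimal sketch (appuser and /app are illustrative names):

```dockerfile
RUN useradd --create-home appuser
WORKDIR /app
# --chown sets ownership at copy time, avoiding an extra chown layer
COPY --chown=appuser:appuser . .
USER appuser
```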

8. Add a HEALTHCHECK

Docker knows when a container crashes. But it doesn't know when your application is alive but not functioning — when it's stuck in an infinite loop, when the database connection is dead, or when it's returning 500 errors on every request.

That's what HEALTHCHECK is for:

HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD curl -f http://localhost:8000/health || exit 1

What this does:

  • Every 30 seconds, Docker calls the health endpoint
  • If it fails 3 times in a row (with a 10-second timeout), the container is marked as unhealthy
  • Docker Swarm can then restart or replace the unhealthy container (note that Kubernetes ignores Dockerfile HEALTHCHECKs and uses its own liveness and readiness probes instead)

Tips:

  • Create a lightweight /health endpoint that checks critical dependencies (database connection, etc.)
  • Don't make the health check too heavy — it runs frequently
  • If you don't have curl in your image, use a Python or Node one-liner instead
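
For images without curl, a small Python probe can do the job. A sketch, assuming the app exposes /health on port 8000 (both are assumptions about your app):

```python
# A health probe helper: returns True if the endpoint answers with HTTP 2xx
import urllib.request


def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint responds with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        # Connection refused, timeout, or non-2xx all count as unhealthy
        return False
```

Wrap it in a tiny script that calls sys.exit(0 if check(...) else 1) and point HEALTHCHECK CMD at that script.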

9. Optimize Layer Caching

Docker caches each layer of your image. If nothing in a layer has changed, Docker reuses the cached version instead of rebuilding it. This can speed up builds dramatically — but only if you structure your layers correctly.

The golden rule: put things that change less often at the top, and things that change frequently at the bottom.

For a typical application:

# Dependencies change rarely — cache this layer
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application code changes often — this layer gets rebuilt
COPY . .

If you put COPY . . before the pip install, every code change would invalidate the dependency cache and force a full reinstall. That turns a 5-second build into a 5-minute build.

Other caching tips:

  • Combine related RUN commands with && to reduce the number of layers
  • Clean up temporary files in the same layer they're created: apt-get install -y pkg && rm -rf /var/lib/apt/lists/*
  • Use --mount=type=cache in BuildKit for persistent caches across builds
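
The BuildKit cache mount mentioned above looks like this (a sketch; it requires BuildKit, the default builder in recent Docker versions):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12.1-slim
WORKDIR /app
COPY requirements.txt .
# The pip download cache persists across builds without ending up in a layer
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
```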

10. Docker in CI/CD

The CI/CD environment has its own set of challenges:

  • Tag images explicitly: Use semantic versions (myapp:1.2.3) or commit SHAs (myapp:a1b2c3d), never latest
  • Never embed secrets in the image: No API keys, database passwords, or certificates in the Dockerfile. Use build secrets, environment variables, or a secrets manager at runtime
  • Use .dockerignore: CI runners often have extra files (logs, test reports, coverage data) that shouldn't end up in the image
  • Scan for vulnerabilities: Integrate tools like Trivy, Snyk, or Docker Scout into your pipeline to catch known CVEs before deployment

A CI/CD pipeline should produce an image that is identical every time it's built from the same input. If it doesn't, you have a reproducibility problem that will bite you eventually.
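
A minimal build-and-push step might look like this (a sketch; the registry name is a placeholder, and the SHA tagging scheme is one option among several):

```shell
# Build once, tag with the commit SHA, and push; never retag as latest
GIT_SHA="$(git rev-parse --short HEAD)"
docker build -t "registry.example.com/myapp:${GIT_SHA}" .
docker push "registry.example.com/myapp:${GIT_SHA}"
```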

11. Reduce Image Size

Smaller images mean faster pulls, faster deploys, lower storage costs, and a smaller attack surface. Here are specific techniques:

  • Remove build dependencies after use: If you installed build-essential to compile something, remove it in the same RUN layer (or better yet, use multi-stage builds)
  • Clean package manager caches: apt-get clean && rm -rf /var/lib/apt/lists/*
  • Use --no-cache flags: pip install --no-cache-dir prevents pip from storing downloaded packages
  • Prefer slim base images: They're often several times smaller than full images
  • Audit your layers: Use docker history or tools like dive to see what's taking up space
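
The apt cleanup advice, applied in a single layer (a sketch; libpq5 is just a placeholder runtime package):

```dockerfile
# Install, then clean the package lists in the same RUN so the layer stays small
RUN apt-get update \
 && apt-get install -y --no-install-recommends libpq5 \
 && rm -rf /var/lib/apt/lists/*
```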

Every megabyte matters when you're deploying dozens of containers across multiple environments.

12. Alpine Is Not a Silver Bullet

Alpine Linux is famously tiny — its base image is around 5 MB. It's tempting to use it for everything. But there's a catch: Alpine uses musl instead of glibc.

What this means in practice:

  • Some C libraries and Python packages that depend on glibc won't work or need to be compiled from source
  • Installing packages is often far slower because most pre-built wheels target glibc, so packages must be compiled from source (musllinux wheels exist, but coverage is still patchy)
  • Subtle runtime bugs can appear due to differences between musl and glibc behavior

When Alpine makes sense:

  • For Go applications (statically compiled, no libc dependency)
  • For simple applications with no native dependencies
  • When you truly need the smallest possible image and have tested thoroughly

When to avoid Alpine:

  • Python applications with C extensions (NumPy, pandas, etc.)
  • Applications that depend on specific glibc behavior
  • When you don't want to spend time debugging musl-related issues

In most cases, slim images offer the best balance between size and compatibility. They're larger than Alpine but come with far fewer surprises.
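
If you do commit to Alpine and need to compile a C extension, the usual pattern is a virtual build-deps group removed in the same layer (a sketch; the package names are illustrative):

```dockerfile
FROM python:3.12-alpine
# .build-deps is a named group that can be deleted in one shot after compiling
RUN apk add --no-cache --virtual .build-deps gcc musl-dev libffi-dev \
 && pip install --no-cache-dir cffi \
 && apk del .build-deps
```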

Conclusion

Building a Docker image isn't just about getting your code to run in a container. It's about creating a deployment artifact that is predictable, secure, and maintainable.

Here's the checklist:

  • Use minimal base images and pin their versions
  • Leverage multi-stage builds to keep production images clean
  • Add .dockerignore to exclude unnecessary files
  • Don't run as root
  • Add health checks
  • Structure layers for optimal caching
  • Never store secrets in images
  • Test with the same image in all environments

Each of these practices is simple on its own. Together, they transform a fragile local Dockerfile into a production-grade artifact that you can deploy with confidence.

The small things you skip today become the production incidents you debug tomorrow.