6 Docker Features for Advanced Usage

Six Docker capabilities that go beyond basic container operations: .dockerignore tricks, multi-stage builds, BuildKit mount types, health checks, log drivers, and secure credential storage.

Docker has become such a routine tool that many developers stop at the basics: write a Dockerfile, build an image, run a container. But under the hood lie features that can significantly speed up builds, reduce image size, and improve operational reliability. In this article, we'll look at six Docker capabilities ranging from commonly known to rarely used, each of which deserves a place in your toolkit.

1. File Exclusion During Builds (.dockerignore)

Before sending the build context to the Docker daemon, Docker looks for a .dockerignore file in the context root. All files and directories matching its patterns are excluded from the context before being sent to the builder. This reduces the context size, speeds up the build, and prevents sensitive or unnecessary files from leaking into the image.

Here's an example of a typical .dockerignore:

# comment - will be ignored by docker
**/.git
**/node_modules/
**/LICENSE
**/.vscode
**/npm-debug.log
**/coverage
**/logs
**/.editorconfig
**/.aws
temp?
*.md
!README*.md

The syntax supports wildcards (*), directory recursion (**), single-character matching (?), and negation (!) to re-include previously excluded patterns.

A lesser-known feature: you can create separate ignore files for multiple Dockerfiles. If your project structure looks like this:

├── index.ts
├── src/
├── docker
│  ├── build.Dockerfile
│  ├── build.Dockerfile.dockerignore
│  ├── lint.Dockerfile
│  ├── lint.Dockerfile.dockerignore
│  ├── test.Dockerfile
│  └── test.Dockerfile.dockerignore
├── package.json
└── package-lock.json

Each Dockerfile gets its own ignore file, named <dockerfile-name>.dockerignore — so build.Dockerfile pairs with build.Dockerfile.dockerignore. When both exist, the Dockerfile-specific file takes precedence over the shared .dockerignore; note that this per-Dockerfile lookup requires BuildKit. This means your build image can exclude test fixtures, your test image can exclude build artifacts, and your lint image can exclude everything except source code.
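To use one of these Dockerfiles, pass it explicitly with -f; BuildKit then picks up the matching ignore file automatically. A sketch assuming the layout above (image tags are illustrative):

```shell
# Each build uses its own Dockerfile; BuildKit automatically applies
# the <name>.dockerignore file sitting next to it.
docker build -f docker/build.Dockerfile -t myapp:build .
docker build -f docker/lint.Dockerfile  -t myapp:lint .
docker build -f docker/test.Dockerfile  -t myapp:test .
```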

2. Multi-Stage Builds

Multi-stage builds let you separate compilation from runtime in a single Dockerfile. The idea is simple: the first stage installs development tools and builds your application; the second stage copies only the resulting artifacts into a minimal base image.

A basic example with Go:

FROM golang:1.24 AS build
WORKDIR /app
COPY . .
RUN go build -o /app/bin/app

FROM alpine:3.20
COPY --from=build /app/bin/app /usr/local/bin/app
CMD ["app"]

The first stage uses golang:1.24 (which includes the Go compiler, linker, and standard library — hundreds of megabytes), while the final image is based on alpine:3.20 (about 5 MB). The result: a production image containing only the compiled binary and a minimal OS layer.

A more production-ready version with dependency caching, a non-root user, and CGO_ENABLED=0 (so the statically linked binary actually runs on musl-based Alpine, which lacks glibc):

FROM golang:1.24 AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app/bin/app ./cmd/app

FROM alpine:3.20
RUN adduser -D appuser
COPY --from=build /app/bin/app /usr/local/bin/app
USER appuser
CMD ["/usr/local/bin/app"]

By copying go.mod and go.sum first and running go mod download before copying the rest of the source code, Docker can cache the dependency download layer. Only when your dependencies actually change does this layer need to rebuild.

3. Mount Types in BuildKit

BuildKit, Docker's modern build backend, supports several mount types for RUN instructions. The four below solve common build problems without leaving artifacts in the final image.

Bind mounts temporarily attach host directories into the build container:

# syntax=docker/dockerfile:1
FROM node:16

RUN --mount=type=bind,source=./tools,target=/app/tools \
    ./tools/setup.sh

The tools directory is available during the RUN command but doesn't become part of the image layer. At runtime, bind mounts work similarly:

docker run -it --mount type=bind,src="$(pwd)",target=/src ubuntu bash

Or with the shorter volume syntax:

docker run -it \
  -v "$(pwd):/src" \
  ubuntu bash

Cache mounts persist directories between builds to accelerate dependency installation:

# syntax=docker/dockerfile:1
FROM python:3.11

WORKDIR /app

COPY requirements.txt .

RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt

COPY . .

RUN pip install .

The pip cache persists between builds, so packages that haven't changed don't need to be re-downloaded. This can turn a 3-minute dependency installation into a 10-second operation on subsequent builds.
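The same pattern works for other ecosystems. A hedged sketch for Go, reusing the module layout from section 2 (the cache paths are the Go toolchain's defaults inside the golang image, which runs as root):

```dockerfile
# syntax=docker/dockerfile:1
FROM golang:1.24 AS build
WORKDIR /app
COPY go.mod go.sum ./
# Persist the module cache across builds
RUN --mount=type=cache,target=/go/pkg/mod \
    go mod download
COPY . .
# Persist both the module cache and the compiler's build cache
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    CGO_ENABLED=0 go build -o /app/bin/app ./cmd/app
```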

SSH mounts grant temporary access to SSH keys for cloning private repositories:

# syntax=docker/dockerfile:1
FROM ubuntu:22.04

RUN --mount=type=ssh git clone git@github.com:myorg/private.git /src/private

The SSH agent is forwarded into the build container for the duration of the RUN command, then removed. Your private key never touches the image.
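For this to work, the agent must be forwarded explicitly at build time (this assumes an ssh-agent is already running on the host with your key loaded):

```shell
# Forward the host's default SSH agent socket into the build
docker build --ssh default .
```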

Secret mounts securely pass credentials without embedding them in image layers:

RUN --mount=type=secret,id=mysecret \
    some-command-that-needs-secret /run/secrets/mysecret

The secret is then passed in at build time:

docker build --secret id=mysecret,src=/local/path/to/secret.txt .

The secret file is available at /run/secrets/mysecret during the build step, but never appears in the image history or any layer.

4. Container Health Checks

The HEALTHCHECK instruction tells Docker how to verify that a container is still functioning correctly. Without it, Docker only knows whether the main process is running — not whether it's actually serving requests.

HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
  CMD curl -f http://localhost:8080/ || exit 1

This configuration checks the container every 30 seconds, allows 10 seconds for each check to complete, gives the container 15 seconds to start up before counting failures, and marks it unhealthy after 3 consecutive failures.

A container can be in one of three health states: starting, healthy, or unhealthy. Health check commands should be lightweight and deterministic, returning 0 for success and any non-zero value for failure.

Orchestrators can use health status for automated recovery: Docker Swarm restarts unhealthy containers and routes traffic away from them. (Kubernetes is a notable exception — it ignores the HEALTHCHECK instruction and uses its own liveness and readiness probes instead.) Even without an orchestrator, health checks provide visibility: docker ps shows the health status, and docker inspect reveals the full health check history.
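That history can be queried directly with Go-template output; a quick sketch (the container name web is a placeholder):

```shell
# Current status: starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' web

# Full record of recent checks, including exit codes and command output
docker inspect --format '{{json .State.Health}}' web
```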

5. Log Drivers

By default, Docker captures container stdout and stderr to JSON files on disk (the json-file driver). But this is just one of many available log drivers. Depending on your infrastructure, you might prefer:

  • syslog — forwards logs to a syslog server
  • journald — integrates with systemd's journal
  • fluentd — sends logs to a Fluentd collector for aggregation
  • gelf — ships logs to Graylog or compatible systems
  • awslogs — sends logs directly to AWS CloudWatch

You can set the log driver per container:

docker run --log-driver=syslog nginx:1.27
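Most drivers accept additional options via --log-opt. For example, pointing syslog at a remote collector and tagging entries with the container name (the address is a placeholder):

```shell
docker run --log-driver=syslog \
  --log-opt syslog-address=udp://logs.example.com:514 \
  --log-opt tag="{{.Name}}" \
  nginx:1.27
```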

Or configure a default for all containers in /etc/docker/daemon.json:

{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}

The local driver is a good default for production: it uses compressed rotation, limiting disk usage while retaining enough history for debugging. Setting max-size and max-file prevents a verbose container from filling your disk.

6. Credential Storage Security

When you run docker login, your credentials are stored in ~/.docker/config.json. By default, they're base64-encoded — which is not encryption. Anyone with read access to that file can decode your credentials instantly.

Credential helpers delegate storage to OS-native secure vaults:

  • macOS: Keychain via osxkeychain
  • Windows: Windows Credential Manager via wincred
  • Linux: pass (GPG-encrypted password store) or secretservice (GNOME Keyring)

For macOS, the configuration is minimal:

{ "credsStore": "osxkeychain" }

For Linux using pass, setup requires a few more steps:

# install pass and gpg (Debian/Ubuntu)
sudo apt-get update
sudo apt-get install -y pass gpg

# initialize pass store
gpg --full-generate-key
gpg --list-keys
pass init "your-GPG_KEY_ID"

# download docker-credential-pass helper
curl -L -o /usr/local/bin/docker-credential-pass \
  https://github.com/docker/docker-credential-helpers/releases/download/v0.9.3/docker-credential-pass-v0.9.3.linux-amd64
chmod +x /usr/local/bin/docker-credential-pass

# configure Docker
mkdir -p ~/.docker
cat > ~/.docker/config.json <<'EOF'
{ "credsStore": "pass" }
EOF

After this setup, docker login stores credentials in your GPG-encrypted password store rather than in a plaintext-equivalent file. This is especially critical in CI/CD environments and shared machines.
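To verify the helper is wired up, log in and check where the credentials actually landed (the registry name is illustrative):

```shell
docker login registry.example.com

# The entry should appear in the GPG-encrypted store...
pass ls

# ...while the auths entries in config.json stay empty of secrets
cat ~/.docker/config.json
```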

Conclusion

None of these features are new, and none require exotic tooling. They're built into Docker, documented, and ready to use. The difference between a basic Docker workflow and a professional one often comes down to knowing these capabilities exist and applying them consistently. Faster builds, smaller images, better security, and more reliable services — the investment in learning these features pays for itself quickly.
