Okay, confession: I was going to write this post two years ago. Then I dockerized a Laravel app, watched the image balloon to 1.8 GB, and decided I wasn’t qualified to write about Docker yet. Two years and roughly a hundred Dockerfiles later, I have opinions. Not a lot of them. Most “best practices” lists read like someone paraphrased the Docker docs and added “2026” to the title. I’d rather tell you the handful of things I actually do on every project, why, and the 2020-era advice I’ve quietly stopped caring about.
This is a working-developer list, not a conference talk. If you ship containers to production, some of this will be old news. Some of it should be.
Multi-stage builds, with the boring layout that actually works
If you only take one thing from this post: multi-stage builds aren’t optional. A builder stage compiles, a runtime stage copies just the artifact. The part that trips people up isn’t the pattern. It’s layering the pattern so the cache actually helps you.
Here’s the template I paste into almost every Node or Go project:
# --- deps stage: only changes when dependencies change
FROM node:22-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# --- build stage: inherits cached deps
FROM node:22-alpine AS build
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# --- runtime stage: nothing but what production needs
FROM node:22-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
COPY package.json ./
USER node
CMD ["node", "dist/index.js"]
Three stages, not two. The deps stage is the point: it only rebuilds when package.json or the lockfile changes. Your app code can change every ten seconds and the dependency layer stays cached. In a Laravel project the same shape works with composer install. I used to inline the install step in the build stage and wonder why CI was slow. That’s why.
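For the PHP flavour of that shape, the deps stage looks something like this. A minimal sketch, assuming the official composer:2 and php:8.3-fpm-alpine images; a real Laravel build would also want composer dump-autoload --optimize and the artisan package-discovery step once the source is in place.
# --- deps stage: only rebuilds when composer.json / composer.lock change
FROM composer:2 AS deps
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-interaction --no-scripts --prefer-dist
# --- runtime stage: app code can churn without busting the vendor/ layer
FROM php:8.3-fpm-alpine AS runtime
WORKDIR /app
COPY --from=deps /app/vendor ./vendor
COPY . .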
Pin base images, but stop pinning by digest in app repos
This one I’ve gone back and forth on. The consensus advice is “pin by SHA256 digest, always.” I tried it. It made my life worse.
What I do now: pin to a minor version tag (node:22.11-alpine, not node:22-alpine or node:22.11.0-alpine@sha256:...). You get reproducible-enough builds and still get patch updates when you rebuild. If you need byte-exact reproducibility for compliance or audited environments, you pin digests. But then you need Dependabot digest updates or something similar, because otherwise you’ll sit on an unpatched base image for six months and not notice.
The honest answer: pick a policy per repo. A payments service I maintain pins by digest with automated PRs. A throwaway internal tool pins to :22-alpine and rebuilds weekly via a scheduled CI job. Both are fine.
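If you do go the digest route, the automation is the non-negotiable half. A minimal sketch of the Dependabot side, assuming GitHub and a Dockerfile at the repo root:
# .github/dependabot.yml: opens PRs when the pinned base image goes stale
version: 2
updates:
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"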
Stop running as root. Actually stop.
I know this is lecture material, but I still see it in code review weekly. Every Dockerfile should end with a non-root user. Node images ship with a node user ready to go, so it’s one line:
USER node
For Go or Rust binaries where you control the image from scratch, create the user explicitly:
FROM alpine:3.20 AS runtime
RUN addgroup -S app && adduser -S -G app app
COPY --from=build --chown=app:app /src/target/release/myapp /usr/local/bin/
USER app
ENTRYPOINT ["myapp"]
The Docker security docs cover the reasoning better than I can. If you can’t run as non-root because your app writes to /var/lib/something, fix the app, don’t skip the user. “Temporary” container permissions become permanent the moment something ships.
Smaller images are a feature, not a flex
The size-obsession crowd loves posting screenshots of 8 MB Go containers. Fun, but the actual reason to care about image size is cold-start latency and pull costs on autoscaling. A 200 MB image pulled 40x a minute across a fleet is a real number on your AWS bill.
What actually helps:
- Alpine or distroless for the runtime stage. Alpine is fine for most things. Distroless (gcr.io/distroless/nodejs22-debian12 for Node, gcr.io/distroless/static-debian12 for Go) is smaller and has a smaller attack surface, but you can’t docker exec -it ... sh into it when debugging, which you will want to do eventually. Pick your trade-off. I use distroless for Go services and Alpine for Node.
- Use .dockerignore like you mean it: node_modules, .git, coverage/, *.log, local .env files. Every file you don’t copy is one that can’t bloat your image or leak into it. I’ve pulled AWS keys out of other people’s containers because their .dockerignore was empty.
- Don’t install build tooling in the runtime stage. No npm install -g, no apt install build-essential, no interactive shells. If you think you need curl to run a healthcheck, use HEALTHCHECK with your app’s own /health endpoint instead (sketched right after this list).
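That curl-free healthcheck is shorter than it sounds. A sketch, assuming the Alpine Node runtime stage from earlier and a small hypothetical dist/healthcheck.js that hits your own /health endpoint and exits non-zero on failure:
# exec form, so it still works in images without a shell
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD ["node", "dist/healthcheck.js"]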
The size flex is silly. The discipline behind the size is what matters. The Docker build best practices guide is still the cleanest primer on this.
BuildKit features I’d have to be pried away from
Modern Docker (buildx / BuildKit) gives you two things that paid for themselves the first week.
Cache mounts let you cache things like ~/.cache/pip or /root/.npm across builds without baking them into layers:
RUN --mount=type=cache,target=/root/.npm \
npm ci
CI goes from 90 seconds of npm install to about 12. The BuildKit cache mount docs show the syntax for pip, cargo, apt, and go. Use it everywhere.
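The Go version is the same idea, as a sketch; the module cache and the build cache each get their own mount, and the output paths here are placeholders:
RUN --mount=type=cache,target=/go/pkg/mod \
    --mount=type=cache,target=/root/.cache/go-build \
    go build -o /out/server ./cmd/server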
Secret mounts mean you can pass an NPM_TOKEN or a private registry key into a single RUN step without it landing in any layer:
RUN --mount=type=secret,id=npm_token \
NPM_TOKEN=$(cat /run/secrets/npm_token) npm ci
Pair this with DOCKER_BUILDKIT=1 (the default on recent Docker versions) and a --secret flag on the build command in CI. It replaces the old pattern of copying .npmrc in and deleting it later, which, spoiler, leaves the token in an intermediate layer anyway.
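The CI side is one flag on the build command. Something like this, assuming the token lives in a file; newer Buildx versions also accept env= to read it from an environment variable instead:
DOCKER_BUILDKIT=1 docker build \
  --secret id=npm_token,src=./npm_token.txt \
  -t myapp:latest .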
Compose files are still where real projects go sideways
I’ve become more relaxed about Kubernetes and more particular about docker-compose.yml. Two specific things I do now.
First, explicit healthchecks on anything another service depends on. A Postgres container with healthcheck: pg_isready lets me use depends_on.condition: service_healthy so my app doesn’t race the database at startup. Five lines of YAML. Has saved me more flaky CI runs than anything else.
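Those five-ish lines, roughly: a sketch assuming a stock Postgres service and an app service that should wait for it.
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy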
Second, I stopped sharing a single docker-compose.yml between local dev and production. Local gets docker-compose.yml plus docker-compose.override.yml (auto-merged), with bind mounts and debug ports. Production uses a docker-compose.prod.yml composed in explicitly with -f. Slightly more typing, and it avoids the “works on my machine, breaks in staging because of a volume mount” class of bug. I wrote a bit about keeping clean boundaries between environments in my API design notes; the same instinct applies to infra.
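Concretely, the two invocations end up looking like this (the file names are just my convention):
# local: docker-compose.yml + docker-compose.override.yml are merged automatically
docker compose up --build
# production / staging: every file is named explicitly
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d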
Things I used to do that I’ve quietly dropped
In the spirit of admitting things.
- Fat ENTRYPOINT shell scripts that do migrations, seeding, and starting the app. Too clever. I split these into separate compose services now, or use init containers on Kubernetes. An entrypoint.sh that does three things fails in three places and logs in one.
- Obsessing over the exact order of every COPY line. The 2019 blog posts treated layer ordering like a sacred rite. With BuildKit’s DAG-based builds and cache mounts, micro-optimising layer order matters much less than it used to. Get the dependency step cached early and move on.
- Shipping a single “dev” image with every tool installed. Delightful for a week, unmaintainable by month three. I use a small runtime image and a separate dev stage (via target: dev in compose) when I want hot reload and debuggers; the compose side is sketched after this list.
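What that dev-stage wiring looks like in compose, roughly; a sketch assuming the Dockerfile has a dev stage with the watcher and debugger installed:
services:
  app:
    build:
      context: .
      target: dev        # build only up to the dev stage
    volumes:
      - .:/app           # bind mount for hot reload
    ports:
      - "9229:9229"      # debugger port, Node inspector in my case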
What to actually do this week
If your Dockerfiles are older than a year, open your most-deployed service and check three things.
- Are you on BuildKit? (docker buildx version should work.) If not, enable it. The cache-mount change alone is worth an afternoon.
- Does your image run as non-root? Grep your Dockerfile for USER. If there’s no USER line, add one before your next deploy.
- Is your .dockerignore empty or missing? Write one. At minimum: node_modules, .git, .env*, *.log, coverage/, and dist/ if you build inside the image. A starter version is sketched below.
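And the starter .dockerignore, to copy in and prune:
# keep the build context small and secrets out of layers
node_modules
.git
.env*
*.log
coverage/
# only if you build dist inside the image:
dist/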
That’s a 20-minute PR for most projects and it’ll outlive any 2026-flavoured “top 10 Docker tips” post. If you want a look at the work behind the opinions here, I keep notes on my project page; a lot of my consulting time goes to shipping containerised services for teams who don’t want to think about Docker every sprint. The best Dockerfile is the one nobody on the team has opened in a year.