
Four Ways to Deploy a Bun API: What I Picked for Honu and Why

John Fay
TL;DR: Compiled tied distroless on throughput, matched alpine on size, and beat everything on attack surface. I originally picked distroless for the operational comfort, then talked myself out of that and shipped the compiled binary. Both are defensible — here's the honest tradeoff. Also, I almost published wrong numbers because I forgot to set CPU limits.
bun docker devops performance honu

The image I almost kept

When I first containerized Honu (released, but not in its official home yet), the default oven/bun:1-slim Dockerfile felt fine. It worked. The image was 318 MB. Nobody complained.

Meanwhile, on another project I’ve been running a bun build --compile single binary on chainguard/glibc-dynamic for a while — no Bun runtime in the final image at all. I picked that approach originally to satisfy a security-compliance requirement at my day job, and it just stuck. But honestly, I’d never pressure-tested why it was the right choice versus the alternatives. I knew it passed the audit. I didn’t know if it was actually faster, smaller, or just performatively secure.

Honu was a good excuse to find out. So I benchmarked all four real options against each other. The answer wasn’t what I expected.

Michael Scott - Tell me more

The contenders

  1. oven/bun:1-slim — the default. Debian-slim base, full Bun runtime, has apt and a shell. 318 MB.
  2. oven/bun:1-distroless — Bun on Google’s distroless. No shell, no apt, no /sbin. 214 MB.
  3. oven/bun:1-alpine — Bun on Alpine + musl. Has a busybox shell + apk. 193 MB.
  4. bun build --compile + chainguard/glibc-dynamic — a single self-extracting binary on a hardened minimal base. 193 MB. No shell, no Bun binary at all.

The same Stage 1 builds them all (full oven/bun:1, frozen lockfile install, vite build for the SPA, bun build for the API). Only Stage 2 differs.
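A sketch of that shared Stage 1 (file names, the `src/index.ts` entry, and the `dist` output path are illustrative, not Honu's actual layout):

```dockerfile
# Stage 1, shared by all four variants.
FROM oven/bun:1 AS build
WORKDIR /app

# Frozen lockfile so every variant builds from identical dependencies.
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile

COPY . .
RUN bunx vite build                                    # SPA assets
RUN bun build src/index.ts --target=bun --outdir=dist  # API bundle
```

Only the `FROM` line and copy targets of Stage 2 change between the four contenders, which is what makes them comparable.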

The setup

Containers limited to 2 CPU and 512 MB RAM via cgroups. 1k-request warm-up, then 10k requests at 50 concurrency against /api/v1/health. Five runs each, first run discarded as a warm-up outlier.
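Concretely, the harness looked something like this. The image tag and the `oha` load generator are stand-ins for whatever you use; the flags are the part that matters:

```shell
# Pin the container to 2 CPUs / 512 MB so you measure the container,
# not the idle cores of the host (image name is illustrative).
docker run -d --name bench --cpus=2 --memory=512m -p 3000:3000 honu-api:distroless

# 1k-request warm-up pass, then the measured pass: 10k requests, 50 concurrent.
oha -n 1000  -c 50 http://localhost:3000/api/v1/health
oha -n 10000 -c 50 http://localhost:3000/api/v1/health

docker rm -f bench
```

Repeat five times per image and discard run #1, for reasons covered below.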

| Image | Size | Cold start | RPS | RSS |
| --- | --- | --- | --- | --- |
| slim (default) | 318 MB | 256 ms | 613 rps | 70 MB |
| distroless | 214 MB | 253 ms | 735 rps | 70 MB |
| alpine | 193 MB | 269 ms | 662 rps | 68 MB |
| compiled | 193 MB | 257 ms | 735 rps | 65 MB |

A few things stood out:

  • Cold start is essentially identical across all four. This surprised me — I expected --compile to win meaningfully here. It doesn’t, because the slim image’s Bun runtime starts so fast that there’s no daylight to win.
  • Distroless was faster than slim on the same Bun runtime. ~20% higher throughput on identical code. Best guess: less init / cgroup overhead on a tighter base.
  • The compiled binary matched distroless. No throughput penalty for embedding the runtime, contrary to my first sloppy measurement.
  • Alpine sat in the middle. musl libc costs ~10% throughput on this workload vs glibc, but you save ~20 MB.

Jim - Mildly impressed

What I picked for Honu

I went back and forth on this twice. The first draft of this post had me landing on distroless. Then I sat with the numbers for a day and shipped the compiled binary instead. Both are defensible — let me walk through the actual trade-off.

The case for distroless — the safer evolutionary upgrade. Same Dockerfile shape as slim, just a smaller Stage 2. If a Bun CVE drops, you rebuild Stage 2 only and your bundle is unchanged. When something goes weird in prod, docker run --entrypoint sh oven/bun:1 against the same Bun version gets you an interactive shell to poke at things. Source maps are a sibling file Sentry can read normally. It’s the boring, low-risk move, and there’s nothing wrong with that.
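The distroless swap really is just a smaller Stage 2. A minimal sketch, assuming a build stage named `build` that produced a self-contained `dist/` bundle (verify the `CMD` shape against the image tag you pin; the distroless image's entrypoint runs `bun` for you):

```dockerfile
# Stage 2: same runtime, stripped base. No shell, no apt.
FROM oven/bun:1-distroless
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["run", "dist/index.js"]
```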

The case for compiled — wins on the metrics that actually show up in a benchmark. Tied for fastest throughput. Smallest image at 193 MB. Tightest attack surface in the lineup: no Bun binary in $PATH, no shell, no busybox, fewer CVEs to track at all. One file, runtime embedded, no install dance.
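For comparison, a sketch of the compiled variant (entry path and binary name are illustrative):

```dockerfile
# Stage 1: compile the app and the Bun runtime into one binary.
FROM oven/bun:1 AS build
WORKDIR /app
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile
COPY . .
RUN bun build src/index.ts --compile --outfile /app/server

# Stage 2: hardened glibc base. No shell, no package manager, no Bun binary.
FROM cgr.dev/chainguard/glibc-dynamic
COPY --from=build /app/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```

Note there is nothing else to copy: the runtime, the bundle, and the dependencies all travel inside `server`.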

What made me flip:

  • The operational concerns I weighted toward distroless are real but mostly matter in a team context with on-call rotations. Honu is solo, I push direct to main, and I can rebuild quickly when something breaks.
  • bun build --compile is younger and its corners still bite. I hit one mid-build: under --compile, import.meta.dir reads through the embedded virtual FS, so a package.json read silently failed until I refactored to a JSON import. With bun run index.js on a normal file, that whole class of issue doesn’t exist. That’s a real maturity tax; I just decided I’d rather pay it once than carry the larger attack surface forever.
  • Patching Bun-runtime CVEs is genuinely cheaper with distroless (rebuild Stage 2, done). With compiled, every Bun update means re-running --compile. Small operational tax, but a real one.
  • Source-map symbolication has historically been more fragile with compiled binaries than with bun run dist/index.js. This has improved, but it’s a thing to verify before you bet on it. I verified.
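The import.meta.dir gotcha above is easy to reproduce in miniature. This is an illustrative repro, not Honu's code: the runtime read works when the file is on disk, but under --compile the same relative path resolves into the binary's embedded virtual filesystem and silently misses files that were never bundled.

```typescript
import { mkdtempSync, writeFileSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Stand-in for import.meta.dir: a real on-disk directory with a package.json.
const dir = mkdtempSync(join(tmpdir(), "compile-demo-"));
writeFileSync(join(dir, "package.json"), JSON.stringify({ version: "1.2.3" }));

// The fragile pattern: a *runtime* filesystem read relative to the module dir.
// Works here because the file exists on disk; under `bun build --compile`,
// import.meta.dir points into the embedded virtual FS and this can miss.
const pkg = JSON.parse(readFileSync(join(dir, "package.json"), "utf8"));
console.log(pkg.version); // "1.2.3"

// The fix that survives --compile: a static JSON import, resolved by the
// bundler at build time so the data is baked into the binary:
//
//   import pkg from "./package.json" with { type: "json" };
```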

If I were doing this for a team with a rotation, I’d probably still pick distroless. For a solo deploy where I own the whole stack, compiled is the bolder call that wins on size and security if you’re comfortable owning the maturity tradeoffs of --compile. I’m comfortable.

The contrarian take: Alpine. If you run a lot of small instances and image storage adds up, it’s the same 193 MB as compiled with a full shell in the image — easier to debug, slightly slower (~10% throughput hit from musl), larger attack surface. If you don’t want to deal with --compile at all but still want the smallest image, that’s the move.

The benchmark mistake I almost shipped

Here’s the embarrassing part. My first pass on this had the compiled binary at roughly half the throughput of slim. I was about to publish a blog post titled something like “the compiled approach looks great but actually loses.” Good story, wrong story.

The problem: I was running unlimited containers on my laptop. The compiled binary’s first run hit a cold filesystem cache and spent its entire benchmark window paying for that miss while every other image had already been pulled and cached during earlier test runs. No CPU limits, no warmup pass, no run discarding. Pure measurement noise dressed up as a result.

Two fixes turned it around:

  1. CPU and memory limits via cgroups. Without these, the host’s idle cores let any container look fast, and the bottleneck becomes whatever else is running on your laptop. With them, you measure the container, not the host.
  2. A warmup pass plus discarding the first run. Filesystem caches, JIT warmup, and connection pool initialization all front-load cost into run #1. If you don’t throw it away, it dominates.
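Fix #2 is mechanical but easy to skip. A minimal sketch of the discard step, with made-up rps numbers standing in for real load-generator output; note how run #1 would drag the mean down if it stayed in:

```shell
# Hypothetical rps from five runs; run #1 paid the cold-cache cost.
runs="412 731 738 729 740"

sum=0; count=0; first=1
for r in $runs; do
  # Throw run #1 away entirely instead of averaging it in.
  if [ "$first" -eq 1 ]; then first=0; continue; fi
  sum=$((sum + r)); count=$((count + 1))
done
mean=$((sum / count))
echo "mean rps after discarding run #1: $mean"
```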

After that, the compiled binary was tied with distroless. The “2× slower” finding was entirely an artifact of how I was measuring.

Dwight - False

If you’re benchmarking containers, do it under cgroup limits and discard your first run. Otherwise you’re benchmarking your laptop’s mood.

Caveats

  • This is a single-host laptop benchmark on a workload that’s mostly “JSON 200 OK”. Real production traffic with sqlite contention and AI calls will look different — and Honu has both, so I’ll re-measure under realistic load before I trust these numbers in anger.
  • Numbers are for a pinned Bun 1.x release and the oven/bun:1-* images circa April 2026. Bun’s runtime moves fast; re-check on minor bumps.
  • “Distroless and compiled win” is a conclusion about this workload (small Hono API, sqlite-bound, mostly handler dispatch). A CPU-heavy workload might prefer a different base. A workload that needs glibc-only native modules definitely will.

The takeaway

If you’re on the default oven/bun:1-slim and you’ve never thought about it, you have two clearly better options waiting for you:

  • Distroless if you want a free win with zero behavioral change. Same Dockerfile shape, smaller image, higher throughput, smaller attack surface. The boring upgrade. There’s nothing wrong with boring.
  • Compiled binary on chainguard/glibc-dynamic if you’re willing to absorb the maturity tax of --compile in exchange for the smallest image, tightest attack surface, and same throughput as distroless. The bolder upgrade. Solo or small-team deploys are where this shines.

I picked compiled for Honu after first writing this post recommending distroless. Both calls are defensible. Pick the one that matches your actual deploy constraints, not the one that sounds coolest on Twitter 🎉

Michael Scott - I knew exactly what to do