Forty-one to eighty-eight
Showrunner’s mobile PageSpeed score was 41 (boo!). It is now 88 (yay?). This is a technical note on how that happened, mostly because the most useful lesson from the whole exercise was far from what I expected, and maybe it will help someone who goes Googling for a solution.
Showrunner is built on Next.js, React, TypeScript and Tailwind, with Supabase as the backing store. The public pages stream server-rendered content; the admin side is where I actually write posts. I’d noticed that the site’s Performance score wasn’t terrific: 41/100. I know enough not to obsess over the specific score, but also that it could be drastically improved by focusing on the individual metrics that feed into the big number.
I had some ideas about where to start, but I also recruited Claude Code and OpenAI’s Codex at various points, whether for diagnosing issues and suggesting approaches or for carrying out the larger structural changes.
When you break down that score of 41, the starting position wasn’t a disaster. CLS was already zero. Images were AVIF/WebP. next/image was everywhere it was supposed to be, except, crucially, inside markdown content, where the rendered HTML was being injected via dangerouslySetInnerHTML and bypassing the image pipeline entirely. Claude flagged this as the likely LCP bottleneck. It was right.
Codex proposed a four-part plan. I agreed on the targets, pushed back on the scope (a “homepage-only” markdown renderer, when the cards in question render on /writing, /tag/[slug], /search, and inside the load-more flow), and asked for the changes to be sequenced so each one could be attributed to a number. That last bit turned out to matter more than anything else.
The first four
Deleting app/loading.tsx was a one-line change. The root loading skeleton was wrapping the entire public tree, which meant Lighthouse was measuring the skeleton as the LCP candidate rather than the actual timeline content streaming in behind it. Score climbed into the mid-50s.
Wrapping getAllSiteSettings in React 19’s cache() deduped three Supabase round-trips per request down to one. Small but real.
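The shape of the win, sketched with a hand-rolled memoizer standing in for React’s cache() (React scopes its cache per server request; this stand-in memoizes for the process lifetime, which is enough to show the idea — the fetch body is hypothetical, the real one queries Supabase):

```typescript
// Stand-in for React 19's cache(): repeated callers share one promise.
function dedupe<T>(fn: () => Promise<T>): () => Promise<T> {
  let pending: Promise<T> | null = null;
  return () => (pending ??= fn());
}

let roundTrips = 0;

// Hypothetical fetcher in place of the real Supabase query.
const getAllSiteSettings = dedupe(async () => {
  roundTrips += 1;
  return { title: "Showrunner" };
});

async function renderRequest(): Promise<number> {
  // Three components each ask for settings during one render...
  await Promise.all([
    getAllSiteSettings(),
    getAllSiteSettings(),
    getAllSiteSettings(),
  ]);
  return roundTrips; // ...but only one round-trip happens.
}
```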
Rewriting getTimelineItems to use limit + 1 instead of count: "exact" saved Postgres a full row count on every query. The performance win was minor; this was more of a hygiene change.
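In sketch form (the Supabase call is shown as a comment, and the pagination helper is a hypothetical illustration of what the extra row buys):

```typescript
// Instead of asking Postgres to count every matching row:
//   .select("*", { count: "exact" })
// fetch one row past the page size and let its presence answer
// "is there a next page?":
//   .select("*").order("published_at", { ascending: false }).limit(limit + 1)

function paginate<T>(rows: T[], limit: number): { items: T[]; hasMore: boolean } {
  const hasMore = rows.length > limit; // the extra row, if present
  return { items: rows.slice(0, limit), hasMore };
}
```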
Then the big one: overriding md.renderer.rules.image so markdown-rendered images emitted /_next/image URLs with srcset, sizes, dimensions, and fetchpriority for the first image on a priority card. (Cards are the React components that render each post in the timeline — one per post type, so a TextPostCard, an AlbumPostCard, a LetterboxdCard, and so on.) The dimensions came from the upload handler reading them out of the buffer at the time of upload — stored in posts.metadata.inline_images, JSONB, no migration required. After this landed: Performance 65, LCP down from 15.2s to 4.5s.
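A sketch of the rule override — the URL shape matches Next’s image optimizer, but the dimension lookup, width list, and sizes value here are illustrative rather than Showrunner’s exact ones:

```typescript
// Subset of markdown-it's image token that the rule reads.
interface ImageToken {
  attrGet(name: string): string | null;
  content: string; // alt text
}

// Hypothetical lookup backed by posts.metadata.inline_images.
const dimensions: Record<string, { width: number; height: number }> = {
  "/uploads/cover.jpg": { width: 1200, height: 800 },
};

// Replacement image rule: emit /_next/image URLs with srcset, sizes,
// intrinsic dimensions, and fetchpriority on the card's first image.
function renderImage(
  tokens: ImageToken[],
  idx: number,
  env: { firstImageSeen?: boolean },
): string {
  const src = tokens[idx].attrGet("src") ?? "";
  const optimized = (w: number) =>
    `/_next/image?url=${encodeURIComponent(src)}&w=${w}&q=75`;
  const srcset = [640, 1080, 1920]
    .map((w) => `${optimized(w)} ${w}w`)
    .join(", ");
  const isFirst = !env.firstImageSeen;
  env.firstImageSeen = true;
  const d = dimensions[src];
  return (
    `<img src="${optimized(1080)}" srcset="${srcset}"` +
    ` sizes="(max-width: 768px) 100vw, 768px"` +
    (d ? ` width="${d.width}" height="${d.height}"` : "") +
    (isFirst ? ` fetchpriority="high"` : ` loading="lazy"`) +
    ` alt="${tokens[idx].content}">`
  );
}

// Wiring, roughly (markdown-it's real rule signature also passes
// options and the renderer itself):
//   md.renderer.rules.image = (tokens, idx, _opts, env) =>
//     renderImage(tokens, idx, env);
```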
The change that did the work
PageSpeed at 65 was respectable, but it wasn’t good.
Wiring in the bundle analyzer made the next move obvious. A 109 KB chunk on the public bundle was entirely markdown-it and its dependencies — markdown-it-footnote, entities, linkify-it, punycode, mdurl. The homepage was shipping a full markdown parser to the client, even though the server already had all the markdown and was perfectly capable of rendering it. The cards were importing renderMarkdown directly, the dynamic-imported LoadMoreButton was prefetching them, and so the parser was riding along with every public page.
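For reference, wiring the analyzer in is a small config change (a next.config.mjs sketch; assumes @next/bundle-analyzer is installed):

```typescript
// @next/bundle-analyzer wraps the existing config and emits a
// treemap of each bundle when ANALYZE=true is set at build time.
import bundleAnalyzer from "@next/bundle-analyzer";

const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === "true",
});

export default withBundleAnalyzer({
  // ...existing Next.js config
});
```

Run it with `ANALYZE=true next build` and the report opens per bundle.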
The fix was conceptually simple and structurally invasive: render HTML at the query layer, attach it to the post as content_html, and strip the @/lib/markdown import out of every card component. Seven cards lost their markdown imports. RecapPostCard lost its editorial-slicing helper (the slicing now happens server-side, once). VideoPostCard’s embed extraction simplified for the same reason. The RSS route got a { optimizeImages: false } flag so feed readers receive absolute URLs rather than /_next/image paths.
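The shape of the change at the query layer, sketched (names other than content_html are hypothetical):

```typescript
interface Post {
  slug: string;
  content: string; // raw markdown from Supabase
}

interface RenderedPost extends Post {
  content_html: string; // HTML rendered once, on the server
}

// Render at the query layer so card components can drop their
// @/lib/markdown imports and simply inject content_html.
function attachHtml(
  posts: Post[],
  renderMarkdown: (md: string) => string,
): RenderedPost[] {
  return posts.map((p) => ({ ...p, content_html: renderMarkdown(p.content) }));
}
```

A card then renders `content_html` via dangerouslySetInnerHTML and never touches the parser; only the admin editor still imports markdown-it.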
Bundle analyzer confirmed it: markdown-it gone from public bundles, shipping only with the admin editor. Performance jumped from 65 to 88. LCP 4.5s to 2.9s. TBT nearly halved — exactly what you’d expect from pulling a parser off the main thread.
This was the whole phase, really. The rest was setup and cleanup.
The lesson worth writing down
At 88, PageSpeed flagged “613 KiB unused JavaScript” and “99 KiB unused CSS.” I went looking for what was left to cut. There wasn’t much. Total production CSS was 8.7 KB gzipped, so Lighthouse’s “99 KiB unused” figure was larger than the entire stylesheet, presumably an artefact of how its coverage accounting works. The top four JS chunks were framework, react-dom-client, the Next.js client runtime, and main. All untrimmable. We were sitting on the framework floor.
The only real lever left was Google Analytics. afterInteractive runs gtag.js as soon as React hydration finishes; lazyOnload waits for window.onload. Deferring further was the first thing that occurred to me. I changed two strategy attributes.
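Concretely, the change was just this (a next/script fragment; the measurement ID is invented, and this is the version that got reverted):

```tsx
import Script from "next/script";

// Before (and after the revert): gtag.js loads once hydration finishes.
//   <Script src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"
//           strategy="afterInteractive" />

// The reverted experiment: waiting for window.onload instead.
export const Analytics = () => (
  <Script
    src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"
    strategy="lazyOnload"
  />
);
```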
Performance went from 88 to 67. TBT went from 250ms to 870ms. Oh dear.
The theory had been plausible: get GA out of the hydration window, free up the main thread, save some TBT. The measurement said the opposite. On Lighthouse’s throttled mobile, lazyOnload shifted GA’s parse and eval to land inside the TBT measurement window, where it counted as blocking. afterInteractive had let it ride alongside hydration, which Lighthouse already budgets for. The counterintuitive answer was the right one: defer less, not more.
Reverted, documented, moved on.
What I’d take from this
Sequence changes and measure each one. Without the per-change attribution, I’d never have known the server-side markdown render was worth twenty-three points on its own — and worth the structural disruption it caused.
The framework floor is real. Once React and Next.js dominate the bundle, Lighthouse’s “unused JS” metric becomes misleading. It’s coverage noise rather than a lever.
“Homepage-only” is almost always the wrong scope boundary when components are shared. The pre-rendering refactor worked because it was the same code path everywhere.
And measurement beats intuition, every time. The lazyOnload reversal cost twenty-one points on a change that any reasonable person would have shipped without checking. That one’s now an invariant in CLAUDE.md, recorded specifically so that neither I nor Claude tries it again.