
Forty-one to eighty-eight

Showrunner’s mobile PageSpeed score was 41 (boo!). It is now 88 (yay?). This is a technical note on how that happened, mostly because the most useful lesson from the whole exercise was far from what I expected, and maybe it will help someone who goes Googling for a solution.

Showrunner is built on Next.js, React, TypeScript and Tailwind, with Supabase as the backing store. The public pages stream server-rendered content; the admin side is where I actually write posts. I’d noticed that the site’s Performance score wasn’t terrific: 41/100. I know enough not to obsess over the specific score, but also that it could be drastically improved by focusing on the individual metrics that contribute to the big number.

I had some ideas about where to start, but I also recruited Claude Code and OpenAI’s Codex at various points, whether for diagnosing issues and suggesting approaches or for carrying out the larger structural changes.

When you break down that score of 41, the starting position wasn’t a disaster. CLS was already zero. Images were AVIF/WebP. next/image was everywhere it was supposed to be — except, crucially, inside markdown content, where the rendered HTML was being injected via dangerouslySetInnerHTML and bypassing the image pipeline entirely. Claude flagged this as the likely LCP bottleneck. It was right.

Codex proposed a four-part plan. I agreed on the targets, pushed back on the scope (a “homepage-only” markdown renderer, when the cards in question render on /writing, /tag/[slug], /search, and inside the load-more flow), and asked for the changes to be sequenced so each one could be attributed to a number. That last bit turned out to matter more than anything else.

The first four

Deleting app/loading.tsx was a one-line change. The root loading skeleton was wrapping the entire public tree, which meant Lighthouse was measuring the skeleton as the LCP candidate rather than the actual timeline content streaming in behind it. Score climbed into the mid-50s.

Wrapping getAllSiteSettings in React 19’s cache() deduped three Supabase round-trips per request down to one. Small but real.
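The idea is easiest to see with a simplified stand-in. React’s cache() memoizes per request in Server Components; this sketch memoizes a zero-argument async loader so repeated calls share one round-trip (cacheOnce, getAllSiteSettings and the fetch counter here are illustrative, not Showrunner’s real code — and unlike React’s cache(), this version caches forever rather than per request):

```typescript
// Simplified stand-in for React 19's cache(): memoize the promise itself,
// so concurrent and repeated callers all await the same round-trip.
let fetchCount = 0;

function cacheOnce<T>(loader: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= loader());
}

const getAllSiteSettings = cacheOnce(async () => {
  fetchCount += 1; // pretend this is the Supabase query
  return { title: "Showrunner" };
});

// Three call sites, one underlying fetch.
getAllSiteSettings();
getAllSiteSettings();
getAllSiteSettings();
```

The key detail is caching the promise rather than the resolved value, so even calls that arrive before the first query resolves still dedupe.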

Rewriting getTimelineItems to use limit + 1 instead of count: "exact" saved Postgres a full row count on every query. The performance win was minor; this was more of a hygiene change.
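The limit + 1 pattern itself is tiny: fetch one extra row to learn whether a next page exists, instead of asking Postgres for an exact total. A sketch, with a hypothetical paginate() helper standing in for the real query-layer code:

```typescript
// Given rows fetched with limit + 1, trim to the page size and use the
// existence of the extra row as the "has more" signal.
function paginate<T>(rows: T[], limit: number): { items: T[]; hasMore: boolean } {
  return { items: rows.slice(0, limit), hasMore: rows.length > limit };
}

// With Supabase, `.range(offset, offset + limit)` is inclusive, so it
// returns up to limit + 1 rows — exactly what paginate() expects.
const page = paginate([1, 2, 3, 4], 3);
// page.items is the visible page; page.hasMore drives the load-more button
```

No count query, no extra round-trip; the only cost is one surplus row per page.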

Then the big one: overriding md.renderer.rules.image so markdown-rendered images emitted /_next/image URLs with srcset, sizes, dimensions, and fetchpriority for the first image on a priority card. (Cards are the React components that render each post in the timeline — one per post type, so a TextPostCard, an AlbumPostCard, a LetterboxdCard, and so on.) The dimensions came from the upload handler reading them out of the buffer at the time of upload — stored in posts.metadata.inline_images, JSONB, no migration required. After this landed: Performance 65, LCP down from 15.2s to 4.5s.
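The heart of that renderer override is just string-building against the Next.js image optimizer’s URL shape (/_next/image?url=…&w=…&q=…). This helper is an illustrative sketch, not Showrunner’s actual rule — the real override also emits width, height, sizes, and fetchpriority attributes from the stored metadata:

```typescript
// Build a srcset of Next.js image-optimizer URLs for a given source image.
function nextImageSrcSet(src: string, widths: number[], quality = 75): string {
  return widths
    .map((w) => `/_next/image?url=${encodeURIComponent(src)}&w=${w}&q=${quality} ${w}w`)
    .join(", ");
}

const srcset = nextImageSrcSet("/uploads/cover.jpg", [640, 1080]);
// Each candidate points at the optimizer, which serves AVIF/WebP at that width.
```

Inside an overridden md.renderer.rules.image, a string like this replaces the bare src that markdown-it would otherwise emit.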

The change that did the work

PageSpeed at 65 was respectable, but it wasn’t good.

Wiring in the bundle analyzer made the next move obvious. A 109 KB chunk on the public bundle was entirely markdown-it and its dependencies — markdown-it-footnote, entities, linkify-it, punycode, mdurl. The homepage was shipping a full markdown parser to the client, even though the server already had all the markdown and was perfectly capable of rendering it. The cards were importing renderMarkdown directly, the dynamic-imported LoadMoreButton was prefetching them, and so the parser was riding along with every public page.
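Wiring in the analyzer is the standard @next/bundle-analyzer setup — a config fragment, sketched here in next.config.ts form (your existing config goes in the wrapped object):

```typescript
// next.config.ts — bundle analyzer, enabled only when you ask for it
import bundleAnalyzer from "@next/bundle-analyzer";

const withBundleAnalyzer = bundleAnalyzer({
  enabled: process.env.ANALYZE === "true",
});

export default withBundleAnalyzer({
  // …existing Next.js config…
});
```

Then `ANALYZE=true npm run build` opens the treemap that made the 109 KB markdown-it chunk impossible to miss.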

The fix was conceptually simple and structurally invasive: render HTML at the query layer, attach it to the post as content_html, and strip the @/lib/markdown import out of every card component. Seven cards lost their markdown imports. RecapPostCard lost its editorial-slicing helper (the slicing now happens server-side, once). VideoPostCard’s embed extraction simplified for the same reason. The RSS route got a { optimizeImages: false } flag so feed readers receive absolute URLs rather than /_next/image paths.
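Structurally, the refactor amounts to this: render once at the query layer and hand cards pre-rendered HTML. A minimal sketch — the types, the trivial renderMarkdown stub, and attachHtml are illustrative stand-ins for Showrunner’s real modules:

```typescript
// Posts come out of the query layer with HTML already attached,
// so no card component ever imports the markdown parser.
type Post = { id: string; content_md: string };
type TimelinePost = Post & { content_html: string };

// Stand-in for the server-only markdown-it pipeline.
const renderMarkdown = (md: string): string => `<p>${md}</p>`;

function attachHtml(rows: Post[]): TimelinePost[] {
  return rows.map((p) => ({ ...p, content_html: renderMarkdown(p.content_md) }));
}

const items = attachHtml([{ id: "1", content_md: "hello" }]);
// items[0].content_html is ready to inject; the client ships no parser
```

Because the cards only ever touch content_html, the bundler can prove markdown-it is server-only and drop it from every public chunk.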

Bundle analyzer confirmed it: markdown-it gone from public bundles, shipping only with the admin editor. Performance jumped from 65 to 88. LCP 4.5s to 2.9s. TBT nearly halved — exactly what you’d expect from pulling a parser off the main thread.

This was the whole phase, really. The rest was setup and cleanup.

The lesson worth writing down

At 88, PageSpeed flagged “613 KiB unused JavaScript” and “99 KiB unused CSS.” I went looking for what was left to cut. There wasn’t much. Total production CSS was 8.7 KB gzipped — Lighthouse’s “99 KiB unused” figure was larger than the entire stylesheet, an artefact of how its coverage accounting works. The top four JS chunks were framework, react-dom-client, the Next.js client runtime, and main. All untrimmable. We were sitting on the framework floor.

The only real lever left was Google Analytics. afterInteractive runs gtag.js as soon as React hydration finishes; lazyOnload waits for window.onload. Deferring further was the first thing that occurred to me. I changed two strategy attributes.
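For reference, the whole experiment was a one-attribute swap on the loader — a next/script config fragment, with the measurement ID a placeholder:

```tsx
import Script from "next/script";

// afterInteractive: gtag.js loads once React hydration finishes.
// lazyOnload: gtag.js waits for window.onload instead.
export function Analytics() {
  return (
    <Script
      src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX"
      strategy="afterInteractive" // the experiment swapped this to "lazyOnload"
    />
  );
}
```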

Performance went from 88 to 67. TBT went from 250ms to 870ms. Oh dear.

The theory had been plausible: get GA out of the hydration window, free up the main thread, save some TBT. The measurement said the opposite. On Lighthouse’s throttled mobile, lazyOnload shifted GA’s parse and eval to land inside the TBT measurement window, where it counted as blocking. afterInteractive had let it ride alongside hydration, which Lighthouse already budgets for. The counterintuitive answer was the right one: defer less, not more.

Reverted, documented, moved on.

What I’d take from this

Sequence changes and measure each one. Without the per-change attribution, I’d never have known the server-side markdown render was worth twenty-three points on its own — and worth the structural disruption it caused.

The framework floor is real. Once React and Next.js dominate the bundle, Lighthouse’s “unused JS” metric becomes misleading. It’s coverage noise rather than a lever.

“Homepage-only” is almost always the wrong scope boundary when components are shared. The pre-rendering refactor worked because it was the same code path everywhere.

And measurement beats intuition, every time. The lazyOnload reversal cost twenty-one points on a change that any reasonable person would have shipped without checking. That one’s now an invariant in CLAUDE.md, recorded specifically so I don’t try it again, and nor does Claude.

NMS Ceefax

nmsceefax.co.uk

A fully working Ceefax service, kept up to date with real news, weather, sport and TV listings, viewable through an interactive on-screen remote that behaves exactly as you remember. Nathan Dane has been building this since 2015; what started as a home-broadcast hobby in his attic grew into custom PHP scrapers, hand-soldered VBIT-Pi inserter boards, and eventually a YouTube stream of Pages From Ceefax decoded through period-correct hardware. Page 302 for the football.

There’s a Good Reason You Can’t Concentrate

www.nytimes.com

Cal Newport argues that just as diet and exercise became cultural common sense in a generation, we need a similar shift around “mental fitness” by treating sustained attention as something to train rather than concede. The framing is useful, though it covers ground Nicholas Carr mapped fifteen years ago in “The Shallows” (the net as “an interruption system, a machine geared for dividing attention”). I am unsure whether a fitness-style cultural shift can actually take hold when the incentives of every device in your pocket run the other way.

The Last Good Thing

theamericanscholar.org

Jess Love acquires a binder of 92 DVDs from a neighbour’s porch and talks herself into a moral framework around physical media, parenting, and opting out of streaming. The real subject is the reminiscence bump: why the technologies of your formative years feel not just familiar but correct, and how nostalgia oscillates between protective instinct and self-delusion.

Endgame for the Open Web

anildash.com

Anil Dash on how the open web is being eroded. At the end he talks about building “good AI” and alternative infrastructures. I wonder what concrete coordination, funding models and incentive structures would enable those projects to scale and compete with Big Tech before the open web’s critical institutions collapse.

Showrunner

A personal publishing system built to scratch a very specific itch: I wanted a Tumblr-style blog with editorial design sensibilities, and nothing that existed was quite right.

Showrunner is a headless CMS backed by Supabase and served through Next.js. It supports the post types I actually use—text, links, quotes, photos, albums—and pulls in activity from Letterboxd, Goodreads, Backloggd, and Pinboard to create a unified timeline of everything I’m reading, watching, playing, and bookmarking.

The design takes cues from Frank Chimero and Max MacWright: warm off-white backgrounds, a serif body column, restrained typography. Dark mode, naturally. The kind of site that looks like it was made by a person rather than a platform.

Some details worth noting:

  • Bluesky cross-posting, so I don’t have to choose between owning my content and participating in public conversation
  • Full-text search across posts and external feeds
  • Monthly Last.fm listening recaps generated automatically
  • Album posts enriched with metadata from Last.fm and MusicBrainz
  • Letterboxd reviews enriched with TMDB director credits and tag scraping

The whole thing was built collaboratively with Claude Code—architecture decisions, implementation, the editorial restyle, all of it. That process deserves its own post at some point.

The name is borrowed from television production. The showrunner is the person responsible for the creative direction of a series. Felt appropriate for a system whose entire purpose is giving one person control over how their work appears on the web.