Note: This post was written with AI assistance (Claude). The opinions and experiences are entirely my own.


I do a lot of things. Networks, firewalls, servers, Kubernetes, web development — I can work across all of it. The constraint isn't capability, it's time. Ten-hour work days, two hours of driving, and whatever sleep I can manage leaves about four or five hours to myself each evening. That's the window for everything else — eating, existing, talking to people, and building things.

When three websites needed to either get built or significantly overhauled in the same stretch of time, I needed to move fast. That's where Claude came in — not as a replacement for knowing what I was doing, but as a way to turn a clear technical direction into working code without spending three evenings on boilerplate. I'd describe what I wanted, review what came out, correct it, and iterate. The decisions were mine. The typing was shared.

Here's what shipped.


cmunroe.us — Migrating Away from Hugo

This site used to be a Hugo static site hosted on GitLab Pages. It worked fine, but I'd hit the ceiling of what I wanted to do with it. Hugo is great if you want a static blog and nothing else — but I wanted server-side rendering, a contact form, an RSS feed, a real sitemap, JSON-LD structured data, and a portfolio section with live screenshots. Static generation was fighting me at every step.

So I rewrote it from scratch in Node.js and Express. No framework, no CMS, no build step. Markdown posts live in content/posts/ as plain files with YAML frontmatter. The server reads them at startup, caches them in memory, and renders HTML on request. Gray-matter for frontmatter parsing, marked for the Markdown-to-HTML conversion.

The things I added that Hugo would have made painful:

  • Server-rendered post pages with a table of contents generated from headings, reading-time estimates, and related posts matched by overlapping tags
  • Portfolio with live screenshots — fetched from the WordPress mshots service, cached to disk, and served through an internal API endpoint so the browser never makes external requests. A polling loop on the portfolio page hits /api/screenshot/:name/status and swaps placeholders for real screenshots once they're ready.
  • GitLab commit graph in the sidebar — pulled from the public calendar JSON endpoint and rendered as a colour-coded grid of the last 13 weeks
  • Contact form with rate limiting and nodemailer for SMTP
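The related-posts matching is simple set overlap: score every other post by shared tags, take the top few. A sketch of how it could work — `relatedPosts()` is an assumed name, not the site's actual function:

```javascript
// Rank other posts by how many tags they share with the current one.
function relatedPosts(current, all, limit = 3) {
  return all
    .filter(p => p.slug !== current.slug)
    .map(p => ({
      post: p,
      overlap: p.tags.filter(t => current.tags.includes(t)).length,
    }))
    .filter(r => r.overlap > 0)          // no shared tags, no relation
    .sort((a, b) => b.overlap - a.overlap)
    .slice(0, limit)
    .map(r => r.post);
}
```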

The whole thing deploys as a Docker image to K3s. Two replicas in production, one in staging, Keel watching the registry for updates.
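On the Keel side, the watching is driven by annotations on the Deployment. A sketch of what that can look like — the policy and schedule here are illustrative, not the cluster's actual values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cmunroe-us
  annotations:
    keel.sh/policy: force             # redeploy whenever the tag is re-pushed
    keel.sh/trigger: poll             # poll the registry instead of webhooks
    keel.sh/pollSchedule: "@every 1m"
spec:
  replicas: 2                         # staging runs the same spec with one
```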


heatherdesmond.com — Real Estate, Live Data

This one was already running but needed meaningful improvements. Heather is a Broker Associate at Home & Ranch Sotheby's International Realty on the Central Coast.

The interesting technical piece here is the data pipeline. The brokerage doesn't have an open public API, so the app scrapes their GraphQL endpoint hourly, extracts listing data, and stores it in PostgreSQL. Each listing gets a server-rendered property detail page with photos, maps, price history, and school district info. There's a subscriber list for email alerts when new listings hit, and a notification digest that batches new listings into a single email per subscriber rather than hammering inboxes.
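The digest batching can be as simple as filtering on a per-subscriber watermark. A sketch under assumed field names (`firstSeen`, `lastNotified`) — not the app's actual schema:

```javascript
// Batch the hour's new listings into one email per subscriber.
// Anything the subscriber has already been told about is filtered out
// by comparing each listing's timestamp to a per-subscriber watermark.
function buildDigests(subscribers, listings) {
  const digests = [];
  for (const sub of subscribers) {
    const fresh = listings.filter(l => l.firstSeen > sub.lastNotified);
    if (fresh.length > 0) digests.push({ to: sub.email, listings: fresh });
  }
  return digests;
}
```

One send per subscriber per run, no matter how many listings landed.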

The scraping is the part that bites you. The brokerage CDN rate-limits aggressively — hit it too fast and you get HTTP 429s, which means you end up storing lower-quality fallback images instead of the full-res CDN URLs you wanted. There's a 500ms delay between photo scrape requests and a flag that tells the upsert to preserve existing DB photos if the scrape returned nothing for that listing. Small thing, matters a lot.
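Both mitigations sketch out to a few lines each. The names here (`mergePhotos`, `scrapeAll`, `fetchPhotos`) are illustrative stand-ins, not the app's actual code:

```javascript
const sleep = ms => new Promise(r => setTimeout(r, ms));

// Decide which photo set to store: new photos win, but an empty scrape
// must not clobber what's already in the database.
function mergePhotos(existing, scraped, preserveOnEmpty = true) {
  if (scraped.length === 0 && preserveOnEmpty) return existing;
  return scraped;
}

// Fetch photos one listing at a time with a 500ms gap between requests
// to stay under the CDN's rate limit. fetchPhotos is the real scraper.
async function scrapeAll(listings, fetchPhotos) {
  const results = {};
  for (const l of listings) {
    results[l.id] = mergePhotos(l.photos, await fetchPhotos(l.id));
    await sleep(500);
  }
  return results;
}
```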

California real estate law requires agent and brokerage DRE license numbers on every page and in every outbound email. That's not optional, so it's baked into every template. The footer, the email headers, every property page — all of it carries both numbers.


addisonrealm.com — Cosmetologist Portfolio

Addison is a cosmetologist in Ventura, CA. She needed a site that could showcase her work, explain her services, and let clients reach her.

This one is the simplest of the three architecturally — Node.js and Express serving static HTML, with a PostgreSQL-backed contact form and a lookbook gallery. The interesting piece is the image handling. Clients send iPhone photos that are several megabytes each. The server runs sharp on startup to generate optimised thumbnails and resized versions, processing them one at a time to avoid OOMKill from concurrent full-resolution decodes. The lookbook page loads thumbnails and opens full-size optimised images in a lightbox with keyboard navigation and swipe support.
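The OOM fix is nothing cleverer than sequential awaiting — finish one full-resolution decode before starting the next, instead of kicking them all off with Promise.all. A sketch, with `optimiseImage()` standing in for the real sharp pipeline (something like `sharp(src).resize(1600).jpeg({ quality: 80 }).toFile(dest)`):

```javascript
// Process images strictly one at a time so only one full-resolution
// decode is ever in memory. optimiseImage is injected so this sketch
// stays standard-library-only.
async function processImages(files, optimiseImage) {
  for (const file of files) {
    await optimiseImage(file); // next decode starts only after this one finishes
  }
}
```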

The contact form pre-selects the service dropdown based on a URL query param — the services page links to /contact/?service=extensions and so on. Small thing, but it makes the form feel intentional rather than generic.
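The pre-select is a few lines of client-side JS. A sketch — `preselectService()` is an assumed name, and the real page would set `select.value` from the DOM directly:

```javascript
// Pick the dropdown value from ?service=... if it names a real option;
// otherwise leave the form generic.
function preselectService(search, optionValues) {
  const requested = new URLSearchParams(search).get('service');
  return requested && optionValues.includes(requested) ? requested : null;
}
```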

One legal note worth mentioning: cosmetology licenses in California expire, and the site carries the license number. There's a note in my ops documentation to remind Addison to renew before June 2026. The kind of thing that's obvious until you forget it.


The Common Thread

All three sites share the same general architecture — Node.js, Express 5, no frontend build step, Kubernetes deployment, Cloudflare Tunnel for external access. The consistency makes maintenance easier. When I fix something in one codebase, I know where to look in the others.

What I did differently this time compared to past projects: I added test coverage from the start. Each app now has a Vitest + Supertest suite covering the key API routes and validation paths, and a CI job in the GitLab pipeline that runs tests before the Docker image builds. Not comprehensive, but enough to catch the obvious regressions.
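As an illustration of the kind of validation path those suites cover — the real tests run through Vitest + Supertest against the Express routes, but the core check extracts cleanly into a pure function. `validateContact()` and its rules are assumptions, not the apps' actual code:

```javascript
// Return the list of invalid fields for a contact-form submission.
// An empty array means the submission passes validation.
function validateContact({ name, email, message }) {
  const errors = [];
  if (!name || !name.trim()) errors.push('name');
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email || '')) errors.push('email');
  if (!message || message.trim().length < 10) errors.push('message');
  return errors;
}
```

A function like this is trivially unit-testable, and the Supertest layer then only has to confirm the route wires it up and returns the right status code.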

The other thing I leaned into was proper mobile support from the beginning rather than bolting it on later. Hamburger nav, 44px minimum touch targets, sensible breakpoints, container padding that actually works on a 375px screen. It's easy to build something that looks fine on a 1440p monitor and falls apart on a phone when the monitor is where all your testing happens. I made a point of testing on mobile early this time.


Three sites live in production in one week is a reasonable pace. The K3s cluster is doing what it's supposed to do — new deployments show up in under a minute after a push to the registry, Keel handles the rollouts, Cloudflare handles the edge. The stack is boring in the best way.

What's next: PostgreSQL backups, Loki log aggregation, and probably a few more posts about things that went wrong.