I Let AI Build My Personal Site


I’ve been meaning to rebuild my personal site for years. The old one was a template I barely touched. It worked, but it didn’t feel like mine.

So I tried something different. I used AI to build the whole thing.

Not “AI-generated slop pasted into a repo.” I mean a real collaboration. I planned the architecture, made the design calls, and directed every decision. Claude Code wrote the code, proposed ideas, and moved fast enough that I could iterate in real time instead of getting stuck on implementation details.

Here’s what that looked like.

The stack

Astro + Tailwind. Static site, zero unnecessary JavaScript. Markdown blog posts. Deployed to Netlify. Nothing fancy, which was the point. I wanted the site to feel crafted, not over-engineered.

The process

I started with a clear picture of what I wanted: monospace typography, warm neutrals, dark mode, and a layout that doesn’t look like every other dev blog. The kind of site that signals “engineer who cares about design” without screaming about it.

The workflow wasn’t just “prompt and pray.” I used a structured planning system called GSD (Get Shit Done) that breaks work into milestones, phases, and plans. Each phase goes through a discuss-plan-execute loop. Before writing any code, there’s a research step, a context-gathering session where I lock in decisions, and a detailed plan with acceptance criteria. Then executor agents run the plan, a code reviewer checks the output, and a verifier confirms the phase goal was actually met.

It sounds heavy, but it’s the “plan deliberately” part of my philosophy. Once the plan exists, execution is fast. We shipped the foundation in a single session: design tokens, color palette, typography, responsive layout, and dark mode with an anti-FOUC (flash of unstyled content) script so there’s no white flash on load. Then blog pages, an about page, an RSS feed, and Shiki syntax highlighting for code blocks.
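The anti-FOUC trick is worth spelling out. The theme has to be decided synchronously, in an inline script in `<head>`, before the first paint. Here’s a minimal sketch of the idea; `resolveTheme` is an illustrative helper name, not the site’s actual code:

```javascript
// Sketch of the anti-FOUC approach (resolveTheme is a hypothetical helper).
// The point: pick the theme synchronously, before first paint, so <html>
// carries the right class from the very first frame.
function resolveTheme(stored, systemPrefersDark) {
  // An explicit saved choice wins; otherwise fall back to the OS setting.
  if (stored === "dark" || stored === "light") return stored;
  return systemPrefersDark ? "dark" : "light";
}

// In the browser, the inline <head> script would run roughly:
//   const theme = resolveTheme(
//     localStorage.getItem("theme"),
//     window.matchMedia("(prefers-color-scheme: dark)").matches
//   );
//   document.documentElement.classList.toggle("dark", theme === "dark");

console.log(resolveTheme(null, true));    // "dark"  (no saved choice, OS dark)
console.log(resolveTheme("light", true)); // "light" (saved choice wins)
```

Because the script runs before any stylesheet-driven paint, a dark-mode user never sees a white flash.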

The interesting part wasn’t the speed. It was the iteration loop.

Where AI actually helped

The best moments were the ones where I could say “I don’t like this” and see three alternatives in seconds. We went through four accent colors before landing on dusty rose. I tried five different glyphs for the section markers before picking ✦. Each iteration was a rebuild and a preview, not a conversation about hypotheticals.

That feedback loop changes how you make decisions. You stop debating and start seeing. When it costs nothing to try something, you try more things. And you end up somewhere you wouldn’t have planned your way to.

AI also caught things I would have missed. View transitions broke the dark mode toggle because the event listener was bound to DOMContentLoaded, which fires only on a full page load; view transitions swap the DOM without one, so the toggle was never re-initialized after the first navigation. The fix was two lines, but I wouldn’t have found the root cause as fast on my own.
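The shape of that fix is easy to show. Astro’s documented `astro:page-load` event fires on the initial load and again after every view-transition swap, unlike DOMContentLoaded. This sketch simulates both bindings with a bare EventTarget so it runs outside a browser; on the real site the target is `document`:

```javascript
// Simulated with a plain EventTarget so this runs outside a browser;
// in the actual site the target is `document`.
const doc = new EventTarget();

let domContentLoadedRuns = 0;
let pageLoadRuns = 0;

// Broken binding: DOMContentLoaded fires once per full page load, and view
// transitions swap the DOM without one, so this never re-runs.
doc.addEventListener("DOMContentLoaded", () => { domContentLoadedRuns++; });

// The two-line fix: re-initialize on astro:page-load instead, which fires
// on the initial load AND after every client-side navigation.
doc.addEventListener("astro:page-load", () => { pageLoadRuns++; });

// Simulate one full load, then two client-side navigations.
doc.dispatchEvent(new Event("DOMContentLoaded"));
doc.dispatchEvent(new Event("astro:page-load"));
doc.dispatchEvent(new Event("astro:page-load"));
doc.dispatchEvent(new Event("astro:page-load"));

console.log(domContentLoadedRuns, pageLoadRuns); // 1 3
```

The handler bound to DOMContentLoaded ran once and went stale; the one bound to `astro:page-load` ran after every navigation, which is why the toggle keeps working.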

After each phase, a separate reviewer agent scanned the code for bugs and security issues, and a verifier agent checked whether the phase actually achieved its goal (not just whether tasks were completed). That distinction matters. “All tasks done” and “the feature works” are different things.

Once the features were built, I ran a design critique from Impeccable that evaluated the interface across visual hierarchy, typography, color usage, AI-slop detection, and ten other dimensions. It flagged that the about page photo was creating an oval instead of a circle (wrong aspect ratio), that post titles were competing with the accent glyph (both dusty rose, diluting the hierarchy), and that the reading page lacked visual rhythm between sections. Concrete, specific feedback that I could act on immediately. Not “consider improving the layout” but “make the photo w-36 h-36, change post titles to text-heading with hover:text-accent, add border separators and counts to reading sections.”

That kind of structured design review is something I’d normally skip on a personal site. Having a tool that runs it in seconds means it actually happens.

Where I had to push back

AI defaults to safe choices. Generic copy, conventional layouts, cautious suggestions. Every time I accepted the first suggestion, the site got blander. The good version came from pushing back.

“That color is too common.” “Remove the em dashes, they feel AI-generated.” “The tagline is redundant, delete it.” Those calls made the difference between a site that looks AI-built and one that looks intentional.

The lesson: AI is a builder, not a decision-maker. It moves fast in the direction you point it. If you don’t point it somewhere specific, it’ll give you something average.

What I shipped

  • A homepage with a stacked hero, a Currently section with work history, and a Writing + Reading grid
  • An about page with a real bio, not placeholder text
  • A reading page with books grouped by status (pulled from my Goodreads shelf)
  • Per-post Open Graph images generated at build time with satori + sharp
  • Site-wide view transitions with dark mode persistence
  • An RSS feed that preserves syntax highlighting

All of it static. All of it fast. All of it built in a couple of sessions.

What I’d tell someone trying this

Plan first. The reason this worked is that I knew what I wanted before I started building. AI amplifies your direction. If your direction is vague, it amplifies vagueness.

Have opinions. Say no to suggestions that feel generic. Push for the version that feels like you.

Ship fast, iterate in the browser. Don’t spend time imagining what something will look like. Build it, look at it, change it. The cost of trying is almost zero now.

And don’t hide the fact that you used AI. The craft is in the decisions, not in typing the code.

Tools used