Experiments

Thoughts, prototypes, and works in progress.

6 Feb 2026

The Enchanted Forest and the Extremely Important Pizza Pie: Building a Text Adventure for the Kid Who Thought Books Were Actual Magic

When I was ten years old, I discovered choose-your-own-adventure books and became convinced they involved some form of sorcery. The premise seemed impossible. You make a choice at the end of a page, flip to page 47 or page 83, and the story just continues perfectly from there. How did the book know what I picked? I genuinely spent weeks trying to figure out the trick. I was not the sharpest tool in the shed. Eventually it dawned on me that the authors had simply written multiple paths and the page numbers were just addresses. The revelation was both disappointing and fascinating.

That memory resurfaced recently when I decided to build The Enchanted Forest and the Extremely Important Pizza Pie, a React-based text adventure game inspired by those same books. This time I would be the one creating the illusion of magic, except with TypeScript instead of numbered pages.

The technical foundation is React 19 and TypeScript. I architected the narrative engine as a directed graph system where story logic lives completely separate from the UI implementation. The story data is structured as a strongly-typed collection of nodes. Each node contains narrative content, decision branches, and state triggers. TypeScript proved invaluable here. By enforcing strict type safety on node connections, I effectively eliminated dead ends and broken links at compile time. The user never hits a choice that leads nowhere because the compiler would not let me build the app in the first place.
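The post does not show the actual node schema, but the compile-time guarantee can be sketched roughly like this. The node names and story text below are invented for illustration; the key idea is that `Record<NodeId, StoryNode>` forces every node to exist, and every `next` field must name a real node, so a typo or a missing destination fails the build:

```typescript
// Minimal sketch of a typed story graph. Node names and text are
// illustrative, not the game's real content.
type NodeId = "clearing" | "pizzaOven" | "beeGrove";

interface Choice {
  label: string;
  next: NodeId; // must be a valid NodeId — checked at compile time
}

interface StoryNode {
  text: string;
  choices: Choice[];
}

// Record<NodeId, StoryNode> means omitting any node is a compile error,
// and a choice pointing at a nonexistent node is a compile error too.
const story: Record<NodeId, StoryNode> = {
  clearing: {
    text: "You stand in a moonlit clearing.",
    choices: [
      { label: "Follow the smell of pizza", next: "pizzaOven" },
      { label: "Investigate the humming", next: "beeGrove" },
    ],
  },
  pizzaOven: { text: "A warm oven glows between the trees.", choices: [] },
  beeGrove: { text: "Jazz drifts from a hollow log.", choices: [] },
};
```

With this shape, "no dead ends" is not a test you run; it is a constraint the type checker enforces on every edit to the story data.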

State management uses React Context to handle dynamic elements like character selection and the Companion system. This allows the application to track complex user progress. Unlocking the Jazz Bees or rescuing Sir Quackalot gets instantly reflected in the HUD and available narrative options. The rendering layer stays lightweight, focusing solely on presenting the current state.
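The post does not publish the provider code, but the progress state a Context like this might hold can be sketched as a plain reducer, which React would consume via `useReducer` plus a Context provider. Field and action names below are my guesses; only the companion names come from the post:

```typescript
// Hedged sketch of the progress state a Context provider might track.
interface GameProgress {
  character: string | null;
  companions: string[]; // e.g. "Jazz Bees", "Sir Quackalot"
}

type Action =
  | { type: "selectCharacter"; name: string }
  | { type: "unlockCompanion"; name: string };

function progressReducer(state: GameProgress, action: Action): GameProgress {
  switch (action.type) {
    case "selectCharacter":
      return { ...state, character: action.name };
    case "unlockCompanion":
      // Ignore duplicates so revisiting a branch cannot double-unlock.
      return state.companions.includes(action.name)
        ? state
        : { ...state, companions: [...state.companions, action.name] };
  }
}
```

Because the reducer returns a new object, any HUD component subscribed to the Context re-renders the moment a companion is unlocked, which is the "instantly reflected" behavior described above.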

All the visuals were created using Google Gemini. I fed it prompts for whimsical forest scenes, quirky characters, and environmental details. The generated assets gave the game a cohesive retro aesthetic without requiring any illustration skills on my part.

Aesthetically, I focused on a retro-polished user experience. The interface features a terminal-inspired layout with pixel-perfect asset rendering using image-rendering: pixelated in CSS. The visual storytelling matches the whimsy of the writing. Everything feels like it belongs in the same universe, even though the narrative paths can diverge wildly depending on player choices.

This project demonstrates how data-driven architecture can empower creative content. The separation between story data and presentation logic means I can add entire new narrative branches without touching the UI code. It scales. It is robust under the hood while remaining magical on the screen, which feels appropriate given my childhood confusion about how any of this worked in the first place.

React 19 · TypeScript · Interactive Fiction · Google Gemini
23 Dec 2025

A vibe-coded re-imagining of my first BASIC video game

In 1998, I spent a week hunched over a computer in my high school lab writing my first game in BASIC. The car was the letter H. The road edges were strings of angle brackets. The road meandered down the screen, and you steered left or right to avoid crashing into the barriers. It was crude, but it was mine, and it taught me everything about loops, conditionals, and the satisfaction of making something interactive.

In 2025, I decided to revisit that experience out of genuine curiosity through the lens of agent-assisted coding. What took me seven days of trial and error in 1998 took less than an hour this time around.

The result became Neon Rider, a vertical-scrolling arcade racer inspired by games like Road Fighter and OutRun. I used to manually debug collision detection line by line. Now I can describe what I want and iterate rapidly. The Web Audio API let me synthesize sound effects in code. CSS gave me CRT scanlines and neon glow effects that would have been impossible in my BASIC days.
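Synthesizing an effect "in code" mostly means computing samples. As a rough illustration (not the game's actual sounds), here is a descending sine "zap" whose samples could be copied into a Web Audio `AudioBuffer`; the frequency sweep and duration are arbitrary choices:

```typescript
// Sketch of a code-synthesized sound effect: a short sine sweep with a
// linear fade-out. Values (880→220 Hz, 0.2 s) are invented for the example.
function synthZap(sampleRate = 44100, seconds = 0.2): Float32Array {
  const n = Math.floor(sampleRate * seconds);
  const samples = new Float32Array(n);
  let phase = 0;
  for (let i = 0; i < n; i++) {
    const t = i / n;                        // 0 → 1 across the effect
    const freq = 880 - 660 * t;             // sweep from 880 Hz down to 220 Hz
    phase += (2 * Math.PI * freq) / sampleRate;
    samples[i] = Math.sin(phase) * (1 - t); // fade out to avoid a click
  }
  return samples;
}
```

In the browser you would create a one-channel `AudioBuffer` at the same sample rate, `copyToChannel` these samples into it, and play it through an `AudioBufferSourceNode`.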

The barrier between imagination and implementation compressed, even as the principles remained unchanged: game loops, state management, player feedback. The time from concept to playable prototype collapsed in a way that felt almost magical.

What fascinates me most is how the core understanding I gained from that week in 1998 remains just as essential today. The fundamentals of programming and game design have not changed, even as the tools have evolved beyond recognition.

GenAI · Web Audio API · Game Dev · Retro UI
13 Nov 2025

DreamWeaver: When Bedtime Stories Needed Their Own App

For months, my five-year-old daughter wanted the same story every night. Adventures featuring SparklyButt the sparkling dinosaur and Ronald the T-Rex who wore spaceship pajamas. When my son turned three and wanted to join the ritual, the storytelling expanded to include new friends like Slice the pizza and Boo the ghost.

I started using Claude to help create these stories, feeding in scenarios and character descriptions each night. It worked beautifully until around the 35th story, when the chat session struggled to keep track of details. Character descriptions drifted. What started as creative fun became a tedious game of reminding the system who each character was.

That weekend, I built DreamWeaver: a web app to store character specifications exactly as I wanted them, define story scenarios, and generate both narratives and accompanying images without friction.

The biggest surprise was visual consistency. Language models handle character traits reasonably well, but image generation is far less forgiving. The tenth image of SparklyButt would look nothing like the first. The solution required creating highly specific image prompts for each character, detailed enough to produce 95 to 99 percent visual similarity every time, regardless of context.
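The character specs themselves are not public, but the trick of repeating a fixed, highly specific appearance block in every image prompt can be sketched like this. All field names and trait strings are invented for illustration:

```typescript
// Hedged sketch: lock a character's look into every generated prompt.
interface CharacterSpec {
  name: string;
  appearance: string[]; // fixed, highly specific visual traits
}

function imagePrompt(character: CharacterSpec, scene: string): string {
  // The appearance block is included verbatim in every prompt, so the
  // image model gets the same anchor regardless of story context.
  return [
    `Children's book illustration of ${character.name}.`,
    `${character.name} always looks exactly like this: ` +
      `${character.appearance.join("; ")}.`,
    `Scene: ${scene}.`,
  ].join(" ");
}

const sparklyButt: CharacterSpec = {
  name: "SparklyButt",
  appearance: [
    "small green dinosaur",
    "glittering silver spots along the tail",
    "big friendly round eyes",
  ],
};
```

Every story scenario then varies only the `scene` argument, while the appearance anchor stays byte-for-byte identical across all ten, or fifty, images.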

The app evolved through real bedtime testing. The character manager became essential for consistency. The silliness slider emerged from observing how my kids' moods varied wildly. Some nights called for gentle tales, others demanded maximum chaos. Building with a responsive framework taught me how differently the same interface needs to behave across devices. Bedtime happens on a phone, but editing characters works better on a tablet, and managing the story library makes more sense on desktop.

Night mode was my crash course in accessible design, born from necessity after the first bedtime attempt left me temporarily blinded by a blazing white screen in an otherwise dark room. Nothing says sweet dreams quite like retinal burn. This became my first real exercise in implementing proper day and night modes, complete with gentle transitions and carefully calibrated contrast ratios. I focused on tactile elements like sticker shadows and paper textures to make the interface feel warm.

I cannot share the app publicly yet. There are practical hurdles around API key management and usage limits that remain unresolved. Since this is a personal project built for two very specific critics, those issues take a back seat to features they actually request.

Watching their faces light up when illustrations of their favorite characters appear makes every iteration worthwhile.

React · Prompt Engineering · Accessibility · Tailwind CSS
8 Oct 2025

Building in Public: A Portfolio for the AI-Assisted Era

I have spent years in engineering leadership roles, managing teams and systems, yet I had never put up a proper web presence. Building a good-looking, functional portfolio site that passed my own bar always felt like it would take weeks of effort I could never justify. Then I started experimenting with prompt-first development, and the calculus changed entirely.

This site exists for two reasons. First, to finally establish a presence on the web after years of telling myself I would get around to it. Second, to serve as a living showcase for my ongoing experiments with generative AI tools and coding workflows that would have seemed like science fiction just a few years ago.

My current toolkit is a mix-and-match approach across several platforms. Claude Code handles most of the heavy architectural decisions and refactoring. Google AI Studio helps me iterate on experimental features quickly. I bounce between Gemini and ChatGPT depending on which model handles a particular problem domain better. I choose whichever tool fits the specific task at hand.

I built this website on Next.js 14 with Tailwind CSS, chosen for performance and maintainability. I structured the site with a content-first architecture. Profile data, project details, and work experience all live in strongly-typed TypeScript files separate from the UI code. This means I can update my resume or add new experiments without touching any interface logic. Sustainability matters when you are building something meant to evolve over years.
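The content-first idea can be sketched in a few lines: the site's data lives in plain typed objects, and components only map over them. The field names and entries below are illustrative, not the site's real data files:

```typescript
// Sketch of content-first site data: typed objects separate from UI code.
interface Project {
  title: string;
  date: string; // ISO date, e.g. "2026-02-06"
  tags: string[];
}

const projects: Project[] = [
  {
    title: "The Enchanted Forest and the Extremely Important Pizza Pie",
    date: "2026-02-06",
    tags: ["React 19", "TypeScript", "Interactive Fiction"],
  },
  {
    title: "Neon Rider",
    date: "2025-12-23",
    tags: ["GenAI", "Web Audio API", "Game Dev"],
  },
];

// The UI just renders this list; adding an experiment is a data-only edit.
const newestFirst = [...projects].sort((a, b) => b.date.localeCompare(a.date));
```

Because the type checker validates every entry, a malformed date field or a missing title fails at build time rather than rendering a broken card.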

I added Framer Motion for subtle entrance animations and obsessed over details that often get overlooked. I did not want my static website to 'look' static, so fluidity in motion was very important visually. Scroll margins ensure deep links do not hide behind the header, and a custom lightbox lets visitors explore project images without losing their place. I am happy with how all these micro-interactions add up to something that feels crafted.

The Geocities easter egg deserves special mention. In the mid to late 1990s, I was utterly fascinated with web design and development, spending an ungodly number of hours in Dreamweaver and Flash, experimenting with 3D rotating images, color gradients, dark themes, and of course, the iconic Under Construction GIFs. I heard a podcast where the host had added a similar retro mode to their site, and it sparked an idea. What if I tested whether all the components on my site could transform simultaneously at the click of a button? The feature started as a goof, a technical challenge more than anything else. I built a global context system that injects an entire overlay layer complete with spinning earth GIFs, construction signs, and raw neon colors. It transforms the entire site into a chaotic homage to 1999 for anyone curious enough to discover it.

What strikes me most about this process is how the bottleneck has shifted. The constraint has become taste, direction, and knowing what I want to build. The tools handle the mechanical translation of intent into code with remarkable fidelity. My role has become more curator than constructor, which feels oddly appropriate for someone finally putting together a portfolio after twenty years of building other people's systems.

Next.js · Tailwind CSS · Generative AI · Cursor · Claude · Gemini