Thermal Drift: How I Brought My App Idea to Life Solo Using AI as My Product Team
- Feb 25
Starting Solo with AI as My Team
I had an app idea and, for the first time, I didn't feel like I needed a team to get it off the ground. Within a few hours I had a working prototype on my iPhone. Over the past week I've been experimenting and iterating on that prototype — refining the mechanics, hardening the experience, and preparing it for usability testing. The difference between this and every shelved idea I've had before? I used Claude as my entire product and engineering team.

The Idea: Navigation as a Feeling
The concept started simple. What if instead of following a map, you navigated somewhere the way you did as a kid playing "you're getting warmer"? No arrows, no turn-by-turn instructions — just a sense of whether you're heading toward something or away from it.
I called it Thermal Drift. The premise: pick a type of destination (coffee, food, nature, a surprise), and let the app guide you purely through temperature metaphors and haptic feedback. Move toward the destination and your phone gets "hotter." Drift away and it goes cold. No map visible. Eyes up. Exploring.
The product philosophy was baked into the name itself. In electronics, thermal drift describes how components behave differently as temperature changes — they wander. That's exactly what the app encourages: intentional wandering. The inefficiency is the point.
Defining the Problem Before Writing a Line of Code
Before any development, I needed to pressure-test the concept. This is where Claude became less of a tool and more of a thought partner.
I used it to stress-test the core value proposition: Who actually wants inexact navigation? We worked through the target user — not someone optimizing a commute, but someone looking for a spontaneous, discovery-oriented experience. A "thermal drifter." That user identity reframed the product's biggest apparent weakness (imprecision) as its defining feature.
We also defined what success looks like: an eyes-up navigation experience where users engage with their surroundings instead of their screen. That north star shaped every subsequent decision.
Translating Concept into Architecture
With the product brief clear, I worked with Claude to define the UX structure. We landed on three screens:
Destination Picker — where users choose a category (coffee, food, nature, culture, shopping, or surprise) before committing to a drift. Each mystery category triggers a confirmation dialog, building a small ritual of commitment before the experience begins.
Temperature Navigator — the core experience. No map. Just real-time feedback: a temperature reading, color gradients shifting from cool to warm, and six distinct haptic patterns ranging from "frozen" to "burning," depending on how directly you're closing on the destination.
Arrival Screen — the payoff moment, when you've found your destination through feel rather than instruction. The interaction model was driven by a logarithmic distance scale so feedback would feel proportional — the haptics wouldn't fire identically at 200 meters and 20 meters. That calibration came out of Claude helping me think through the math behind the metaphor.
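The logarithmic scale can be sketched roughly like this. The function name and constants here are my illustration of the idea, not the app's actual code:

```swift
import Foundation

/// Maps a remaining distance (meters) to a 0...1 "heat" value.
/// A logarithmic curve makes feedback feel proportional: closing
/// from 200 m to 100 m registers about as strongly as closing
/// from 20 m to 10 m, instead of being barely perceptible.
/// `maxDistance` and `arrivalRadius` are illustrative defaults.
func heat(forDistance meters: Double,
          maxDistance: Double = 1000,
          arrivalRadius: Double = 10) -> Double {
    // Clamp so we never take the log of zero or overshoot the range.
    let d = min(max(meters, arrivalRadius), maxDistance)
    // Normalize log(distance) into 0 (far, cold) ... 1 (arrived, hot).
    let span = log(maxDistance / arrivalRadius)
    return log(maxDistance / d) / span
}
```

With these example numbers, 100 meters reads as 0.5 heat — halfway on the log scale between one kilometer out and arrival — which is exactly the "proportional feel" the linear scale couldn't deliver.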
Building It: Solo PM, AI Engineering
I'm not a developer. What I am is a product manager who knows how to specify behavior clearly — and it turns out that's most of what you need when Claude is writing the code.
The app was built in SwiftUI. Claude handled the implementation of CLLocationManager for GPS, the haptic throttling logic to prevent system overload, and the exponential backoff error handling for location failures. Each session followed a pattern: I'd describe the behavior I wanted in product terms, Claude would translate that into working Swift, and we'd iterate until it matched the spec.
There were real technical challenges. Metal API validation caused instability during active navigation sessions. Rather than getting stuck, we diagnosed the issue, implemented a temporary workaround to preserve development momentum, and flagged it for a proper fix. That pragmatism — knowing when to move forward versus when to dig in — is something I had to actively manage even with an AI collaborator.
What a Week of Iteration Validated (and What It Didn't)
The core mechanic works. The temperature and haptic feedback system is legible — users understand what it's telling them without instruction. The "getting warmer" intuition is deeply embedded from childhood, which does a lot of the onboarding work for free.
What iteration surfaced was that app stability during active navigation sessions needed hardening. That became the priority: comprehensive crash prevention, timer cleanup to prevent memory leaks, and debug logging to identify failure patterns before they hit users.
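Timer cleanup is the classic Swift leak: a repeating `Timer` is retained by the run loop until it's invalidated, keeping everything its closure captures alive with it. A minimal sketch of the ownership pattern — the type name is mine, not the app's:

```swift
import Foundation

/// Owns a repeating timer and guarantees it is torn down, so the
/// run loop doesn't keep the owner (and its captures) alive forever.
final class DriftTicker {
    private var timer: Timer?

    func start(interval: TimeInterval, tick: @escaping () -> Void) {
        stop() // never stack a second timer on top of a live one
        timer = Timer(timeInterval: interval, repeats: true) { _ in tick() }
        // In the app this would also be scheduled, e.g.:
        // RunLoop.main.add(timer!, forMode: .common)
    }

    var isRunning: Bool { timer?.isValid ?? false }

    func stop() {
        timer?.invalidate() // releases the run loop's strong reference
        timer = nil
    }

    deinit { stop() } // cleanup even if stop() was never called
}
```

The important habit is pairing every `start` with an `invalidate` path — including `deinit` — rather than trusting that a navigation session always ends cleanly.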
The branding also went through iteration. Early names leaned too heavily on temperature metaphors alone. The breakthrough was combining the thermal concept with movement — "drift" — which captured both the mechanism and the intended behavior. The taglines that resonated most were the ones that implied permission:
Navigate by feel, not by map. You're getting warmer.
What This Process Actually Looks Like
People tend to imagine AI-assisted development as either magic (fully automated) or trivial (just autocomplete). The reality is more like having a highly capable collaborator who needs good direction.
The leverage came from being specific. Vague prompts produce vague outputs. Describing behavior in product language — "the haptic pattern should escalate as the user gets within 50 meters, with a distinct pulse for each 10-meter increment" — produced usable code. Describing it as "make it feel warmer when you're close" did not.
The other leverage point was knowing what I was responsible for: the product decisions. Claude could implement a haptic pattern, but I had to decide which one served the experience. Claude could structure three screens, but I had to know that three was right and what each screen needed to accomplish.
Where It Stands
Thermal Drift is currently in usability testing, with performance and UI refinement as the primary focus. The goal at this stage is ensuring the core interaction holds up across different users and environments — that the feedback loops are intuitive, the app is stable, and nothing in the interface gets in the way of the experience.
Visual and audio design will be refined after usability testing wraps. That's an intentional sequencing decision: there's no point polishing aesthetics before the functional experience is locked. Once the interaction model is proven out, the sensory layer — sound design, visual polish, motion — gets its own pass before App Store submission.
What started as a thought experiment — what if navigation felt like a game? — is now something you can hold in your hand and walk around with.
The timeline from idea to working prototype was hours, not months. A week of iteration later, it's in testing. The cost was near zero in dollars. The investment was in thinking clearly about what the product needed to be, and being precise enough in communicating that to make the AI useful.
That, more than anything, is the actual skill this moment rewards. More updates as Thermal Drift moves toward App Store submission.