Case study

A big AI answer should become reusable project memory.

This example is based on a real product session: a builder wanted to know why a location-sharing map did not feel as alive as Find My or Life360.

Prompt

The user asked both workers.

"The map does not feel like Find My or Life360. What can you find? No changes, just responses."

Claude

Product and UI read.

Claude focused on the feel: pin complexity, visual restraint, place clustering, and the difference between a calm map and a busy status surface.

Codex

Technical architecture read.

Codex focused on update cadence, prediction, battery tradeoffs, sync behavior, and why a map can look smooth but still feel stale.

Fob synthesis

One decision candidate.

Verdict: the product was optimized as a battery-friendly family status map, not a literal live-dot tracker.

Recommendation: add a distinct live mode instead of replacing the settled map. That gives immediacy when someone is actively watching a person's location while keeping the current battery story intact.

The important part

Fob made the answer actionable.

Save decision: Record "live mode is a distinct mode" so future prompts do not reopen the same debate.
Debate fixes: Ask Claude and Codex to pressure-test UI restraint versus sync and battery risk.
Build handoff: Create a copy-ready packet for ChatGPT, Gemini, Cursor, or another session.

What this looked like in Fob

The answer becomes a project asset.

[Screenshot: Fob memory view showing pinned notes and decisions after importing a long AI answer. Real local screenshot from a safe demo project seeded with the case-study decision.]

Proof point

This is the product: continuity, not chatter.

Without Fob, the user has a long answer and has to decide what to do next. With Fob, the answer becomes saved context, a decision candidate, a follow-up path, and a handoff for the next AI tool.