Building AI Product Sense with a Personal OS
Why you should be building your own Personal OS and how to get started
Meta just added a new interview to their product management hiring process. They call it “Product Sense with AI,” and it signals something important about where the industry is headed. Adding an interview is expensive - it changes hiring pipelines, requires training dozens of interviewers, and fundamentally redefines what capabilities matter for the role. When Meta does this, other companies will follow.
I’ve spent the last three months building what I call a personal operating system - not because I needed a better productivity tool, but because I wanted to understand how AI products actually work. Funnily enough, my friend Tal Raviv did the same thing, arriving at a completely different implementation. When we compared systems on our prep call for a workshop we’re teaching together, we realized we’d both been running the same experiment: using ourselves as test subjects to build AI product sense.
Meta added “Product Sense with AI” as a new interview because they’re testing whether you’ve built intuition: whether you understand the primitives and can design products that leverage what AI actually does well versus what merely sounds good in pitch decks. You build that intuition the same way you learn anything technical: hands-on, with the exposed-wires version.
Check out the link below for the recorded session:
What is a Personal OS?
A personal operating system is how you manage your work (tasks, priorities, context, goals) in a way that an AI can operate on. Most people use Notion or Todoist or Linear. Those are great productivity tools, but they’re built for humans reading lists and clicking checkboxes.
A personal OS is built for both humans and AI. You maintain it as markdown files on your computer. When you need to make decisions—what to work on, how to prioritize, what context matters—you ask the AI. It reads your files, synthesizes patterns, suggests next steps. You make the final calls.
The shift here is that you’re no longer just managing tasks: you’re managing context.
This is how modern AI products work under the hood. Notion AI reads your workspace. ChatGPT reads your conversation history. Cursor reads your codebase. When you build your own personal OS, you’re learning the same primitives these products use—context management, retrieval, tool calling, reasoning.
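To make the “AI reads your files” primitive concrete, here is a minimal Python sketch (not part of any of these products, and not the actual Personal OS code) of how a tool might assemble context from a folder of markdown files before prompting a model:

```python
from pathlib import Path

def gather_context(root: str, max_chars: int = 20_000) -> str:
    """Concatenate every markdown file under `root` into one context string.

    Real tools like Cursor or Claude Code are much smarter about retrieval,
    but the primitive is the same: read files, build context, prompt.
    """
    parts = []
    for path in sorted(Path(root).rglob("*.md")):
        parts.append(f"## File: {path}\n{path.read_text(encoding='utf-8')}")
    # Naive truncation to fit a context window; real systems rank and chunk.
    return "\n\n".join(parts)[:max_chars]
```

Everything else (retrieval, tool calling, reasoning) layers on top of this basic move.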
You’re also building AI product sense by becoming your own first user.
The Architecture
PersonalOS/
│
├── BACKLOG.md # Raw brain dump
├── CLAUDE.md # AI instructions
├── Goals.md # Quarterly objectives
│
├── Tasks/ # Individual task files
│ ├── Prep_for_Lightning_Session.md
│ │ ├── YAML frontmatter (priority, status, category, time)
│ │ └── Task content
│ └── ...
│
└── Knowledge/ # Context for AI
├── meeting_transcript_tal_prep.md
├── writing_samples.md
└── ...

The entire system runs on markdown files (text files I can open in any editor) that I can move between machines and version-control with git. I use Obsidian to view the markdown more cleanly.
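A few lines of Python (a sketch, not part of the published repo) are enough to scaffold this layout:

```python
from pathlib import Path

def scaffold(root: str = "PersonalOS") -> Path:
    """Create the folder layout described above with empty starter files."""
    base = Path(root)
    for folder in ("Tasks", "Knowledge"):
        (base / folder).mkdir(parents=True, exist_ok=True)
    for name in ("BACKLOG.md", "CLAUDE.md", "Goals.md"):
        (base / name).touch()
    return base
```

That's the whole setup cost: no database, no app, just files.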
BACKLOG.md holds my brain dump. When I’m in a meeting and someone mentions something I need to follow up on, I add a line: “Prep for stakeholder conversation.” Takes five seconds, then I return my attention to the meeting.
The Tasks/ folder contains individual task files that get created from backlog processing. Each file has YAML frontmatter with metadata that looks like this:
title: Prep for Lightning Session demo
category: outreach
priority: P0
status: n
estimated_time: 60

The Knowledge/ folder holds context: meeting transcripts from Zoom calls, writing samples that capture my style, and Goals.md with my quarterly objectives. When the AI processes my backlog, it reads this context to make better decisions about categorization and priority.
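When a script (or an MCP tool, as in the workflow below) needs that metadata, it can parse the frontmatter without a heavyweight library. A sketch that handles both bare `key: value` lines and the standard `---`-delimited form; a real system might just use a YAML parser:

```python
def parse_frontmatter(text: str) -> dict:
    """Parse simple `key: value` frontmatter lines from the top of a task
    file, stopping at the first blank line, closing `---`, or non-key line.
    Values stay as strings.
    """
    meta = {}
    for line in text.splitlines():
        stripped = line.strip()
        if stripped in ("", "---"):
            if meta:
                break  # end of the frontmatter block
            continue   # skip an opening delimiter or leading blank
        key, sep, value = stripped.partition(":")
        if not sep:
            break      # hit the task body
        meta[key.strip()] = value.strip()
    return meta
```

For example, `parse_frontmatter("---\ntitle: Prep\npriority: P0\n---\nTask content")` yields `{"title": "Prep", "priority": "P0"}`.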
CLAUDE.md or AGENTS.md contains instructions for the AI in plain English: “You are a personal productivity assistant. You organize backlog items into tasks, prioritize based on Goals.md, and suggest daily focus. Always ask clarifying questions for vague items.”
The architecture stays simple, and the AI does the heavy lifting when it reads and manipulates these files.
The Workflow
During my Lightning Session demo, I walked through exactly how this works in practice. I had three items in my backlog: prepare for a stakeholder conversation, prep for the Lightning Session happening that day, and follow up on H1 planning. Raw notes I’d jotted down between meetings.
I opened Claude Code in the same directory and typed: “triage my backlog.”
The agent read BACKLOG.md, then my CLAUDE.md instructions, then my Goals.md file. It scanned the Knowledge/ folder and found transcripts from recent meetings, including my prep call with Tal. Then it did something that still impresses me every time: it asked clarifying questions.
“When is the lightning session? What’s the Lightning Session topic?”
I answered: happening right now, about my personal OS demo.
I could see its thinking by hitting Control-O in Claude Code. “Lightning session is high priority—happening now, relates to personal OS demo. Stakeholder conversation is P1—important but not urgent, next week. Need to pull context from meeting transcripts with Tal.”
Then it created task files. It read through my transcript with Tal and populated the Lightning Session task with a five-step demo script based on our actual conversation - and dropped me in exactly where I was in the live demo flow!
The next question revealed the real power of the system. I typed: “What should I work on today?”
The agent didn’t re-read every file. Instead, it called an MCP server—Model Context Protocol, basically a tool the AI can invoke. The MCP queried task metadata in under a second: filter for priority P0 and P1, status not done, return matching files. Then it read just those four tasks and cross-referenced with my Goals.md.
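Conceptually, the query that the MCP tool runs is just a filter over the frontmatter of the task files. A Python sketch (the field names follow the frontmatter example earlier; the real server's implementation may differ):

```python
def high_priority_open(tasks: list[dict]) -> list[dict]:
    """Return tasks with priority P0 or P1 whose status is not done.

    Each dict is assumed to be parsed task-file frontmatter.
    """
    return [
        t for t in tasks
        if t.get("priority") in ("P0", "P1") and t.get("status") != "done"
    ]
```

Because the filter runs over a handful of small metadata dicts rather than full file contents, it returns in well under a second.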
Response in two seconds: “Based on your goals (become VP of product, ship 3 features, build relationships): Finish Lightning Session demo right now, then prep for Maven cohort this afternoon, then review stakeholder deck for next week.”
The Bitter Lesson Applied
Tal and I independently converged on similar architectures. When we compared notes, we realized we’d both been applying Rich Sutton’s essay “The Bitter Lesson” without planning to. Sutton’s key line: “building in how we think we think does not work in the long run.”
In AI research, approaches that rely on general methods (search, learning, computation) beat approaches that encode human knowledge. For example, chess programs that search millions of positions beat programs with handcrafted strategies.
Applied to personal operating systems, this means: don’t build complex tagging taxonomies or elaborate workflows that mirror your mental model. Give the AI raw context, add minimal structure, let it synthesize patterns, and you make the final calls.
I don’t tell the AI “outreach tasks happen in mornings.” I give it my calendar (meetings clustered 2-5pm), my energy patterns (“I focus better early”), and my goals (build relationships). It figures out: suggest outreach tasks for 9-11am, when you have energy and no meetings. The system adapts when my calendar changes or goals shift. I didn’t hardcode the rules.
Context Management as a Core Skill
Three months into using this system, I realized my job had changed. I became a context manager.
Every decision and meeting now entails: what information to capture, where to store it, how to structure it, when to summarize versus keep detail, and what to include in each AI request.
My Knowledge/ folder has meeting transcripts, writing samples, Goals.md, course notes. What’s not in there: random Slack messages, every email, calendar invites, temporary notes. The filter for me is: will the AI benefit from having this when making decisions? If yes, it’s context. If no, it’s noise.
How to get started
Start simple. Make a folder on your computer. Add BACKLOG.md and Goals.md. Open it in Claude Code or Cursor. Type: “help me organize my thoughts.” Watch what it does. Read the tool calls. See which files it opens. Notice when it succeeds and fails.
The Personal OS I built is open source. Start there if you want to try this yourself. Or build your own from scratch!
Tal and I cover how to build AI product sense in more depth in our live two-day workshop. It’s a hands-on session where we help you build your own personal OS - check it out here!