patterns · 9 min read

Your AI Product Manager Should Run While You Sleep

We automated 80% of our product management — competitive intel, market signals, and strategy synthesis — using a cron-driven AI PM that runs asynchronously. Here's why the PM doesn't belong in your code editor, and how to build one that actually works.

February 14, 2026 · by Joan
pm-engine · product-management · async · cron · competitive-intel · market-signals · automation · agent-architecture

"Builders need codebase context. PMs need market context. Different moats, different homes."

The Problem With AI Product Managers

Here's a scene that plays out in every AI-assisted startup: you're deep in a coding session, your AI agent has the full codebase in context, and you think — "I should also have it do some competitive research. And maybe check what users are saying on Reddit. And update the product strategy."

So you ask your coding agent to do product management. It dutifully opens a browser, searches for competitors, writes a mediocre analysis, and — worst of all — you've now burned half your context window on market research instead of shipping code.

We did this for weeks at MonkeyRun before we realized the fundamental mistake: the PM and the builder have different moats, and they need different homes.

Two Moats, Two Homes

A builder agent's moat is codebase context. It knows the Prisma schema, the component hierarchy, the RSC boundaries, the TypeScript contracts. This context is expensive to build and easy to destroy. Every token spent on market research is a token not spent on understanding the code.

A PM agent's moat is market context. It knows the competitive landscape, user sentiment, pricing trends, feature gaps, and strategic positioning. This context doesn't need the codebase — it needs the internet, industry reports, and user feedback channels.

Putting both in the same context window is like asking a surgeon to also do the hospital's financial planning during an operation. Both are important. Neither benefits from sharing the same attention.

The builder lives in Cursor. The PM lives in cron.

The Async PM Engine

MonkeyRun's PM engine runs on a weekly rhythm, completely asynchronous from any coding session:

Monday: Competitive Scan

The PM agent wakes up and researches:

  • What did competitors ship this week?
  • Any new entrants in our space?
  • Pricing changes across the market?
  • What are users saying about competitor products?

It writes findings to docs/product/pm-engine/COMPETITIVE_SCAN.md — overwritten weekly so it's always current state, not history.
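The scheduling layer for a rhythm like this can be plain cron plus a tiny dispatcher. Here's a minimal sketch — the task names, file paths, and the idea of a single dispatcher script are illustrative assumptions, not MonkeyRun's actual code:

```python
# Hypothetical dispatcher for a weekly PM rhythm.
# Illustrative crontab entry: 0 6 * * 1,3,5 python pm_dispatch.py
from datetime import date

# Map weekday (0 = Monday) to a PM task and the file it overwrites.
SCHEDULE = {
    0: ("competitive_scan", "docs/product/pm-engine/COMPETITIVE_SCAN.md"),
    2: ("market_signals", "docs/product/pm-engine/MARKET_SIGNALS.md"),
    4: ("synthesis", "docs/product/pm-engine/PM_BRIEF.md"),
}

def task_for(today: date):
    """Return (task, output_path) for today, or None on off days."""
    return SCHEDULE.get(today.weekday())

if __name__ == "__main__":
    job = task_for(date.today())
    if job:
        task, out_path = job
        print(f"run {task} -> {out_path}")  # swap in the real agent call
```

One cron line, one script, three output files — the agent invocation itself is whatever your stack provides.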

Wednesday: Market Signals

The PM agent scans broader market context:

  • Trending discussions in our space (Hacker News, Reddit, Twitter)
  • New tools or frameworks that affect our stack
  • Regulatory or platform changes that matter
  • User feedback patterns from support channels

Findings go to docs/product/pm-engine/MARKET_SIGNALS.md.

Friday: Synthesis + Brief

This is the valuable part. The PM agent reads Monday's competitive scan, Wednesday's market signals, the current FEATURES.yaml (what's shipped vs. planned), and the COO_STATUS.md (project health). Then it synthesizes:

  • Strategic recommendations: What should we build next and why?
  • Positioning updates: How should we talk about what we've built?
  • Risk flags: What competitors are doing that threatens our position?
  • Opportunity flags: What gaps in the market can we fill?

The synthesis goes to docs/product/pm-engine/PM_BRIEF.md. The CEO reviews it over the weekend. On Monday, the builder agent reads the brief during its session startup and knows what matters this week — without spending a single token on research.
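Mechanically, the synthesis step is "gather files, build one prompt, write one file." A sketch under assumed paths — the section layout and the elided model call are illustrative, not the actual implementation:

```python
from pathlib import Path

# Inputs the Friday synthesis reads, per the rhythm above.
INPUTS = [
    "docs/product/pm-engine/COMPETITIVE_SCAN.md",  # Monday's scan
    "docs/product/pm-engine/MARKET_SIGNALS.md",    # Wednesday's signals
    "docs/product/FEATURES.yaml",                  # shipped vs. planned
    "docs/product/COO_STATUS.md",                  # project health
]

def build_synthesis_prompt(read=lambda p: Path(p).read_text()) -> str:
    """Concatenate the week's inputs into one synthesis prompt."""
    sections = [f"## {p}\n{read(p)}" for p in INPUTS]
    ask = ("Synthesize: strategic recommendations, positioning updates, "
           "risk flags, opportunity flags.")
    return "\n\n".join(sections + [ask])

# Writing the brief (the model call is elided):
# Path("docs/product/pm-engine/PM_BRIEF.md").write_text(run_agent(prompt))
```

The `read` parameter is injectable so the assembly logic can be tested without touching the filesystem.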

Stage-Aware Intelligence

Here's a subtlety most AI PM setups miss: the PM's job changes based on the company's stage.

At pre-seed (where MonkeyRun is now), the PM should focus on:

  • Validating that the problem exists
  • Understanding who else is solving it
  • Identifying the smallest thing to build that proves the thesis
  • Monitoring early user signals for product-market fit indicators

At seed stage, the focus shifts to:

  • Competitive positioning and differentiation
  • Feature prioritization based on user retention data
  • Pricing strategy and willingness-to-pay research
  • Go-to-market channel analysis

At growth stage, it shifts again to:

  • Market share tracking
  • Expansion revenue opportunities
  • Platform and ecosystem strategy
  • Defensive moat analysis

Our PM engine is configured with the current stage, and its research prompts, synthesis framework, and recommendation style all adapt accordingly. A pre-seed PM that writes growth-stage strategy is wasting everyone's time.
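One way to make the engine stage-aware is a simple stage-keyed configuration that the research prompts are built from. The structure below is an assumption about the wiring; the questions paraphrase the lists above:

```python
# Hypothetical stage configuration for the PM engine's research prompts.
STAGE_FOCUS = {
    "pre-seed": [
        "Does the problem exist, and who else is solving it?",
        "What is the smallest build that proves the thesis?",
        "Any early product-market-fit signals from users?",
    ],
    "seed": [
        "How do we position and differentiate against competitors?",
        "Which features does retention data say to prioritize?",
        "What does willingness-to-pay research suggest for pricing?",
    ],
    "growth": [
        "How is market share trending?",
        "Where are the expansion-revenue and ecosystem plays?",
        "Which moats need defending?",
    ],
}

def research_questions(stage: str) -> list[str]:
    """Return stage-appropriate questions, failing loudly on an unknown
    stage rather than silently writing the wrong kind of strategy."""
    if stage not in STAGE_FOCUS:
        raise ValueError(f"unknown stage: {stage}")
    return STAGE_FOCUS[stage]
```

Failing on an unknown stage is deliberate: a pre-seed PM that silently falls back to growth-stage questions is exactly the failure mode described above.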

The Marketing-Reality Audit

One of the most valuable outputs of the PM engine isn't strategy — it's honesty.

Every Friday, after the PM synthesis, the COO runs a marketing-reality audit. It:

  1. Reads FEATURES.yaml — the source of truth for what's shipped, what's planned, and what's in development
  2. Fetches the live marketing site
  3. Compares every claim against reality
  4. Flags three categories:
    • Overselling (trust risk) — marketing claims we have it, we don't
    • Underselling (missed opportunity) — shipped but not marketed
    • Roadmap drift — stale "coming soon" promises that were cut or deprioritized

The first time we ran this audit on Halo's marketing site, it found 5 oversells. Features described as "available" that were still in development. "Coming soon" items that had been quietly deprioritized. One pricing claim that was flat-out wrong.

This is the kind of drift that happens in every startup. Features get cut but the marketing page doesn't update. Someone writes "coming soon" and forgets about it. The PM engine catches it automatically, every week, before users notice.
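The core of the audit is a diff between two sets: what FEATURES.yaml says is shipped and what the site claims. A minimal sketch — the claim-extraction step (parsing the live site) is elided, and the data shapes are assumptions:

```python
# Hypothetical marketing-reality audit. `features` maps feature name to
# status (shipped / in-dev / planned / cut); `marketed` is the set of
# features the live site claims are available.
def audit(features: dict[str, str], marketed: set[str]) -> dict[str, list[str]]:
    report = {"overselling": [], "underselling": [], "roadmap_drift": []}
    for name, status in features.items():
        claimed = name in marketed
        if claimed and status == "cut":
            report["roadmap_drift"].append(name)   # stale "coming soon"
        elif claimed and status != "shipped":
            report["overselling"].append(name)     # claimed, not shipped
        elif not claimed and status == "shipped":
            report["underselling"].append(name)    # shipped, unmarketed
    return report
```

The hard part in practice is turning marketing copy into the `marketed` set; that's the step the AI agent does, while the comparison itself stays deterministic and auditable.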

The 80/20 Split

The PM engine automates roughly 80% of product management work:

  • Competitive monitoring (automated)
  • Market signal detection (automated)
  • Strategy synthesis (automated)
  • Marketing-reality auditing (automated)
  • Feature status tracking (automated via FEATURES.yaml)

The remaining 20% is what only a human can do:

  • Talking to users. No AI agent can replace a conversation with someone who's actually using your product.
  • Strategic intuition. The PM engine surfaces data and patterns. The CEO adds the "I have a feeling about this" that comes from domain expertise and taste.
  • Partnership decisions. Who to work with, who to avoid, what deals to take — these require human judgment and relationship context.
  • Pivoting. The PM engine optimizes within a strategy. Deciding to change the strategy entirely is a human call.

This is the right split. The AI handles the research grind — the competitive scans, the trend monitoring, the data synthesis. The human adds the 20% that requires judgment, relationships, and taste. Neither is sufficient alone.

The Handoff Chain

The PM engine doesn't exist in isolation. It feeds a chain:

PM Engine (async, cron)
  → PM_BRIEF.md
    → CEO reviews (weekend)
      → Builder agent reads during session startup
        → Informs what to build this week
          → Marketing agent reads for positioning
            → COO propagates insights across projects

Every link in this chain is a file. No chat, no clipboard, no "let me summarize what the PM said." The PM writes a file. The CEO annotates it. The builder reads it. The marketing agent reads it. The COO reads it.

File-based handoffs at every step. This is the same coordination pattern we use for everything at MonkeyRun — and it works because files are persistent, auditable, and don't depend on anyone's context window.

Why Not Just Use a Dashboard?

You might be thinking: "This is just a dashboard with extra steps." It's not, for two reasons:

1. Synthesis, not just data. A dashboard shows you charts. The PM engine tells you what the charts mean. "Competitor X shipped feature Y, which directly competes with our planned feature Z. Recommendation: accelerate Z or differentiate on approach." That's not a dashboard — that's a product manager.

2. Stage-aware recommendations. A dashboard doesn't know you're pre-seed. It shows the same metrics to a 3-person startup and a 300-person company. The PM engine adapts its analysis, its recommendations, and its urgency calibration to your actual stage.

Try It Yourself

If you want to build an async PM engine:

  1. Separate the PM from the builder. Don't use your coding agent for market research. Give the PM its own runtime — a cron job, a scheduled workflow, a separate agent session.

  2. Pick a weekly rhythm. Monday (competitive), Wednesday (market), Friday (synthesis) works for us. Adjust to your pace — the key is consistency, not the specific days.

  3. Use FEATURES.yaml as source of truth. Every feature has a status: shipped, in-dev, planned, cut. The PM reads this. The marketing audit reads this. Everyone reads this. One file, one truth.

  4. Configure for your stage. A pre-seed PM and a growth-stage PM ask different questions. Be explicit about what stage you're at and what the PM should focus on.

  5. Keep the human in the loop. The PM engine writes briefs. The human reads them, adds judgment, and decides. Don't automate the decision — automate the research that informs it.
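For step 3, FEATURES.yaml can be as simple as feature names mapped to one of four statuses, with a validator so a typo can't quietly poison every downstream reader. The file layout and validator are illustrative assumptions:

```python
# Hypothetical FEATURES.yaml content, shown as the parsed mapping:
#   csv-export: shipped
#   sso: in-dev
#   audit-log: planned
#   mobile-app: cut
ALLOWED = {"shipped", "in-dev", "planned", "cut"}

def validate(features: dict[str, str]) -> list[str]:
    """Return a list of errors; an empty list means the file is safe to
    use as the single source of truth every agent reads."""
    return [f"{name}: unknown status {status!r}"
            for name, status in features.items()
            if status not in ALLOWED]
```

Run the validator in CI so a bad status fails the build before the PM engine, the marketing audit, or the builder ever reads it.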

The goal isn't to replace product management. It's to make the research part run while you sleep, so when you sit down to build, you already know what matters.


The async PM engine runs across all MonkeyRun projects. The marketing-reality audit has caught oversells on every project that has a public-facing site. See The Model for how it fits into the full system, or read Why We Stopped Delegating for the companion piece on builder-vs-dispatcher architecture.
