🐒 MonkeyRun Engineering — Part 2

Your AI Product Manager Should Run While You Sleep

We learned that AI builders need context density. But AI product managers need something different: persistent curiosity, market awareness, and the discipline to run on a schedule — without anyone opening a laptop.

← Part 1: Why We Stopped Delegating to AI Agents
3 Runs Per Week
Monday scan, Wednesday signals, Friday synthesis

0 Laptops Opened
Fully async via cron — outputs land in git

80% Research Automated
CEO reviews the Friday briefing, adds the 20% only humans have

The CEO's Brain Is a Single Point of Failure

In Part 1, we learned that Atlas (our builder) needs codebase context. But product strategy was stuck in a different bottleneck: Matt's head.

Manual Product Management

🧠Competitive intel lives in CEO's memory from random browsing
💬Persona insights come from investor conversations nobody else hears
⌨️Backlog updates happen only when CEO opens Cursor and prompts the PM
🔇Between sessions: zero market monitoring
📋Marketing gets persona direction ad hoc, verbally

Automated Product Intelligence

🔍Competitive scan runs every Monday — catches launches CEO missed
📡Market signals aggregated every Wednesday from web + communities
📊Synthesized briefing every Friday with priority recommendations
📝BACKLOG.md updated automatically with sourced items
📣Marketing brief written with personas and positioning from research

Two Different Moats. Two Different Homes.

Part 1 showed us that builders need codebase context. Product managers need something entirely different — and it changes where they should live.

🗺️ Atlas — Builder

Moat: Codebase Context
📂 Prisma schema + migrations
⚛️ RSC boundaries + component tree
🔧 Server actions + API routes
🎨 UI component library relationships
⚙️ Deploy config + environment
Lives in: Cursor
Triggered by: Human opens IDE

📐 Nova — Product Manager

Moat: Market Context
🔍 Competitor product changes
📊 Market trends + signals
👤 User personas + pain points
🎯 Stage-appropriate strategy
📈 Growth levers + positioning
Lives in: OpenClaw (async cron)
Triggered by: Schedule — no human needed

A Week in the PM Engine

Three automated runs per week. Each produces structured output that feeds the next. Friday's synthesis is what the CEO actually reads.

Monday

🔍 Competitive Scan

Search for competitor launches, pricing changes, new features. Check Product Hunt, HN, relevant subreddits. Flag anything that shifts the landscape.

→ COMPETITIVE_INTEL.md
Tuesday

Atlas builds. PM rests.

Wednesday

📡 Market Signals

Aggregate user community signals, industry trends, adjacent market moves. Research personas and their evolving pain points.

→ MARKET_SIGNALS.md
Thursday

Atlas builds. PM rests.

Friday

📊 Synthesis & Brief

Read the week's intel + current backlog + COO status. Recommend priority changes. Write CEO briefing. Update BACKLOG.md. Brief marketing on personas.

→ PM_WEEKLY.md + BACKLOG.md + MARKETING_BRIEF.md
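On a plain Unix box, that Mon/Wed/Fri cadence is three crontab lines. A sketch — the `openclaw run` invocation is hypothetical, standing in for whatever command kicks off your agent runner:

```shell
# Hypothetical crontab — `openclaw run` is illustrative, not a real CLI.
# Each job fires at 6:00 AM and its output lands in git when the run ends.
0 6 * * 1 openclaw run nova --task competitive-scan   # Mon → COMPETITIVE_INTEL.md
0 6 * * 3 openclaw run nova --task market-signals     # Wed → MARKET_SIGNALS.md
0 6 * * 5 openclaw run nova --task weekly-synthesis   # Fri → PM_WEEKLY.md + BACKLOG.md
```

Tuesday and Thursday need no entry: the PM simply doesn't run, and Atlas builds on-demand.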

The PM's Brain Changes With Your Stage

Pre-seed PM work looks nothing like growth-stage PM work. The same engine asks fundamentally different questions based on where you are.

Pre-Seed

🔍 Research Focus

Problem validation — does this pain exist?
Competitive landscape — who else is here?
Early adopter identification
Willingness to pay signals

📋 Backlog Bias

Core value delivery features
Onboarding and first-run experience
Discovery / demo-ability
Minimum viable wedge
Key Questions

Is anyone else solving this? How?
Who is desperate enough to use an MVP?
What's the smallest thing we can build that proves the thesis?
Can we charge for this? How much? Who would pay first?

Growth

🔍 Research Focus

Activation metrics — what makes users stick?
Channel identification — where are users?
Positioning vs. alternatives
Conversion bottleneck analysis

📋 Backlog Bias

Retention and engagement features
Analytics and instrumentation
Growth experiments
Referral / viral mechanics
Key Questions

What's our conversion bottleneck right now?
Which persona converts fastest and why?
What's our competitive moat — and is it defensible?
Where should we spend our first marketing dollar?

Scale

🔍 Research Focus

Expansion revenue opportunities
Market share and competitive defense
Churn analysis — why do people leave?
Enterprise / upmarket signals

📋 Backlog Bias

Enterprise features and permissions
Integrations and ecosystem
Scalability and reliability
Upsell and tiering mechanics
Key Questions

What features would reduce churn by 10%?
Where are we losing deals to competitors?
What's the upsell path from free to paid to enterprise?
Which integration would unlock a new segment?
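One way to picture "same engine, different questions": the run's research prompt is keyed off a stage flag. A hypothetical sketch (not MonkeyRun's actual config format — the questions are lifted from the lists above):

```shell
#!/usr/bin/env sh
# Hypothetical sketch: the PM engine swaps research questions by stage.
stage="${1:-pre-seed}"   # pre-seed | growth | scale
case "$stage" in
  pre-seed) q="Is anyone else solving this? Who is desperate enough to use an MVP?" ;;
  growth)   q="What is our conversion bottleneck? Which persona converts fastest, and why?" ;;
  scale)    q="Which features would cut churn by 10%? Where are we losing deals?" ;;
  *) echo "unknown stage: $stage" >&2; exit 1 ;;
esac
echo "$q"
```

Bumping one variable retunes every Monday scan and Friday synthesis at once — no prompt surgery across three files.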

How It All Connects

The PM engine doesn't work in isolation. It feeds a chain: research → backlog → builder → marketing → COO. Each handoff is a file in git.

Nova (PM)
OpenClaw · Async
📐
Research → Synthesize → Recommend
COMPETITIVE_INTEL.md · MARKET_SIGNALS.md · PM_WEEKLY.md · BACKLOG.md
Mon / Wed / Fri — automatic, no human trigger
Matt (CEO)
Telegram · 10 min
👤
Review Friday brief → Add investor/gut context → Approve
PM_WEEKLY.md (annotated)
Friday — the only manual step. Adds the 20% only humans have.
Atlas (Builder)
Cursor · On-demand
🗺️
Reads updated BACKLOG.md at session start → Builds highest-impact items
BACKLOG.md → src/ · prisma/ (Phase 1 of atlas-orchestrator.mdc)
When CEO opens Cursor — backlog is already prioritized
Janet (Marketing)
Cursor · Background
📣
Reads MARKETING_BRIEF.md → Executes on personas and positioning
MARKETING_BRIEF.md → marketing/ (landing pages, copy, campaigns)
Delegated by Atlas when marketing work passes the 4-criteria test
Jared (COO)
OpenClaw · Async
🦅
Reads COO_STATUS.md + PM_WEEKLY.md → Cross-pollinates across projects
COO_STATUS.md · PATTERNS.md (portfolio-wide)
Async — propagates market insights from Halo to Commish Command and vice versa
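Every handoff above is nothing fancier than write-then-commit. A minimal sketch of the tail end of a Nova run, under the assumption that a run ends by committing its output files (paths and commit message are illustrative):

```shell
#!/usr/bin/env sh
# Sketch of a file-based handoff: the agent's output becomes a git commit
# that the next agent reads at session start. Paths are illustrative.
set -e
repo=$(mktemp -d); cd "$repo"; git init -q
# 1. Nova writes its structured output
printf '# PM Weekly — 2025-07-18\n## Recommended Priority Shift\n- Promote CSV export to P0\n' > PM_WEEKLY.md
printf '# Backlog\n- [P0] CSV export (source: PM_WEEKLY 2025-07-18)\n' > BACKLOG.md
# 2. The run ends with a commit — this IS the handoff
git add PM_WEEKLY.md BACKLOG.md
git -c user.name=nova -c user.email=nova@example.com \
    commit -qm "nova: weekly synthesis 2025-07-18"
# 3. Atlas later reads BACKLOG.md at session start; no chat context needed
head -n 2 BACKLOG.md
```

There is no message bus, no shared memory, no agent-to-agent chat: the repo is the interface.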

What the PM Engine Produces

Every run writes structured files to git. No chat transcripts. No ephemeral context. Auditable, diffable, grep-able.

COMPETITIVE_INTEL.md
Mon
# Competitive Intel — 2025-07-14
## New This Week
- Carta launched portfolio analytics
- AngelList rolled out GP dashboards
## Implications for Halo
- Our multi-portfolio view is now table stakes
## Sources
[linked, date-stamped]
MARKET_SIGNALS.md
Wed
# Market Signals — 2025-07-16
## Community
- r/angelinvesting: "tracking IRR is hell"
## Trends
- Solo GPs growing 18% YoY
## Persona Update
- Solo GP: tax reporting is #1 pain
PM_WEEKLY.md
Fri
# PM Weekly — 2025-07-18
## TL;DR for CEO
Carta's move validates our thesis but raises urgency on portfolio view.
## Recommended Priority Shift
- Promote CSV export → P0
## Backlog Changes
- 2 items added, 1 reprioritized
MARKETING_BRIEF.md
Fri
# Marketing Brief — 2025-07-18
## Primary Persona
Solo GP, 5-15 investments, spreadsheet refugee. Tax season is trigger event.
## Positioning
- vs Carta: "built for angels, not VCs"
## Messaging to Test
- "Your portfolio, finally in one place"

What's Automated. What's Human. What Can't Be.

The PM engine handles the research grind. The CEO adds the context only humans have. Some things stay manual forever — and that's by design.

Fully Automated (PM Engine) — runs on cron
Competitive Monitoring · Trend Scanning · Community Signals · Backlog Suggestions · Persona Research

CEO Review + Context — 10 min/week
Priority Approval · Investor Context · User Conversations · Gut Check

Cannot Automate (Human Moat) — always human
Strategic Pivots · Relationship Signals · "This Feels Wrong" · Vision Changes
Part 1 taught us that builders need context density — one agent, full codebase, ship fast. Part 2 teaches us that strategists need persistent curiosity — scheduled runs, market awareness, structured output. Different moats. Different homes. Same file-based handoff protocol connecting them.
— Jared, COO, MonkeyRun
← Part 1: Agent Orchestration · Read the blog post →