The Problem
MonkeyRun has 6 active projects. Each project has its own agent team. Each team discovers patterns, ships features, and occasionally breaks things. When Hopper, working on Commish Command, discovers that FastAPI silently redirects HTTP behind an HTTPS proxy, Atlas in Halo needs to know about it before he hits the same wall.
In a traditional company, this is a Slack message. In a bigger company, it's a Confluence page nobody reads. In an enterprise, it's an event streaming platform with Kafka, consumer groups, and a team of three to maintain it.
We're a pre-seed startup studio where the employees are AI agents. We needed something between "nothing" and "Kafka."
The Solution: Files, Hashes, and a Cron Job
The MonkeyRun Coordination System is a Python script that runs every 3 minutes via cron. It:
- Scans every project for changes to operational files
- Detects what changed using MD5 hashes
- Analyzes new patterns for cross-project relevance
- Queues propagation recommendations for human approval
- Updates a dashboard
- Exits
That's it. No message broker. No database. No Docker containers. Just cron, python3, and markdown files.
*/3 * * * * cd ~/projects/MonkeyRun/_coordination-system && python3 coordination_cron.py >> logs/coordination-cron.log 2>&1
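Here's a rough sketch of what one run of that script could look like. The helper names (discover_projects, detect_changes, and so on) are hypothetical stand-ins for the stages covered in the rest of this post, not the actual MonkeyRun code:

```python
# A minimal sketch of one coordination run, not the real coordination_cron.py.
# All helper names here are hypothetical placeholders for later sections.
import json
from pathlib import Path

STATE_FILE = Path(".coordination_state.json")

def discover_projects(): ...              # auto-discovery (see "What Gets Monitored")
def detect_changes(projects, state): ...  # hash comparison against the state file
def analyze_patterns(events): ...         # scoring + propagation queue
def write_event_log(events): ...          # events/YYYY-MM-DD.md, append-only
def render_dashboard(state): ...          # regenerate cron_dashboard.html

def main() -> None:
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    projects = discover_projects()
    events = detect_changes(projects, state)
    analyze_patterns(events)
    write_event_log(events)
    render_dashboard(state)
    STATE_FILE.write_text(json.dumps(state, indent=2))
    # ...then exit; cron brings it back in 3 minutes.

if __name__ == "__main__":
    main()
```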
How It Maps to Real Event Streaming
If you squint, this system has all the components of a proper event streaming architecture:
| Kafka Concept | Our Version |
|---|---|
| Topics | Monitored file types (PATTERNS.md, WIP.md, COO_STATUS.md) |
| Events | File changes detected via hash comparison |
| Producers | Agents writing to operational files |
| Consumers | The coordination script reading those files |
| Consumer offsets | .coordination_state.json (persisted hashes between runs) |
| Event log | events/YYYY-MM-DD.md (one file per day, append-only) |
| Stream processing | Pattern analysis with applicability scoring |
| Pub/sub | Pattern propagation to target projects |
| Monitoring dashboard | Auto-generated HTML dashboard |
The key insight: every agent already writes to files as part of their normal workflow. PATTERNS.md, WIP.md, COO_STATUS.md — these aren't special event payloads. They're the actual operational artifacts. The coordination system just watches them.
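To make the "consumer offsets" analogy concrete, here's roughly what the hash comparison against the persisted state could look like. The monitored file list comes from the next section; keying the state dict by project and path is an assumption:

```python
# A rough cut of hash-based change detection, assuming the state dict was
# loaded from .coordination_state.json at the start of the run.
import hashlib
from pathlib import Path

MONITORED = [
    "docs/operations/PATTERNS.md",
    "docs/operations/WIP.md",
    "docs/operations/COO_STATUS.md",
    "docs/operations/RUNBOOKS.md",
    "FEATURES.yaml",
    "MEMORY.md",
]

def detect_changes(projects: list[Path], state: dict[str, str]) -> list[dict]:
    events = []
    for project in projects:
        for rel in MONITORED:
            path = project / rel
            if not path.exists():
                continue
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            key = f"{project.name}/{rel}"
            if state.get(key) != digest:    # the "offset" moved: emit an event
                events.append({"project": project.name, "file": rel})
                state[key] = digest         # persisted back to JSON after the scan
    return events
```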
What Gets Monitored
Every MonkeyRun project follows the same file structure:
docs/operations/
├── PATTERNS.md # Cross-project learnings (high priority)
├── WIP.md # Active agent sessions (high priority)
├── COO_STATUS.md # Project health (medium priority)
└── RUNBOOKS.md # Operational procedures (low priority)
Plus FEATURES.yaml and MEMORY.md at the project root.
The system auto-discovers projects — drop a new project into the MonkeyRun directory with the standard file structure, and it starts getting monitored on the next 3-minute scan.
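Auto-discovery can be as small as a glob over the MonkeyRun directory. This is a sketch under the assumption that "has docs/operations/PATTERNS.md" is the test for a conforming project:

```python
# One way auto-discovery could work: any directory with the standard
# operational layout is treated as a project. Paths are illustrative.
from pathlib import Path

MONKEYRUN_ROOT = Path.home() / "projects" / "MonkeyRun"

def discover_projects(root: Path = MONKEYRUN_ROOT) -> list[Path]:
    return sorted(
        ops.parent.parent                    # .../<project>/docs/operations -> <project>
        for ops in root.glob("*/docs/operations")
        if (ops / "PATTERNS.md").exists()    # must follow the standard structure
    )
```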
The Pattern Propagation Engine
This is where it gets interesting. When PATTERNS.md changes in any project, the system doesn't just log "file changed." It analyzes the content of the change.
Step 1: Extract new patterns. The system parses the markdown to find new ### Pattern Name sections.
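A sketch of that extraction, assuming each pattern is simply a third-level markdown header followed by its body text:

```python
# Illustrative Step 1: pull `### Pattern Name` sections out of PATTERNS.md.
# The regex and section format are assumptions based on the description above.
import re
from pathlib import Path

def extract_patterns(patterns_md: Path) -> dict[str, str]:
    text = patterns_md.read_text()
    sections = re.split(r"^### ", text, flags=re.MULTILINE)[1:]  # drop the preamble
    patterns = {}
    for section in sections:
        title, _, body = section.partition("\n")
        patterns[title.strip()] = body.strip()
    return patterns
```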
Step 2: Score applicability. Each pattern gets a score from 0 to 1 (see the sketch after this list) based on:
- High-value keywords like "security," "agent," "coordination," "MCP" (+0.3 each)
- Medium-value keywords like "deployment," "database," "config" (+0.15 each)
- Penalties for project-specific content (-0.1)
- Bonuses for security patterns (+0.4), agent coordination (+0.3), and recommendation patterns (+0.2)
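A simplified version of that scoring. The keyword lists and weights are the ones listed above; the matching rules and the clamp to the 0-1 range are assumptions:

```python
# Illustrative Step 2: keyword-based applicability scoring, clamped to [0, 1].
HIGH_VALUE = ["security", "agent", "coordination", "mcp"]
MEDIUM_VALUE = ["deployment", "database", "config"]

def score_pattern(title: str, body: str) -> float:
    text = f"{title}\n{body}".lower()
    score = 0.0
    score += 0.3 * sum(kw in text for kw in HIGH_VALUE)
    score += 0.15 * sum(kw in text for kw in MEDIUM_VALUE)
    if "this project only" in text:          # hypothetical project-specific marker
        score -= 0.1
    if "security" in text:
        score += 0.4                         # security patterns travel well
    if "agent" in text and "coordination" in text:
        score += 0.3
    if "recommendation" in text:
        score += 0.2
    return max(0.0, min(score, 1.0))
```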
Step 3: Find target projects. For each pattern, the system checks which other projects could benefit. It skips projects that already have similar patterns and scores compatibility based on tech stack.
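Sketched naively, target selection might look like this, with "already has a similar pattern" reduced to an exact header match (the same limitation called out later under pattern deduplication):

```python
# Illustrative Step 3: a target qualifies if its PATTERNS.md exists and doesn't
# already contain a section with the same title. Tech-stack scoring omitted.
from pathlib import Path

def find_targets(pattern_title: str, source: Path, projects: list[Path]) -> list[Path]:
    targets = []
    for project in projects:
        if project == source:
            continue
        target_md = project / "docs" / "operations" / "PATTERNS.md"
        if not target_md.exists():
            continue
        if f"### {pattern_title}" in target_md.read_text():
            continue                          # already has something similar
        targets.append(project)
    return targets
```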
Step 4: Queue for approval. Patterns with high scores get queued. You can review them:
python3 coordination_cron.py --list # See pending
python3 coordination_cron.py --approve 3 # Approve pattern #3
python3 coordination_cron.py --reject 5 # Reject pattern #5
Or, if you're feeling brave:
python3 coordination_cron.py --auto-propagate # Auto-apply score >= 0.8
When a pattern is approved, the system adapts it for the target project (updating source attribution) and inserts it into the target's PATTERNS.md. Backups are created before any modification.
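A minimal version of that approval path: back up the target file, adapt the attribution line, and append. Function and file names are illustrative:

```python
# Sketch of propagation on approval: backup first, then append the adapted pattern.
import shutil
from datetime import datetime
from pathlib import Path

def propagate(title: str, body: str, source_project: str, target_md: Path) -> None:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    backup = target_md.with_name(target_md.name + f".bak-{stamp}")
    shutil.copy2(target_md, backup)          # backup before any modification
    adapted = (
        f"\n### {title}\n\n"
        f"*Propagated from {source_project} by the coordination system.*\n\n"
        f"{body}\n"
    )
    with target_md.open("a") as f:
        f.write(adapted)
```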
The Dashboard
Every scan regenerates an HTML dashboard (cron_dashboard.html) that you can open in a browser:
- Stats grid: Total scans, events detected, pending approvals, propagations made
- Pending patterns: Name, score, source project, target projects, reasoning
- Recent events: What changed in the last scan
- Commands reference: Quick copy-paste for common operations
It's a static HTML file with inline CSS. Dark theme. Auto-refreshes every 60 seconds. No JavaScript framework. No build step. Just a file that the cron job overwrites every 3 minutes.
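For reference, the whole "no framework, no build step" approach can be a single write_text call, with a meta-refresh tag standing in for JavaScript. This is a stripped-down illustration, not the real template:

```python
# A minimal dashboard write: one static HTML file, inline CSS, meta refresh.
from pathlib import Path

def render_dashboard(stats: dict, out: Path = Path("cron_dashboard.html")) -> None:
    rows = "".join(f"<li>{k}: {v}</li>" for k, v in stats.items())
    out.write_text(f"""<!doctype html>
<html><head>
  <meta http-equiv="refresh" content="60">
  <style>body {{ background: #111; color: #eee; font-family: sans-serif; }}</style>
</head>
<body><h1>MonkeyRun Coordination</h1><ul>{rows}</ul></body></html>""")

# Example: render_dashboard({"Total scans": 59, "Events": 46, "Pending": 9})
```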
What We've Learned
After running this system for a week:
59 scans. 46 events detected. 35 patterns propagated.
That's 35 times a useful pattern from one project was automatically identified, scored, and propagated to other projects. Without this system, those patterns would have stayed siloed in the project that discovered them.
The Numbers That Matter
- 9 patterns currently pending approval — the system generates more recommendations than we can review, which is the right problem to have
- Average pattern score: 0.85 — the scoring algorithm is surprisingly good at identifying transferable patterns
- Zero false propagations — nothing has been propagated that broke a target project
What Works Well
File-based everything. The entire system runs on files that agents already create. No new data formats, no new protocols, no new infrastructure. Agents write markdown. The system reads markdown.
Hash-based change detection. Simple and reliable, with no long-running process to keep alive. If the cron job crashes, the next run picks up exactly where it left off because the state file has the last known hashes.
Human-in-the-loop propagation. Auto-propagation is available but optional. The default is to queue patterns for review. This prevents the system from propagating patterns that are technically high-scoring but contextually wrong.
Auto-discovery. New projects get monitored automatically. No configuration needed beyond following the standard file structure.
What Doesn't Work (Yet)
No real-time notifications. The 3-minute scan interval means changes aren't detected instantly. For WIP conflicts (two agents editing the same file), 3 minutes can be too slow. We're considering adding a file watcher for high-priority files.
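If we do add a watcher, it might look something like this sketch built on the third-party watchdog package; nothing like it exists in the system today:

```python
# A possible shape for the watcher: react immediately to high-priority files
# instead of waiting for the next cron tick. Uses the `watchdog` package;
# this is an idea we're considering, not something the system does yet.
import os
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class HighPriorityHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path.endswith(("WIP.md", "PATTERNS.md")):
            print(f"High-priority change: {event.src_path}")  # trigger a scan here

observer = Observer()
observer.schedule(
    HighPriorityHandler(),
    os.path.expanduser("~/projects/MonkeyRun"),
    recursive=True,
)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```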
Pattern deduplication is basic. The system checks if a target project "already has" a pattern by looking for similar section headers. It doesn't do semantic comparison. Two patterns about the same concept with different titles will both get queued.
The dashboard is read-only. You can't approve or reject patterns from the dashboard — you have to use the CLI. A web-based approval flow would be nice.
Why Not Just Use Kafka?
Because we're a pre-seed startup studio with zero revenue, and our "employees" are AI agents running in Cursor sessions. The overhead of running Kafka (or even Redis Streams, or even a SQLite-based event system) is not justified when:
- Our event volume is ~15 events per day
- Our "producers" already write to files
- Our "consumers" are a single Python script
- Our "processing" is pattern matching on markdown
The system cost: $0/month. It runs on the same Mac that runs everything else. The cron job uses negligible CPU. The state file is 12KB.
When we need real event streaming — when we have 50 projects, or real-time requirements, or multiple consumers — we'll upgrade. But right now, cron and markdown are doing the job.
The Silicon Valley Parallel
In the show, there's a recurring bit where Pied Piper builds increasingly complex infrastructure to solve problems that could be handled with simpler tools. Our coordination system is the opposite — we built the simplest possible thing that works, and we'll add complexity only when the simple thing breaks.
It's Big Head energy. It just works, and nobody's entirely sure why it works as well as it does.
Try It Yourself
If you're running multiple AI agent projects and want cross-project coordination:
- Standardize your operational file structure across projects
- Write a script that hashes those files and detects changes
- Run it on a cron schedule
- Add pattern analysis if you want automatic propagation
- Generate a dashboard so you can see what's happening
You don't need Kafka. You need cron and diff.
The MonkeyRun Coordination System has been running since February 2026. It monitors 6 projects, scans every 3 minutes, and has propagated 35 patterns across the portfolio. Total infrastructure cost: $0.