See How Your Agents Actually Perform
New analytics dashboard shows per-agent completion rates, session replays, cost attribution, and file bottleneck detection — all derived from your existing event log.
Gabriel Bram
You Have the Data. Now See It.
Every event your agents publish to Hivemind — tasks created, decisions made, conflicts detected, files locked — is a data point. Until now, those data points sat in the event log. You could query them, but you couldn't *see* them.
The new Analytics dashboard changes that.
These numbers update in real time from your event log. No extra instrumentation. No config. Just navigate to Analytics in the sidebar.
Agent Performance Scoring
The first question everyone asks: "Which agent is actually good?"
Now you can answer it. The Agents tab shows a performance table for every agent that has published events to your project:
Four metrics per agent:
- Completion Rate — What percentage of started tasks does the agent finish? Above 80% is solid. Below 50% means something is wrong.
- Conflicts Caused — How often does this agent collide with other agents? High numbers suggest it's not checking hivemind_status() before starting work.
- Overwrites — Did this agent edit a file that another agent just edited? This catches the sneaky case where both agents "succeed" but one's work silently overwrites the other's.
- Completions/Day — Raw throughput per active day.
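These metrics fall straight out of the event stream. Here is a minimal sketch of the completion-rate and throughput calculations, assuming a hypothetical flat event shape with "type", "agent", and ISO "timestamp" fields (illustrative field names, not Hivemind's actual schema):

```python
from collections import defaultdict

# Illustrative events; field names are assumptions, not Hivemind's API.
events = [
    {"type": "task.created",   "agent": "a1", "timestamp": "2024-05-01T09:00:00"},
    {"type": "task.completed", "agent": "a1", "timestamp": "2024-05-01T10:00:00"},
    {"type": "task.created",   "agent": "a1", "timestamp": "2024-05-02T09:00:00"},
]

def completion_rate(events, agent):
    """Fraction of started tasks this agent finished."""
    started = sum(1 for e in events if e["agent"] == agent and e["type"] == "task.created")
    done = sum(1 for e in events if e["agent"] == agent and e["type"] == "task.completed")
    return done / started if started else 0.0

def completions_per_day(events, agent):
    """Completed tasks divided by the number of days the agent was active."""
    completed = defaultdict(int)
    active_days = set()
    for e in events:
        if e["agent"] != agent:
            continue
        day = e["timestamp"][:10]          # date part of the ISO timestamp
        active_days.add(day)
        if e["type"] == "task.completed":
            completed[day] += 1
    total = sum(completed.values())
    return total / len(active_days) if active_days else 0.0
```

The same one-pass pattern extends to conflicts caused and overwrites: count the relevant event types per agent and divide by the appropriate denominator.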
Cost Attribution
Here's a question you've probably never been able to answer: "How much does each agent cost me?"
Hivemind estimates token usage by event type. A task.completed event represents roughly 2,000 tokens of agent work. A conflict.detected event represents roughly 5,000 tokens of *wasted* work. The dashboard shows this as a stacked bar per agent:
The red segments are waste — tokens spent on conflicts, duplicate work, and re-investigation. If you see an agent with a big red bar, it's time to either improve its CLAUDE.md instructions or give it a narrower scope.
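In sketch form, the stacked bars reduce to a per-agent sum over a token-estimate table. Only the 2,000 and 5,000 figures come from the model described above; the event shape and the waste classification below are illustrative assumptions:

```python
# Per-event token estimates. The task.completed (2,000) and
# conflict.detected (5,000) figures are from the dashboard's model;
# treating conflicts as "waste" mirrors the red segments described above.
TOKENS_PER_EVENT = {
    "task.completed": 2_000,
    "conflict.detected": 5_000,
}
WASTE_EVENTS = {"conflict.detected"}

def cost_bars(events):
    """Return agent -> [useful_tokens, wasted_tokens] for a stacked bar."""
    bars = {}
    for e in events:
        tokens = TOKENS_PER_EVENT.get(e["type"], 0)
        useful, waste = bars.setdefault(e["agent"], [0, 0])
        if e["type"] in WASTE_EVENTS:
            bars[e["agent"]] = [useful, waste + tokens]
        else:
            bars[e["agent"]] = [useful + tokens, waste]
    return bars
```

An agent whose waste segment rivals its useful segment is the one to give narrower scope or better instructions.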
Session Replay
The Sessions tab groups events by source.session and shows what each agent did, start to finish.
A sample session timeline:

- Refactoring auth middleware to use JWT tokens
- Using jose library instead of jsonwebtoken — smaller bundle, ESM native
- Acquired lock on src/middleware/auth.ts
- Auth middleware refactored — 3 files changed
You can trace exactly what happened: what the agent decided, what files it locked, and how long it took. This is invaluable for debugging agent behavior and understanding why things went wrong.
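Mechanically, replay is a bucket-and-sort over the event log. A sketch, assuming each event carries a "source" dict with a "session" id and an ISO "timestamp" (field names are illustrative):

```python
from collections import defaultdict

def replay_sessions(events):
    """Group events by source.session and sort each timeline by timestamp."""
    sessions = defaultdict(list)
    for e in events:
        sessions[e["source"]["session"]].append(e)
    for timeline in sessions.values():
        # ISO-8601 timestamps sort correctly as strings
        timeline.sort(key=lambda e: e["timestamp"])
    return sessions
```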
Bottleneck Detection
The Bottlenecks tab surfaces three kinds of problems:
File Hotspots
Which files cause the most friction? The hotspot table shows activity count vs. conflict rate:
A file with a 67% conflict rate is a code smell. It probably needs to be split into smaller, focused modules. The analytics tell you *which* files to split before you waste more tokens on conflicts.
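The hotspot ranking is just conflict events divided by total activity per file. A sketch, under the assumption that file-related events carry a "file" path field (name illustrative):

```python
from collections import defaultdict

def file_hotspots(events):
    """Return (path, activity_count, conflict_rate), highest rate first."""
    activity = defaultdict(int)
    conflicts = defaultdict(int)
    for e in events:
        path = e.get("file")
        if not path:
            continue  # not a file-related event
        activity[path] += 1
        if e["type"] == "conflict.detected":
            conflicts[path] += 1
    return sorted(
        ((p, activity[p], conflicts[p] / activity[p]) for p in activity),
        key=lambda row: row[2],
        reverse=True,
    )
```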
Lock Contention & Blocker Duration
See which resources get locked most often (and for how long), and which blockers have been stalling work the longest. Long-lived blockers are often the most expensive problems in a multi-agent workflow — they don't just waste the blocked agent's time, they cascade.
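Blocker duration can be derived by pairing open and close events; a still-open blocker accrues time up to "now". This sketch assumes hypothetical blocker.raised / blocker.resolved event types with an "id" field; the post only names events like file.locked, so these names are illustrative:

```python
from datetime import datetime

def blocker_durations(events, now):
    """Map blocker id -> timedelta; unresolved blockers measured up to `now`."""
    opened, durations = {}, {}
    for e in sorted(events, key=lambda e: e["timestamp"]):
        ts = datetime.fromisoformat(e["timestamp"])
        if e["type"] == "blocker.raised":
            opened[e["id"]] = ts
        elif e["type"] == "blocker.resolved" and e["id"] in opened:
            durations[e["id"]] = ts - opened.pop(e["id"])
    # anything still open keeps accruing cost
    for bid, ts in opened.items():
        durations[bid] = now - ts
    return durations
```

Sorting the result by duration surfaces the cascading, long-lived blockers first.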
How It All Works
No new tables. No new events. No configuration. The analytics are computed entirely from events you're already publishing. Every task.created, task.completed, conflict.detected, file.locked, and decision.made event contributes to the picture.
Navigate to Analytics in your dashboard. It's already there.