Announcement

See How Your Agents Actually Perform

New analytics dashboard shows per-agent completion rates, session replays, cost attribution, and file bottleneck detection — all derived from your existing event log.

Gabriel Bram

February 21, 2026 · 8 min read

You Have the Data. Now See It.

Every event your agents publish to Hivemind — tasks created, decisions made, conflicts detected, files locked — is a data point. Until now, those data points sat in the event log. You could query them, but you couldn't *see* them.

The new Analytics dashboard changes that.

5 Active Agents · 87% Avg Completion · 3 Conflicts (7d) · 48k Est. Tokens Used

These numbers update in real time from your event log. No extra instrumentation. No config. Just navigate to Analytics in the sidebar.

Agent Performance Scoring

The first question everyone asks: "Which agent is actually good?"

Now you can answer it. The Agents tab shows a performance table for every agent that has published events to your project:

Agent Performance — Last 30 Days

  • claude-main: 92% completion, 24 tasks, 0 conflicts
  • cursor-agent: 78% completion, 18 tasks, 2 conflicts
  • codex-cli: 45% completion, 11 tasks, 5 conflicts

Four metrics per agent:

  • Completion Rate — What percentage of started tasks does the agent finish? Above 80% is solid. Below 50% means something is wrong.
  • Conflicts Caused — How often does this agent collide with other agents? High numbers suggest it's not checking hivemind_status() before starting work.
  • Overwrites — Did this agent edit a file that another agent just edited? This catches the sneaky case where both agents "succeed" but one's work silently overwrites the other's.
  • Completions/Day — Raw throughput per active day.
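The first of these metrics can be sketched directly from the event log. Here's a minimal illustration of completion-rate scoring; the event shape (fields `type` and `agent`) is an assumption for the sketch, not the actual Hivemind schema:

```typescript
// Hypothetical event shape — field names are assumptions for illustration.
type AgentEvent = { type: string; agent: string };

// Completion rate: completed tasks divided by started tasks, per agent.
function completionRate(events: AgentEvent[], agent: string): number {
  const mine = events.filter((e) => e.agent === agent);
  const created = mine.filter((e) => e.type === "task.created").length;
  const completed = mine.filter((e) => e.type === "task.completed").length;
  // An agent with no started tasks scores 0 rather than dividing by zero.
  return created === 0 ? 0 : completed / created;
}
```

The other three metrics are the same pattern: counting and ratio-ing event types per agent, no extra instrumentation required.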

Cost Attribution

Here's a question you've probably never been able to answer: "How much does each agent cost me?"

Hivemind estimates token usage by event type. A task.completed event represents roughly 2,000 tokens of agent work. A conflict.detected event represents roughly 5,000 tokens of *wasted* work. The dashboard shows this as a stacked bar per agent:

Estimated Token Usage by Agent

  • claude-main: 52k tokens
  • cursor-agent: 38k tokens (6k waste)
  • codex-cli: 29k tokens (15k waste)

The red segments are waste — tokens spent on conflicts, duplicate work, and re-investigation. If you see an agent with a big red bar, it's time to either improve its CLAUDE.md instructions or give it a narrower scope.
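The estimation itself is a simple weighted count over event types. This sketch uses the rough 2,000 and 5,000 token figures mentioned above; the exact weights and covered event types are assumptions for illustration:

```typescript
// Illustrative token-cost heuristic keyed by event type.
// These weights are assumptions, not Hivemind's published values.
const TOKEN_ESTIMATES: Record<string, { tokens: number; waste: boolean }> = {
  "task.completed": { tokens: 2000, waste: false },
  "conflict.detected": { tokens: 5000, waste: true },
};

function estimateUsage(events: { type: string }[]): { total: number; waste: number } {
  let total = 0;
  let waste = 0;
  for (const e of events) {
    const est = TOKEN_ESTIMATES[e.type];
    if (!est) continue; // event types without an estimate contribute nothing
    total += est.tokens;
    if (est.waste) waste += est.tokens; // conflict tokens count as waste
  }
  return { total, waste };
}
```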

Session Replay

The Sessions tab groups events by source.session and shows what each agent did, start to finish.

Session Timeline — claude-main — 23 min

  • task.created at 14:02:31: Refactoring auth middleware to use JWT tokens
  • decision.made at 14:05:12: Using jose library instead of jsonwebtoken — smaller bundle, ESM native
  • file.locked at 14:06:44: Acquired lock on src/middleware/auth.ts
  • task.completed at 14:25:03: Auth middleware refactored — 3 files changed

You can trace exactly what happened: what the agent decided, what files it locked, and how long it took. This is invaluable for debugging agent behavior and understanding why things went wrong.
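The grouping behind the Sessions tab can be sketched as a bucket-and-sort over the event log. The event shape here (`ts` as an ISO timestamp, `source.session` as the grouping key) is an assumption for illustration:

```typescript
// Hypothetical event shape — field names are assumptions for illustration.
type SessionEvent = { type: string; ts: string; source: { session: string } };

// Bucket events by source.session, then order each timeline by timestamp.
function groupSessions(events: SessionEvent[]): Map<string, SessionEvent[]> {
  const sessions = new Map<string, SessionEvent[]>();
  for (const e of events) {
    const timeline = sessions.get(e.source.session) ?? [];
    timeline.push(e);
    sessions.set(e.source.session, timeline);
  }
  // ISO-8601 timestamps sort correctly as plain strings.
  for (const timeline of sessions.values()) {
    timeline.sort((a, b) => a.ts.localeCompare(b.ts));
  }
  return sessions;
}
```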

Bottleneck Detection

The Bottlenecks tab surfaces three kinds of problems:

File Hotspots

Which files cause the most friction? The hotspot table shows activity count vs. conflict rate:

  • src/config.ts: 67% conflicts
  • src/db/schema.ts: 33% conflicts
  • src/routes/api.ts: 8% conflicts

A file with a 67% conflict rate is a code smell. It probably needs to be split into smaller, focused modules. The analytics tell you *which* files to split before you waste more tokens on conflicts.
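The hotspot score reduces to a per-file ratio over the event log. This sketch assumes a hypothetical `file` field on events for illustration:

```typescript
// Hypothetical event shape — the `file` field is an assumption for illustration.
type FileEvent = { type: string; file: string };

// A file's conflict rate: the share of its events that are conflicts.
function conflictRate(events: FileEvent[], file: string): number {
  const touching = events.filter((e) => e.file === file);
  if (touching.length === 0) return 0;
  const conflicts = touching.filter((e) => e.type === "conflict.detected").length;
  return conflicts / touching.length;
}
```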

Lock Contention & Blocker Duration

See which resources get locked most often (and for how long), and which blockers have been stalling work the longest. Long-lived blockers are often the most expensive problems in a multi-agent workflow — they don't just waste the blocked agent's time, they cascade.

How It All Works

No new tables. No new events. No configuration. The analytics are computed entirely from events you're already publishing. Every task.created, task.completed, conflict.detected, file.locked, and decision.made event contributes to the picture.

Navigate to Analytics in your dashboard. It's already there.

Tags: analytics, observability, performance, session-replay