
AGNT Analytics

Your agents are handling messages, completing tasks, and serving users. AGNT Analytics gives you fleet-level visibility into all of it — message volumes, task completion rates, active users, assistant performance, and trend analysis over configurable time windows.

This is your operational dashboard. For investigating what happened during a specific LLM call, see AGNT Traces.

Why AGNT Analytics

You can't improve what you can't measure. Once your agents are running in production, you need answers to questions like:

  • How many messages did my agents handle this week?
  • Which assistants are most active?
  • Who are my top users?
  • Are task completion rates trending up or down?
  • Did something break overnight?

AGNT Analytics provides these answers through seven focused endpoints, all queryable by time period, assistant, and user. No third-party analytics platform required — the data is already flowing through your AGNT account.

Quick Start

Get an operational overview

```bash
curl "https://api.agnt.ai/analytics/overview?period=week" \
  -H "Authorization: Bearer $TOKEN"
```

Response:

```json
{
  "success": true,
  "data": {
    "period": "week",
    "startDate": "2026-02-22T00:00:00.000Z",
    "endDate": "2026-03-01T00:00:00.000Z",
    "activeUsers": 142,
    "events": {
      "incomingMessages": 8420,
      "outgoingMessages": 8105,
      "tasksInitiated": 312,
      "tasksCompleted": 298,
      "total": 17135
    },
    "resources": {},
    "realtime": {},
    "computedAt": "2026-03-01T12:00:00.000Z"
  }
}
```
Track rolling trends

```bash
curl "https://api.agnt.ai/analytics/rolling?window=7" \
  -H "Authorization: Bearer $TOKEN"
```

Identify your most active users

```bash
curl "https://api.agnt.ai/analytics/users/top?startDate=2026-02-01&endDate=2026-03-01&limit=10" \
  -H "Authorization: Bearer $TOKEN"
```

Core Concepts

Periods and Windows

Most analytics endpoints accept a period parameter (day, week, month) or explicit startDate/endDate ranges. These define the time window for aggregation.

Rolling window metrics (GET /analytics/rolling) let you track trends over a sliding window — default 7 days, max 90. This is the best way to spot gradual changes that per-period snapshots might miss.
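To make the sliding-window idea concrete, here is a minimal local sketch of the same aggregation the rolling endpoint performs server-side. The function and data here are illustrative, not part of the API:

```python
# Sketch: compute a sliding-window total from daily event counts,
# mirroring the aggregation behind GET /analytics/rolling.
# The window is capped at 90 days, matching the documented maximum.

def rolling_totals(daily_counts, window=7):
    """Return the windowed sum ending at each day."""
    window = min(window, 90)
    totals = []
    for i in range(len(daily_counts)):
        start = max(0, i - window + 1)
        totals.append(sum(daily_counts[start:i + 1]))
    return totals

daily = [100, 120, 90, 200, 150, 130, 110, 300]
print(rolling_totals(daily, window=7))
```

Plotting these windowed totals day over day surfaces the gradual drift that a single `period=week` snapshot would hide.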

Event Types

The system tracks four atomic event types that roll up into all higher-level metrics:

| Event | What it measures |
|---|---|
| incomingMessages | Messages received from users |
| outgoingMessages | Messages sent by agents |
| tasksInitiated | Tasks created and started |
| tasksCompleted | Tasks that reached completion |
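The most useful derived metric from these counters is the task completion rate. A minimal sketch, using the `events` field names from the overview response above:

```python
# Sketch: derive a task completion rate from the "events" object
# returned by GET /analytics/overview.

def completion_rate(events: dict) -> float:
    """tasksCompleted / tasksInitiated, guarding against zero tasks."""
    initiated = events.get("tasksInitiated", 0)
    if initiated == 0:
        return 0.0
    return events.get("tasksCompleted", 0) / initiated

events = {
    "incomingMessages": 8420,
    "outgoingMessages": 8105,
    "tasksInitiated": 312,
    "tasksCompleted": 298,
}
print(f"{completion_rate(events):.1%}")  # 298/312 -> 95.5%
```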

Assistant-Scoped Metrics

Filter any endpoint by the assistant parameter to isolate metrics for a specific assistant. This is how you compare performance across different agent personalities — which assistant handles the most messages, which has the highest task completion rate, which is trending up or down.
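One way to run that comparison is to fetch per-assistant metrics and rank them client-side. The record shape below is an assumption for illustration; adapt the field names to the actual `/analytics/assistants` response in your account:

```python
# Sketch: rank assistants by task completion rate.
# The per-assistant record shape here is assumed, not guaranteed by the API.

def rank_by_completion(assistants):
    def rate(a):
        initiated = a.get("tasksInitiated", 0)
        return a.get("tasksCompleted", 0) / initiated if initiated else 0.0
    return sorted(assistants, key=rate, reverse=True)

assistants = [
    {"assistant": "support-bot", "tasksInitiated": 200, "tasksCompleted": 180},
    {"assistant": "sales-bot", "tasksInitiated": 100, "tasksCompleted": 97},
]
for a in rank_by_completion(assistants):
    print(a["assistant"])
```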

Snapshots for Trend Analysis

GET /analytics/snapshots returns periodic metric snapshots over a date range. These are pre-computed summaries you can export to your BI tools for longer-term trend analysis and executive reporting.

API Reference

Base URL: https://api.agnt.ai

All endpoints require Management auth (Bearer token).

| Method | Path | Description |
|---|---|---|
| GET | /analytics/overview | Overview metrics |
| GET | /analytics/metrics | Detailed metrics by period |
| GET | /analytics/rolling | Rolling window metrics |
| GET | /analytics/events | Event log |
| GET | /analytics/assistants | Assistant activity metrics |
| GET | /analytics/users/top | Top users by activity |
| GET | /analytics/snapshots | Periodic metric snapshots |

GET /analytics/overview

Returns a high-level summary for the specified period.

| Parameter | Required | Type | Description |
|---|---|---|---|
| period | No | string | day, week, or month (default: week) |

Response fields: period, startDate, endDate, activeUsers, events (with incomingMessages, outgoingMessages, tasksInitiated, tasksCompleted, total), resources, realtime, computedAt.

GET /analytics/metrics

Returns detailed metrics for a specific date range.

| Parameter | Required | Type | Description |
|---|---|---|---|
| period | Yes | string | day, week, or month |
| startDate | Yes | string | ISO 8601 start date |
| endDate | Yes | string | ISO 8601 end date |
| assistant | No | string | Filter by assistant ID |

Response fields: period, periodKey, startDate, endDate, activeUsers, incomingMessages, outgoingMessages, tasksInitiated, tasksCompleted.

GET /analytics/rolling

Returns metrics over a sliding window.

| Parameter | Required | Type | Description |
|---|---|---|---|
| window | No | number | Window size in days (default: 7, max: 90) |
| assistant | No | string | Filter by assistant ID |

GET /analytics/events

Returns the raw event log with filtering and pagination.

| Parameter | Required | Type | Description |
|---|---|---|---|
| startDate | Yes | string | ISO 8601 start date |
| endDate | Yes | string | ISO 8601 end date |
| eventType | No | string | Filter by event type |
| assistant | No | string | Filter by assistant ID |
| user | No | string | Filter by user ID |
| limit | No | number | Results per page |
| skip | No | number | Number of results to skip |
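The `limit`/`skip` pair supports a standard offset-pagination loop. A minimal sketch, with the HTTP call stubbed out so the example is self-contained (`fetch_page` stands in for a client that calls `/analytics/events` with those two parameters):

```python
# Sketch: page through GET /analytics/events using limit/skip.
# `fetch_page` is a stand-in for your HTTP client, not an API function.

def paginate(fetch_page, limit=100):
    skip = 0
    while True:
        page = fetch_page(limit=limit, skip=skip)
        if not page:
            break
        yield from page
        skip += limit

# Stubbed backend holding 250 fake events for the demo.
EVENTS = [{"id": i} for i in range(250)]

def fetch_page(limit, skip):
    return EVENTS[skip:skip + limit]

print(sum(1 for _ in paginate(fetch_page, limit=100)))  # 250
```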

GET /analytics/assistants

Returns activity metrics per assistant.

| Parameter | Required | Type | Description |
|---|---|---|---|
| startDate | Yes | string | ISO 8601 start date |
| endDate | Yes | string | ISO 8601 end date |

GET /analytics/users/top

Returns most active users.

| Parameter | Required | Type | Description |
|---|---|---|---|
| startDate | Yes | string | ISO 8601 start date |
| endDate | Yes | string | ISO 8601 end date |
| assistant | No | string | Filter by assistant ID |
| limit | No | number | Number of users to return |

GET /analytics/snapshots

Returns periodic metric snapshots for trend analysis.

| Parameter | Required | Type | Description |
|---|---|---|---|
| startDate | Yes | string | ISO 8601 start date |
| endDate | Yes | string | ISO 8601 end date |

For Coding Agents

AGNT Analytics is your monitoring layer. Use it to detect problems early and track the impact of changes.

Pattern: Automated regression detection

  1. Poll GET /analytics/rolling on a schedule to track completion rates and volume trends.
  2. When metrics degrade, use GET /analytics/events to narrow down the time window.
  3. Correlate with traces — switch to AGNT Traces to find specific failing LLM calls within the affected window.
  4. Diff the trace against the current prompt to see if a prompt change caused the regression.
  5. Open a playground session, iterate, and deploy the fix.

Pattern: Cost optimization

  1. Use GET /analytics/metrics to identify high-volume assistants and time periods — message volume is your first proxy for cost.
  2. Drill into traces for expensive calls via AGNT Traces.
  3. Open playground sessions and test with cheaper models or shorter prompts.
  4. Compare token counts between the original trace and your playground run.
  5. Save and publish when you find a configuration that maintains quality at lower cost.

What to monitor

  • Rolling completion rates — track tasksCompleted / tasksInitiated over time. A downward trend is your earliest signal of a quality regression.
  • Message volume spikes — sudden increases in incomingMessages may indicate a new integration, a marketing push, or a retry loop.
  • Assistant activity distribution — use GET /analytics/assistants to ensure load is balanced and no assistant is unexpectedly idle.

For Product Teams

  • Daily standups. Pull GET /analytics/overview?period=day for yesterday's numbers — messages handled, tasks completed, active users. One API call, one slide.
  • Weekly trend reviews. Use GET /analytics/rolling?window=7 to see how metrics are trending. Compare week-over-week to spot gradual shifts.
  • Adoption tracking. GET /analytics/users/top shows who's actually using your agents. Low usage from a segment you expected to adopt? That's a product signal, not a support ticket.
  • Assistant comparison. Filter any endpoint by assistant to compare agent performance. Which assistant handles the most volume? Which has the best completion rate?
  • Executive reporting. GET /analytics/snapshots provides periodic metric snapshots you can export to your BI tools for longer-term trend analysis.
  • Incident investigation. When something goes wrong, start with Analytics to identify the scope (how many users, which assistants, what time window), then drill into AGNT Traces for individual execution details.