Changelog

Tool Call Analytics

Tool calls are now a first-class object in Grepture. Track call volume, p50/p95 latency, and error rates per tool across your entire traffic history.

Tool calls used to be buried inside raw response JSON. Now every tool your agents invoke is extracted into a dedicated table and ready to query.

Per-tool stats — Call volume, p50/p95 latency, error rate, and orphan count (tool calls that never got a result) for every tool your agents use. Sortable, filterable, rolled up across any time range.
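To make those metrics concrete, here is a stdlib-only sketch of how p50/p95 latency, error rate, and orphan count could be computed from a table of tool-call records. The record fields (`tool`, `latency_ms`, `error`, `has_result`) are illustrative, not Grepture's actual schema.

```python
from statistics import quantiles

# Hypothetical tool-call records (field names are illustrative).
calls = [
    {"tool": "search", "latency_ms": 120, "error": False, "has_result": True},
    {"tool": "search", "latency_ms": 340, "error": True,  "has_result": True},
    {"tool": "search", "latency_ms": 95,  "error": False, "has_result": False},
    {"tool": "fetch",  "latency_ms": 800, "error": False, "has_result": True},
]

def tool_stats(records, tool):
    rows = [r for r in records if r["tool"] == tool]
    lat = sorted(r["latency_ms"] for r in rows)
    # quantiles(n=100) yields 99 cut points: pct[49] is the 50th
    # percentile, pct[94] the 95th. Duplicate a lone sample so the
    # inclusive method has the two points it needs.
    pct = quantiles(lat if len(lat) > 1 else lat * 2, n=100, method="inclusive")
    return {
        "calls": len(rows),
        "p50_ms": pct[49],
        "p95_ms": pct[94],
        "error_rate": sum(r["error"] for r in rows) / len(rows),
        "orphans": sum(not r["has_result"] for r in rows),  # call with no result
    }
```

An orphan here is simply a call row with no matching result, which is how a tool call that never got answered shows up in the data.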

Volume over time — A stacked chart of tool call volume broken down by your most-used tools. Spot a runaway retry loop, or a tool getting hit far more often than you expected, at a glance.
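The bucketing behind a chart like this is straightforward; a minimal sketch, assuming timestamped call records (field names hypothetical) grouped into per-tool hourly counts:

```python
from collections import Counter
from datetime import datetime, timezone

# Hypothetical timestamped calls; field names are illustrative.
calls = [
    {"tool": "search", "ts": datetime(2024, 6, 1, 9, 15, tzinfo=timezone.utc)},
    {"tool": "search", "ts": datetime(2024, 6, 1, 9, 40, tzinfo=timezone.utc)},
    {"tool": "fetch",  "ts": datetime(2024, 6, 1, 10, 5, tzinfo=timezone.utc)},
]

# One counter keyed by (hour bucket, tool); each key becomes one
# segment of one bar in a stacked chart.
volume = Counter(
    (c["ts"].replace(minute=0, second=0, microsecond=0), c["tool"])
    for c in calls
)
```

A retry loop stands out immediately in this shape: one tool's segment balloons within a single bucket while the others stay flat.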

Historical coverage — We backfilled your existing traffic, so the analytics cover your full history from day one. No waiting for new data to accumulate.

Every provider, every shape — OpenAI Chat Completions, the OpenAI Responses API, Anthropic Messages, and streaming (SSE) variants of each. One view, all providers.
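Each provider nests tool calls differently in its response JSON, so extraction means normalizing them into one shape. A simplified sketch of the idea (not Grepture's actual extractor) for the two non-streaming cases: OpenAI Chat Completions puts tool calls under `message.tool_calls` with `arguments` as a JSON string, while Anthropic Messages emits `tool_use` content blocks with `input` already parsed.

```python
import json

def extract_tool_calls(provider, response):
    """Normalize raw provider JSON into one tool-call shape (sketch)."""
    calls = []
    if provider == "openai_chat":
        # Chat Completions: choices[].message.tool_calls,
        # function.arguments arrives as a JSON-encoded string.
        for choice in response.get("choices", []):
            for tc in (choice.get("message") or {}).get("tool_calls") or []:
                calls.append({
                    "id": tc["id"],
                    "name": tc["function"]["name"],
                    "arguments": json.loads(tc["function"]["arguments"]),
                })
    elif provider == "anthropic":
        # Messages API: content blocks of type "tool_use",
        # input is already a JSON object.
        for block in response.get("content", []):
            if block.get("type") == "tool_use":
                calls.append({
                    "id": block["id"],
                    "name": block["name"],
                    "arguments": block["input"],
                })
    return calls
```

Streaming variants add one more wrinkle: arguments arrive as deltas across SSE events and have to be concatenated before parsing, but they reduce to the same normalized shape.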

Find it under Analytics → Tools.

What's next

This is the foundation. With tool calls in a real table, we can reason about the whole agent loop — not just individual LLM calls. On the roadmap:

  • Agent runs as a first-class unit — group LLM calls and tool calls into logical runs with step counts, terminal state, and cost rollups.
  • Tool-layer safety checks — detect credential-shaped arguments heading to external APIs, and flag prompt-injection-style content inside tool results before your agent acts on it.
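To give a flavor of the credential check, here is a minimal sketch of flagging credential-shaped values in a tool call's arguments. The patterns below cover only a few well-known token formats and are illustrative; a real detector would need far broader coverage.

```python
import re

# A few well-known credential formats (illustrative, not exhaustive).
CREDENTIAL_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                 # OpenAI-style secret key
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                 # GitHub personal access token
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key
]

def flag_credentials(arguments: dict) -> list:
    """Return the argument keys whose string values look like credentials."""
    return [
        key
        for key, value in arguments.items()
        if isinstance(value, str)
        and any(p.search(value) for p in CREDENTIAL_PATTERNS)
    ]
```

Running a check like this on arguments before they leave for an external API is the point of doing safety at the tool layer rather than the prompt layer: the secret is caught in structured data, in flight, not after the fact in a transcript.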

More soon.