[COST TRACKING]

Know what every AI call costs.

Per-request cost attribution across every provider. See exactly where your tokens go, which models burn the most, and where to optimize.

What it does

Grepture tracks token usage and calculates cost for every request flowing through the proxy. You get per-request, per-model, and per-endpoint cost breakdowns — across OpenAI, Anthropic, Google AI, and every other supported provider.

How it works

The proxy counts input and output tokens per request and maps them to each provider's pricing. Costs are attributed to the model and endpoint used. View spend in the dashboard, filter by time range, model, or endpoint, and export for billing.
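The token-to-cost mapping described above can be sketched as follows. The pricing table and rates here are illustrative assumptions, not Grepture's actual price sheet — real providers update rates, and input and output tokens are priced separately.

```python
# Sketch of per-request cost attribution.
# PRICING rates are illustrative placeholders (USD per 1M tokens,
# as (input_rate, output_rate)); a real proxy loads current
# provider price sheets.
PRICING = {
    "gpt-4o": (2.50, 10.00),
    "claude-3.5-sonnet": (3.00, 15.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Map a request's token counts onto the model's per-token pricing."""
    in_rate, out_rate = PRICING[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A request with 1,200 input and 300 output tokens on gpt-4o:
cost = request_cost("gpt-4o", 1_200, 300)
print(f"${cost:.4f}")  # $0.0060
```

Because input and output rates differ (output tokens are typically several times more expensive), attributing them separately is what makes the per-model breakdowns accurate.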

[Dashboard: app.grepture.com/dashboard]
Today's spend: $12.47 · Requests today: 1,847 · Avg cost/req: $0.0067 · Top model: gpt-4o
Model               Requests   Tokens   Cost    % of total
gpt-4o              892        1.2M     $8.94   71.7%
claude-3.5-sonnet   412        340K     $2.55   20.4%
gpt-4o-mini         389        890K     $0.54   4.3%
gemini-1.5-pro      154        210K     $0.44   3.5%

Key features

  • Per-request token count and cost attribution
  • Cost breakdown by model, provider, and endpoint
  • Spend trends and usage analytics over time
  • Support for all major provider pricing models
  • Filter and search by cost, tokens, or model
  • Export data for internal billing and chargeback
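The per-model cost breakdown shown in the dashboard is conceptually a group-by over attributed requests. A minimal sketch, using hypothetical request records rather than Grepture's actual data model:

```python
from collections import defaultdict

def breakdown(requests):
    """Aggregate (model, cost) records into total cost and % of spend
    per model -- the shape of the dashboard's per-model table."""
    totals = defaultdict(float)
    for model, cost in requests:
        totals[model] += cost
    grand_total = sum(totals.values())
    return {
        model: (cost, 100.0 * cost / grand_total)
        for model, cost in totals.items()
    }

# Hypothetical sample: three requests across two models.
result = breakdown([
    ("gpt-4o", 0.006),
    ("gpt-4o", 0.004),
    ("gpt-4o-mini", 0.001),
])
for model, (cost, pct) in result.items():
    print(f"{model}: ${cost:.3f} ({pct:.1f}%)")
```

The same aggregation, keyed by provider or endpoint instead of model, yields the other breakdowns listed above.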

Start observing your AI traffic in 5 minutes

Drop-in SDK. See your first request in under a minute.

Free for up to 1,000 requests/month · No credit card required

Get Started Free