Per-request cost attribution across every provider. See exactly where your tokens go, which models burn the most, and where to optimize.
Grepture tracks token usage and calculates cost for every request flowing through the proxy. You get per-request, per-model, and per-endpoint cost breakdowns — across OpenAI, Anthropic, Google AI, and every other supported provider.
The proxy counts input and output tokens per request and maps them to each provider's pricing. Costs are attributed to the model and endpoint used. View spend in the dashboard, filter by time range, model, or endpoint, and export for billing.
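The token-to-cost mapping described above can be sketched as follows. The pricing table and function name here are illustrative, not Grepture's actual API; real per-million-token rates come from each provider's published price list and change over time.

```python
# Hypothetical per-million-token rates (USD); real values come from
# each provider's pricing page and are updated as providers change them.
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "claude-3.5-sonnet": {"input": 3.00, "output": 15.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Map a request's token counts to dollars using per-million-token rates."""
    rates = PRICING[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# A gpt-4o request with 1,200 input and 300 output tokens:
cost = request_cost("gpt-4o", 1200, 300)  # 0.006 USD
```

Each request's cost is then tagged with its model and endpoint, which is what makes the per-model and per-endpoint rollups possible.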
| Model | Requests | Tokens | Cost | % of total |
|---|---|---|---|---|
| gpt-4o | 892 | 1.2M | $8.94 | 71.7% |
| claude-3.5-sonnet | 412 | 340K | $2.55 | 20.4% |
| gpt-4o-mini | 389 | 890K | $0.54 | 4.3% |
| gemini-1.5-pro | 154 | 210K | $0.44 | 3.5% |
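A breakdown like the table above falls out of a simple aggregation over per-request cost records. This is a minimal sketch assuming each exported record carries a model name and a dollar cost; the record shape and values are illustrative.

```python
from collections import defaultdict

# Illustrative per-request records: (model, cost in USD).
records = [
    ("gpt-4o", 0.006),
    ("gpt-4o", 0.004),
    ("gpt-4o-mini", 0.0005),
]

# Sum cost per model, then express each model's share of total spend.
totals: dict[str, float] = defaultdict(float)
for model, cost in records:
    totals[model] += cost

grand_total = sum(totals.values())
percent_of_total = {
    model: round(100 * cost / grand_total, 1)
    for model, cost in totals.items()
}
```

Filtering the records by time range or endpoint before aggregating yields the other dashboard views.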
Drop-in SDK. See your first request in under a minute.
Free for up to 1,000 requests/month · No credit card required
Get Started Free