Inspect prompts, trace multi-turn conversations, and catch issues before they hit production. Full request/response logging with zero code changes.
Grepture logs every LLM request and response flowing through your app. You get a real-time traffic log showing prompts, completions, token counts, latency, and status codes — across every provider and model.
Point your SDK at Grepture (one config line). Every request is captured, indexed, and searchable in the dashboard. Filter by model, endpoint, status, or time range. Trace full conversation threads across multiple requests.
| Time | Method | URL | Status | Cost | Duration |
|---|---|---|---|---|---|
| 2s ago | POST | /v1/chat/completions | 200 | $0.0342 | 1.2s |
| 5s ago | POST | /v1/embeddings | 200 | $0.0001 | 84ms |
| 12s ago | POST | /v1/chat/completions | 200 | $0.0189 | 2.4s |
| 18s ago | POST | /v1/messages | 429 | $0.0000 | 12ms |
| 31s ago | POST | /v1/chat/completions | 200 | $0.0024 | 890ms |
| 45s ago | POST | /v1/chat/completions | 200 | $0.0510 | 3.1s |
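As a concrete sketch of the one-config-line setup described above: the snippet below builds the options you would pass to the official OpenAI Python SDK to route traffic through a logging proxy. `base_url` and `default_headers` are real options on the OpenAI client, but the proxy URL (`https://proxy.grepture.example/v1`) and the `X-Grepture-Key` header name are illustrative assumptions, not documented Grepture values.

```python
# Sketch: routing OpenAI SDK traffic through a Grepture-style proxy.
# Before: client = OpenAI(api_key=...)        # talks to api.openai.com directly
# After:  client = OpenAI(**proxy_config(...)) # same code path, now logged

def proxy_config(api_key: str, grepture_key: str) -> dict:
    """Build kwargs for OpenAI(...). URL and header name are hypothetical."""
    return {
        "api_key": api_key,                                    # your provider key, unchanged
        "base_url": "https://proxy.grepture.example/v1",       # assumed proxy endpoint
        "default_headers": {"X-Grepture-Key": grepture_key},   # assumed auth header
    }

config = proxy_config("sk-...", "grp-...")
print(config["base_url"])
```

Because only the base URL changes, every existing call site keeps working; the proxy forwards requests upstream and records the request/response pair along the way.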
Drop-in setup with your existing SDK. See your first request in under a minute.
Free for up to 1,000 requests/month · No credit card required
Get Started Free