[PROMPT DEBUGGING]

Debug prompts like you debug code.

Inspect the exact prompt sent, the exact response received, and everything in between. Trace multi-turn conversations. Spot regressions before users do.

What it does

When an AI feature breaks, you need to see exactly what happened. Grepture captures the full request/response pair for every LLM call — the system prompt, user input, model response, token counts, and latency. Search, filter, and trace conversation threads to find where things went wrong.
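For a concrete picture of what gets captured, the sketch below shows one hypothetical record; the field names are illustrative assumptions, not Grepture's actual schema.

```python
# Hypothetical shape of a single captured LLM call (illustrative only;
# field names are assumptions, not Grepture's actual export format).
captured_request = {
    "id": "req_8f3a...",                      # stable ID for search and linking
    "endpoint": "POST /v1/chat/completions",  # upstream route
    "model": "gpt-4o",
    "status": 200,
    "latency_ms": 247,
    "messages": [                             # the full prompt, exactly as sent
        {"role": "system", "content": "You are a customer support agent for Acme Corp. ..."},
        {"role": "user", "content": "What's the rate limit on the free plan?"},
    ],
    "response": "The free plan includes 1,000 requests per day ...",
    "usage": {"input_tokens": 847, "completion_tokens": 312, "total_tokens": 1159},
    "cost_usd": 0.0089,
}
```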

How it works

Every request through the proxy is logged with its full context. Use the dashboard to search by content, filter by status or model, and trace multi-turn conversations. Compare different prompt versions side-by-side to understand what changed.
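As one possible setup, here is a minimal sketch of pointing the OpenAI Python SDK at a logging proxy. The base URL and header name are assumptions for illustration, not Grepture's documented configuration.

```python
# Minimal sketch: route OpenAI traffic through a logging proxy.
# The base_url and header below are assumed values, not Grepture's
# documented settings.
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",                               # your normal OpenAI key
    base_url="https://proxy.grepture.com/v1",       # assumed proxy endpoint
    default_headers={"X-Grepture-Key": "grp_..."},  # assumed project key header
)

# The call itself is unchanged; the proxy forwards it upstream and records
# the full prompt, response, token counts, and latency for the dashboard.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a customer support agent for Acme Corp."},
        {"role": "user", "content": "What's the rate limit on the free plan?"},
    ],
)
print(response.choices[0].message.content)
```

Because only the client configuration changes, existing application code keeps working while every call is captured.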

app.grepture.com/traffic-log/req_8f3a...

POST /v1/chat/completions · 200 · 247 ms · gpt-4o
Input: 847 tok · Completion: 312 tok · Total: 1,159 tok · $0.0089

Messages

System · 31 tok

You are a customer support agent for Acme Corp. Answer questions about our API pricing and features. Be concise and accurate.

User · 10 tok

What's the rate limit on the free plan?

Assistant · 42 tok

The free plan includes 1,000 requests per day with a rate limit of 10 requests per second. If you need higher limits, the Pro plan offers 50,000 requests/day with 100 req/s.

Key features

  • Full request/response capture with prompt inspection
  • Multi-turn conversation tracing
  • Search by prompt content, response text, or metadata
  • Filter by model, status code, latency, or time range
  • Side-by-side comparison of prompt variations
  • Latency and token usage per request

Start observing your AI traffic in 5 minutes

Drop-in SDK. See your first request in under a minute.

Free for up to 1,000 requests/month · No credit card required

Get Started Free