SDK Reference
Grepture class, clientOptions() for any OpenAI-compatible SDK, grepture.fetch() drop-in, error classes, and TypeScript types.
Overview
@grepture/sdk provides a Grepture client with two integration methods:
- `grepture.clientOptions()` — configure any OpenAI-compatible SDK to route through Grepture
- `grepture.fetch()` — a drop-in replacement for `fetch` that protects any HTTP call
Both methods work in either operating mode — proxy or trace.
Operating modes
The SDK supports two modes, controlled by the mode option:
| Mode | Traffic flow | Use case |
|---|---|---|
"proxy" (default) | App → Grepture proxy → Provider | PII redaction, blocking, prompt management, observability |
"trace" | App → Provider (direct) | Observability and cost tracking with zero latency overhead |
Proxy mode routes traffic through the Grepture proxy, where detection rules run before requests reach the provider. Use this when you need to inspect or modify requests in flight.
Trace mode sends requests directly to the provider. The SDK captures metadata — model, tokens, latency, cost, status — after the response and batches it asynchronously to Grepture. The dashboard shows the same traffic log, cost tracking, and conversation tracing. Use this for latency-sensitive workloads or when you only need visibility.
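Conceptually, trace mode boils down to building a small metadata record per request after the response arrives. The sketch below illustrates that idea under stated assumptions: the field names and the `buildTraceRecord` helper are hypothetical, not the SDK's wire format, and cost computation is omitted since it depends on provider pricing.

```ts
// Illustrative shape of the metadata trace mode captures per request.
// Field names here are hypothetical; the SDK's actual wire format may differ.
interface TraceRecord {
  model: string;
  inputTokens: number;
  outputTokens: number;
  latencyMs: number;
  status: number;
}

// Build a record from an OpenAI-style response body plus timing info.
// (Cost is omitted here; deriving it requires per-model pricing tables.)
function buildTraceRecord(
  body: { model: string; usage: { prompt_tokens: number; completion_tokens: number } },
  status: number,
  startedAt: number,
  endedAt: number,
): TraceRecord {
  return {
    model: body.model,
    inputTokens: body.usage.prompt_tokens,
    outputTokens: body.usage.completion_tokens,
    latencyMs: endedAt - startedAt,
    status,
  };
}
```

Because a record like this is assembled after the response and sent in the background, the request path itself adds no latency.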
Creating a client
```ts
import { Grepture } from "@grepture/sdk";

// Proxy mode (default) — requests route through Grepture
const grepture = new Grepture({
  apiKey: process.env.GREPTURE_API_KEY!,
  proxyUrl: "https://proxy.grepture.com",
});
```

```ts
// Trace mode — requests go directly to the provider
const grepture = new Grepture({
  apiKey: process.env.GREPTURE_API_KEY!,
  proxyUrl: "https://proxy.grepture.com",
  mode: "trace",
});
```
| Option | Type | Description |
|---|---|---|
| `apiKey` | `string` | Your Grepture API key (format: `gpt_...`) |
| `proxyUrl` | `string` | Grepture proxy URL |
| `mode` | `"proxy" \| "trace"` | Operating mode. `"proxy"` routes through the proxy (default). `"trace"` sends requests directly to the provider and captures metadata async. |
| `traceId` | `string?` | Default trace ID for conversation tracing (optional) |
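As a sketch, the options above can be assembled from environment variables with `mode` defaulting to `"proxy"`. The helper and the env-var names other than `GREPTURE_API_KEY` are assumptions for illustration, not part of the SDK:

```ts
// Hypothetical helper assembling constructor options from env vars.
// GREPTURE_PROXY_URL and GREPTURE_MODE are assumed names, not official ones.
type Mode = "proxy" | "trace";

interface GreptureConfigSketch {
  apiKey: string;
  proxyUrl: string;
  mode: Mode;
  traceId?: string;
}

function configFromEnv(env: Record<string, string | undefined>): GreptureConfigSketch {
  const apiKey = env.GREPTURE_API_KEY;
  if (!apiKey) throw new Error("GREPTURE_API_KEY is required");
  // Any value other than "trace" falls back to the default proxy mode.
  const mode: Mode = env.GREPTURE_MODE === "trace" ? "trace" : "proxy";
  return {
    apiKey,
    proxyUrl: env.GREPTURE_PROXY_URL ?? "https://proxy.grepture.com",
    mode,
  };
}
```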
clientOptions()
Returns configuration to spread into an OpenAI-compatible SDK constructor. Works with any provider that uses the same client interface (OpenAI, Azure OpenAI, Groq, Together, etc.) — just pass the target provider's API key and base URL.
```ts
import OpenAI from "openai";
import { Grepture } from "@grepture/sdk";

const grepture = new Grepture({
  apiKey: process.env.GREPTURE_API_KEY!,
  proxyUrl: "https://proxy.grepture.com",
});

const openai = new OpenAI({
  ...grepture.clientOptions({
    apiKey: process.env.OPENAI_API_KEY!,
    baseURL: "https://api.openai.com/v1",
  }),
});
```
Input options
| Option | Type | Description |
|---|---|---|
| `apiKey` | `string` | The target provider's API key (e.g., your OpenAI key) |
| `baseURL` | `string` | The target provider's base URL |
What it returns
clientOptions() returns a ClientOptionsOutput with:
| Field | Type | Description |
|---|---|---|
| `baseURL` | `string` | In proxy mode: the Grepture proxy URL. In trace mode: the provider's original URL. |
| `apiKey` | `string` | The target API key (passed through) |
| `fetch` | `typeof fetch` | A wrapped `fetch` that handles auth headers and, in trace mode, captures metadata asynchronously |
In proxy mode, the wrapped fetch moves the target SDK's Authorization header to X-Grepture-Auth-Forward and sets Grepture's own auth header, so the proxy can authenticate both sides. In trace mode, the wrapped fetch sends requests directly to the provider and captures trace data after the response.
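The proxy-mode header rewrite can be pictured as a pure transformation over the outgoing headers. This is an illustrative sketch of the behavior described above, not the SDK's actual implementation (the wrapped `fetch` does this for you):

```ts
type HeaderMap = Record<string, string>;

// Sketch of the proxy-mode rewrite: the target SDK's Authorization header
// moves to X-Grepture-Auth-Forward, and Grepture's own key takes its place
// so the proxy can authenticate both sides.
function rewriteAuthHeaders(headers: HeaderMap, greptureApiKey: string): HeaderMap {
  const out: HeaderMap = { ...headers };
  if (out["Authorization"] !== undefined) {
    out["X-Grepture-Auth-Forward"] = out["Authorization"];
  }
  out["Authorization"] = `Bearer ${greptureApiKey}`;
  return out;
}
```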
grepture.fetch()
A drop-in replacement for fetch that routes any HTTP call through the Grepture proxy. Use it for webhooks, analytics, third-party APIs, or anything outside an SDK.
```ts
const response = await grepture.fetch("https://api.example.com/data", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ user: "jane@example.com" }),
});

const data = await response.json();
```
grepture.fetch() accepts the same arguments as the standard fetch API. It returns a GreptureResponse — a wrapper around the standard Response with extra properties:
| Property | Type | Description |
|---|---|---|
| `requestId` | `string` | Unique ID for this request (from `X-Request-Id` header) |
| `rulesApplied` | `string[]` | IDs of detection rules that matched (from `X-Grepture-Rules-Applied` header) |
| `aiSampling` | `{ used: number; limit: number } \| null` | AI sampling usage for free-tier users (from `X-Grepture-AI-Sampling` header). `null` for paid plans or when no AI actions ran. |
All standard Response methods (json(), text(), blob(), arrayBuffer(), formData(), clone()) and properties (status, ok, headers, body) are available.
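The extra properties can be understood as a parse of the Grepture response headers listed above. A minimal sketch of that mapping, assuming a comma-separated rules list and a `"used/limit"` sampling format (both wire-format guesses; the real parsing happens inside the SDK):

```ts
interface ResponseMetaSketch {
  requestId: string;
  rulesApplied: string[];
  aiSampling: { used: number; limit: number } | null;
}

// Parse the documented Grepture headers into the GreptureResponse extras.
// The comma-separated and "used/limit" encodings are assumptions made for
// this illustration.
function parseMeta(headers: Record<string, string>): ResponseMetaSketch {
  const sampling = headers["X-Grepture-AI-Sampling"];
  let aiSampling: { used: number; limit: number } | null = null;
  if (sampling) {
    const [used, limit] = sampling.split("/").map(Number);
    aiSampling = { used, limit };
  }
  return {
    requestId: headers["X-Request-Id"] ?? "",
    rulesApplied:
      headers["X-Grepture-Rules-Applied"]?.split(",").filter(Boolean) ?? [],
    aiSampling,
  };
}
```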
Conversation tracing
Group related requests into a single trace to debug multi-turn conversations and agent loops. All requests with the same trace ID appear together in the Traffic Log's Traces tab, showing cost, tokens, and a step-by-step timeline.
The trace ID is stripped before forwarding — upstream providers never see it.
Set a trace ID globally
Pass traceId in the constructor, or call setTraceId() at any point. Every request made through grepture.fetch() or clientOptions() will include it automatically.
```ts
const grepture = new Grepture({
  apiKey: process.env.GREPTURE_API_KEY!,
  proxyUrl: "https://proxy.grepture.com",
  traceId: `agent-${crypto.randomUUID().slice(0, 12)}`,
});

// All requests are grouped under this trace
const openai = new OpenAI({
  ...grepture.clientOptions({
    apiKey: process.env.OPENAI_API_KEY!,
    baseURL: "https://api.openai.com/v1",
  }),
});

await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});
```
Change trace mid-session
Start a new trace when the conversation changes — for example, a new user session or a separate agent run.
```ts
// New conversation — new trace
grepture.setTraceId(`conv-${crypto.randomUUID().slice(0, 12)}`);

// Stop tracing
grepture.setTraceId(undefined);
```
Per-request override
Override the default trace ID for a single request using grepture.fetch():
```ts
const res = await grepture.fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  body: JSON.stringify({ model: "gpt-4o", messages }),
  traceId: "special-run-abc",
});
```
Labels
Attach a label to individual requests within a trace to identify what each step does — for example, "extract-facts", "tool-call", or "generate-report". Labels appear as badges in the trace timeline.
Global label — applies to all subsequent requests:
grepture.setLabel("extract-facts");
await openai.chat.completions.create({ ... });
grepture.setLabel("draft-response");
await openai.chat.completions.create({ ... });
grepture.setLabel(undefined); // clear
Per-request label — overrides the global default for a single grepture.fetch() call:
```ts
const res = await grepture.fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
  body: JSON.stringify({ model: "gpt-4o", messages }),
  label: "summarize",
});
```
Metadata
Attach arbitrary key-value tags to requests for filtering and debugging — for example, user IDs, environments, or feature flags. Metadata appears in the request detail panel in the dashboard.
Global metadata — applies to all subsequent requests:
```ts
grepture.setMetadata({ userId: "u_123", environment: "production" });
```
Per-request metadata — merges with global metadata (per-request values win on conflicts):
```ts
const res = await grepture.fetch(url, {
  body: JSON.stringify({ model: "gpt-4o", messages }),
  metadata: { feature: "chat", attempt: "1" },
});
```
Clear global metadata:
```ts
grepture.setMetadata(undefined);
```
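The merge rule is plain object-spread semantics: global metadata is applied first, then per-request metadata, so per-request keys overwrite global keys of the same name. A one-function sketch of that behavior (illustrative, not the SDK source):

```ts
// Sketch of the documented merge: global metadata first, then per-request
// values, so per-request keys win on conflicts.
function mergeMetadata(
  global: Record<string, string> | undefined,
  perRequest: Record<string, string> | undefined,
): Record<string, string> {
  return { ...(global ?? {}), ...(perRequest ?? {}) };
}
```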
Custom log events
Log non-AI events into the current trace — cache hits, tool executions, validation results, or any application-level event. Log events appear in the trace timeline alongside AI calls with an expandable detail view.
grepture.setTraceId("agent-run-42");
// ... AI call ...
grepture.log("tool-executed", { tool: "web-search", query: "latest news" });
// ... next AI call ...
grepture.log("cache-hit", { key: "embedding-abc" });
log() is fire-and-forget — it buffers entries and sends them asynchronously, like trace mode. Call flush() before process exit in short-lived environments.
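The buffering behavior can be sketched as a queue drained by `flush()`. This illustrative version takes a `send` callback in place of the SDK's real transport, which is an assumption made so the sketch stands alone:

```ts
// Minimal sketch of fire-and-forget buffering: log() only appends,
// flush() drains the buffer through a transport callback.
type LogEntry = { event: string; data?: object };

class LogBufferSketch {
  private buffer: LogEntry[] = [];

  constructor(private send: (entries: LogEntry[]) => Promise<void>) {}

  log(event: string, data?: object): void {
    this.buffer.push({ event, data }); // no network call on the hot path
  }

  async flush(): Promise<void> {
    if (this.buffer.length === 0) return;
    const pending = this.buffer;
    this.buffer = [];
    await this.send(pending);
  }
}
```

In a short-lived environment you would `await` the flush at the end of the invocation so buffered entries are not lost when the runtime suspends or exits.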
Trace methods
| Method | Description |
|---|---|
| `setTraceId(id?)` | Set or clear the default trace ID for all subsequent requests |
| `getTraceId()` | Returns the current default trace ID |
| `setLabel(label?)` | Set or clear the default label for all subsequent requests |
| `getLabel()` | Returns the current default label |
| `setMetadata(metadata?)` | Set or clear default metadata (`Record<string, string>`) for all subsequent requests |
| `getMetadata()` | Returns the current default metadata |
| `log(event, data?)` | Log a custom event into the current trace. `event` is the event name (string); `data` is an optional payload (object). |
| `flush()` | Send any buffered trace and log data immediately. Call this before process exit in serverless environments. |
Error handling
The SDK throws typed error classes when the proxy returns an error status. Your existing error handling continues to work — errors from the target API pass through unchanged.
```ts
import { Grepture, BlockedError, AuthError } from "@grepture/sdk";

try {
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
  });
} catch (error) {
  if (error instanceof BlockedError) {
    // Request blocked by a detection rule (HTTP 403)
  } else if (error instanceof AuthError) {
    // Invalid or missing Grepture API key (HTTP 401)
  }
}
```
Error classes
All error classes extend GreptureError, which extends Error and includes a status property.
| Class | HTTP Status | Description |
|---|---|---|
| `BadRequestError` | 400 | Malformed request (missing headers, invalid URL, body too large) |
| `AuthError` | 401 | Invalid or missing Grepture API key |
| `BlockedError` | 403 | Request blocked by a detection rule |
| `ProxyError` | 502 / 504 | Target unreachable or request timed out |
| `GreptureError` | any | Base class — catch-all for other non-OK statuses |
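The hierarchy maps mechanically from status codes. The sketch below uses stand-in classes that mirror (but are not) the SDK's exports, to show how a status resolves to a class; in application code you would import the real classes from `@grepture/sdk`:

```ts
// Stand-in classes mirroring the documented hierarchy; the real ones are
// exported by "@grepture/sdk".
class GreptureErrorSketch extends Error {
  constructor(message: string, public status: number) {
    super(message);
  }
}
class BadRequestErrorSketch extends GreptureErrorSketch {}
class AuthErrorSketch extends GreptureErrorSketch {}
class BlockedErrorSketch extends GreptureErrorSketch {}
class ProxyErrorSketch extends GreptureErrorSketch {}

// Map a non-OK proxy status to the documented error class, with the base
// class as catch-all for everything else.
function errorForStatus(status: number, message: string): GreptureErrorSketch {
  switch (status) {
    case 400: return new BadRequestErrorSketch(message, status);
    case 401: return new AuthErrorSketch(message, status);
    case 403: return new BlockedErrorSketch(message, status);
    case 502:
    case 504: return new ProxyErrorSketch(message, status);
    default:  return new GreptureErrorSketch(message, status);
  }
}
```

Because every class extends the base, `catch (e) { if (e instanceof GreptureError) ... }` handles all of them, with `e.status` available for finer branching.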
TypeScript types
The SDK is fully typed. All configuration and error types are exported:
```ts
import type {
  GreptureConfig,
  ClientOptionsInput,
  ClientOptionsOutput,
  FetchOptions,
  GreptureResponseMeta,
} from "@grepture/sdk";
```
| Type | Description |
|---|---|
| `GreptureConfig` | `{ apiKey: string; proxyUrl: string; mode?: "proxy" \| "trace"; traceId?: string }` |
| `ClientOptionsInput` | `{ apiKey: string; baseURL: string }` |
| `ClientOptionsOutput` | `{ baseURL: string; apiKey: string; fetch: typeof fetch }` |
| `FetchOptions` | `RequestInit & { traceId?: string; label?: string; metadata?: Record<string, string> }` |
| `GreptureResponseMeta` | `{ requestId: string; rulesApplied: string[]; aiSampling: { used: number; limit: number } \| null }` |
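Since `FetchOptions` is plain `RequestInit` plus three Grepture-specific fields, a wrapper can split the extras off before delegating to the underlying `fetch`. A sketch of that split, using local stand-in types so it is self-contained (the real `FetchOptions` extends the platform `RequestInit`):

```ts
// Local stand-ins so the sketch is self-contained; the real FetchOptions
// extends the platform RequestInit.
type RequestInitSketch = {
  method?: string;
  headers?: Record<string, string>;
  body?: string;
};
type FetchOptionsSketch = RequestInitSketch & {
  traceId?: string;
  label?: string;
  metadata?: Record<string, string>;
};

// Separate the Grepture-specific fields from the standard RequestInit,
// the way a wrapper would before delegating to fetch.
function splitFetchOptions(options: FetchOptionsSketch): {
  init: RequestInitSketch;
  extras: { traceId?: string; label?: string; metadata?: Record<string, string> };
} {
  const { traceId, label, metadata, ...init } = options;
  return { init, extras: { traceId, label, metadata } };
}
```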
Runtime support
The SDK runs in Node.js, Bun, Deno, and edge runtimes (Cloudflare Workers, Vercel Edge). Zero native dependencies.