Guides

Step-by-step guides for securing, monitoring, and managing AI API calls: PII redaction, GDPR compliance, unified logging, usage alerts, cost tracking, and server-side prompt management, all without changing your application code.

How to Set Up AI Usage Alerts and Compliance Reports

Build an audit trail for every AI API call. Activity logging, compliance reports, zero-data mode, and team access controls — everything you need for AI governance.
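
A minimal sketch of what an alert rule might look like as configuration. Everything here is an assumption for illustration: the endpoint, the rule schema, and the admin token are hypothetical, not a documented API.

```ts
// Hypothetical alert rule: endpoint, schema, and token are illustrative only.
const alertRule = {
  name: "daily-spend-cap",
  metric: "usd_cost",            // alert on spend, not just request volume
  window: "24h",
  threshold: 50,                 // fire when the 24h window exceeds $50
  notify: ["oncall@example.com"],
};

await fetch("https://proxy.example.com/api/alerts", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.ADMIN_TOKEN}`,
  },
  body: JSON.stringify(alertRule),
});
```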

How to Make AI API Calls GDPR-Compliant

Every LLM API call sends data to a third party. Under GDPR, that's a transfer to a data processor. Learn how to make your AI API calls compliant — lawful basis, data minimization, PII redaction, and EU-hosted processing.
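
Two of those measures translate directly into code: data minimization and EU-hosted processing. A sketch assuming the official OpenAI SDK and a placeholder EU proxy URL; the ticket shape is invented for illustration.

```ts
import OpenAI from "openai";

// EU-hosted processing: the base URL is a placeholder for an EU endpoint.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://eu.proxy.example.com/v1",
});

// Data minimization: send only the text the model needs,
// not the surrounding customer record.
const ticket = {
  customerName: "Jane Doe",
  customerEmail: "jane@acme.com",
  body: "The export button crashes on large files.",
};

const res = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: `Summarize this bug report: ${ticket.body}` }],
});
```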

How to Monitor and Log All LLM API Calls in One Place

Get unified logging across OpenAI, Anthropic, Google, and Azure. See every request, response, token count, and latency — with a single proxy. No custom logging code.
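
The mechanism is a base-URL swap in each provider SDK. A sketch assuming a single proxy at a placeholder address that exposes per-provider paths:

```ts
import OpenAI from "openai";
import Anthropic from "@anthropic-ai/sdk";

// One proxy in front of every provider; hostname and paths are placeholders.
const PROXY = "https://proxy.example.com";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: `${PROXY}/openai/v1`,
});

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: `${PROXY}/anthropic`,
});

// Every request, response, token count, and latency now passes through
// one place, with no logging code in the application itself.
```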

How to Redact PII from Anthropic Claude API Calls

Stop sending names, emails, and secrets to Anthropic Claude. Learn how to redact PII from every Claude API call using a proxy-level security layer — no code changes required.
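
With the official Anthropic SDK, the only change is the base URL. A sketch assuming a redacting proxy at a placeholder address:

```ts
import Anthropic from "@anthropic-ai/sdk";

// The proxy URL is a placeholder for your redacting proxy deployment.
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: "https://proxy.example.com/anthropic",
});

// PII in this prompt is redacted in transit, before it reaches Anthropic.
const msg = await client.messages.create({
  model: "claude-3-5-sonnet-latest",
  max_tokens: 256,
  messages: [
    { role: "user", content: "Summarize: Jane Doe (jane@acme.com) reported ..." },
  ],
});
```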

How to Redact PII from Any API Call

Stop PII from leaking through outbound API calls. Learn how to redact sensitive data from any HTTP request — AI providers, webhooks, payment APIs — using a proxy-level security layer.
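
For arbitrary HTTP clients, the same idea works as a forward proxy. A sketch using undici's ProxyAgent; the proxy address and target API are placeholders:

```ts
import { fetch, ProxyAgent } from "undici";

// Route any outbound HTTP call through a redacting forward proxy.
const dispatcher = new ProxyAgent("http://proxy.example.com:8080");

const res = await fetch("https://api.example.com/webhooks", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ note: "Call Jane Doe back at 555-0100" }),
  dispatcher, // only this line changes in the calling code
});
```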

How to Redact PII from AWS Bedrock API Calls

Stop sending names, emails, and secrets to AWS Bedrock. Learn how to redact PII from every Bedrock API call using a proxy-level security layer — no code changes required.
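
The AWS SDK v3 accepts an endpoint override on the client, so call sites stay untouched. A sketch with a placeholder proxy URL:

```ts
import {
  BedrockRuntimeClient,
  ConverseCommand,
} from "@aws-sdk/client-bedrock-runtime";

// endpoint points the client at the proxy instead of the Bedrock endpoint.
const client = new BedrockRuntimeClient({
  region: "us-east-1",
  endpoint: "https://proxy.example.com/bedrock",
});

// PII in the message is redacted in transit, before it reaches Bedrock.
const out = await client.send(new ConverseCommand({
  modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0",
  messages: [{ role: "user", content: [{ text: "Summarize: jane@acme.com wrote ..." }] }],
}));
```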

How to Redact PII from Azure OpenAI API Calls

Stop sending names, emails, and secrets to Azure OpenAI. Learn how to redact PII from every Azure OpenAI API call using a proxy-level security layer — no code changes required.
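
The openai package's AzureOpenAI client takes an endpoint option, so the swap is one constructor argument. A sketch with placeholder endpoint, deployment name, and API version:

```ts
import { AzureOpenAI } from "openai";

// endpoint points at the proxy instead of your Azure resource.
const client = new AzureOpenAI({
  endpoint: "https://proxy.example.com/azure",
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  apiVersion: "2024-06-01",
});

const res = await client.chat.completions.create({
  model: "my-gpt-4o-deployment", // your Azure deployment name
  messages: [{ role: "user", content: "Draft a reply to jane@acme.com ..." }],
});
```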

How to Redact PII from Google Gemini API Calls

Stop sending names, emails, and secrets to Google Gemini. Learn how to redact PII from every Gemini API call using a proxy-level security layer — no code changes required.
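
Recent versions of the @google/generative-ai SDK accept a baseUrl request option, which is enough to route calls through a proxy; check that your SDK version supports it. A sketch with a placeholder proxy URL:

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

// The second argument's baseUrl overrides the default Google endpoint.
const model = genAI.getGenerativeModel(
  { model: "gemini-1.5-flash" },
  { baseUrl: "https://proxy.example.com/gemini" },
);

// PII in the prompt is redacted in transit, before it reaches Google.
const result = await model.generateContent(
  "Summarize: Jane Doe (jane@acme.com) reported ...",
);
```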

How to Redact PII from LangChain Pipelines

Stop PII from leaking through LangChain chains and agents. Learn how to redact sensitive data from every LLM call in your LangChain pipeline using a proxy-level security layer.
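
In LangChain.js, the model constructor passes a configuration object through to the underlying OpenAI client, so one line covers every chain and agent built on that model. A sketch with a placeholder proxy URL:

```ts
import { ChatOpenAI } from "@langchain/openai";

// configuration.baseURL is forwarded to the underlying OpenAI client.
const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  configuration: { baseURL: "https://proxy.example.com/openai/v1" },
});

// Every chain or agent that uses this model now goes through the proxy.
const res = await model.invoke(
  "Summarize: Jane Doe (jane@acme.com) reported ...",
);
```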

How to Redact PII from OpenAI API Calls

Stop sending names, emails, and secrets to OpenAI. Learn how to redact PII from every OpenAI API call using a proxy-level security layer — no code changes required.
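
With the official OpenAI SDK the swap is a single baseURL option. A sketch assuming a redacting proxy at a placeholder address:

```ts
import OpenAI from "openai";

// baseURL swaps api.openai.com for the proxy; everything else is unchanged.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://proxy.example.com/openai/v1",
});

// PII in this prompt is redacted in transit, before it reaches OpenAI.
const res = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [
    { role: "user", content: "Summarize: Jane Doe (jane@acme.com) reported ..." },
  ],
});
```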

How to Redact PII from Vercel AI SDK Calls

Stop sending names, emails, and secrets through the Vercel AI SDK. Learn how to redact PII from every LLM call using a proxy-level security layer — no code changes required.
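
The AI SDK's provider factories accept a baseURL, so existing generateText and streamText calls keep their shape. A sketch with a placeholder proxy URL:

```ts
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";

// The provider is created once, pointed at the proxy.
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://proxy.example.com/openai/v1",
});

// Existing calls are untouched; PII is redacted in transit.
const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  prompt: "Summarize: Jane Doe (jane@acme.com) reported ...",
});
```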

How to Separate Prompts from Code with Server-Side Prompt Serving

Treat prompts like configuration, not code. Edit, version, and deploy prompt templates without redeploying your app — and let non-developers iterate on prompts safely.
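
In practice this means fetching the template at request time instead of importing it. A sketch where the prompt endpoint, response shape, and variable syntax are all illustrative assumptions:

```ts
// Hypothetical prompt-serving endpoint and response shape.
const res = await fetch(
  "https://proxy.example.com/api/prompts/support-reply?version=latest",
);
const { template } = (await res.json()) as { template: string };

// Fill {{variables}} at request time; naive interpolation for the sketch.
const vars: Record<string, string> = { customer: "Jane", product: "Widget Pro" };
const prompt = template.replace(/\{\{(\w+)\}\}/g, (_m, key: string) => vars[key] ?? "");
```

Editing the template on the server changes the next request's prompt with no redeploy.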

How to Track and Control AI API Costs Across Providers

Get per-request cost attribution across OpenAI, Anthropic, Google, and Azure. See where your tokens go, which models cost the most, and where to optimize — with a single proxy.
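
The underlying arithmetic is simple once the proxy records token usage per request. A sketch with illustrative per-million-token prices (real prices vary by model and date):

```ts
// Illustrative prices in USD per million tokens; treat as placeholders.
const PRICES: Record<string, { input: number; output: number }> = {
  "gpt-4o-mini": { input: 0.15, output: 0.6 },
  "claude-3-5-haiku": { input: 0.8, output: 4.0 },
};

function costUsd(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}

// 12,000 prompt tokens + 800 completion tokens on gpt-4o-mini:
console.log(costUsd("gpt-4o-mini", 12_000, 800)); // 0.00228
```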

How to Version and Manage LLM Prompts Server-Side

Stop hardcoding prompts. Store, version, and deploy prompt templates from a dashboard — resolve them at request time with zero redeploys. Handlebars templating, draft/publish workflow.
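
At request time the app resolves a published version and renders it. A sketch assuming the Handlebars npm package; the prompt endpoint and response shape are illustrative:

```ts
import Handlebars from "handlebars";

// Hypothetical endpoint serving version 3 of a published template.
const res = await fetch("https://proxy.example.com/api/prompts/support-reply/v3");
const { source } = (await res.json()) as { source: string };

// Compile the published template, then render with request-time variables.
const render = Handlebars.compile(source);
const prompt = render({ customer: "Jane", tone: "friendly" });
```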

Zero-Retention AI Processing: How to Use LLMs Without Storing Data

Process data through AI models without writing request content to disk. Learn how zero-retention AI processing works, when to use it, and how it maps to GDPR, HIPAA, and PCI DSS requirements.
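
The core property is that request content is only ever streamed, never buffered to storage. A toy pass-through sketch over plain HTTP (upstream host and port are placeholders; a real deployment would use TLS):

```ts
import http from "node:http";

// Stream bodies straight through: nothing is buffered or written to disk.
http.createServer((req, res) => {
  const upstream = http.request(
    {
      host: "upstream.internal",
      port: 8080,
      path: req.url,
      method: req.method,
      headers: req.headers,
    },
    (up) => {
      res.writeHead(up.statusCode ?? 502, up.headers);
      up.pipe(res); // response streams back, no persistence
    },
  );
  req.pipe(upstream); // request body streams forward, no persistence
}).listen(3000);
```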
