How to Redact PII from AWS Bedrock API Calls
Stop sending names, emails, and secrets to AWS Bedrock. Learn how to redact PII from every Bedrock API call using a proxy-level security layer — with no changes to your prompts or application logic.
The problem: PII leaking through AWS Bedrock API calls
Every InvokeModel or Converse call sends your prompt to an AWS-hosted foundation model. If that prompt assembles user data — customer records, medical notes, financial documents — it carries PII to the Bedrock service endpoint.
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

const response = await client.send(
  new ConverseCommand({
    modelId: "anthropic.claude-sonnet-4-5-20250929-v1:0",
    messages: [
      {
        role: "user",
        content: [
          {
            text: `Review this insurance claim:
Claimant: Robert Chen
Email: r.chen@insuretech.com
Phone: (617) 555-0312
SSN: 298-55-6643
Policy: POL-2024-88291
Address: 45 Beacon St, Boston, MA 02108
Claim amount: $12,450
AWS secret: AKIAIOSFODNN7EXAMPLE`,
          },
        ],
      },
    ],
  })
);
That single call sent a name, email, phone number, SSN, address, financial data, and an AWS access key to the Bedrock endpoint. Even within your own AWS account, that's a data minimization violation under GDPR and a potential compliance incident.
Why AWS Bedrock doesn't solve the PII problem
Bedrock runs within your AWS account and offers VPC endpoints, encryption at rest, and no model training on your data. That's better than public APIs for many use cases. But:
- PII is still processed by the foundation model — your prompts are sent in full to the model endpoint
- CloudWatch logs and CloudTrail may capture prompt content in audit trails
- Data residency doesn't equal data minimization — GDPR requires you to minimize personal data processed, not just where it's processed
- Model invocation logging, if enabled, stores complete request and response payloads in S3
The security boundary protects against external access. It doesn't protect against sending data the model doesn't need.
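The minimization point can be made concrete even without a proxy: build the prompt from only the fields the model actually needs for the task. A minimal sketch (the record shape and field names are illustrative, not from any real schema):

```typescript
// Full claim record, as it might exist in your database.
interface ClaimRecord {
  claimant: string;
  email: string;
  ssn: string;
  address: string;
  policy: string;
  amount: number;
}

// Build a prompt from only the fields the model needs to assess the claim.
// Identity fields (name, email, SSN, address) never leave the application.
function buildClaimPrompt(claim: ClaimRecord): string {
  return [
    "Review this insurance claim:",
    `Policy: ${claim.policy}`,
    `Claim amount: $${claim.amount}`,
  ].join("\n");
}

const prompt = buildClaimPrompt({
  claimant: "Robert Chen",
  email: "r.chen@insuretech.com",
  ssn: "298-55-6643",
  address: "45 Beacon St, Boston, MA 02108",
  policy: "POL-2024-88291",
  amount: 12450,
});
```

Hand-pruning fields works for simple cases; it breaks down once prompts are assembled from freeform text, which is where automated redaction comes in.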
The solution: proxy-level redaction with Grepture
Grepture is an open-source security proxy that sits between your application and AWS Bedrock. Every request is scanned for PII, secrets, and sensitive patterns before it reaches the model. Sensitive data is masked with reversible tokens — and restored in the response so your application works normally.
Beyond pointing your client at the proxy, your code doesn't change. Your prompts stay useful. The model never sees real PII.
Setup in 3 minutes
1. Install the SDK
npm install @grepture/sdk
2. Get your API key
Sign up at grepture.com/en/pricing — the free plan includes 1,000 requests/month. Copy your API key from the dashboard.
3. Use grepture.fetch with the Bedrock runtime client
Because the AWS SDK uses its own HTTP client internally, the simplest approaches are either to use Grepture's fetch wrapper against the Bedrock REST API directly, or to put Grepture in front of an OpenAI-compatible Bedrock gateway:
import { Grepture } from "@grepture/sdk";

const grepture = new Grepture({
  apiKey: process.env.GREPTURE_API_KEY!,
  proxyUrl: "https://proxy.grepture.com",
});

// Option 1: Use grepture.fetch with the Bedrock REST endpoint
const response = await grepture.fetch(
  `https://bedrock-runtime.us-east-1.amazonaws.com/model/anthropic.claude-sonnet-4-5-20250929-v1:0/converse`,
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // AWS SigV4 signing headers
    },
    body: JSON.stringify({
      messages: [{ role: "user", content: [{ text: userInput }] }],
    }),
  }
);
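Note that calling the REST endpoint directly means you must produce the SigV4 headers yourself, since you are bypassing the AWS SDK's built-in signer. The signing key at the heart of SigV4 is a chained HMAC over date, region, and service; here is a minimal sketch using Node's crypto (key derivation only — a real request also needs the canonical request and string-to-sign from the SigV4 spec, and the secret below is AWS's documented example value, not a real credential):

```typescript
import { createHmac } from "node:crypto";

// Derive the SigV4 signing key: a chained HMAC-SHA256 over
// date stamp, region, and service, terminated with "aws4_request".
function signingKey(
  secretKey: string,
  dateStamp: string, // YYYYMMDD
  region: string,
  service: string
): Buffer {
  const hmac = (key: Buffer | string, data: string): Buffer =>
    createHmac("sha256", key).update(data).digest();
  const kDate = hmac("AWS4" + secretKey, dateStamp);
  const kRegion = hmac(kDate, region);
  const kService = hmac(kRegion, service);
  return hmac(kService, "aws4_request");
}

const key = signingKey(
  "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY", // AWS docs example secret
  "20240101",
  "us-east-1",
  "bedrock"
);
```

In practice most teams delegate this to a signing library rather than hand-rolling it, which is also why the gateway approach below is often simpler.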
Option 2: Use an OpenAI-compatible Bedrock gateway
Many teams use LiteLLM or a similar gateway to expose Bedrock models via an OpenAI-compatible API. In that case, wrap the OpenAI client the same way:
import OpenAI from "openai";
import { Grepture } from "@grepture/sdk";

const grepture = new Grepture({
  apiKey: process.env.GREPTURE_API_KEY!,
  proxyUrl: "https://proxy.grepture.com",
});

const client = new OpenAI({
  ...grepture.clientOptions({
    apiKey: process.env.LITELLM_API_KEY!,
    baseURL: "https://your-litellm-proxy.com/v1",
  }),
});

const response = await client.chat.completions.create({
  model: "bedrock/anthropic.claude-sonnet-4-5-20250929-v1:0",
  messages: [{ role: "user", content: userInput }],
});
This approach gives you the same proxy-level protection regardless of which Bedrock model you're using.
What gets detected
Grepture ships with 50+ detection patterns on the free tier and 80+ on Pro, covering:
| Category | Examples | Tier |
|---|---|---|
| Personal identifiers | Names, emails, phone numbers, SSNs, dates of birth | Free (regex), Pro (AI) |
| Financial data | Credit card numbers, IBANs, routing numbers | Free |
| Credentials | API keys, bearer tokens, passwords, connection strings | Free |
| Network identifiers | IP addresses, MAC addresses | Free |
| Freeform PII | Names, organizations, and addresses in unstructured text | Pro (local AI models) |
| Adversarial inputs | Prompt injection attempts | Business |
All detection runs on Grepture infrastructure — no data is forwarded to additional third parties.
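To make the regex-tier categories concrete, here is what pattern-based detection looks like in miniature (these patterns are simplified sketches for illustration, not Grepture's actual rules — real rules add checksums, context, and tighter boundaries):

```typescript
// Simplified detection patterns, loosely mirroring the free-tier categories.
const patterns: Record<string, RegExp> = {
  EMAIL: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
  AWS_ACCESS_KEY: /\bAKIA[0-9A-Z]{16}\b/g,
  PHONE: /\(\d{3}\)\s?\d{3}-\d{4}/g,
};

// Scan text against every pattern and collect categorized findings.
function detect(text: string): { category: string; match: string }[] {
  const findings: { category: string; match: string }[] = [];
  for (const [category, re] of Object.entries(patterns)) {
    for (const m of text.matchAll(re)) {
      findings.push({ category, match: m[0] });
    }
  }
  return findings;
}

const findings = detect(
  "Email: r.chen@insuretech.com SSN: 298-55-6643 Key: AKIAIOSFODNN7EXAMPLE"
);
// findings contains one EMAIL, one SSN, and one AWS_ACCESS_KEY hit
```

This is why the Pro tier adds local AI models: regexes catch structured identifiers well, but freeform names and addresses need contextual detection.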
Mask and restore: reversible redaction
Grepture doesn't just strip PII — it replaces sensitive values with tokens, sends the sanitized prompt to Bedrock, and restores the original values in the response.
What Bedrock sees:
Review this insurance claim:
Claimant: [PERSON_1]
Email: [EMAIL_1]
SSN: [SSN_1]
Address: [ADDRESS_1]
Claim amount: [FINANCIAL_1]
...
What your app gets back:
The insurance claim from Robert Chen
(r.chen@insuretech.com) at 45 Beacon St, Boston
is for $12,450. The claim appears to be within
standard processing parameters.
The model processes clean data. Your application receives the full, personalized response. No PII ever reaches the foundation model.
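The mask-and-restore flow can be sketched in a few lines: detect, substitute numbered tokens, remember the mapping, then reverse it on the model's response. The helper names and patterns below are illustrative (Grepture performs this server-side at the proxy):

```typescript
// Replace each detected value with a numbered token and remember the mapping.
function mask(
  text: string,
  detectors: Record<string, RegExp>
): { masked: string; mapping: Map<string, string> } {
  const mapping = new Map<string, string>();
  let masked = text;
  for (const [category, re] of Object.entries(detectors)) {
    let i = 0;
    masked = masked.replace(re, (value) => {
      const token = `[${category}_${++i}]`;
      mapping.set(token, value);
      return token;
    });
  }
  return { masked, mapping };
}

// Swap tokens in the model's response back to the original values.
function restore(text: string, mapping: Map<string, string>): string {
  let out = text;
  for (const [token, value] of mapping) out = out.split(token).join(value);
  return out;
}

const detectors = {
  EMAIL: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g,
  SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
};

const { masked, mapping } = mask(
  "Claimant email r.chen@insuretech.com, SSN 298-55-6643",
  detectors
);
// masked: "Claimant email [EMAIL_1], SSN [SSN_1]"
const reply = restore("Confirmed [EMAIL_1] and [SSN_1].", mapping);
// reply: "Confirmed r.chen@insuretech.com and 298-55-6643."
```

Because the mapping lives outside the model call, the round trip is lossless: the model only ever operates on tokens, and the application only ever sees real values.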
Next steps
- View pricing — free for up to 1,000 requests/month
- Read the docs — SDK reference, configuration, and dashboard guide
- See how it works — architecture, detection rules, and zero-data mode