TL;DR
Portkey and Grepture are both AI gateways that sit on the hot path between your app and LLM providers. They take different bets on what matters most.
Portkey bets on routing and resilience — load balancing, fallbacks, circuit breakers, caching, and support for 48+ providers. It's the gateway you pick when reliability and cost optimization across many models are your top priorities.
Grepture bets on data protection and quality — inline PII redaction with mask-and-restore, secret scanning, prompt injection blocking, LLM evals, and rule-based datasets. It's the gateway you pick when what flows through the pipe matters as much as where it goes.
At a glance
| | Grepture | Portkey |
|---|---|---|
| Architecture | API gateway (proxy) + trace mode | API gateway (proxy) |
| Primary focus | Data protection + observability | Routing + resilience |
| PII redaction | Inline, reversible (mask-and-restore) | Inline, irreversible (permanent removal) |
| Secret scanning | Built-in (API keys, tokens, credentials) | Not available |
| Prompt injection detection | Yes (Business plan) | Via partner integrations |
| Load balancing | Not available | Weight-based with sticky sessions |
| Fallbacks | Not available | Multi-tier composable chains |
| Circuit breaker | Not available | Built-in with configurable thresholds |
| Caching | Not available | Simple + semantic (Enterprise) |
| Conditional routing | Not available | Metadata/parameter-based routing |
| Provider support | 10+ providers | 48+ providers |
| LLM evals | LLM-as-a-judge with templates | Batch processing with output validation |
| Datasets | Rule-based auto-creation | Not available |
| Prompt management | Yes (versioning, A/B experiments) | Yes (versioning, partials, playground) |
| Observability | Full-text search, request replay, diffs | Logging, tracing, analytics dashboards |
| Pricing | Free tier, then from €49/mo | Free tier, then from $49/mo |
| Open source | Yes | Yes |
Architecture: two gateways, different priorities
Both tools are forward proxies — your LLM requests flow through them. The difference is what each gateway does with the traffic.
Portkey focuses on where requests go. It routes traffic across providers, balances load, handles failovers, caches responses, and manages rate limits. If your OpenAI endpoint goes down, Portkey can automatically fall back to Anthropic. If one model is cheaper for certain queries, Portkey can route accordingly. The gateway is an infrastructure layer for reliability and cost.
Grepture focuses on what's in the requests. It scans content for PII, secrets, and threats, redacts sensitive data before it reaches the provider, and restores it on the way back. The gateway is a security layer for data protection and compliance.
```ts
// Portkey — routing and resilience
import Portkey from "portkey-ai";

const portkey = new Portkey({
  config: {
    strategy: { mode: "fallback" },
    targets: [
      { provider: "openai", override_params: { model: "gpt-4o" } },
      { provider: "anthropic", override_params: { model: "claude-sonnet-4-20250514" } },
    ],
  },
});
```

```ts
// Grepture — security and data protection
import OpenAI from "openai";
import { clientOptions } from "@grepture/sdk";

const openai = new OpenAI(clientOptions());
// PII is redacted before reaching OpenAI, restored in the response
```
Neither approach is wrong — they solve different problems. The question is which problem is more urgent for your team.
PII redaction: reversible vs. irreversible
Both gateways redact PII inline before data reaches the LLM. The critical difference is what happens next.
Portkey replaces PII with standardized identifiers ({{EMAIL_ADDRESS_1}}, {{PHONE_NUMBER_1}}). This is permanent by design — Portkey's documentation explicitly states that "redaction is irreversible." The LLM never sees the original values, and neither does your application in the response. This works well for use cases where you don't need PII in the output — analytics, classification, summarization.
Grepture replaces PII with tokens on the outbound request (Sarah Chen → [PERSON_a3f2]), the LLM processes the sanitized text, and Grepture restores the original values in the response. Your application receives complete, personalized data. The model never sees real PII, but your user still gets a response that references their actual name, email, or account number.
When this matters: Any use case where the AI response needs to reference real user data — customer support, personalized summaries, document generation, email drafting. With irreversible redaction, the model responds with {{NAME_1}} and your app has to figure out what to put back. With mask-and-restore, the response arrives complete.
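The mask-and-restore round trip can be sketched in a few lines. This is an illustration of the technique, not Grepture's actual implementation — the token format and the email-only regex are stand-ins:

```typescript
// Illustrative mask-and-restore: swap PII for opaque tokens before the LLM
// call, keep the mapping locally, and restore originals in the response.
type Mapping = Map<string, string>;

const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function mask(text: string): { masked: string; mapping: Mapping } {
  const mapping: Mapping = new Map();
  let i = 0;
  const masked = text.replace(EMAIL, (match) => {
    const token = `[EMAIL_${++i}]`;
    mapping.set(token, match); // remembered locally, never sent to the provider
    return token;
  });
  return { masked, mapping };
}

function restore(text: string, mapping: Mapping): string {
  let out = text;
  for (const [token, original] of mapping) {
    out = out.split(token).join(original);
  }
  return out;
}

const { masked, mapping } = mask("Contact sarah@example.com about the refund.");
// masked: "Contact [EMAIL_1] about the refund."
const llmResponse = "I emailed [EMAIL_1] and confirmed the refund.";
console.log(restore(llmResponse, mapping));
// → "I emailed sarah@example.com and confirmed the refund."
```

The key property: the mapping never leaves your side of the proxy, so the provider only ever sees tokens, while the caller gets a complete response.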
Portkey uses a hybrid detection approach — built-in guardrails plus partner integrations (Patronus AI, Pangea, AWS Bedrock). Grepture uses a two-tier approach: 50+ regex patterns for structured PII (Free) and local AI models for names, addresses, and organizations (Pro).
Security beyond PII
PII redaction is one piece of the security puzzle. The gateways differ significantly on what else they catch.
Grepture includes purpose-built secret scanning (API keys, bearer tokens, AWS credentials, database connection strings, private keys), prompt injection detection and blocking (Business plan), toxicity scanning, data loss prevention, and compliance flagging. These run on the hot path — threats are blocked before they reach the provider.
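Secret scanning of this kind is, at its core, pattern matching on the hot path. A minimal sketch — a tiny subset of patterns, chosen for illustration; production scanners ship far more and also use entropy checks:

```typescript
// Illustrative secret patterns (a small subset for demonstration).
const SECRET_PATTERNS: Record<string, RegExp> = {
  awsAccessKeyId: /\bAKIA[0-9A-Z]{16}\b/,
  bearerToken: /\bBearer\s+[A-Za-z0-9\-._~+/]{20,}/,
  privateKey: /-----BEGIN [A-Z ]*PRIVATE KEY-----/,
};

// Returns the names of all pattern families found in the outbound request.
function scanForSecrets(text: string): string[] {
  return Object.entries(SECRET_PATTERNS)
    .filter(([, pattern]) => pattern.test(text))
    .map(([name]) => name);
}

console.log(scanForSecrets("Authorization: Bearer abcdefghijklmnopqrstuv123"));
// → ["bearerToken"]
```

A gateway would run a check like this before forwarding and block (or redact) on any hit.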
Portkey offers guardrails that cover content moderation, language detection, gibberish detection, and output validation (JSON Schema, regex matching). For security-specific features like prompt injection detection, Portkey relies on partner integrations (Pillar Security, Acuvity, SydeLabs, AWS Bedrock guardrails). These are available but require configuring third-party services.
Verdict: Grepture has more built-in security features. Portkey has a broader partner ecosystem for plugging in specialized security tools. If you want one tool that handles PII + secrets + prompt injection + toxicity out of the box, Grepture is simpler. If you want to assemble best-of-breed security tools behind a gateway, Portkey's integration approach gives you flexibility.
Routing and resilience
This is where Portkey is significantly ahead.
Portkey offers weight-based load balancing with sticky sessions, multi-tier fallback chains with composable strategies, circuit breakers that auto-stop routing to unhealthy targets, conditional routing based on metadata or request parameters, automatic retries with configurable strategies, and request timeouts. You can nest strategies — a fallback target can itself contain a load balancer. Canary testing lets you send a small percentage of traffic to a new model.
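Extending the fallback config from earlier, a nested strategy could look like the sketch below. The field names for the load-balanced tier (`mode: "loadbalance"`, `weight`) follow Portkey's documented config shape, but treat the exact schema as something to verify against current docs:

```typescript
// Hedged sketch: a weighted load balancer nested inside a fallback tier.
const config = {
  strategy: { mode: "fallback" },
  targets: [
    {
      // First tier: split traffic 80/20 between two OpenAI models
      strategy: { mode: "loadbalance" },
      targets: [
        { provider: "openai", override_params: { model: "gpt-4o" }, weight: 0.8 },
        { provider: "openai", override_params: { model: "gpt-4o-mini" }, weight: 0.2 },
      ],
    },
    // Second tier: fall back to Anthropic if the first tier fails
    { provider: "anthropic", override_params: { model: "claude-sonnet-4-20250514" } },
  ],
};
console.log(config.targets.length); // → 2 (load-balanced tier + fallback tier)
```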
Grepture does not offer load balancing, fallbacks, circuit breakers, or conditional routing. It's a single-path proxy focused on what happens to the request, not where it goes.
Verdict: If you need multi-provider routing, automatic failover, or traffic distribution across models, Portkey is the clear choice. Grepture does not compete in this area.
Caching
Portkey offers simple caching (exact input match, all plans) and semantic caching (cosine similarity matching, Enterprise only). Semantic caching finds similar-enough previous responses to serve without making a new LLM call — saving both cost and latency. Limitations apply: there are token and message-count caps, and caching works only on chat/completions endpoints.
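The core idea behind semantic caching is simple: embed the query, and serve a stored response when a cached embedding is close enough by cosine similarity. A toy sketch — real systems use learned embeddings; the hand-made vectors and the 0.9 threshold here are arbitrary:

```typescript
// Toy semantic cache keyed by embedding similarity.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const cache: { embedding: number[]; response: string }[] = [];

function lookup(embedding: number[], threshold = 0.9): string | null {
  for (const entry of cache) {
    if (cosine(embedding, entry.embedding) >= threshold) return entry.response;
  }
  return null; // cache miss → make a real LLM call, then store the result
}

cache.push({ embedding: [1, 0.9, 0], response: "cached answer" });
console.log(lookup([1, 1, 0])); // similar enough → "cached answer"
console.log(lookup([0, 0, 1])); // dissimilar → null
```

The threshold is the knob that trades cost savings against the risk of serving a stale or subtly wrong cached answer.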
Grepture does not offer response caching.
Verdict: If LLM cost reduction through caching is important, Portkey wins. This is particularly valuable for applications with repetitive queries.
Observability and tracing
Both gateways log and trace requests, with different strengths.
Portkey provides OpenTelemetry-compliant tracing with W3C Trace Context support, hierarchical trace/span trees, analytics dashboards for cost, latency, errors, cache performance, and user behavior. Alert configuration is available on paid plans. The breadth of metrics and dashboard views is strong.
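W3C Trace Context propagation means the gateway reads and forwards the standard `traceparent` header, whose shape is `version-traceId-spanId-flags`. A minimal parser, for readers wiring the gateway into an existing tracing setup:

```typescript
// Parse a W3C traceparent header: 2-hex version, 32-hex trace ID,
// 16-hex span ID, 2-hex flags (bit 0 = sampled).
function parseTraceparent(header: string) {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (!m) return null;
  const [, version, traceId, spanId, flags] = m;
  return { version, traceId, spanId, sampled: (parseInt(flags, 16) & 1) === 1 };
}

const ctx = parseTraceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01");
console.log(ctx?.sampled); // → true
```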
Grepture provides full-text search across all prompts and responses, waterfall timelines for multi-step agent traces, one-click request replay, and before/after diff views showing exactly what was redacted or blocked. The observability is tightly integrated with the security layer — you see what happened to the data, not just that it passed through.
Grepture also offers a dedicated trace mode that captures observability data without routing traffic through the proxy — zero added latency when you don't need inline processing. Trace mode and proxy mode share the same dashboard and eval pipeline.
Verdict: Portkey has broader analytics and OTel-compliant tracing. Grepture's security-aware observability (seeing redactions, blocks, and detections alongside requests) is unique. Different strengths for different needs.
Evals and quality scoring
Grepture provides LLM-as-a-judge evaluation with six pre-built templates (relevance, helpfulness, toxicity, conciseness, instruction-following, hallucination), custom judge prompts, configurable sampling (1-100%), and quality badges on traffic logs. You can run experiments with evaluators to compare prompt versions side by side. Evals run in the background with zero proxy latency impact.
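One common way to implement a configurable sample rate — an illustration, not Grepture's documented algorithm — is to hash the request ID, so the sample decision is deterministic per request and roughly the configured percentage of traffic gets evaluated:

```typescript
// Deterministic eval sampling: the same request ID always gets the same
// decision, which keeps replayed requests consistent with their original run.
function shouldEval(requestId: string, samplePercent: number): boolean {
  let hash = 0;
  for (const ch of requestId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 100 < samplePercent;
}

console.log(shouldEval("req_123", 100)); // 100% sampling always evaluates → true
```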
Portkey supports batch evaluation workflows — running thousands of LLM calls with guardrail validation on outputs. This is useful for testing prompts at scale and validating output format (JSON Schema, regex). Their playground supports side-by-side model comparison for informal evaluation. It's a different approach: batch validation rather than continuous quality scoring on live traffic.
Verdict: Grepture's eval system is designed for ongoing quality monitoring in production. Portkey's batch approach is better for pre-deployment prompt testing. Grepture wins for live quality scoring; Portkey's guardrail validation is useful for structured output requirements.
Datasets
Grepture supports datasets with rule-based automatic creation — define rules (e.g., "all requests that triggered PII detection" or "all requests with a toxicity score above 0.8") and matching traffic is automatically added to a dataset. Datasets integrate with the eval and prompt experiment workflow.
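Conceptually, rule-based dataset capture is a set of predicates evaluated against each logged request. A sketch — the field names (`piiDetected`, `toxicityScore`) are illustrative, not Grepture's actual schema:

```typescript
// Each rule routes matching logged requests into a named dataset.
interface LoggedRequest {
  id: string;
  piiDetected: boolean;
  toxicityScore: number;
}

const rules: { dataset: string; match: (r: LoggedRequest) => boolean }[] = [
  { dataset: "pii-hits", match: (r) => r.piiDetected },
  { dataset: "high-toxicity", match: (r) => r.toxicityScore > 0.8 },
];

const datasets = new Map<string, LoggedRequest[]>();

function ingest(req: LoggedRequest): void {
  for (const rule of rules) {
    if (rule.match(req)) {
      const bucket = datasets.get(rule.dataset) ?? [];
      bucket.push(req);
      datasets.set(rule.dataset, bucket);
    }
  }
}

ingest({ id: "r1", piiDetected: true, toxicityScore: 0.1 });
ingest({ id: "r2", piiDetected: false, toxicityScore: 0.95 });
console.log(datasets.get("pii-hits")?.length);      // → 1
console.log(datasets.get("high-toxicity")?.length); // → 1
```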
Portkey does not have a dedicated dataset management feature.
Verdict: Grepture wins. Automatic dataset creation from live traffic patterns is a unique capability that feeds directly into the eval pipeline.
Prompt management
Both tools offer prompt management with version control and templates.
Portkey has a polished prompt management system with a playground for testing across 1,600+ models, side-by-side comparisons, automatic versioning with rollback, {{variable}} templates, and Prompt Partials — reusable components that can be shared across prompts. The playground is a standout feature for interactive prompt iteration.
Grepture takes an API-first approach with stable slugs, automatic versioning, {{variable}} templates with type validation and defaults, runtime fetching via SDK or REST, and A/B experiments with weighted variant distribution. Evaluator-backed experiments let you compare prompt versions by quality scores before full rollout.
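Weighted variant distribution boils down to picking a variant in proportion to its weight. A sketch of the standard technique (the variant names and 90/10 split are made up; the random roll is passed in so the function is testable):

```typescript
// Pick a prompt variant with probability proportional to its weight.
interface Variant { name: string; weight: number }

function pickVariant(variants: Variant[], roll: number): Variant {
  // roll is a uniform random number in [0, 1), e.g. Math.random()
  const total = variants.reduce((s, v) => s + v.weight, 0);
  let cursor = roll * total;
  for (const v of variants) {
    cursor -= v.weight;
    if (cursor < 0) return v;
  }
  return variants[variants.length - 1]; // guard against float rounding
}

const variants = [
  { name: "v1-control", weight: 0.9 },
  { name: "v2-candidate", weight: 0.1 },
];
console.log(pickVariant(variants, 0.5).name);  // → "v1-control"
console.log(pickVariant(variants, 0.95).name); // → "v2-candidate"
```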
Verdict: Portkey's playground and Prompt Partials make prompt iteration faster. Grepture's evaluator-backed experiments make prompt deployment safer. Both are solid — different workflow strengths.
Provider support
Portkey supports 48+ providers through a universal API with automatic request/response translation. The breadth is impressive — from major providers (OpenAI, Anthropic, Google, Azure, AWS Bedrock) to specialized ones (Together AI, Fireworks, Perplexity, Cerebras, and many more). Multimodal support covers vision, image generation, speech-to-text, text-to-speech, and embeddings.
Grepture supports 10+ major providers (OpenAI, Anthropic, Google AI, Azure, Cohere, Mistral, AWS Bedrock, HuggingFace, Groq, Replicate). The proxy also works with any HTTP endpoint via grepture.fetch().
Verdict: Portkey has significantly broader provider coverage. If you need access to niche or specialized providers through a single API, Portkey is the better choice.
Compliance and certifications
Portkey has SOC 2 Type 2, ISO 27001, GDPR, and HIPAA certifications. Enterprise tier offers VPC hosting, data isolation, custom BAAs, and customer-managed encryption keys.
Grepture is EU-hosted (Frankfurt and Nuremberg) with GDPR compliance and EU AI Act readiness. All subprocessors are EU-based. Zero-data mode ensures request content is never written to disk. Self-hosting is available for full infrastructure control.
Verdict: Portkey has more formal certifications (SOC 2, ISO 27001, HIPAA). Grepture has a stronger EU/GDPR story with all-EU infrastructure and zero-data mode. The right choice depends on your compliance requirements — US healthcare (Portkey's HIPAA) vs. EU data residency (Grepture's all-EU hosting).
Pricing
Both tools offer free tiers and usage-based paid plans. Portkey starts at $49/month (Production) with 100K logs, 30-day retention, and $9/100K overage. Grepture starts at €49/month (Pro) with 100K requests and €8/100K overage.
Portkey's free Developer tier gives you 10K logs with only 3-day retention. Grepture's free tier gives you 1,000 requests with standard retention.
Both tools' Enterprise tiers are custom-priced. Portkey's Enterprise adds semantic caching, SSO, data export, and VPC hosting. Grepture's Business plan (€299/month) adds zero-data mode, prompt injection detection, toxicity scanning, and DLP.
The pricing is comparable — the difference is what's included. Portkey's price covers routing, caching, and observability. Grepture's price covers data protection, evals, and observability.
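As a worked example with the listed Grepture Pro prices (€49 base, 100K requests included, €8 per additional 100K) — note the pricing page doesn't say whether overage is billed in full 100K blocks or pro-rated, so this sketch assumes full blocks:

```typescript
// Estimate a monthly Grepture Pro bill from request volume.
function monthlyCostEur(requests: number): number {
  const base = 49;
  const included = 100_000;
  // Assumption: overage billed in whole 100K blocks, rounded up.
  const overageBlocks = Math.max(0, Math.ceil((requests - included) / 100_000));
  return base + overageBlocks * 8;
}

console.log(monthlyCostEur(80_000));  // → 49 (within included volume)
console.log(monthlyCostEur(250_000)); // → 65 (two overage blocks)
```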
Who Portkey is best for
- Teams that need multi-provider routing with load balancing, fallbacks, and circuit breakers
- Teams that want response caching (simple or semantic) to reduce LLM costs
- Teams working with many providers (48+) that need a universal API layer
- Teams that need conditional routing based on request metadata or parameters
- Organizations that require SOC 2 and HIPAA certifications
- Teams using agent frameworks (LangChain, CrewAI, OpenAI Agents) that benefit from native integrations
Who Grepture is best for
- Teams that need reversible PII redaction — mask-and-restore that protects data while keeping responses personalized
- Teams that need built-in security — secret scanning, prompt injection blocking, toxicity scanning, DLP — without assembling third-party integrations
- Teams that want LLM evals on live traffic with quality scoring, experiments, and automatic dataset creation
- Organizations with EU data residency requirements or strict GDPR compliance needs
- Teams that want the flexibility to start with trace mode and upgrade to proxy mode without switching tools
FAQ
Is Portkey free?
Portkey has a free Developer tier with 10,000 logs per month and 3-day log retention. The open-source gateway is free to self-host but excludes observability, prompt management, and semantic caching. Paid plans start at $49/month.
Does Portkey support PII redaction?
Yes. Portkey redacts PII inline before it reaches the LLM, replacing values with tokens like {{EMAIL_ADDRESS_1}}. However, redaction is irreversible by design — the original values are permanently removed. Grepture's mask-and-restore replaces PII on the way out and restores original values in the response.
Does Grepture support load balancing and fallbacks?
Grepture focuses on security and observability rather than routing. If you need weight-based load balancing, multi-tier fallback chains, circuit breakers, or conditional routing, Portkey is the stronger choice.
Can I use Grepture and Portkey together?
You could chain them, but running two proxies adds latency and complexity. Most teams choose one based on their primary need: data protection (Grepture) or routing resilience (Portkey).
Which gateway adds less latency?
Both add some overhead as inline proxies. Portkey reports sub-millisecond for the open-source gateway and roughly 20-40ms for the managed service. Grepture's regex-based PII detection runs in under 1ms; AI-powered detection adds more. Both offer async/trace modes for zero-latency observability.