heuristic pattern-based prompt injection detection
Analyzes incoming prompts using fast, pattern-based keyword and rule matching to detect common prompt injection attack signatures before they reach the LLM. Operates as the first defense layer in the multi-layered strategy, using configurable thresholds to flag suspicious patterns like instruction overrides, role-play attempts, and known attack keywords. Executes synchronously with minimal latency overhead.
Unique: Implements a configurable strategy pattern for heuristic tactics, allowing developers to enable/disable specific rules and adjust thresholds per deployment without code changes, in contrast to the fixed rule sets most competitors ship
vs alternatives: Faster than LLM-based detection (sub-millisecond vs 100-500ms) and requires no API calls, making it suitable for high-throughput applications where latency is critical
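The heuristic layer described above can be sketched as a small rule-matching function. The rule names, patterns, and the 0.3 default threshold below are illustrative assumptions, not the SDK's actual rule set; real deployments would load rules and thresholds from configuration.

```python
import re
from dataclasses import dataclass

# Hypothetical rule set covering the pattern classes mentioned above:
# instruction overrides, role-play attempts, and known attack keywords.
DEFAULT_RULES = {
    "instruction_override": re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    "role_play": re.compile(r"pretend (you are|to be)|act as", re.I),
    "known_keyword": re.compile(r"jailbreak|DAN mode", re.I),
}

@dataclass
class HeuristicResult:
    score: float        # fraction of rules that matched, normalized to 0-1
    flagged: bool       # True if score meets the configurable threshold
    matched: list       # names of the rules that fired

def run_heuristics(prompt: str, rules=DEFAULT_RULES, threshold: float = 0.3) -> HeuristicResult:
    # Synchronous, regex-only scan: no API calls, sub-millisecond latency.
    matched = [name for name, pattern in rules.items() if pattern.search(prompt)]
    score = len(matched) / len(rules) if rules else 0.0
    return HeuristicResult(score=score, flagged=score >= threshold, matched=matched)
```

Because the rule dictionary and threshold are plain parameters, enabling or disabling a rule per deployment is a configuration change, not a code change.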
llm-based semantic prompt injection detection
Delegates prompt analysis to a dedicated language model that evaluates semantic intent and malicious patterns beyond simple keyword matching. The LLM tactic accepts user input and returns a detection score based on the model's understanding of attack intent, allowing detection of sophisticated, paraphrased, or novel injection attempts. Integrates with configurable LLM backends (OpenAI, Anthropic, local models) and caches results to reduce API costs.
Unique: Abstracts LLM backend selection through a pluggable interface, allowing users to swap between OpenAI, Anthropic, or self-hosted models without code changes, and includes built-in result caching to reduce API costs for repeated inputs
vs alternatives: Detects semantic intent-based attacks that keyword filters miss, but trades latency and cost for accuracy; more flexible than fixed-model competitors by supporting multiple LLM backends
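A minimal sketch of the pluggable-backend idea with result caching, assuming the backend is modeled as any callable that maps a prompt to a 0-1 score. The class and function names here are hypothetical; a real backend would wrap an OpenAI, Anthropic, or local-model API call.

```python
from typing import Callable, Dict

class LLMTactic:
    """LLM-based detection tactic with a swappable backend and a result cache."""

    def __init__(self, backend: Callable[[str], float]):
        self.backend = backend          # any provider can be plugged in here
        self._cache: Dict[str, float] = {}

    def score(self, prompt: str) -> float:
        # Cache hits skip the slow, billed backend call entirely.
        if prompt not in self._cache:
            self._cache[prompt] = self.backend(prompt)
        return self._cache[prompt]

calls = []  # records backend invocations, to show the cache working

def stub_backend(prompt: str) -> float:
    # Stand-in for a real model call; returns a fixed score for a known attack.
    calls.append(prompt)
    return 0.9 if "ignore previous instructions" in prompt.lower() else 0.1
```

Swapping providers means passing a different callable; nothing else in the pipeline changes.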
self-hardening attack pattern learning from canary leaks
Automatically captures new attack patterns when canary tokens are leaked in LLM responses and stores them in the vector database for future detection. When isCanaryWordLeaked() detects a leak, the system extracts the leaked prompt, generates embeddings, and adds it to the vector database with metadata about the attack (timestamp, user, LLM model). Over time, the vector database grows with real-world attack examples, improving detection accuracy without manual threat intelligence curation.
Unique: Implements automatic attack pattern capture from canary token leaks, creating a feedback loop where successful attacks are immediately added to the vector database for future detection; unique among competitors in treating incident response as training data generation
vs alternatives: Enables continuous improvement of detection without manual threat intelligence curation; more adaptive than static rule-based systems that require manual updates for each new attack variant
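The feedback loop above can be sketched as follows. The `fake_embed` function and the store/record shape are illustrative stand-ins (a real system would call an embedding model and write to the vector database), but the control flow matches the description: on a detected leak, the offending prompt is embedded and stored with attack metadata.

```python
import time

def fake_embed(text: str) -> list:
    # Stand-in for a real embedding model: folds characters into a small vector.
    vec = [0.0] * 8
    for i, ch in enumerate(text):
        vec[i % 8] += ord(ch)
    return vec

class AttackStore:
    """In-memory stand-in for the vector database of known attacks."""
    def __init__(self):
        self.records = []

    def add(self, prompt: str, user: str, model: str):
        self.records.append({
            "embedding": fake_embed(prompt),
            "prompt": prompt,
            "timestamp": time.time(),   # attack metadata, as described above
            "user": user,
            "llm_model": model,
        })

def on_response(prompt, response, canary, store, user="anon", model="unknown"):
    # If the canary word appears in the output, the injection succeeded:
    # capture the offending prompt as a new attack example.
    if canary in response:
        store.add(prompt, user, model)
        return True
    return False
```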
deployment and self-hosting with environment configuration
Supports multiple deployment models including cloud-hosted (Netlify), Docker containerization, and self-hosted on-premise installations. Configuration is managed through environment variables for API keys, database connections, and detection thresholds, enabling different configurations per environment (dev, staging, production) without code changes. Includes Docker Compose templates for quick self-hosted setup with all dependencies (vector database, LLM backend).
Unique: Provides both cloud-hosted and self-hosted deployment options with environment-based configuration, enabling organizations to choose deployment model based on compliance requirements; includes Docker Compose templates for rapid self-hosted setup
vs alternatives: More flexible than SaaS-only competitors by supporting on-premise deployment; environment-based configuration enables multi-environment deployments without code changes
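Environment-based configuration might look like the following sketch. The variable names (`DETECTOR_*`) and defaults are assumptions for illustration; actual names depend on the deployment.

```python
import os

def load_config(env=os.environ) -> dict:
    # Each environment (dev, staging, production) sets these differently;
    # the code itself never changes.
    return {
        "llm_api_key": env.get("DETECTOR_LLM_API_KEY", ""),
        "vector_db_url": env.get("DETECTOR_VECTOR_DB_URL", "http://localhost:8080"),
        "heuristic_threshold": float(env.get("DETECTOR_HEURISTIC_THRESHOLD", "0.75")),
    }
```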
detection result explanation and scoring breakdown
Returns detailed explanations for each detection decision, including per-tactic scores, matched patterns, and reasoning from the LLM-based detector. When a prompt is flagged, developers can see which tactics triggered (heuristic keywords matched, vector similarity score, LLM confidence), enabling debugging and tuning of detection rules. Scores are normalized to a 0-1 range so they can be compared across tactics with different scoring schemes.
Unique: Provides per-tactic score breakdown and matched pattern details, enabling developers to understand which detection layers triggered and why; LLM-based detector includes semantic reasoning for transparency
vs alternatives: More transparent than black-box detection systems; detailed explanations enable faster tuning of detection rules and easier debugging of false positives
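A result object carrying the breakdown described above might look like this sketch. The field names are hypothetical; the point is that every tactic's normalized score and its evidence travel together with the decision.

```python
from dataclasses import dataclass, field

@dataclass
class DetectionResult:
    # Per-tactic scores, each already normalized to the 0-1 range.
    heuristic_score: float
    vector_score: float
    llm_score: float
    matched_patterns: list = field(default_factory=list)  # heuristic evidence
    llm_reasoning: str = ""                               # semantic reasoning

    @property
    def max_score(self) -> float:
        # Normalization makes a cross-tactic maximum meaningful.
        return max(self.heuristic_score, self.vector_score, self.llm_score)

    def explain(self) -> str:
        return (f"heuristic={self.heuristic_score:.2f} "
                f"(matched {self.matched_patterns}), "
                f"vector={self.vector_score:.2f}, "
                f"llm={self.llm_score:.2f}: {self.llm_reasoning}")
```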
vector database similarity matching against known attacks
Stores embeddings of previously detected or known prompt injection attacks in a vector database and compares incoming prompts against this corpus using cosine similarity or other distance metrics. When a new prompt is submitted, it's embedded and compared to the attack vector store; if similarity exceeds a configurable threshold, the input is flagged. This layer learns from past incidents and enables cross-organization threat intelligence sharing.
Unique: Implements a pluggable vector database abstraction that supports multiple backends (Pinecone, Weaviate, Milvus) and embedding providers, enabling organizations to choose infrastructure based on compliance and cost requirements, rather than being locked to a single vendor
vs alternatives: Provides institutional memory of attacks that heuristic and LLM-based detection lack, enabling detection of attack variations without retraining; more scalable than storing attack examples in code or configuration
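The similarity check itself reduces to a nearest-neighbor lookup over the attack corpus. A minimal sketch using cosine similarity over plain Python lists (a real deployment would delegate this to Pinecone, Weaviate, or Milvus; the 0.9 threshold is an illustrative default):

```python
import math

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def is_similar_to_known_attack(embedding, attack_embeddings, threshold=0.9):
    # Flag the input if its best match in the attack corpus exceeds
    # the configurable similarity threshold.
    best = max((cosine(embedding, e) for e in attack_embeddings), default=0.0)
    return best >= threshold, best
```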
canary token injection and leak detection
Inserts randomly generated, unique canary words into system prompts as invisible markers, then monitors LLM outputs to detect whether the model has leaked its instructions. When a canary word appears in the model's response, it indicates the model has exposed its system prompt or instructions to the user. This mechanism detects successful prompt injection attacks even if earlier layers missed them, and enables logging of new attack patterns to the vector database for future detection.
Unique: Generates cryptographically random canary words per request and stores them in-memory during the detection session, preventing attackers from discovering patterns; integrates with vector database to automatically log leaked prompts as new attack examples for continuous learning
vs alternatives: Provides a second line of defense that catches attacks missed by earlier layers and enables active learning; unique among competitors in treating canary leaks as training data for the vector database
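The two halves of the mechanism, injection and leak detection, can be sketched in a few lines. The comment-style embedding of the canary is an illustrative choice; the leak check mirrors the `isCanaryWordLeaked()` behavior described above.

```python
import secrets

def add_canary(system_prompt: str) -> tuple:
    # Cryptographically random token, generated fresh per request so
    # attackers cannot learn a reusable pattern.
    canary = secrets.token_hex(8)
    guarded = f"{system_prompt}\n<!-- {canary} -->"
    return guarded, canary

def is_canary_leaked(response: str, canary: str) -> bool:
    # If the canary appears in the output, the model exposed its instructions.
    return canary in response
```

The canary is held in memory only for the duration of the detection session; on a leak, the triggering prompt can be handed to the vector database as a new attack example.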
strategy pattern-based detection configuration
Organizes all detection tactics (heuristic, LLM-based, vector database, canary tokens) using the strategy design pattern, allowing developers to enable/disable specific tactics, adjust per-tactic thresholds, and compose custom detection pipelines without modifying core code. Each tactic is a pluggable strategy with a standard interface, and the SDK initializes with a sensible default strategy that includes all three main tactics. Configuration is applied at SDK initialization and can be overridden per-request.
Unique: Implements strategy pattern with per-tactic threshold configuration and enable/disable flags, allowing fine-grained control over detection behavior without code changes; default strategy includes all tactics but developers can compose minimal pipelines for latency-sensitive applications
vs alternatives: More flexible than monolithic detection systems that run all checks unconditionally; enables cost optimization by disabling expensive tactics in low-risk scenarios while maintaining security in high-risk paths
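The composition described above can be sketched with a common tactic interface and a detector that runs enabled tactics in order. The short-circuit-on-first-flag behavior and the `KeywordTactic` stub are illustrative assumptions, not the SDK's exact semantics.

```python
from typing import Protocol

class Tactic(Protocol):
    """Standard interface every pluggable detection strategy implements."""
    name: str
    def score(self, prompt: str) -> float: ...

class Detector:
    def __init__(self, tactics: list, thresholds: dict):
        self.tactics = tactics          # enabled tactics, in execution order
        self.thresholds = thresholds    # per-tactic thresholds

    def detect(self, prompt: str):
        results = {}
        for t in self.tactics:
            s = t.score(prompt)
            results[t.name] = s
            if s >= self.thresholds.get(t.name, 1.0):
                return True, results    # short-circuit once a tactic flags
        return False, results

class KeywordTactic:
    # Minimal stand-in for the heuristic tactic.
    name = "heuristic"
    def score(self, prompt: str) -> float:
        return 1.0 if "ignore previous instructions" in prompt.lower() else 0.0
```

A latency-sensitive path might pass only `KeywordTactic()`; a high-risk path would also append the LLM and vector-database tactics, with the threshold dictionary supplied per environment.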