Lightweight AI Security Testing Library
A minimal TypeScript library with 25+ curated prompt injection patterns from leading security research. Easy to integrate, comprehensive coverage, production-ready.
npm install prompt-injector
import { PromptInjector } from 'prompt-injector'
๐ฏ
0
Attack Patterns
๐ฌ
4
Attack Categories
๐
SOTA
Research Based
๐
0
Generated Prompts
Attack Categories
๐ญ
Jailbreaking (5 patterns)
Role-play and persona-based attacks that attempt to bypass AI safety guidelines through character roleplay and fictional scenarios.
๐
Instruction Hijacking (6 patterns)
Direct attempts to override system prompts and inject new instructions that change AI behavior and responses.
๐
Encoding Attacks (7 patterns)
Obfuscation techniques using Base64, ROT13, Unicode, and other encodings to bypass content filters and detection systems.
๐ง
Logic Traps (6 patterns)
Sophisticated reasoning exploits using hypothetical scenarios, false urgency, and academic authority to manipulate responses.
Quick Start Example
import { PromptInjector } from 'prompt-injector';
// Initialize with your preferred configuration
const injector = new PromptInjector({
severity: 'intermediate',
categories: ['jailbreak', 'instruction-hijack'],
maxAttempts: 50
});
// Generate test cases
const testCases = injector.generateTests('customer-service-bot');
// Test your AI system
const results = await injector.runTests(yourAISystem);
const report = injector.generateReport(results);
console.log(`Risk Score: ${report.summary.riskScore}`);