๐Ÿ›ก๏ธ

Prompt Injector

TypeScript Library for AI Security Testing

Lightweight AI Security Testing Library

A minimal TypeScript library with 25+ curated prompt injection patterns from leading security research. Easy to integrate, comprehensive coverage, production-ready.

npm install prompt-injector
import { PromptInjector } from 'prompt-injector'
๐ŸŽฏ
0
Attack Patterns
๐Ÿ”ฌ
4
Attack Categories
๐Ÿ“š
SOTA
Research Based
๐Ÿ“
0
Generated Prompts

Attack Categories

๐ŸŽญ

Jailbreaking (5 patterns)

Role-play and persona-based attacks that attempt to bypass AI safety guidelines through character roleplay and fictional scenarios.

๐Ÿ”€

Instruction Hijacking (6 patterns)

Direct attempts to override system prompts and inject new instructions that change AI behavior and responses.

๐Ÿ”

Encoding Attacks (7 patterns)

Obfuscation techniques using Base64, ROT13, Unicode, and other encodings to bypass content filters and detection systems.

๐Ÿง 

Logic Traps (6 patterns)

Sophisticated reasoning exploits using hypothetical scenarios, false urgency, and academic authority to manipulate responses.

Quick Start Example

import { PromptInjector } from 'prompt-injector';

// Initialize with your preferred configuration
const injector = new PromptInjector({
  severity: 'intermediate',
  categories: ['jailbreak', 'instruction-hijack'],
  maxAttempts: 50
});

// Generate test cases
const testCases = injector.generateTests('customer-service-bot');

// Test your AI system
const results = await injector.runTests(yourAISystem);
const report = injector.generateReport(results);

console.log(`Risk Score: ${report.summary.riskScore}`);