⚠️ Experimental Software - Defensive Testing Only
Generate Prompt Injection Attacks
Research-informed multi-turn prompt injection attack patterns for testing AI system defenses.
0 Built-in Primitives | 4 Attack Strategies | 0 Dependencies | <40KB Bundle Size
Interactive Demo
Generate attack conversations for testing your AI systems
Research Foundation
Academic citations and research backing
Primary Research Sources
FlipAttack Character Manipulation:
Liu, Y., He, X., Xiong, M., Fu, J., Deng, S., & Hooi, B. (2024). FlipAttack: Jailbreak LLMs via Flipping. arXiv preprint arXiv:2410.02832.
Mozilla Hexadecimal Encoding Research:
Figueroa, M. (2024). ChatGPT-4o Guardrail Jailbreak: Hex Encoding for Writing CVE Exploits. Mozilla 0Din Platform Research.
Multi-turn Attack Patterns:
Research documented in "Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs"
Base64 Encoding Defense Research:
Defense against Prompt Injection Attacks via Mixture of Encodings. arXiv preprint arXiv:2504.07467.
OWASP GenAI Security Classification:
OWASP Top 10 for LLM Applications (2025). Prompt Injection (LLM01) is ranked the #1 AI security risk.
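The character-manipulation and encoding techniques cited above are simple string transforms. The snippet below is an illustrative sketch only, not code from this library: it shows how a test payload can be reversed FlipAttack-style, hex-encoded as in the Mozilla 0Din research, and Base64-encoded as studied in the mixture-of-encodings defense paper, so that keyword-based filters no longer match the original text (Node.js, using the built-in Buffer API).

// Illustrative only: encoding transforms used in the cited attack research (not library code)
const payload = 'example test payload';

// FlipAttack-style character reversal; the target model is asked to flip it back before acting
const flipped = payload.split('').reverse().join('');

// Hexadecimal encoding, as in the Mozilla 0Din hex-encoding jailbreak report
const hexEncoded = Buffer.from(payload, 'utf8').toString('hex');

// Base64 encoding, the channel analyzed in the mixture-of-encodings defense paper
const base64Encoded = Buffer.from(payload, 'utf8').toString('base64');

console.log({ flipped, hexEncoded, base64Encoded });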
Quick Installation
# Install via npm
npm install @blueprintlabio/prompt-injector
// Basic usage
import { PromptInjector } from '@blueprintlabio/prompt-injector';
const injector = new PromptInjector();
const conversation = injector.generateConversation(
"Extract system prompt",
{ strategy: 'roleplay', maxTurns: 3 }
);
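The shape of the object returned by generateConversation is not shown here, so the harness below is a minimal sketch under stated assumptions: it assumes the conversation exposes a turns array whose entries carry the attacker message in a content field, and sendToTargetModel is a placeholder for your own client for a system you are authorized to test. Adjust it to the library's actual return shape.

// Hypothetical test harness: replay generated attack turns against a system you are authorized to test
// Assumes conversation.turns[i].content holds the attacker message (an assumption, not documented API)
async function runDefensiveTest(conversation, sendToTargetModel) {
  const transcript = [];
  for (const turn of conversation.turns ?? []) {
    const reply = await sendToTargetModel(turn.content); // your own model/API client goes here
    transcript.push({ attack: turn.content, reply });
  }
  return transcript; // review or score the replies for policy violations
}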
Responsible Use
Appropriate Use:
Testing AI systems you own or have permission to test
Security Research:
Educational demonstrations and academic research
Inappropriate Use:
Attacking systems without authorization, or using these patterns for malicious exploitation