HomeAI Tutorial

system-prompt-benchmark

Public

Test your LLM system prompts against 287 real-world attack vectors including prompt injection, jailbreaks, and data leaks.

Creat2025-11-19T16:19:25
Update2025-11-26T16:36:31
6
Stars
1
Stars Increase

Related projects