vigil-llm
Public? Vigil ? Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
adversarial-attacksadversarial-machine-learninglarge-language-modelsllm-securityllmopsprompt-injectionsecurity-toolsyara-scanner
Creat:2023-09-05T01:02:21
Update:2025-03-26T08:24:10
https://vigil.deadbits.ai/
400
Stars
0
Stars Increase