aixploit
PublicEngineered to help red teams and penetration testers exploit large language model AI solutions vulnerabilities.
adversarial-attacksadversarial-machine-learningchatgpthackinglarge-language-modelsllmllm-guardrailsllm-securityprompt-injectionredteaming
Creat:2024-11-04T23:10:26
Update:2025-03-07T18:28:38
https://www.aintrust.ai
6
Stars
0
Stars Increase