Jailbreaking-Deep-Models
This repository contains the codebase for Jailbreaking Deep Models, which investigates the vulnerability of deep convolutional neural networks to adversarial attacks. The project systematically implements and analyzes the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and localized patch-based attacks on a pretrained DenseNet-121 ImageNet classifier.
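The repository's source is not reproduced on this page, so the sketch below is illustrative only: a minimal FGSM and PGD attack against a pretrained DenseNet-121, assuming a PyTorch/torchvision setup. The function names `fgsm_attack` and `pgd_attack`, the hyperparameters, and the raw-pixel input range are assumptions, not the repo's actual API.

```python
import torch
import torch.nn.functional as F
from torchvision.models import densenet121, DenseNet121_Weights

def fgsm_attack(model, x, y, eps):
    """One-step FGSM: move x by eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Clamp to [0, 1], assuming unnormalized pixel inputs (an assumption here).
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, eps, alpha, steps):
    """Iterated FGSM, projected back onto the L-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep pixels in a valid range
        x_adv = x_adv.detach()
    return x_adv

if __name__ == "__main__":
    model = densenet121(weights=DenseNet121_Weights.IMAGENET1K_V1).eval()
    x = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed image batch
    y = torch.tensor([0])           # stand-in label
    x_fgsm = fgsm_attack(model, x, y, eps=8 / 255)
    x_pgd = pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10)
```

As a rough reference point, L-infinity budgets on [0, 1]-scaled ImageNet inputs are usually small (eps around 2/255 to 8/255), and PGD with a step size near eps/4 for 10 to 40 steps is a common baseline; the repo's actual settings may differ.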
Topics: adversarial-attacks, deep-learning, densenet121, fgsm-attack, imagenet-classifier, jailbreak, machine-learning, numpy, patch-based-attack, pgd-adversarial-attacks
Created: 2025-05-18T06:34:50
Updated: 2025-05-18T07:10:44
Stars: 0