Jailbreaking-Deep-Models

This repository contains the codebase for Jailbreaking Deep Models, which investigates the vulnerability of deep convolutional neural networks to adversarial attacks. The project systematically implements and analyzes the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and localized patch-based attacks on a pretrained model.
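
The repository's own implementation is not shown in this listing. As a reference for the two gradient-based attacks named above, here is a minimal PyTorch sketch; the function names and the eps/alpha/steps parameters are illustrative assumptions, not the project's API, and inputs are assumed to be normalized to [0, 1].

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps):
    """Single-step FGSM: perturb x by eps in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Clamp assumes pixel values live in [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, eps, alpha, steps):
    """PGD: iterated FGSM-style steps, each projected back into the
    L_inf ball of radius eps around the clean input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Projection step: stay within eps of x and within valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv

PGD is simply FGSM applied iteratively with a smaller step size alpha and an explicit projection, which is why it is generally the stronger of the two attacks.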

Created: 2025-05-18T06:34:50
Updated: 2025-05-18T07:10:44
Stars: 0 (increase: 0)