
Jailbreaking-Deep-Models

This repository contains the codebase for Jailbreaking Deep Models, a project that investigates the vulnerability of deep convolutional neural networks to adversarial attacks. It systematically implements and analyzes the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and localized patch-based attacks on the pretrained …
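The two gradient-based attacks named above follow a simple recipe: FGSM takes a single step of size eps in the direction of the sign of the loss gradient with respect to the input, while PGD iterates smaller signed steps and projects back into an eps-ball around the original input. A minimal NumPy sketch, using a toy logistic-regression loss in place of a real network (the weight vector `w`, the helper names, and all parameter values are illustrative assumptions, not taken from the repository):

```python
import numpy as np

def fgsm_attack(x, grad, eps):
    """FGSM: one step of size eps along the sign of the input gradient."""
    return x + eps * np.sign(grad)

def pgd_attack(x, grad_fn, eps, alpha, steps):
    """PGD: iterated signed steps, clipped back to the L-inf eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

# Toy stand-in for a model: logistic loss L = log(1 + exp(-y * w.x)),
# whose gradient w.r.t. the input x is -y * sigmoid(-y * w.x) * w.
def input_gradient(w, x, y):
    margin = y * np.dot(w, x)
    return -y * (1.0 / (1.0 + np.exp(margin))) * w

def loss(w, x, y):
    return np.log1p(np.exp(-y * np.dot(w, x)))

w = np.array([1.0, -2.0, 0.5])   # assumed toy weights
x = np.array([0.2, 0.1, -0.3])   # assumed clean input
y = 1.0                          # binary label in {-1, +1}

x_fgsm = fgsm_attack(x, input_gradient(w, x, y), eps=0.1)
x_pgd = pgd_attack(x, lambda z: input_gradient(w, z, y),
                   eps=0.1, alpha=0.03, steps=5)
```

Both attacks increase the loss on the toy model while keeping every coordinate within eps of the original input, which is the defining constraint of an L-infinity-bounded attack.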

Created: 2025-05-18T06:34:50
Updated: 2025-05-18T07:10:44
Stars: 0 (increase: 0)