
Adversarial-Attacks-on-Deep-Learning-Models


This project explores the concept of "adversarial attacks" on deep learning models, focusing on image classification with PyTorch. It implements and demonstrates the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks against a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) trained on the MNIST dataset.
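Below is a minimal sketch of how FGSM and PGD are commonly implemented in PyTorch, for illustration only; it assumes a classifier `model` that returns logits and inputs scaled to [0, 1], and the `epsilon`, `alpha`, and step-count values are illustrative defaults rather than the repository's actual settings.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.25):
    """FGSM: a single step of size epsilon along the sign of the input gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixels in the valid [0, 1] range

def pgd_attack(model, images, labels, epsilon=0.3, alpha=0.01, steps=40):
    """PGD: iterative FGSM steps, each projected back into the L-inf epsilon-ball."""
    orig = images.clone().detach()
    adv = orig.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = orig + (adv - orig).clamp(-epsilon, epsilon)  # projection step
        adv = adv.clamp(0, 1)
    return adv.detach()
```

Both functions are untargeted attacks (they increase the loss on the true labels); hyperparameters such as epsilon would typically be tuned per model and dataset.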

Created: 2025-04-28T05:40:49
Updated: 2025-04-28T06:42:51
Stars: 0