
unlearning-or-concealment


We expose a significant vulnerability in diffusion-model unlearning methods: at inference time, an attacker can recover concepts that were supposedly erased. Our approach leverages a novel Partial Diffusion Attack that operates across all layers of the model.

Created: 2024-08-23T11:37:34
Updated: 2025-03-07T12:17:03
https://respailab.github.io/unlearning-or-concealment
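As a rough intuition for the "partial diffusion" idea behind the attack, the sketch below shows only the standard closed-form forward-diffusion step, stopped at an intermediate timestep t < T. This is a minimal NumPy illustration under assumed defaults (a linear beta schedule, T = 1000); it is not the authors' implementation. The key point it demonstrates: the smaller t is, the more of the original signal survives in x_t, so denoising that starts from such a partially diffused latent can be steered back toward a supposedly erased concept.

```python
import numpy as np

def partial_diffuse(x0, t, T=1000, beta_start=1e-4, beta_end=0.02, seed=None):
    """Apply the closed-form forward process q(x_t | x_0) up to an
    intermediate step t < T, so part of the original signal remains.

    Assumes a standard linear beta schedule (hypothetical defaults,
    not taken from the paper):
        x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bar = np.cumprod(1.0 - betas)          # cumulative product of (1 - beta)
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return xt

# A toy "image": with a small t most of x0 survives; with a large t it is
# almost pure noise. A partial-diffusion attack exploits the former regime.
x0 = np.ones((4, 4))
xt_early = partial_diffuse(x0, t=50, seed=0)    # mostly signal
xt_late = partial_diffuse(x0, t=900, seed=0)    # mostly noise
```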