unlearning-or-concealment

Public

We expose a significant vulnerability in diffusion model unlearning methods, where an attacker can reverse the supposed erasure of concepts during the inference process. Our approach leverages a novel Partial Diffusion Attack that operates across all layers of the model.
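The core idea behind a partial diffusion attack can be sketched in a few lines: instead of sampling from pure noise, the attacker noises an image of the supposedly erased concept only up to an intermediate timestep and hands that partially noised latent to the unlearned model's reverse process. The sketch below is a minimal illustration under standard DDPM assumptions (linear beta schedule, closed-form forward noising); the names `betas`, `partial_noise`, and the shapes are illustrative, not the project's actual API.

```python
import numpy as np

# DDPM-style forward noising to an intermediate timestep t, the
# starting point of a partial diffusion attack (illustrative sketch).
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule (assumption)
alphas_cumprod = np.cumprod(1.0 - betas)  # \bar{alpha}_t, strictly decreasing

def partial_noise(x0, t, rng):
    """Noise x0 to timestep t: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps."""
    abar = alphas_cumprod[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((3, 64, 64))  # stand-in for an image of the "erased" concept
x_mid = partial_noise(x0, t=500, rng=rng)
# An attacker would now feed x_mid into the unlearned model's reverse
# (denoising) process; if the erased concept re-emerges, the unlearning
# was concealment rather than true removal.
```

Starting the reverse process from `x_mid` rather than pure noise preserves low-frequency structure of the concept, which is what makes the probe effective against methods that only suppress the concept along the full generation trajectory.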

Created: 2024-08-23T11:37:34
Updated: 2025-03-07T12:17:03
https://respailab.github.io/unlearning-or-concealment
Stars: 1 (+0)