
RevealVLLMSafetyEval


RevealVLLMSafetyEval is a comprehensive pipeline for evaluating Vision-Language Models (VLMs) on their compliance with harm-related policies. It automates the creation of adversarial multi-turn datasets and the evaluation of model responses, supporting responsible AI development and red-teaming efforts.
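As a rough illustration of what such a pipeline involves, the sketch below assumes a hypothetical two-stage structure: generating adversarial multi-turn conversations from seed prompts, then scoring the model's responses with a judge against a harm policy. The class and function names are illustrative assumptions, not the project's actual API.

```python
# Hypothetical sketch of a two-stage VLM safety-evaluation pipeline
# (names and structure are illustrative, not the repo's actual API).
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Turn:
    role: str                       # "user" or "assistant"
    text: str
    image_path: Optional[str] = None  # optional image attached to the turn


@dataclass
class Conversation:
    policy: str                     # harm policy this conversation probes
    turns: List[Turn] = field(default_factory=list)


def build_adversarial_dataset(seed_prompts: List[str], policy: str) -> List[Conversation]:
    """Stage 1: wrap each seed prompt into a multi-turn adversarial conversation."""
    dataset = []
    for prompt in seed_prompts:
        conv = Conversation(policy=policy)
        conv.turns.append(Turn(role="user", text=prompt))
        dataset.append(conv)
    return dataset


def evaluate(dataset: List[Conversation],
             model: Callable[[Conversation], str],
             judge: Callable[[str, str], bool]) -> float:
    """Stage 2: query the model under test and score each response with a judge."""
    violations = 0
    for conv in dataset:
        response = model(conv)
        conv.turns.append(Turn(role="assistant", text=response))
        if judge(response, conv.policy):  # True means the policy was violated
            violations += 1
    return violations / max(len(dataset), 1)  # violation rate over the dataset
```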

Created: 2025-05-05T18:08:41
Updated: 2025-05-19T12:29:17
Stars: 1
Stars Increase: 0
