Home
Information

AI Dataset Collection

Large-scale datasets and benchmarks for training, evaluating, and testing models to measure

Tools

Intelligent Document Recognition

Comprehensive Text Extraction and Document Processing Solutions for Users

AI Tutorial

LLM-Attacks

Public

Comprehensive taxonomy of AI security vulnerabilities, LLM adversarial attacks, prompt injection techniques, and machine learning security research. Covers 71+ attack vectors including model poisoning, agentic AI exploits, and privacy breaches.

Creat2024-07-29T04:13:13
Update2025-09-19T15:30:31
https://ai-security-research-group.github.io/LLM-Attacks/
4
Stars
0
Stars Increase

Related projects