
SHIELD

Public

SHIELD (System for Harmful Explicit-content Identification and Evaluation through an LLM-Driven approach) scores explicit content across 15 categories of explicitness, each on a scale from 0 to 100, using LLMs as the primary scoring tool.
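
The description above is the only specification shown here; as a minimal sketch of the LLM-driven scoring idea it describes, the Python snippet below builds a per-category rating prompt and parses a 0-100 JSON response. The category names, function names, and prompt wording are hypothetical placeholders, not SHIELD's actual schema or interface.

```python
import json

# Hypothetical category list: SHIELD's actual 15 categories are not
# documented here; these names are illustrative placeholders only.
CATEGORIES = [
    "violence", "gore", "sexual_content", "nudity", "profanity",
    "hate_speech", "harassment", "self_harm", "drugs", "weapons",
    "gambling", "extremism", "crime", "shock_content", "minors_risk",
]

def build_prompt(text: str) -> str:
    """Ask an LLM to rate the text 0-100 for each category, as JSON."""
    return (
        "Rate the following text from 0 (none) to 100 (extreme) for each "
        f"category: {', '.join(CATEGORIES)}.\n"
        "Respond with a single JSON object mapping category to score.\n\n"
        f"Text:\n{text}"
    )

def parse_scores(llm_reply: str) -> dict[str, int]:
    """Parse the LLM's JSON reply, clamping every score to 0-100."""
    raw = json.loads(llm_reply)
    return {c: max(0, min(100, int(raw.get(c, 0)))) for c in CATEGORIES}

if __name__ == "__main__":
    # Example with a canned reply standing in for a real LLM call.
    reply = json.dumps({c: 0 for c in CATEGORIES} | {"profanity": 40})
    print(parse_scores(reply)["profanity"])  # -> 40
```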

Created: 2024-10-09T01:53:56
Updated: 2025-06-02T19:14:58
Stars: 1
Stars Increase: 0