
clmpi-benchmark


CLMPI Benchmark is a lightweight, transparent framework for evaluating small-to-mid-size LLMs across five core dimensions: Accuracy, Contextual Understanding, Coherence, Fluency, and Performance Efficiency. It ships with curated prompts, reproducible generation profiles, and stepwise runners that execute each metric independently or as a full pipeline.
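The stepwise-runner idea above can be sketched as follows. This is a minimal illustration, not the project's actual API: the metric names come from the description, but `run_metric`, `run_pipeline`, and the toy scoring functions are hypothetical stand-ins for the real evaluators.

```python
from typing import Callable, Dict

# Toy scorers standing in for the real evaluators; each maps a model
# output and a reference to a score in [0, 1].
def accuracy(output: str, reference: str) -> float:
    # Exact-match proxy for accuracy.
    return 1.0 if output.strip() == reference.strip() else 0.0

def fluency(output: str, reference: str) -> float:
    # Toy proxy: penalize very short outputs.
    return min(len(output.split()) / 10.0, 1.0)

# Registry of metrics; the real framework covers all five dimensions.
METRICS: Dict[str, Callable[[str, str], float]] = {
    "accuracy": accuracy,
    "fluency": fluency,
}

def run_metric(name: str, output: str, reference: str) -> float:
    """Run a single metric independently."""
    return METRICS[name](output, reference)

def run_pipeline(output: str, reference: str) -> Dict[str, float]:
    """Run every registered metric as a full pipeline."""
    return {name: fn(output, reference) for name, fn in METRICS.items()}

scores = run_pipeline("Paris is the capital of France",
                      "Paris is the capital of France")
```

Each metric can be invoked on its own via `run_metric`, or all at once via `run_pipeline`, mirroring the "independently or as a full pipeline" execution modes described above.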

Created: 2025-08-01T10:22:30
Updated: 2025-09-04T21:01:15
