Ant Group CodeFuse Open Source ModelCache for Large Model Semantic Caching

ModelCache, an open-source project under Ant Group's CodeFuse, is built around four modules: adapter, embedding, similarity, and data_manager. By caching responses to semantically similar queries, ModelCache reduces the inference cost of large-model applications and improves user experience: on cache hits, average latency drops by roughly a factor of 10, with speed improvements of up to 14.5%. The project will continue to optimize performance and accuracy and to further improve recall time.
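
To illustrate the idea behind semantic caching, here is a minimal sketch (not ModelCache's actual API) of the embedding + similarity + data-manager flow: incoming queries are embedded, compared against stored query embeddings, and a cached answer is returned when similarity exceeds a threshold. The hash-based `embed` function is a toy stand-in that only matches near-identical text; a real deployment would use a learned sentence-embedding model.

```python
import hashlib
from dataclasses import dataclass, field

import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic embedding; a real cache would call an embedding model."""
    seed = int.from_bytes(hashlib.sha256(text.lower().encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)


@dataclass
class SemanticCache:
    threshold: float = 0.9                      # similarity cutoff for a cache hit
    _keys: list[np.ndarray] = field(default_factory=list)
    _values: list[str] = field(default_factory=list)

    def get(self, query: str) -> str | None:
        """Return a cached answer if some stored query is similar enough."""
        if not self._keys:
            return None
        q = embed(query)
        sims = np.stack(self._keys) @ q         # cosine similarity (unit-norm vectors)
        best = int(np.argmax(sims))
        return self._values[best] if sims[best] >= self.threshold else None

    def put(self, query: str, answer: str) -> None:
        """Store the query embedding alongside the model's answer."""
        self._keys.append(embed(query))
        self._values.append(answer)


cache = SemanticCache()
cache.put("What is semantic caching?", "It reuses answers for semantically similar prompts.")
print(cache.get("What is semantic caching?"))   # hit: returns the stored answer
print(cache.get("How do I bake bread?"))        # miss: returns None, so the LLM would be called
```

On a miss, the application calls the large model and writes the new (query, answer) pair back into the cache; on a hit, the expensive inference call is skipped entirely, which is where the latency and cost savings come from.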
