AIBase
A training method for medical expert models, proposed by Peking University and collaborators, elevates an 8B model to GPT-4-level performance

A team from Peking University and the Hong Kong University of Science and Technology has introduced a new training method that brings an 8B-parameter medical expert model to GPT-4-level performance. Alongside this result, they propose a new concept, the "stability gap," to explain phenomena observed during the continual pre-training of large language models.
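Continual pre-training means taking an already-trained model and training it further on a new, domain-specific corpus; the "stability gap" refers to the transient drop in performance that can appear right after the switch to the new data distribution, before the model recovers. The toy sketch below is an assumption-laden illustration of that setup, not the paper's actual recipe: a small regression model stands in for a pretrained language model, and a shifted data distribution stands in for the medical corpus.

```python
# Toy illustration of continual pre-training (NOT the paper's method):
# a model trained on "general" data is further trained on a shifted
# "domain" distribution, and its loss on the new data spikes at the
# switch before recovering.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained model.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def train(xs, ys, steps):
    """Run `steps` full-batch optimization steps; return the loss trace."""
    losses = []
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(xs), ys)
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return losses

# Phase 1: "general" pre-training data.
general_x = torch.randn(64, 4)
general_y = general_x.sum(dim=1, keepdim=True)
train(general_x, general_y, steps=200)

# Phase 2: continual pre-training on a shifted "domain" distribution
# (different input mean and different target function).
domain_x = torch.randn(64, 4) + 2.0
domain_y = domain_x.sum(dim=1, keepdim=True) * 0.5
domain_losses = train(domain_x, domain_y, steps=200)

# The domain loss is high immediately after the distribution shift and
# falls as training continues -- the transient the "stability gap"
# concept describes.
print(domain_losses[0] > domain_losses[-1])
```

In the actual setting the two phases would be next-token prediction on general web text and on a medical corpus, but the loss-spike-then-recovery shape is the point of the illustration.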
