Hong Kong recently launched "Lingyin" (EchoCare), an ultrasound large model billed as the world's first ultrasound foundation model trained on a dataset of more than 4 million ultrasound images. The project was developed by the Center for Artificial Intelligence and Robotics Innovation (CAIR) of the Hong Kong Institute of Innovation Research, Chinese Academy of Sciences, with the aim of easing the shortage of ultrasound doctors and improving both the efficiency of ultrasound examinations and the overall level of diagnosis.
With ultrasound playing an increasingly important role in disease diagnosis and health monitoring, roughly 2 billion ultrasound examinations are now performed in China each year. Yet ultrasound doctors are in severe shortage, with an estimated gap of 150,000. Training a qualified ultrasound doctor takes 3 to 5 years, and even longer in some subspecialties, which makes broad access to ultrasound examinations a major challenge. Professor Wong Hung-leung of the Faculty of Medicine at The Chinese University of Hong Kong pointed out that waiting times for examinations in Hong Kong are long; for routine examinations, the wait can sometimes exceed a year.
Image note: AI-generated illustration, licensed through Midjourney.
Against this backdrop, CAIR's EchoCare large model has emerged. The model is not only a technological innovation but also a milestone for AI applications in ultrasound. EchoCare uses a purely data-driven, structured contrastive self-supervised learning method, breaking through the bottleneck of scarce high-quality annotated data that has long constrained traditional ultrasound AI diagnosis. This approach lets the model learn image features without large amounts of manual annotation and gives it good cross-center generalization.
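The article does not disclose EchoCare's training code, so the following is only a minimal sketch of how contrastive self-supervised pretraining works in general (SimCLR-style NT-Xent loss): two augmented views of the same unlabeled scan are pulled together in embedding space while other scans in the batch act as negatives. The encoder, loss function, and parameter values here are hypothetical illustrations, not EchoCare's actual method.

```python
# Generic sketch of contrastive self-supervised pretraining (NT-Xent loss).
# Nothing below comes from EchoCare; names and values are illustrative only.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Two views of the same image are positives; all other images are negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d) unit vectors
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # Row i's positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: embed two randomly augmented views of each unlabeled scan and
# minimize the loss -- no diagnostic labels are needed at this stage.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(224 * 224, 128))
view1 = torch.randn(32, 1, 224, 224)  # stand-ins for augmented ultrasound frames
view2 = torch.randn(32, 1, 224, 224)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()
```

After pretraining on unlabeled scans, the encoder's features can be fine-tuned on a much smaller labeled set, which is what makes this family of methods attractive when expert annotation is scarce.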
In addition, EchoCare supports continuous learning, so it can be iteratively optimized as new application scenarios arise and kept up to date. Preliminary validation indicates that the model performs well in real clinical settings: in retrospective studies at multiple hospitals, including Shandong University, it achieved a sensitivity of 85.6% and a specificity of 88.7%.
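For context on the reported metrics: sensitivity is the fraction of actual positive cases the model flags, and specificity is the fraction of actual negative cases it correctly clears. The counts in the snippet below are invented to reproduce the reported percentages and are not EchoCare's underlying data.

```python
# Sensitivity and specificity from confusion-matrix counts.
# The counts are made up for illustration; they are not EchoCare's results.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)  # share of diseased cases the model catches

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)  # share of healthy cases the model clears

# Example: 856 of 1000 diseased cases flagged, 887 of 1000 healthy cases cleared.
print(sensitivity(tp=856, fn=144))  # 0.856 -> 85.6%
print(specificity(tn=887, fp=113))  # 0.887 -> 88.7%
```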