LiquidAI has officially launched "Liquid Nanos," a series of lightweight AI models designed for edge computing devices that can run efficiently on hardware as small as a Raspberry Pi. The series covers five application scenarios: translation, extraction, RAG (retrieval-augmented generation), tool calling, and mathematical reasoning, giving developers a flexible set of task-specific options.
The Liquid Nanos series comes in two parameter sizes, 350M and 1.2B. The models are built to balance low power consumption with strong task performance, letting users run complex AI functions without relying on powerful computing resources. All models are also available in the GGUF quantization format, keeping memory and compute requirements low enough for a broad range of devices and users.
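For illustration, the sketch below shows one way a GGUF-quantized Liquid Nanos checkpoint could be run on-device with the llama-cpp-python library. The file name is a placeholder, and the thread and context settings are assumptions to be tuned for the target hardware.

```python
# Minimal sketch: running a GGUF-quantized Liquid Nanos checkpoint on-device
# with llama-cpp-python. The model_path below is a placeholder; download the
# actual GGUF file for the chosen model from its Hugging Face repository.
from llama_cpp import Llama

llm = Llama(
    model_path="liquid-nanos-model.gguf",  # placeholder path to a downloaded GGUF file
    n_ctx=4096,                             # context window; adjust to available memory
    n_threads=4,                            # e.g. the four cores of a Raspberry Pi 4/5
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize the key fields in this invoice: ..."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```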
LiquidAI's first 12 task-specific models are now available on the Hugging Face platform. They include the Japanese-English translation model LFM2-350M-ENJP-MT, the extraction models LFM2-350M-Extract and LFM2-1.2B-Extract, the RAG model LFM2-1.2B-RAG, the tool-calling model LFM2-1.2B-Tool, and the mathematical reasoning model LFM2-350M-Math. These releases enrich developers' toolkits and provide solid support for a range of practical applications.
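As a sketch of how one of these checkpoints might be used from Python, the snippet below loads the Japanese-English translation model through Hugging Face transformers. The repo id is inferred from the collection linked below, the example input is illustrative rather than an officially documented prompt format, and it assumes an installed transformers release that supports the LFM2 architecture.

```python
# Sketch: loading the LFM2-350M-ENJP-MT translation model via transformers.
# Assumes the repo id "LiquidAI/LFM2-350M-ENJP-MT" and a transformers version
# with LFM2 architecture support.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-350M-ENJP-MT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The chat template ships with the tokenizer; the Japanese sentence here is
# just a sample input for translation into English.
messages = [{"role": "user", "content": "今日はいい天気ですね。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```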
The release of Liquid Nanos marks another step forward for edge computing in AI applications. With their lightweight footprint and strong performance, the Liquid Nanos models give developers and enterprises a practical foundation for AI innovation. As edge computing technology continues to mature, more applications built on these models are likely to emerge, supporting digital transformation across industries.
https://huggingface.co/collections/LiquidAI/liquid-nanos-68b98d898414dd94d4d5f99a
Key Points:
🌟 LiquidAI has released the "Liquid Nanos" series of lightweight AI models, specifically designed for edge devices.
📊 Two parameter sizes, 350M and 1.2B, both available in the GGUF quantization format for efficient performance and resource use.
🚀 The first 12 task-specific models have been launched on Hugging Face, covering multiple application scenarios such as translation, extraction, and RAG.