The enterprise-level AI agent market is growing explosively: the Chinese market reached 18.6 billion yuan in the first three quarters of 2025, up 220% year-on-year. Sunac was included in the international vendor map on the strength of its multimodal large-model integration platform, a sign that AI Agent technology is moving from proof of concept to large-scale deployment across industries.
Qiangnao Technology closed a funding round of approximately 2 billion yuan, a record for the domestic brain-computer interface industry and second in scale only to Neuralink. Investors include top institutions such as IDG Capital and HD International, signaling strong capital-market confidence in the non-invasive brain-computer interface sector.
Baidu Baike launched new features such as "Dynamic Baike" and "AI Knowledge Graph" during its annual ceremony, aiming to make knowledge acquisition more vivid and systematic. The total number of entries now exceeds 30 million, making it one of the largest knowledge bases on the Chinese internet, and more than 8.03 million users have contributed edits, reflecting broad participation in knowledge sharing and creation.
ChatGPT has over 900 million weekly active users worldwide, but about 90% are outside the US and Canada; the lower ad value of these emerging markets poses a challenge for its monetization strategy.
Build secure, full-stack, production-grade internal applications and workflows with AI without coding.
Super Intern is an AI teammate in group chats. It can provide reminders, answer questions, and create content, ensuring smooth conversations.
Bitchat is Jack Dorsey's revolutionary decentralized communication application that works via Bluetooth mesh network without the need for the Internet.
supOS is an integrated industrial Internet of Things platform that supports the integration of multi-source industrial data and enables digital transformation.
Provider          Input tokens/M      Output tokens/M   Context Length (K)
OpenAI            $7.7                $30.8             200
ByteDance         -                   -                 -
Alibaba           $54 / $8.75 / $0.3  $163 / $70 / -    1k / 400 / 32
Google            $140                $280              -
Baidu             $3 / $1.5           $9 / $4.5         - / 128
Huawei            -                   -                 -
Tencent           $2.4 / $2           $9.6 / -          -
Baichuan          $15                 -                 -
Shanghai-ai-lab   -                   -                 8
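Prices like those above are quoted per million tokens, so the cost of a single request is a simple weighted sum of its input and output token counts. A minimal sketch (the token counts in the example are made up for illustration):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate the dollar cost of one request from per-million-token prices."""
    return (input_tokens / 1_000_000 * input_price_per_m
            + output_tokens / 1_000_000 * output_price_per_m)

# Using the OpenAI row above ($7.7 input, $30.8 output per million tokens)
# for a hypothetical request with 100k input and 20k output tokens:
cost = request_cost(100_000, 20_000, 7.7, 30.8)
print(round(cost, 3))  # 1.386
```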
DavidAU
This is a Gemma-3-based text generation model fine-tuned on an internal horror dataset, designed specifically for generating horror-style content. The model is optimized with Unsloth and can produce horror content ranging from mild to intense, including long horror stories.
bartowski
JanusCoderV-7B is a 7B parameter code generation model developed by InternLM. This project provides multiple versions of imatrix quantization using llama.cpp, supporting various quantization levels to meet different hardware requirements.
This is a quantized version of InternLM's JanusCoder-14B model, offering a range of quantization files from low to high quality. It can run in LM Studio or in projects based on llama.cpp.
This is a quantized version of InternLM's JanusCoder-8B model, built with llama.cpp's imatrix quantization. It significantly reduces the model's storage and compute requirements while preserving performance, letting it run efficiently on more devices.
noctrex
This is an MXFP4 Mixture-of-Experts quantized version based on the Intern-S1 model, specifically optimized for image-text-to-text tasks. It improves inference efficiency through quantization technology.
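The appeal of the quantized builds above is file size: a rough estimate is parameters times bits per weight divided by eight. A back-of-envelope sketch (the 4.5-bit figure is an assumed effective rate for a 4-bit K-quant, not an exact number; real GGUF files also carry metadata and keep some layers at higher precision):

```python
def approx_quantized_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Back-of-envelope model file size in GB: parameters x bits / 8.

    Ignores metadata overhead and the higher-precision layers that
    quantizers typically preserve, so real files run slightly larger.
    """
    return params_billion * bits_per_weight / 8

# An 8B model at an assumed ~4.5 effective bits per weight:
print(round(approx_quantized_size_gb(8, 4.5), 2))  # 4.5
# The same model at FP16 (16 bits per weight):
print(round(approx_quantized_size_gb(8, 16), 2))  # 16.0
```

This is why an 8B model that needs ~16 GB at FP16 can fit on a consumer GPU once quantized.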
onnx-community
Granite-4.0-1B is a lightweight instruction model developed by IBM, fine-tuned based on Granite-4.0-1B-Base. This model combines open-source instruction datasets and internal synthetic datasets, and is developed using techniques such as supervised fine-tuning, reinforcement learning, and model merging. It is suitable for device-side deployment and research use cases.
Granite-4.0-350M is a lightweight instruction model developed by IBM, fine-tuned based on Granite-4.0-350M-Base. This model combines open-source instruction datasets and internal synthetic datasets, and is developed using techniques such as supervised fine-tuning, reinforcement learning, and model merging. It has powerful instruction-following capabilities and is particularly suitable for device-side deployment and research scenarios.
ibm-granite
Granite-4.0-350M is a lightweight instruction model developed by IBM, fine-tuned based on Granite-4.0-350M-Base. This model combines open-source instruction datasets and internal synthetic datasets, and is developed using supervised fine-tuning, reinforcement learning, and model merging techniques. It has powerful instruction-following capabilities and tool invocation functions.
Granite-4.0-1B is a lightweight instruction model developed by IBM. It is fine-tuned based on Granite-4.0-1B-Base, combining open-source instruction datasets and internal synthetic datasets, and developed using supervised fine-tuning, reinforcement learning, and model merging techniques.
Granite-4.0-H-350M is a lightweight instruction model developed by IBM, fine-tuned based on Granite-4.0-H-350M-Base. This model combines open-source instruction datasets and internal synthetic datasets, and is developed using various technologies such as supervised fine-tuning, reinforcement learning, and model merging. It has powerful instruction-following capabilities and multilingual support.
unsloth
Granite-4.0-H-Small is a long-context instruction model developed by IBM with 32 billion parameters, fine-tuned based on Granite-4.0-H-Small-Base. This model combines open-source instruction datasets and internal synthetic datasets, and uses techniques such as supervised fine-tuning, reinforcement learning alignment, and model merging. It has significantly improved instruction following and tool invocation capabilities, and is particularly suitable for enterprise-level applications.
Granite-4.0-H-Micro is a 3-billion parameter long-context instruction model developed by IBM, fine-tuned from Granite-4.0-H-Micro-Base. This model combines open-source instruction datasets and internal synthetic datasets, and is developed using techniques such as supervised fine-tuning, reinforcement learning alignment, and model merging. It has a structured chat format and performs excellently in instruction following and tool invocation capabilities.
Granite-4.0-H-Tiny is a long-context instruction model developed by IBM with 7 billion parameters, fine-tuned based on Granite-4.0-H-Tiny-Base. This model combines open-source instruction datasets and internal synthetic datasets, and is developed using techniques such as supervised fine-tuning, reinforcement learning alignment, and model merging. It has enhanced instruction-following and tool invocation capabilities, making it particularly suitable for enterprise-level applications.
Granite-4.0-Micro is a long-context instruction model with 3 billion parameters developed by IBM, fine-tuned based on Granite-4.0-Micro-Base. This model combines open-source instruction datasets and internal synthetic datasets, and is developed using technologies such as supervised fine-tuning, reinforcement learning alignment, and model merging. It has enhanced instruction-following and tool invocation capabilities, and is particularly suitable for enterprise-level applications.
Granite-4.0-H-Micro is a 3-billion-parameter long-context instruction model developed by IBM, fine-tuned from Granite-4.0-H-Micro-Base. This model is trained by combining an open-source instruction dataset and an internal synthetic dataset, and it has enhanced instruction-following and tool call capabilities, making it particularly suitable for enterprise-level applications.
Granite-4.0-H-Small is a long-context instruction model with 32 billion parameters developed by IBM, fine-tuned based on Granite-4.0-H-Small-Base. This model combines open-source instruction datasets and internal synthetic datasets and is developed using techniques such as supervised fine-tuning, reinforcement learning alignment, and model merging, with significant improvements in instruction following and tool invocation capabilities.
Granite-4.0-Micro is a long-context instruction model developed by IBM with 3 billion parameters, fine-tuned from Granite-4.0-Micro-Base. This model uses an open-source instruction dataset and an internal synthetic dataset, and has enhanced instruction-following and tool invocation capabilities. It supports multilingual tasks and can serve as a base model for AI assistants in various fields.
gwkrsrch2
This is a Transformer model published on the Hugging Face model hub. Its model card was automatically generated, so no detailed description is available.
Guilherme34
Qwen2.5-14B-Instruct is a large language model with 14 billion parameters, designed specifically for chat and text generation scenarios. This model is built based on the transformers library and is suitable for internal testing and lightweight application deployment.
Granite-4.0-H-Tiny is a 7-billion parameter long context instruction model developed by IBM, fine-tuned from Granite-4.0-H-Tiny-Base. This model is trained by combining open-source instruction datasets and internal synthetic datasets, and has the ability to provide professional, accurate, and secure responses. It supports multiple languages and tool invocation, and is suitable for enterprise-level applications.
Notte is an open-source, full-stack web AI agent framework providing browser sessions, automated LLM-driven agents, web-page observation and actions, credential management, and more. It aims to turn the Internet into an agent-friendly environment, reducing the cognitive load on LLMs by describing website structures in natural language.
An MCP server for providing Internet news search functionality
The Linkup MCP Server is a server based on the Model Context Protocol, providing real-time web search and web page content scraping functions through the Linkup API, enabling AI assistants to access the latest information on the Internet.
The Pulse CN MCP Server is an MCP protocol server that fetches trending Chinese Internet content in real time. It supports hot-search data from 18 platforms such as Weibo and Toutiao, providing AI models with the latest Chinese trend information.
The MCP server for industrial Internet of Things and edge computing provides 11 tools through HTTP endpoints, enabling AI-driven industrial automation, predictive maintenance, and smart factory operations, and supports multiple protocols such as MQTT and Modbus.
A Python MCP server for retrieving, parsing, and reading IETF RFCs and Internet Drafts, supporting the search and parsing of RFC documents, Internet Drafts, IETF working group documents, and OpenID Foundation specifications.
The Internet Search MCP is a service built on Tencent Cloud's Internet Search API, providing intelligent search with millisecond-level response and minute-level updates. It supports features such as natural result retrieval and multimodal VR cards, helping developers quickly integrate Internet search capabilities.
An internet research service based on Perplexity AI
The Bocha AI Web Search MCP Server provides a search service for Chinese Internet content that complies with Chinese regulations, supporting output in Markdown and JSON formats.
Uncover MCP is an MCP service implementation based on the uncover tool, used to quickly discover exposed hosts on the Internet, supporting multiple search engines and output formats.
The LSD MCP server is a bridge connecting Claude AI with Internet data, enabling efficient querying and aggregation of web data through the LSD SQL language.
The Shodan MCP Server is a Model Context Protocol server that exposes Shodan API functionality, allowing AI assistants to query detailed information about Internet-connected devices and services.
A Perl-based MCP server implementation that interacts with the Cursor IDE through standard input and output, providing direct access to its internal state (including chat and writing history).
MCP2Serial is a bridge project connecting physical devices with large AI models, enabling intelligent control of the Internet of Things by controlling hardware devices through natural language.
A TypeScript-based MCP server for interacting with DAOs on the Internet Computer
A TypeScript-based MCP server providing tools for interacting with the Perplexity AI API, supporting search-enhanced queries and display of internal reasoning processes.
The Maven Indexer MCP Server lets AI agents search for Java classes, method signatures, and source code by indexing the local Maven repository and Gradle cache, making it especially useful for understanding internal private libraries and lesser-known public libraries.
A VSCode extension based on the FastMCP framework that transforms VSCode into an MCP server, supporting remote execution of internal VSCode commands via HTTP Streaming, querying workspace information, and asynchronous task management
The MCP Server Collector is a service tool for collecting MCP servers from the Internet. It provides URL and content extraction functions and supports submitting servers to the MCP directory.
MCP server for Internet of Things (IoT) devices
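All the MCP servers listed above speak the same wire format: JSON-RPC 2.0, with tool invocations sent as a `tools/call` request. A minimal sketch of building such a request; the `web_search` tool name and its arguments are hypothetical stand-ins, not the API of any specific server above:

```python
import json

def make_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# A hypothetical call against a web-search server like those listed above:
req = make_tool_call("web_search", {"query": "latest AI news", "max_results": 5})
print(json.loads(req)["method"])  # tools/call
```

A real client would send this over one of the protocol's transports (stdio or HTTP streaming) after an `initialize` handshake; this sketch only shows the request envelope.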