Recently, Anthropic published a guide titled "Writing effective tools for agents — with agents" on its engineering blog. The guide explains how to design effective tools for large language model (LLM) agents built on the Model Context Protocol (MCP), laying out an iterative three-step workflow — prototype, evaluate, collaborate — to help developers build and refine tools more systematically.

In the guide, Anthropic emphasizes five design principles to help developers avoid common pitfalls when creating tools. First, choose which tools to build deliberately, so that each tool fits the agent's context and workflow rather than simply wrapping an existing API. Second, use clear namespaces, which help the agent distinguish related tools and understand their scope. Third, return meaningful, high-signal context so the agent can make more accurate decisions. Fourth, optimize the token efficiency of returned information, for example by paginating or truncating large responses. Finally, prompt-engineer tool descriptions carefully, since those descriptions are loaded into the agent's context and directly shape how well it uses the tools.
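The principles above can be sketched in a small illustrative example. This is not the actual MCP SDK — the registry, the `crm_` namespace, and the contact data are hypothetical placeholders — but it shows how a namespaced name, an agent-facing description, and a context-rich yet truncated response fit together:

```python
import json

# Hypothetical tool registry for illustration; not the real MCP SDK.
TOOLS = {}

def tool(name, description):
    """Register a tool under a namespaced name with an agent-facing description."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

# Clear namespace: the "crm_" prefix groups related tools for the agent.
# Prompt-engineered description: written for the agent that will read it,
# stating what the tool returns and how results are limited.
@tool(
    name="crm_search_contacts",
    description=(
        "Search CRM contacts by name. Returns at most `limit` matches, "
        "each with id, name, and email, plus the total match count."
    ),
)
def crm_search_contacts(query, limit=3):
    contacts = [
        {"id": 1, "name": "Ada Lovelace", "email": "ada@example.com"},
        {"id": 2, "name": "Alan Turing", "email": "alan@example.com"},
    ]
    matches = [c for c in contacts if query.lower() in c["name"].lower()]
    # Meaningful context (a total count the agent can reason about),
    # kept token-efficient by truncating the result list to `limit`.
    return json.dumps({"total_matches": len(matches), "results": matches[:limit]})

print(crm_search_contacts("ada"))
```

The same structure carries over to a real MCP server: the decorator's name and description become the tool schema the agent sees, and the JSON string becomes the tool's response payload.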

Notably, many of the guide's conclusions came from repeatedly running and analyzing evaluations with Anthropic's own Claude Code, which was used to inspect transcripts and propose tool improvements. To guard against overfitting to the evaluation tasks, Anthropic held out a separate test set. Looking ahead, as the MCP specification and underlying LLMs improve, Anthropic expects tools to evolve in step with the agents that use them.
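The evaluate-then-iterate loop with a held-out test set can be sketched as follows. The task names, the stand-in agent, and the exact-match scoring rule are all hypothetical; the point is only the split discipline — iterate on the development set, and touch the test set just once at the end:

```python
import random

def evaluate(agent, tasks):
    """Score an agent (a callable: task -> answer) by exact match against expected answers."""
    correct = sum(1 for task, expected in tasks if agent(task) == expected)
    return correct / len(tasks)

# Hypothetical evaluation tasks: (prompt, expected answer) pairs.
tasks = [(f"task-{i}", f"answer-{i}") for i in range(10)]
random.seed(0)
random.shuffle(tasks)

# Hold out a test set so tool iteration can't overfit to every task.
dev_set, test_set = tasks[:7], tasks[7:]

def stand_in_agent(task):
    # Placeholder for a real agent run over the tools being iterated on.
    return task.replace("task", "answer")

# Iterate on tool definitions against dev_set only...
dev_score = evaluate(stand_in_agent, dev_set)
# ...and report the held-out score once, after iteration is done.
test_score = evaluate(stand_in_agent, test_set)
print(dev_score, test_score)
```

In practice the scoring would be richer (task completion, token usage, tool-call counts), but the held-out split serves the same purpose at any scale.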

Alongside the guide, Anthropic also open-sourced a Cookbook for tool evaluation, giving developers a practical starting point. Together, these resources offer better tooling support for AI developers and push tool-building practices forward across the LLM agent ecosystem.