Anthropic has announced four new features for its API: the code execution tool, the MCP connector, the file API, and expanded prompt caching. These features are designed to help developers build smarter, more efficient AI agents.

Code Execution Tool: From Code Assistant to Data Analyst

The code execution tool gives Claude the ability to run Python code in a sandboxed environment, elevating it from a code-writing assistant to a capable data analyst. This feature enables Claude to perform data analysis, generate visual charts, and handle complex computational tasks directly within an API call. For example, developers can leverage Claude for real-time data processing or to create dynamic visual content, significantly enhancing AI's practicality in data-driven scenarios.
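In practice, the tool is enabled by listing it in the `tools` field of a Messages API request. The sketch below builds such a request body as a plain dictionary; the tool type string (`code_execution_20250522`), the model name, and the accompanying beta header follow Anthropic's documentation at launch and should be treated as assumptions that may change in later versions.

```python
# Beta header assumed at launch: anthropic-beta: code-execution-2025-05-22

def build_code_execution_request(prompt: str) -> dict:
    """Build a Messages API request body that lets Claude run Python
    in Anthropic's sandbox to answer a data-analysis prompt."""
    return {
        "model": "claude-sonnet-4-20250514",  # assumed model identifier
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            # Versioned tool type per the launch docs (assumption)
            "type": "code_execution_20250522",
            "name": "code_execution",
        }],
    }

request = build_code_execution_request(
    "Compute summary statistics for the attached sales data and "
    "plot a histogram of monthly revenue."
)
```

When this body is POSTed to the Messages endpoint, Claude decides whether to write and execute Python to answer the prompt, and the sandbox's stdout and generated files come back in the response content blocks.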

MCP Connector: Seamless Integration with External Systems

The MCP connector is based on Anthropic's Model Context Protocol (MCP), introduced in November 2024. It provides developers with a way to connect to remote MCP servers without writing complex client-side code. By simply adding the server URL to an API request, Claude can automatically handle tool discovery, execution, and error management. This launch greatly simplifies the integration process between AI and external data sources or tools, enabling seamless interaction with popular platforms like GitHub and Google Drive. This makes it easier for developers to create cross-system AI agents.
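The "URL only" integration described above can be sketched as a request body with an `mcp_servers` field. The field name, server entry shape, and beta header (`anthropic-beta: mcp-client-2025-04-04`) reflect the launch documentation and are assumptions; the example server URL is purely illustrative.

```python
# Beta header assumed at launch: anthropic-beta: mcp-client-2025-04-04

def build_mcp_request(prompt: str, server_url: str, server_name: str) -> dict:
    """Build a Messages API request pointing Claude at a remote MCP server.

    No client-side MCP code is needed: tool discovery, invocation, and
    error handling are performed by the API on the developer's behalf.
    """
    return {
        "model": "claude-sonnet-4-20250514",  # assumed model identifier
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
        "mcp_servers": [{
            "type": "url",
            "url": server_url,
            "name": server_name,
            # "authorization_token": "...",  # for servers requiring OAuth
        }],
    }

request = build_mcp_request(
    "List my open pull requests.",
    "https://example-mcp-server.example.com/sse",  # illustrative URL
    "example-server",
)
```

Multiple servers can be listed in the same request, which is what makes cross-system agents (say, one that reads from Google Drive and files issues on GitHub) a matter of configuration rather than glue code.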

File API: Enhancing Data Processing Capabilities

The introduction of the file API further expands Claude’s contextual processing capabilities, allowing developers to upload files and have the model execute tasks based on their content. Whether handling large datasets, parsing documents, or analyzing data combined with external sources, the file API offers developers greater flexibility. This functionality is particularly suitable for scenarios requiring complex document or multi-format data processing, such as enterprise-level knowledge management or content analysis.
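The workflow is upload once, reference many times: a file is uploaded to the Files API, and the returned file ID is then cited in message content blocks. The sketch below builds such a content block; the `document` block shape and `file_id` source type follow the launch documentation and should be treated as assumptions.

```python
# Beta header assumed at launch: anthropic-beta: files-api-2025-04-14

def build_file_message(file_id: str, question: str) -> dict:
    """Build a user message that asks a question about a previously
    uploaded file, referenced by its server-side file ID."""
    return {
        "role": "user",
        "content": [
            {
                # Document block referencing the uploaded file (assumed shape)
                "type": "document",
                "source": {"type": "file", "file_id": file_id},
            },
            {"type": "text", "text": question},
        ],
    }

message = build_file_message(
    "file_abc123",  # hypothetical ID returned by the upload endpoint
    "Summarize the key findings in this report.",
)
```

Because the file lives server-side, the same ID can be reused across many requests without re-uploading, which matters for the large-dataset and knowledge-management scenarios mentioned above.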

Expanded Prompt Caching: Optimizing Performance and Cost

The expanded prompt caching feature allows developers to cache prompts for up to one hour, significantly reducing the computational cost of repeated requests while improving response speed. This feature is especially useful for scenarios requiring frequent calls to the same context, such as ongoing conversations or complex task processing. Combined with the Claude 4 series' strong reasoning capabilities, expanded prompt caching provides technical support for building efficient and cost-effective AI agents.
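Caching is opted into per content block via a `cache_control` marker; the one-hour lifetime is requested with a TTL value. The block shape, the `"ttl": "1h"` field, and the beta header (`anthropic-beta: extended-cache-ttl-2025-04-11`) follow the launch documentation and are assumptions.

```python
# Beta header assumed at launch: anthropic-beta: extended-cache-ttl-2025-04-11

def build_cached_system_prompt(system_text: str) -> list:
    """Build a system prompt block marked cacheable for up to one hour,
    so repeated requests reuse it instead of reprocessing it."""
    return [{
        "type": "text",
        "text": system_text,
        # Without "ttl", the default ephemeral cache lifetime (~5 min) applies
        "cache_control": {"type": "ephemeral", "ttl": "1h"},
    }]

system = build_cached_system_prompt(
    "You are a support agent. The full product manual follows: ..."
)
```

The pattern pays off when a long, stable context (a manual, a codebase, a conversation prefix) is shared across many calls: only the first request pays the full processing cost within the cache window.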

Discussions on the X platform show that developers are highly interested in these four new features. The code execution tool is considered the highlight of this update, with many looking forward to its performance in data analysis and automation tasks. The standardization behind the MCP connector also received positive feedback, seen as a significant step toward integrating AI with external tools. However, some developers said its stability and real-world performance still need validation. AIbase noted that public betas of these features were made available to all Anthropic API users on the day of release, marking Anthropic's continued deep involvement in the AI development ecosystem.

From AIbase's perspective, the Anthropic API's four new features not only enhance Claude's practicality but also give developers a more flexible and efficient set of tools. The code execution tool and file API expand AI's applications in data processing and analysis, while the MCP connector and expanded prompt caching lower development barriers and costs through standardization and optimization. These advancements indicate that Anthropic is committed to building an open and interconnected AI ecosystem, helping developers create smarter AI agents.