OpenAI has just announced a major update to their API offerings, introducing new tools that make it easier for developers to build AI agents. During their recent livestream, the company revealed the “Responses API” along with three powerful built-in tools and an open-source SDK for agent development.
What Are AI Agents?
OpenAI defines an agent as “a system that can act independently to do tasks on your behalf.” This simple definition captures the core idea: AI systems that work autonomously to complete tasks for users.

Three New Built-in Tools
The announcement highlights three main tools that developers can now access through the API:
Web Search Tool
The web search tool gives AI models the ability to look up current information online. This addresses one of the biggest limitations of AI models – their knowledge cutoff, which leaves them without up-to-date information.
The tool uses the same system that powers ChatGPT’s search feature and includes a specially trained model that can pull relevant information from web results and cite sources properly. On benchmarks like SimpleQA, adding search capabilities to GPT-4o dramatically improved its accuracy.
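As a rough sketch of what this looks like in code: the Responses API accepts a list of built-in tools, and at launch the web search tool was exposed under the type name “web_search_preview”. The request below is built as plain data so the shape is visible; the actual API call only runs when an API key is configured.

```python
import os

# A web-search-enabled Responses API request. The tool type name
# ("web_search_preview") is the launch-era identifier; the prompt is
# just an example.
request = {
    "model": "gpt-4o",
    "tools": [{"type": "web_search_preview"}],
    "input": "What did OpenAI announce at its latest developer livestream?",
}

# Only call the API when credentials are available.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.responses.create(**request)
    # The answer includes inline citations to the web sources used.
    print(response.output_text)
```

The model decides on its own whether a given prompt needs a search; the tool entry simply makes the capability available.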
File Search Tool
The file search tool lets AI models search through private documents and data. OpenAI has added two new features to this tool:
- Metadata filtering – You can add tags to your files to help the AI find exactly what it needs
- Direct search endpoint – You can search your vector stores directly without going through the model first
This tool is particularly useful for businesses that need to search through internal documents. Box, one of the launch partners, is using this tool to help companies extract insights from their unstructured data.
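A minimal sketch of both new features, assuming the launch-era request shapes: the file search tool takes vector store IDs plus an optional metadata filter, and the direct search endpoint queries a vector store without involving a model at all. The vector store ID and the “region” attribute below are made-up placeholders.

```python
import os

# File search request with a metadata filter. "vs_example123" and the
# "region" attribute are illustrative placeholders, not real IDs.
request = {
    "model": "gpt-4o",
    "tools": [{
        "type": "file_search",
        "vector_store_ids": ["vs_example123"],
        # Comparison filter: only consider files tagged region=emea.
        "filters": {"type": "eq", "key": "region", "value": "emea"},
    }],
    "input": "Summarize our EMEA returns policy.",
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()

    # Model-mediated answer grounded in the vector store.
    response = client.responses.create(**request)
    print(response.output_text)

    # The new direct search endpoint: query the store without a model.
    results = client.vector_stores.search(
        vector_store_id="vs_example123",
        query="returns policy",
    )
    for item in results.data:
        print(item.filename, item.score)
```

The direct endpoint is useful when you want raw retrieval results – for example, to feed your own ranking or summarization step – without paying for a model call.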
Computer Use Tool
The computer use tool allows AI models to control computers directly. This is the same technology that powers the “Operator” feature in ChatGPT. The tool can:
- Control virtual machines
- Work with legacy applications that don’t have API access
- Automate tasks that require clicking, typing, and other computer interactions
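A hedged sketch of one step of the computer-use loop, based on the launch docs: the dedicated computer-use model returns actions (click, type, scroll, screenshot) that your own code must execute against a real or virtual machine, then report back with a fresh screenshot. The display size below is an arbitrary example.

```python
import os

# One step of the computer-use loop. The model and tool names follow
# the launch docs; the display dimensions are arbitrary examples.
request = {
    "model": "computer-use-preview",
    "tools": [{
        "type": "computer_use_preview",
        "display_width": 1024,
        "display_height": 768,
        "environment": "browser",
    }],
    "input": "Open the pricing page and read the first plan's price.",
    "truncation": "auto",  # required when using the computer-use model
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    response = OpenAI().responses.create(**request)
    # Each computer_call item describes one action for your harness
    # to perform before sending the result back in a follow-up call.
    for item in response.output:
        if item.type == "computer_call":
            print(item.action)
```

Note that the model never touches the machine itself – it only proposes actions, which keeps a human-controllable boundary between the AI and the environment.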
The New Responses API
Along with these tools, OpenAI introduced the “Responses API,” which they designed to be more flexible than their previous Chat Completions API. While the Chat Completions API will still be supported, the new Responses API offers:
- Support for multiple turns of conversation
- Built-in tool use
- The ability to handle multimodal inputs (text, images, audio)
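The multi-turn support is worth a quick illustration. Rather than resending the full message history on every call, a follow-up request can chain to a stored response by its ID via the previous_response_id parameter – a minimal sketch, assuming the launch-era Python client:

```python
import os


def follow_up(prev_id: str, question: str) -> dict:
    """Build a follow-up turn that chains to a stored response by ID,
    instead of resending the whole conversation history."""
    return {
        "model": "gpt-4o",
        "input": question,
        "previous_response_id": prev_id,
    }


if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    first = client.responses.create(model="gpt-4o",
                                    input="Name a sorting algorithm.")
    # The second turn references the first by ID; the API supplies
    # the earlier context server-side.
    second = client.responses.create(
        **follow_up(first.id, "What is its worst-case time complexity?"))
    print(second.output_text)
```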
In their demo, OpenAI showed how a developer could build a personal stylist assistant using the Responses API. The assistant could:
- Use the file search tool to find data about a user’s style preferences
- Use the web search tool to find stores near the user
- Use the computer use tool to help make purchases online
All of this happened in a single API call, showing how the different tools can work together.
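The demo above can be approximated by passing several built-in tools in one request and letting the model choose which to invoke. One caveat: computer use runs on its own dedicated model, so the single-call sketch below combines only web search and file search; the vector store ID is a placeholder.

```python
import os

# Several built-in tools in a single Responses API call. The model
# decides which tools (if any) to use. "vs_style_prefs" is a
# placeholder vector store ID.
request = {
    "model": "gpt-4o",
    "tools": [
        {"type": "web_search_preview"},
        {"type": "file_search", "vector_store_ids": ["vs_style_prefs"]},
    ],
    "input": ("Based on my saved style preferences, suggest a jacket "
              "I might like and find a nearby store that stocks it."),
}

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    response = OpenAI().responses.create(**request)
    print(response.output_text)
```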
Open Source Agents SDK
Perhaps the most exciting part of the announcement is the open-source Agents SDK. This is a rebranded and improved version of their experimental “Swarm” project, which many developers had already started using in production.
The Agents SDK makes it easier to build complex agent systems by:
- Allowing developers to create multiple specialized agents
- Supporting “handoffs” between agents during a conversation
- Providing built-in monitoring and tracing tools
- Including guard rails and lifecycle events
- Supporting multiple AI model vendors (not just OpenAI)
In the demo, they showed how developers could create a system with a stylist agent and a customer support agent, with a “triage” agent deciding which one should handle each user request.
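A sketch of that triage pattern with the open-source SDK, assuming its Agent/Runner interface; the agent names and instruction strings are illustrative, not taken from the demo.

```python
import os

# Plain-text instructions for each specialized agent. The wording is
# illustrative, not copied from OpenAI's demo.
AGENT_SPECS = {
    "Stylist": "Help the user with fashion and styling questions.",
    "Support": "Resolve customer-support issues about orders and returns.",
    "Triage": "Read the user's request and hand off to Stylist or Support.",
}

if os.environ.get("OPENAI_API_KEY"):
    from agents import Agent, Runner

    stylist = Agent(name="Stylist", instructions=AGENT_SPECS["Stylist"])
    support = Agent(name="Support", instructions=AGENT_SPECS["Support"])
    triage = Agent(
        name="Triage",
        instructions=AGENT_SPECS["Triage"],
        handoffs=[stylist, support],  # exposed to the model as transfer tools
    )

    # The triage agent should hand this off to the support agent.
    result = Runner.run_sync(triage, "My jacket arrived with a broken zipper.")
    print(result.final_output)
```

Handoffs are what make this more than prompt routing: the receiving agent takes over the conversation with its own instructions and tools, and the built-in tracing records each transfer.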
The SDK is available now for Python (pip install openai-agents), with a JavaScript version coming soon.
What This Means for Developers
These new tools put OpenAI at the forefront of the growing “AI agents” trend. Unlike some other solutions, OpenAI is making these tools available through their API, allowing developers to build their own agent applications.
The most significant aspects of this announcement are:
- The tools provide essential capabilities that agents need: current information (web search), private data access (file search), and the ability to act in the world (computer use)
- The Responses API creates a more flexible foundation for building complex AI applications
- The open-source Agents SDK lowers the barrier to entry for developers wanting to build multi-agent systems
OpenAI’s Kevin Weil concluded the presentation by saying that “2025 is going to be the year of the agent” – when AI systems move beyond just answering questions to actually doing things in the real world.
How These Tools Compare to Other Options
This announcement comes at an interesting time in the AI space. Companies like Anthropic (with Claude) and startups like Manus have been working on similar agent capabilities – Manus in particular demonstrates that current AI models are already capable enough for these kinds of applications.
What sets OpenAI’s approach apart is the focus on providing these capabilities through an API and open-source tools, rather than just as features in their own products. This means developers can build these capabilities into their own applications.
The Future of AI Agents
OpenAI’s announcement shows that the company sees agents as the next big step for AI. By providing these tools to developers, they’re hoping to spark a wave of new applications that can go beyond just answering questions.
The ability to search the web, access private data, and control computers opens up many new possibilities. AI agents could help with tasks like:
- Research assistants that can find and summarize information from multiple sources
- Personal shoppers that can search for products and make purchases
- Virtual assistants that can help with computer tasks like file management or data entry
- Customer support systems that can handle multiple types of requests
How to Get Started
Developers interested in these new tools can:
- Start with the Responses API to add web search, file search, or computer use to their applications
- Install the Agents SDK (pip install openai-agents) to build more complex multi-agent systems
- Use the built-in tracing tools to monitor and debug their agents
While these tools are powerful, they’re designed to be easy to use, even for developers who are new to working with AI.
OpenAI has made it clear that they’ll continue to support the Chat Completions API, but new features will be added to the Responses API. They also mentioned that they plan to sunset the Assistants API sometime in 2026, with a migration guide coming to help developers move to the Responses API.
With these new tools, OpenAI is making it easier than ever for developers to build AI systems that can take action in the world. As AI continues to grow more capable, we can expect to see many new applications that use these agent capabilities to solve real-world problems.
Want to try building your own AI agent? Check out the OpenAI documentation or watch the full livestream to learn more about these new tools and how you can use them in your projects.