Pydantic AI supports MCP in three different ways, and that is where the confusion starts.
At a glance, MCPServer, FastMCPToolset, and MCPServerTool can look like minor variations on the same idea. They are not. One keeps the MCP connection inside your app, one swaps in the FastMCP client with a few extra tricks, and one pushes the whole MCP exchange out to the model provider.
That difference matters more than the naming suggests. It changes where tool calls happen, whether localhost works, whether sampling and elicitation are available, and how much control your own app keeps over the session.
This guide is the decision map I wish most teams started with.
If you want the implementation details after you choose a path, we already have two companion tutorials:
- How to Build a Python MCP Server with FastMCP
- How to Connect a Pydantic AI Agent to MCP Servers with FastMCPToolset
What you’ll learn:
- What each Pydantic AI MCP integration path actually does
- When to choose MCPServer over FastMCPToolset
- When MCPServerTool is faster and when it is the wrong tool entirely
- Which path supports local servers, public remote servers, sampling, and elicitation
- The practical migration path from local dev to production
Time required: 20-30 minutes
Difficulty level: Intermediate
Step 1: Start with the Simple Mental Model
The official Pydantic AI MCP overview says agents can connect to MCP servers in three ways:
- Pydantic AI can act as an MCP client directly through MCPServer
- Pydantic AI can use the FastMCP client through FastMCPToolset
- Some model providers can connect to remote MCP servers directly through the built-in MCPServerTool
I find it easier to think about them like this:
- MCPServer: your app is the MCP client
- FastMCPToolset: your app is still the MCP client, but it uses FastMCP instead of the MCP SDK client
- MCPServerTool: the model provider is the MCP client
That last one is the important split. Once you move to MCPServerTool, your own app is no longer brokering every MCP call.
Step 2: Compare the Three Paths Before You Write Code
Here is the short version:
| Path | Who talks to the MCP server? | Works with local STDIO? | Works with public remote HTTP? | Same-process FastMCP object? | Sampling / elicitation | Best fit |
|---|---|---|---|---|---|---|
| MCPServer | Your Pydantic AI app | Yes | Yes | No | Yes | When you want full agent-side control |
| FastMCPToolset | Your Pydantic AI app | Yes | Yes | Yes | No | When you want FastMCP flexibility and easier wiring |
| MCPServerTool | The model provider | No | Yes, public URL only | No | Limited compared with agent-side MCP | When you want provider-side execution and less round-trip overhead |
There are two quick takeaways hiding in that table:
- If your MCP server only exists on localhost, MCPServerTool is not the answer.
- If you need MCP sampling or elicitation, FastMCPToolset is not the answer.
That is enough to eliminate the wrong path surprisingly often.
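Those two elimination rules are mechanical enough to write down as code. The helper below is purely illustrative, not part of any Pydantic AI API; it just encodes the table above:

```python
def viable_paths(server_is_public: bool, needs_sampling_or_elicitation: bool) -> list[str]:
    """Narrow the three Pydantic AI MCP paths using the two elimination rules."""
    paths = ["MCPServer", "FastMCPToolset", "MCPServerTool"]
    if not server_is_public:
        # A provider-executed built-in tool cannot reach localhost or a private network
        paths.remove("MCPServerTool")
    if needs_sampling_or_elicitation:
        # Only the agent-side MCPServer path covers sampling and elicitation
        paths = [p for p in paths if p == "MCPServer"]
    return paths

print(viable_paths(server_is_public=False, needs_sampling_or_elicitation=True))
# → ['MCPServer']
```

Running it with "local server, needs elicitation" leaves exactly one candidate, which matches the decision work in the rest of this guide.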
Step 3: Choose MCPServer When MCP Is Part of Your App Contract
The standard MCPServer client is Pydantic AI’s most direct MCP path. The docs show it supporting local STDIO servers, Streamable HTTP servers, SSE servers, config loading from load_mcp_servers(), custom TLS via httpx.AsyncClient, MCP sampling, and elicitation callbacks.
That makes it the most capable choice when your application needs to stay in control of the session.
Use it when:
- your server is local or private
- you need MCP sampling
- you need elicitation
- you want agent-side control over TLS, retries, logging, callbacks, or resource access
Minimal local example
```python
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

# Launch a local MCP server as a subprocess over STDIO
server = MCPServerStdio(
    "python",
    args=["server.py"],
)

agent = Agent(
    "openai:gpt-5.2",
    toolsets=[server],
)
```
Why teams keep coming back to it
This path is less flashy, but it is extremely solid. If your MCP server is part of the infrastructure you own, or if the user experience depends on interactive server requests, MCPServer is usually the safer default.
It is also the only one of the three paths that the docs clearly position for sampling and elicitation. That alone makes it the right answer for a class of workflows that FastMCPToolset and MCPServerTool simply do not cover.
Example: a sampling-enabled client
The MCP client docs show that sampling is supported when the agent sets the MCP sampling model:
```python
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

server = MCPServerStdio(
    "python",
    args=["generate_svg.py"],
)

agent = Agent(
    "openai:gpt-5.2",
    toolsets=[server],
)


async def main() -> None:
    # Allow the MCP server to make model calls back through the agent's model
    agent.set_mcp_sampling_model()
    result = await agent.run("Create an SVG roadmap for our next release.")
    print(result.output)
```
If you know that your MCP server will ask the client for more structured input later, or make model calls through sampling, choose this path first and spare yourself the migration later.
Step 4: Choose FastMCPToolset When You Want the Easiest Wiring
FastMCPToolset keeps the agent-side execution model, but swaps the MCP SDK client for the FastMCP client. The official FastMCP client docs call out the main reasons to use it:
- it can connect to local and remote MCP servers whether or not they were built with FastMCP
- it supports extra FastMCP capabilities like tool transformation and simpler OAuth patterns
- it can be created from more input types, including a FastMCP server object, script paths, transports, clients, URLs, and JSON MCP config
This is the most ergonomic option when your real problem is wiring, not protocol edge cases.
Use it when:
- you want to mount several servers from one config
- you already use FastMCP elsewhere
- you want to pass a FastMCP server object directly inside the same process
- you want cleaner transport ergonomics than the lower-level MCP SDK client path
Minimal same-process example
```python
from fastmcp import FastMCP
from pydantic_ai import Agent
from pydantic_ai.toolsets.fastmcp import FastMCPToolset

release_server = FastMCP("release_ops")


@release_server.tool()
async def get_release_status(service: str) -> str:
    return f"{service} is healthy"


# Pass the FastMCP server object directly: no transport, no subprocess
agent = Agent(
    "openai:gpt-5.2",
    toolsets=[FastMCPToolset(release_server)],
)
```
That same-process shortcut is unique. Neither MCPServer nor MCPServerTool gives you that.
The limitation you should not ignore
The FastMCP client docs are explicit here: FastMCPToolset does not yet support elicitation or sampling. That is the line in the sand.
So the rule is simple:
- if you want the easiest developer experience, lean toward FastMCPToolset
- if you know you need the full interactive MCP surface, stop and use MCPServer
Step 5: Choose MCPServerTool When the Provider Should Do the Work
MCPServerTool is the outlier. It is not an agent-side toolset at all. It is a built-in tool executed by the model provider.
The built-in tools docs make three points very clearly:
- the MCP server must be reachable at a public URL the provider can access
- it does not support many of the advanced features of Pydantic AI’s agent-side MCP support
- it can be faster and more token-efficient because there is no extra round trip back through your app
That tradeoff is worth it when your server is already remote and public, and you care more about provider-side efficiency than about owning the full MCP session inside your app.
Use it when:
- the server already lives on a public endpoint
- the provider can reach it directly
- you want less application-side plumbing
- you want the provider to handle the MCP exchange itself
Minimal remote example
```python
from pydantic_ai import Agent, MCPServerTool

agent = Agent(
    "openai-responses:gpt-5.2",
    builtin_tools=[
        MCPServerTool(
            id="deepwiki",
            url="https://mcp.deepwiki.com/mcp",
        )
    ],
)
```
Provider support matters here
According to the Pydantic AI built-in tools docs as of March 22, 2026, MCPServerTool support is listed for:
- OpenAI Responses
- Anthropic
- xAI
The same docs list Google, Groq, Bedrock, Mistral, Cohere, HuggingFace, and OpenAI Chat Completions as unsupported for this built-in tool.
So even if MCPServerTool looks architecturally right, it still depends on provider support. With OpenAI specifically, the docs also note that you must use the Responses API model path, not Chat Completions.
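One cheap guardrail is to check the model string before wiring up the built-in tool. The provider prefixes below are my assumption about how these models are addressed in Pydantic AI model strings; verify them against the support table for your installed version:

```python
# Providers listed as supporting MCPServerTool in the built-in tools docs.
# The prefixes are an assumption; confirm against your pydantic-ai version.
MCP_SERVER_TOOL_PROVIDERS = {"openai-responses", "anthropic", "xai"}


def provider_supports_mcp_server_tool(model: str) -> bool:
    """Return True if the model string's provider prefix is in the supported set."""
    provider = model.split(":", 1)[0]
    return provider in MCP_SERVER_TOOL_PROVIDERS


print(provider_supports_mcp_server_tool("openai-responses:gpt-5.2"))  # True
print(provider_supports_mcp_server_tool("openai:gpt-5.2"))  # False: Chat Completions path
```

The second call failing is the OpenAI gotcha from the docs: same vendor, wrong API path.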
OpenAI connector support is a real differentiator
The built-in tools docs also mention that OpenAI Responses can use connectors via a special x-openai-connector:<connector_id> URL.
That means MCPServerTool is not just for your own public MCP endpoints. It can also be the cleanest route into provider-managed connector flows when the provider already exposes them.
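The connector URL scheme is simple enough to build with string formatting. The connector id below is purely illustrative, and the exact scheme should be checked against the built-in tools docs:

```python
def openai_connector_url(connector_id: str) -> str:
    # The built-in tools docs describe URLs of the form "x-openai-connector:<connector_id>"
    return f"x-openai-connector:{connector_id}"


print(openai_connector_url("example_connector"))
# → x-openai-connector:example_connector
```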
Step 6: The Choice Usually Comes Down to One of These Questions
When I strip away the naming, these are the only questions I really ask:
Does the MCP server live on localhost or inside my app?
If yes, use MCPServer or FastMCPToolset.
If it is literally the same Python process and you like FastMCP, FastMCPToolset is especially nice. If you need the richer MCP feature set, use MCPServer.
Do I need sampling or elicitation?
If yes, use MCPServer.
This is the easiest decision in the whole guide.
Do I want provider-side execution and a public remote endpoint?
If yes, look hard at MCPServerTool.
This is the path that reduces the extra hop back through your app, which can help with caching and context efficiency.
Do I mainly want easy multi-server wiring and config-based setup?
If yes, FastMCPToolset is usually the most pleasant fit.
That is especially true if your team already works with FastMCP transports and config dictionaries.
Step 7: A Practical Migration Path That Avoids Rework
Teams often ask which one they should start with. My answer is usually less dogmatic than they expect.
Good local-to-production path
- Start with FastMCPToolset if you are iterating quickly and mostly care about getting local and remote servers mounted with minimal fuss.
- Start with MCPServer instead if you already know the workflow will need elicitation, sampling, or richer client-side control.
- Move to MCPServerTool later only if the server ends up public, the provider supports it, and provider-side execution genuinely buys you something.
That last point matters. MCPServerTool is not an automatic upgrade. It is a different deployment model.
Step 8: Common Mistakes
These are the mistakes I would watch for first:
Trying MCPServerTool against a private or localhost server
The provider cannot call what it cannot reach.
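A quick pre-flight check can catch this before an agent ever runs. This is a rough heuristic of my own, not a real reachability test; it only flags obviously local or private addresses:

```python
import ipaddress
from urllib.parse import urlparse


def looks_publicly_reachable(url: str) -> bool:
    """Rough heuristic: reject localhost and non-global IPs for MCPServerTool URLs."""
    host = urlparse(url).hostname or ""
    if host in ("localhost", ""):
        return False
    try:
        return ipaddress.ip_address(host).is_global
    except ValueError:
        # A DNS hostname: could still resolve privately, but at least not obviously local
        return True


print(looks_publicly_reachable("http://127.0.0.1:8000/mcp"))    # False
print(looks_publicly_reachable("https://mcp.deepwiki.com/mcp"))  # True
```

Failing this check does not prove the provider can reach the server, but passing 127.0.0.1 to MCPServerTool is guaranteed to fail, and this catches that case for free.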
Choosing FastMCPToolset and discovering you need sampling later
That is a real migration, not a small flag change.
Choosing MCPServer when all you really wanted was easier wiring
If you do not need the richer MCP SDK client features, FastMCPToolset often gives you a simpler setup with less ceremony.
Forgetting that built-in tool support is provider-specific
This is especially easy to miss with OpenAI, because MCPServerTool requires the Responses API path.
Final Takeaway
The three Pydantic AI MCP paths are not competitors so much as answers to different architecture questions.
- MCPServer is the full-control path
- FastMCPToolset is the convenience path
- MCPServerTool is the provider-executed path
If you keep that frame in your head, the naming gets a lot less confusing.
My own rule of thumb is:
- choose MCPServer when MCP behavior is part of your application logic
- choose FastMCPToolset when wiring speed and flexibility matter most
- choose MCPServerTool when the provider should talk directly to a public remote server
That is usually enough to make the right call before the codebase grows around the wrong one.