Description
In PR #3431, we fixed the exposure of the MCP server instructions as a readable property on MCPServer classes. FastMCPToolset was omitted, but I think that's just a small oversight.
What I mostly wanted to talk about is allowing auto-injection of the MCP server's instructions into the agent's system prompt.
Per the MCP spec, servers send instructions in the initialize response to provide LLMs with contextual knowledge about tool interdependencies, operational constraints, and usage guidance.
From my understanding, major MCP clients have chosen to inject these into the LLM's system prompt. Pydantic AI currently does not.
Importance of MCP Instructions from Initialize call: https://blog.modelcontextprotocol.io/posts/2025-11-03-using-server-instructions
I think this matters a lot because FastMCP very recently introduced transformers, most notably BM25SearchTransform, which replaces list_tools() with two synthetic tools (search_tools + call_tool) and relies on server instructions to teach the LLM the search-then-call workflow.
Basically it simulates a bit of the "Progressive Discovery" of Skills so as not to overwhelm the context window. Without the host injecting the server's clear indications to the agent/LLM, the LLM never learns this workflow and calls call_tool directly with guessed tool names, since this pattern differs from the regular MCP flow that LLMs were trained on.
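For intuition, here is a dependency-free toy sketch of the search-then-call pattern (simple substring matching stands in for BM25 ranking; all names and tools are hypothetical, not FastMCP's actual implementation):

```python
# Toy tool registry standing in for a server's full tool list.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "get_time": lambda city: f"12:00 in {city}",
}

def search_tools(query: str) -> list[str]:
    # Stand-in for BM25 ranking: naive substring match on tool names.
    return [name for name in TOOLS if query in name]

def call_tool(name: str, *args: str) -> str:
    if name not in TOOLS:
        # A model that guesses names lands here; the server instructions
        # are what tell it to call search_tools first.
        raise KeyError(f"unknown tool {name!r}; call search_tools first")
    return TOOLS[name](*args)

matches = search_tools("weather")   # -> ["get_weather"]
result = call_tool(matches[0], "Paris")
```

Without the instructions injected into the system prompt, nothing tells the model that `search_tools` must precede `call_tool`, so it guesses tool names and hits the error path.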
Proposed behavior:
When an MCP toolset or a FastMCPToolset is used with an agent, the server's instructions (if present) should be automatically included in the agent's system prompt.
Right now you can hackily subclass FastMCPToolset, read self.client.initialize_result.instructions after the first connection, and pass it manually as a dynamic agent instruction.
But I could not think of a case where I would not want MCP instructions in the prompt, so I believe the library should do this.
A demo of how code might look when using the feature:

```python
agent = Agent(
    ...,
    toolsets=[
        server_a.with_instructions(),  # injected
        server_b,                      # not injected
    ],
)
```
Honestly, I'd prefer .without_instructions(), since I think injection should be the default behavior, but that may be too drastic a change for a minor version bump.
Looking for guidance on the right philosophy. Can start a PR.
References
#3431 (merged)
#3353
https://blog.modelcontextprotocol.io/posts/2025-11-03-using-server-instructions/