Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource, specializing in consulting and engineering innovative, customized tools and IDEs.
MCP and Context Overload: Why More Tools Make Your AI Agent Worse
January 22, 2026 | 6 min read

The Model Context Protocol (MCP) is one of the most successful innovations in AI tooling. It lets agents connect to external systems — GitHub, databases, browsers, IDEs — through a standardized interface. Third-party providers can ship MCP servers, and users can plug them into their favorite tools with minimal effort.
And yet, MCP also contributes to an often underestimated problem in AI coding: context overload.
In this article, we’ll explore why providing more (MCP) tools often makes agents perform worse — and what you can do about it.
👉 Watch the full video here:
The Handyman With Too Many Tools
Imagine a handyman assembling a wardrobe. In the ideal scenario, they have the instructions and exactly the tools they need — neatly organized, easy to access.
Now imagine the same handyman surrounded by thousands of tools. Drills, saws, welding equipment, plumbing supplies — most of it irrelevant to the task. Any reasonable person would first clear away everything they don’t need.
LLMs can’t do that.
They are stateless. Every request starts fresh. The agent receives the full list of available tools with every single interaction and must choose from all of them. There’s no way to “put tools aside” and focus.
Worse: unlike a human handyman who knows their tools by heart, LLMs have no prior knowledge of most MCP tools. Each tool comes with a full description — what it does, what parameters it accepts, how to use it. All of this consumes context.
In a recent video, we showed that with a coding agent, a fairly standard set of MCP servers — Playwright, GitHub, and an IDE integration — consumed over 20% of the context window before the agent even started working.
As a rule of thumb, you want to stay below 40% total context usage. If tools alone already eat up half of that budget, there’s little room left for files, instructions, and actual work.
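To make this concrete, here is a minimal sketch of how tool definitions eat into the context budget before any real work happens. The tool schema, the 40-tool count, and the rough 4-characters-per-token heuristic are all assumptions for illustration — real MCP servers ship their own schemas, and real tokenizers differ.

```python
# Sketch: estimate how much of the context window tool definitions
# consume before the agent does any work. The schema below is a
# simplified, hypothetical example; real MCP servers expose richer ones.

import json

CONTEXT_WINDOW_TOKENS = 200_000  # assumption: a typical large context window

tools = [
    {
        "name": "github_create_issue",
        "description": "Create an issue in a GitHub repository. "
                       "Requires repo owner, repo name, title, and body.",
        "parameters": {"owner": "string", "repo": "string",
                       "title": "string", "body": "string"},
    },
    # ... in practice, a few generic servers ship dozens of these
] * 40  # simulate ~40 tool definitions from a standard MCP setup

def estimate_tokens(obj) -> int:
    """Very rough heuristic: ~4 characters per token."""
    return len(json.dumps(obj)) // 4

tool_tokens = sum(estimate_tokens(t) for t in tools)
share = tool_tokens / CONTEXT_WINDOW_TOKENS
print(f"Tool definitions: ~{tool_tokens} tokens "
      f"({share:.1%} of the context window)")
```

With realistic descriptions (often several paragraphs per tool) and more servers, this share grows quickly — which is how a standard setup reaches the 20%-plus figure mentioned above.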
The Irony of MCP’s Success
The irony: the two things that make MCP so successful are also the two things that contribute most to context overload.
First, MCP servers are typically not built by the people who design agents. GitHub builds the GitHub MCP server. Playwright builds the Playwright server. This is fantastic for the ecosystem — it means broad compatibility and less integration work.
But it also means these servers are designed to be generic. They expose as many functions as possible to cover as many use cases as possible. Your agent may only need three of those functions — but it receives all forty.
Second, MCP is incredibly easy to integrate. In many tools, especially IDEs, end users can activate MCP servers themselves. It’s tempting to enable everything that looks useful. And many tools don’t offer fine-grained control over which functions from an MCP server you actually want.
The result: agents drowning in tools they don’t need.
What You Can Do Today
The ecosystem is moving toward better solutions. Anthropic released a tool discovery and deferred loading feature in late 2025, allowing agents to load tools on demand rather than all at once. We’ll likely see more of this.
But what can you do right now?
1. Control your tool set deliberately
Don’t give your agent access to full MCP servers if it doesn’t need them. If you’re designing agents yourself, be selective. If you’re using tools like VS Code or Theia IDE, take advantage of fine-grained tool selection — these IDEs let you enable or disable individual functions from an MCP server.
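If you are wiring agents yourself, the same idea can be applied in code by allow-listing tools before handing them to the model. The sketch below uses hypothetical Playwright-style tool names; adapt the allow-list to the server you actually use.

```python
# Sketch: instead of forwarding every function an MCP server exposes,
# allow-list only the ones the agent actually needs for the task.
# Tool names here are illustrative, not a real server's API.

ALLOWED = {"browser_navigate", "browser_click", "browser_snapshot"}

def select_tools(server_tools: list[dict]) -> list[dict]:
    """Keep only explicitly allow-listed tools from an MCP server."""
    return [t for t in server_tools if t["name"] in ALLOWED]

playwright_tools = [
    {"name": "browser_navigate"}, {"name": "browser_click"},
    {"name": "browser_snapshot"}, {"name": "browser_pdf_save"},
    {"name": "browser_install"},  # rarely needed during a coding task
]

lean_set = select_tools(playwright_tools)
print([t["name"] for t in lean_set])
```

The design choice is deliberate friction: every tool that reaches the agent should be there because someone decided it belongs, not because a server happened to export it.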
Claude Code, for example, doesn’t currently support this level of control — you can only toggle entire MCP servers. However, they recently introduced tool search and discovery features to mitigate the problem automatically.
2. Use sub-agents with dedicated tool sets
In some systems, sub-agents can have their own isolated context and tool set. This lets you create specialized agents — one for GitHub operations, another for browser testing — and delegate tasks to them as needed.
👉 See this demonstration for an example of sub-agents in Theia AI
The main agent stays lean. It doesn’t carry tools it rarely uses. Only the sub-agent responsible for a specific step gets the tools it needs.
Note: not all systems that support sub-agents actually isolate their context. Claude Code’s sub-agents, for instance, still share the same tool set as the main agent. Check whether your system truly separates context before relying on this pattern.
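The routing logic behind this pattern can be sketched in a few lines. Agent names, tool names, and the orchestrator shape below are all made up for illustration — the point is that the main agent never carries the specialized tools in its own context.

```python
# Sketch of the sub-agent pattern: a lean main agent delegates
# specialized steps to sub-agents that each own an isolated tool set.
# All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    tools: set[str] = field(default_factory=set)

    def can_handle(self, required_tool: str) -> bool:
        return required_tool in self.tools

@dataclass
class Orchestrator:
    main: Agent
    sub_agents: list[Agent]

    def route(self, required_tool: str) -> Agent:
        """Delegate to the first sub-agent owning the tool; fall back
        to the main agent for its own small, general-purpose set."""
        for sub in self.sub_agents:
            if sub.can_handle(required_tool):
                return sub
        return self.main

orchestrator = Orchestrator(
    main=Agent("main", {"read_file", "write_file", "terminal"}),
    sub_agents=[
        Agent("github-agent", {"create_pr", "list_issues"}),
        Agent("browser-agent", {"navigate", "click", "snapshot"}),
    ],
)
print(orchestrator.route("create_pr").name)  # prints "github-agent"
```

Note that this sketch only models routing; whether the delegated call actually runs in a separate context depends on the system, as discussed above.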
3. Design functions that leverage existing LLM knowledge
If you’re building MCP servers or custom tools, consider how much you’re asking the LLM to learn.
The terminal function in Claude Code is a great example. Every LLM understands how to use a terminal — it’s in the training data. You don’t need lengthy descriptions. The function is powerful (it can search, rename, execute, inspect) and intuitive.
Another pattern: build discovery into your function design. In the Theia IDE, the agent can execute user-defined tasks through just two functions — one to search for relevant tasks, one to execute them. This mirrors the deferred loading approach at the function level.
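The search-then-execute pattern can be sketched as two generic functions standing in for many specific ones. The task names and commands below are invented for illustration; the Theia IDE's actual task functions may look different.

```python
# Sketch of the discovery pattern: instead of exposing one tool per
# user-defined task, expose two generic functions — one to search for
# tasks, one to execute a task by name. Tasks here are simulated.

TASKS = {
    "build": "npm run build",
    "test": "npm test",
    "lint": "npx eslint .",
}

def search_tasks(query: str) -> list[str]:
    """Tool 1: let the agent discover relevant tasks on demand."""
    return [name for name in TASKS if query.lower() in name.lower()]

def run_task(name: str) -> str:
    """Tool 2: execute a previously discovered task (simulated)."""
    if name not in TASKS:
        raise KeyError(f"unknown task: {name}")
    return f"running: {TASKS[name]}"

print(search_tasks("te"))  # the agent discovers 'test' first...
print(run_task("test"))    # ...then executes it by name
```

However many tasks a user defines, the agent's context only ever pays for two tool descriptions — the same economics as deferred loading, applied at the function level.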
If you write MCP servers, fewer well-designed functions will often outperform many narrow ones.
Context Overload Is Invisible — Until It Isn’t
What makes this problem so dangerous is that it’s hard to detect.
Missing context is obvious: the AI asks questions or fails in predictable ways. Context overload, on the other hand, produces unpredictable behavior. Instructions get ignored. Responses feel random. The agent seems “dumb” — even though it’s the same model that worked fine yesterday.
If your agent’s performance feels inconsistent, context overload is one of the first things to check.
Wrapping Up
MCP is a fantastic protocol. It has fundamentally changed how agents connect to external systems. But like any powerful abstraction, it comes with trade-offs.
The ease of adding tools makes it dangerously easy to add too many. And the generic nature of third-party MCP servers means you often get far more than you need.
The solution isn’t to avoid MCP — it’s to use it deliberately. Control your tool set. Leverage sub-agents where possible. And if you’re building tools yourself, design functions that work with the LLM’s existing knowledge rather than against it.
If you’re building agentic systems or AI-native tools and need help with design or implementation, EclipseSource is here to help. We’re your tech partner for building tools of any kind — in coding, engineering, or any domain.
👉 Our services for Building AI-Enhanced Tools and IDEs
👉 Explore our AI Coding Training and Consulting options
💼 Follow us: EclipseSource on LinkedIn
🎥 Subscribe to our YouTube channel: EclipseSource on YouTube
Stay Updated with Our Latest Articles
Want to ensure you get notifications for all our new blog posts? Follow us on LinkedIn and turn on notifications:
- Go to the EclipseSource LinkedIn page and click "Follow"
- Click the bell icon in the top right corner of our page
- Select "All posts" instead of the default setting