Invisible Blockers for AI Coding: When Your Setup Blocks Progress

January 13, 2026 | 6 min read

Teams invest a lot of time and effort into configuring AI coding tools. MCP servers, custom commands, context files, automation pipelines — everything carefully designed and optimized.

And yet, when we review these setups in practice, we regularly see the same outcome: the tool looks impressive, but it performs poorly in day-to-day work.

This is not because the setup is “wrong” or because people didn’t try hard enough. In fact, it’s often the opposite: the setups are too sophisticated.

We want to share three invisible blockers we repeatedly observe when working with teams on AI coding adoption — blockers that are rarely discussed, but have a massive impact on whether AI coding actually works in practice.

👉 Watch the full video here

Blocker #1: Context Overload

When we inspect an AI coding setup, one of the first things we look at is the context window — how much information is already consumed before a developer even starts typing.

In one recent review, the team had configured a long list of MCP servers:

  • requirements tools
  • documentation lookup
  • file system access
  • GitHub and GitLab integration
  • memory services
  • UI testing tools

On paper, this looks powerful. In reality, these tools alone already consumed more than a quarter of the available context window.

Once the developer starts working — writing prompts, loading files, iterating on code — the context window fills up almost immediately. Many teams follow a rule of thumb: once you go beyond roughly 40% context usage, model behavior becomes unreliable. Instructions are ignored. Results get weird. The agent feels “dumb”.

The problem is not that the AI lacks information — it has too much of it.
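The ~40% rule of thumb above can be turned into a quick sanity check. The following is a minimal sketch, not a real tokenizer: the function names are hypothetical, and it assumes a crude ~4-characters-per-token estimate and an example 200k-token window. Actual token counts and window sizes are model-dependent.

```python
# Rough context-budget check: warns once static setup content
# (MCP tool descriptions, project context files) plus the current
# conversation exceeds ~40% of the model's context window.

CONTEXT_WINDOW_TOKENS = 200_000  # example budget; varies by model
WARN_THRESHOLD = 0.40            # the ~40% rule of thumb


def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)


def context_usage(static_texts: list[str], conversation: list[str]) -> float:
    """Fraction of the context window already consumed."""
    used = sum(estimate_tokens(t) for t in static_texts + conversation)
    return used / CONTEXT_WINDOW_TOKENS


def check(static_texts: list[str], conversation: list[str]) -> None:
    usage = context_usage(static_texts, conversation)
    if usage > WARN_THRESHOLD:
        print(f"Warning: {usage:.0%} of context used - expect unreliable behavior")
    else:
        print(f"OK: {usage:.0%} of context used")
```

Running such a check before a session starts makes the invisible cost of a long MCP server list visible: if your tool descriptions and context files alone push usage near the threshold, the budget left for actual work is already tight.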

The same pattern appears with large project context files. Teams often invest a lot of effort into documenting:

  • architectural decisions
  • coding guidelines
  • best practices
  • examples

All of this is valuable documentation. But when it’s pushed wholesale into the AI’s context, the agent simply cannot prioritize what matters. Even strong instructions like “always” or “very important” are ignored.

With AI, more context is not automatically better.

What makes this blocker especially dangerous is that it’s hard to detect. Missing context is obvious — the AI asks questions or fails clearly. Context overload, on the other hand, leads to unpredictable behavior that feels almost random.

Blocker #2: Automation Before Understanding

The second blocker is harder to see, and it often appears in very experienced teams.

These teams build impressive automation pipelines:

  • enter a bug number
  • fetch the report
  • fix the issue
  • generate tests
  • run them
  • commit the change
  • update documentation
  • review and merge the PR

Everything is automated end-to-end.

This might very well be the future of AI coding. But today, in practice, it often backfires.

⚠️ Don’t Outsource Your Thinking

Premature automation creates a dangerous illusion: that developers no longer need to think. You just enter a GitHub issue ID and wait for the result. But AI doesn’t replace thinking — it amplifies it. When you outsource thinking to automation you don’t understand, you lose the ability to guide, debug, or improve it.

The problem is not the automation itself — it’s the expectation it creates. Polished automation pipelines subtly signal that the hard work is done. Developers stop engaging critically with each step. They stop asking “why did the AI do this?” and start assuming the system just works.

But AI systems are nondeterministic. When something goes wrong — and it will — teams need to understand:

  • what step failed
  • why it failed
  • how to influence the prompt or the process

If developers didn’t build the automation themselves and never worked through the steps manually, they lack the intuition needed to debug or adjust it. They’ve outsourced not just the work, but the understanding. The result is frustration and the claim that “the tool doesn’t work”.

What we do in these situations is often counter-intuitive: we remove most of the automation.

We let teams work manually with prompts again, until they develop a solid understanding of what the AI can and cannot do. This rebuilds the thinking that premature automation had bypassed. Only then do we rebuild automation — step by step, with the team owning every decision.

This approach works significantly better.

Blocker #3: No Ownership of the Setup

The third blocker is organizational.

AI coding setups are often created centrally:

  • by lead architects
  • by senior engineers
  • by infrastructure teams

A lot of thought goes into these configurations. And yet, we repeatedly observe three symptoms:

  • Teams claim the setup doesn’t work and refuse to use it
  • Experienced developers create private “shadow setups”
  • Configuration files never change after initial creation

If your setup files haven’t changed in months, something is wrong. AI setups must evolve continuously — especially project context files.

The core issue is ownership. When a setup is “thrown over the fence”, teams don’t understand why decisions were made, they don’t feel responsible for improving it, and the setup becomes a black box.

What works much better is creating the initial setup together with the team, explaining the trade-offs, and making evolution part of the process. Central responsibility still matters — but without shared ownership, the setup will stagnate.

What Actually Works in Practice

Across many teams, we’ve found one approach that consistently scales:

  • Don’t try to get the setup perfect on day one
  • Start minimal
  • Let people feel friction when something is missing
  • Add tools, context, and automation incrementally
  • Discuss every addition with the team
  • Allow disagreement

A slightly worse setup that everyone understands and agrees on will always outperform a “perfect” setup that nobody trusts.

This approach leads to:

  • shared understanding instead of siloed expertise
  • grounded intuition before automation
  • automation that fits the team’s real workflow

It may feel slower at first, but it’s the only approach we’ve seen work sustainably.

AI Coding Is a Practice, Not a Configuration Problem

All three blockers share the same underlying pattern: treating AI coding as something to automate, instead of something to learn and practice together.

If this resonates with your experience, you’re not alone. These are exactly the issues we address when working with teams on AI coding adoption — long before tools, prompts, or automation become effective.

We offer practical training and adoption support to help teams move beyond ad-hoc experimentation and build a structured, reliable AI coding workflow.

👉 Explore our AI Coding Training and Consulting options

More articles and videos in the Invisible Blockers for AI Coding series will follow soon.

💼 Follow us: EclipseSource on LinkedIn

🎥 Subscribe to our YouTube channel: EclipseSource on YouTube

Stay Updated with Our Latest Articles

Want to ensure you get notifications for all our new blog posts? Follow us on LinkedIn and turn on notifications:

  1. Go to the EclipseSource LinkedIn page and click "Follow"
  2. Click the bell icon in the top right corner of our page
  3. Select "All posts" instead of the default setting

Jonas, Maximilian & Philip

Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource, specializing in consulting and engineering innovative, customized tools and IDEs, with a strong …