AI (Coding) at EclipseSource: The Internal Story

April 8, 2026 | 16 min read

We build AI-powered tools, IDEs and business solutions, write a lot about AI coding and provide training on how to use AI coding effectively. So a question we often get is: what does AI (coding) actually look like inside EclipseSource?

How much code do we generate? Which tools do we use? What solutions do we build? How do we train our own team? And perhaps most importantly, where is the core value of a software engineering company in the age of AI?

This post is our answer. No hype, no inflated numbers, no downplaying either. Just the reality of AI at a software engineering company that also happens to build AI tools for a living and became AI-native early.

Early version of deep AI integration in a domain-specific workflow editor, 2024.

How much do we use AI coding?

We are often confused by the external conversation around AI coding. On one end, numbers like Spotify’s claim that close to 100% of their code is AI-generated seem to shock the world. On the other end, we regularly engage with development teams that have zero AI adoption at all - not even experimental use, literally none.

For us, numbers like Spotify’s are not surprising at all - depending on the project-specific environment, we are in a similar range. AI coding at EclipseSource is no longer something new. It’s simply how we work, every day, for every task where it is applicable. AI-native development is the default.

“Where it is applicable” means there are exceptions. Some client projects come with legal constraints that prohibit AI-assisted development, and we fully respect those. But for the vast majority of our work, every developer defaults to AI-native workflows. AI coding has been part of our workflow since early 2023, and we have been evolving our practices ever since. In early 2025 it became the norm. Looking back, the biggest leaps came from model improvements, especially successive Sonnet versions and Opus 4.5 in late 2025, more than from any single tool change.

In terms of generated code, we estimate > 90% across the projects where AI coding applies. We do not measure this precisely, but we systematically and continuously estimate it. However, the percentage itself is not what matters; what matters to us is efficient, high-quality software development for our customers. AI-generated code is never accepted blindly; it is systematically reviewed, adapted, and refined in context, with full engineering ownership over every result. How much can be generated depends on context: greenfield feature development pushes it higher, complex debugging or legacy work brings it lower. But the point is not the number. The point is that AI-assisted development has become our normal.

Presenting on AI-powered IDEs at Mozilla Builders, San Francisco, late 2024.

How much more efficient are we?

This is difficult to answer based on hard data, because you cannot run a controlled experiment: you cannot build the same feature twice, with AI and without, at least not under identical conditions, because the knowledge you gain the first time changes the second attempt. And retrospective estimates drift over time. What we do instead is rely on regular internal reflection, as part of our sprint estimation routines and retrospectives. Our developers estimate efficiency gains for their own work on an ongoing basis, and we discuss these openly, not with the goal to assess developers but to learn as a team.

The range: between 15% and 50% efficiency gain for most coding tasks. Some types of tasks benefit enormously: scaffolding new features, generating boilerplate, creating new UI, writing tests, exploring unfamiliar APIs. Other tasks benefit less: tracing a concurrency bug, optimizing a hot path, navigating tangled legacy code. The 15–50% range reflects the reality across a diverse project portfolio, not a best-case scenario.

It is worth noting that this range applies specifically to coding tasks. In practice, a significant share of our project work is not coding at all. It includes architecture, problem thinking, requirements analysis, system design, code review, coordination, and technical decision-making. These activities benefit from AI too, but in different and less easily quantifiable ways. So when you look at overall project efficiency, the effect is real but less dramatic than the per-task coding numbers might suggest.

Presenting AI for building AI-native Tools and IDEs at OCX in 2024.

How do we charge our customers?

The answer is simple: we don’t charge more.

Most of our projects are time-and-material based. When we become more efficient, our customers get more done within the same budget. We have not raised our daily rates because of AI-assisted productivity gains. Our ambition is to serve our clients with the best tools and methods available, and right now, that includes AI coding. The extra efficiency is something our customers benefit from directly, at no additional cost.

We believe this model is sustainable. Clients see the increased output and quality, and they recognize the value. If anything, it makes working with us a better deal than it was two years ago.

Which tools do we use?

We have tried nearly every AI coding tool that has entered the market since 2023. All EclipseSource employees have a 20% innovation budget that goes into open source work and into improving our own development workflows. A significant part of that time has gone into evaluating and comparing AI coding tools, as well as establishing best practices for AI coding that go beyond specific tool features.

After extensive experimentation, we have, for now, consolidated around three primary tools: Claude Code, GitHub Copilot, and the Theia IDE, which we contribute to ourselves. Most of our developers use at least two of these, and many switch between all three depending on the task.

Claude Code natively integrated into an IDE, mid 2025.

Why we switch tools and why that makes us better

This might sound inefficient, but it is one of the most valuable practices we have adopted. When you use only one tool, you develop habits. Some of those habits are good, but many are workarounds for that tool’s specific limitations, and you stop noticing the difference. Switching tools regularly forces reflection. You notice what each tool does well, where each one struggles, and most importantly, you develop a clearer understanding of what AI coding approaches work best versus what the tool is nudging you toward.

This is not something we enforce as a policy. It emerged naturally because our developers are curious and the 20% innovation time gives them space to explore. But the effect is real: a team that uses multiple tools thinks more critically about all of them, abstracting a common approach rather than a set of tool-specific features.

Plan-mode workflows in the Theia IDE, H1 2025.

The unfair advantage of building your own AI tools

Here is something that makes our situation genuinely unusual: we do not just use AI tools. We build them.

AI-native solutions are our daily business. We work on AI-native tooling for our clients, we build AI integrations for domain-specific workflows and we contribute to and shape open source projects such as Theia AI. This creates a mutual innovation loop that is hard to replicate.

When you build AI tools, you understand what happens under the hood of every AI agent: how context is assembled, how prompts are constructed, how tool calls work, where token limits create blind spots, and which LLMs work best in which scenarios. That understanding makes you a fundamentally better user of any AI tool, including the ones you did not build. You understand why a tool behaves the way it does, not just that it does. You know when to trust it and when to intervene.
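To make the mechanics concrete, here is a minimal, hypothetical sketch of the cycle described above: context assembly under a budget, prompt construction, and tool dispatch. All names (`assemble_context`, `fake_llm`, the tool table) are illustrative stand-ins, not the internals of any specific tool; real agents implement a far richer version of the same loop.

```python
# Minimal agent-loop sketch: context assembly, prompt construction, tool dispatch.
# Everything here is illustrative; the LLM is replaced by a deterministic stub.

MAX_CONTEXT_CHARS = 4_000  # a stand-in for token limits

def assemble_context(history: list[str]) -> str:
    """Concatenate recent messages, dropping the oldest once over budget.
    This truncation is exactly where 'blind spots' come from."""
    context = ""
    for message in reversed(history):
        if len(context) + len(message) > MAX_CONTEXT_CHARS:
            break
        context = message + "\n" + context
    return context

# Tools the agent may call; a model picks one by name and supplies arguments.
TOOLS = {
    "read_file": lambda path: f"<contents of {path}>",
    "run_tests": lambda: "2 passed, 0 failed",
}

def fake_llm(prompt: str) -> dict:
    """Stand-in for a model call: requests a tool until its result is in context."""
    if "run_tests" not in prompt:
        return {"type": "tool_call", "name": "run_tests", "args": {}}
    return {"type": "answer", "text": "All tests pass."}

def agent_step(history: list[str]) -> list[str]:
    """One iteration: build the prompt, query the model, dispatch or finish."""
    prompt = assemble_context(history)
    reply = fake_llm(prompt)
    if reply["type"] == "tool_call":
        result = TOOLS[reply["name"]](**reply["args"])
        history.append(f"tool {reply['name']} -> {result}")
    else:
        history.append(f"assistant: {reply['text']}")
    return history

history = ["user: do the tests pass?"]
while not history[-1].startswith("assistant:"):
    history = agent_step(history)
print(history[-1])  # assistant: All tests pass.
```

The loop runs until the model answers instead of requesting a tool; seeing that every turn re-assembles context from scratch explains why agents "forget" things that fell out of the budget.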

The loop works in the other direction too. Because we use AI coding tools every day in real production projects, we see firsthand what works and what fails. That experience flows directly into the tools we build. We are not designing AI developer tools from theory. We are designing them from daily frustration and daily delight.

This is also why we made a basic understanding of how AI coding agents work a core part of our Systematic AI Coding training. Our own experience showed us that developers who understand the mechanics behind these tools use them dramatically more effectively.

Theia AI goes public, September 2024 - built by 16 EclipseSource contributors across the first half of the year.

What kind of AI tools do we build?

The easy answer is: almost every kind. But a large part of our work is helping customers identify where AI can create real value in their specific domain.

Often we integrate AI capabilities into domain-specific expert tools. We have worked on AI integration for graphical modeling environments, hardware configuration tools, tracing data analysis, railway construction software, and the list keeps growing. Literally any domain that involves structured, typically complex data and expert workflows is a great candidate.

What surprises many of our customers is how much large language models can actually do within these specialized domains. The assumption is often that AI is useful for general tasks - writing text, generating code - but too unreliable for expert workflows with strict rules and complex data. In practice, the opposite is frequently true. When you combine an LLM’s ability to reason over complex information with deep integration into domain-specific tools and data, you unlock capabilities that create measurable value for expert users.

Getting there is not always easy. Domain-specific AI integration requires understanding both the AI side and the domain side deeply, and designing the right interface between them. But this is exactly where built-up experience compounds. Every domain we work in teaches us patterns that accelerate the next one, and two years of doing this across diverse industries has given us a strong foundation.

Are we still focused on tools and IDEs?

Yes, and that remains our core. We even still work on plenty of projects with no AI aspect at all, including plain RCP modernization and migration work. But our deep adoption of AI has opened doors, even beyond AI in tools.

When you spend years building AI tools and using AI coding daily in production, you accumulate a specific kind of experience. We understand how AI agents work, how to make them reliable, and how to integrate them into complex, real-world systems. That knowledge doesn’t only apply to tools in the classic sense.

Increasingly, clients and partners approach us to apply this expertise in business workflows, automating processes that traditionally required human judgment.

The most prominent example - and the area where we have gone deepest - is AI interaction agents: fully autonomous voice, chat, and messaging agents that don’t just answer questions but actually complete workflows end-to-end. A patient calls to book an appointment - the agent checks availability, collects insurance data, and books the slot. A customer calls a furniture store - the agent checks real-time inventory and reserves the item. No callback needed, no manual follow-up.
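The difference between answering questions and completing a workflow can be sketched in a few lines. This is a deliberately simplified, hypothetical example, not the MediVoice API: `check_availability`, `book_slot`, and `handle_call` are invented names standing in for real integrations with a practice management system.

```python
# Hypothetical sketch of a workflow-completing interaction agent: instead of
# only answering, the call is driven to a booked appointment end-to-end.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Booking:
    patient: str
    slot: str
    insurance: str

AVAILABLE_SLOTS = ["Mon 09:00", "Tue 14:30"]  # would come from the backend system

def check_availability() -> list[str]:
    return AVAILABLE_SLOTS

def book_slot(patient: str, slot: str, insurance: str) -> Booking:
    AVAILABLE_SLOTS.remove(slot)  # reserve immediately: no callback, no follow-up
    return Booking(patient, slot, insurance)

def handle_call(patient: str, insurance: str, preferred: str) -> Optional[Booking]:
    """End-to-end workflow: check availability, collect data, book the slot."""
    if preferred in check_availability():
        return book_slot(patient, preferred, insurance)
    return None  # a real agent would propose alternatives and stay in dialogue

booking = handle_call("A. Patient", "AOK", "Tue 14:30")
print(booking)  # Booking(patient='A. Patient', slot='Tue 14:30', insurance='AOK')
```

The hard production problems are hidden inside these stubs: speech understanding, collecting insurance data conversationally, and integrating reliably with the backend systems that own the slots.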

First public announcement of MediVoice, late 2023: AI-powered phone automation for medical practices.

Our most established product in this space is MediVoice, an AI-powered phone system actively used in medical practices across Germany, handling millions of patient interactions. Since early 2025, we have been transferring what we learned in healthcare to other industries: furniture retail, steel trading, large-goods commerce - through a partner model where software vendors bring the domain expertise and customer relationships, and we bring the AI interaction platform and the production experience to make it work reliably.

If you are curious about what we have learned building these agents, we recently published a detailed article: AI Voice and Interaction Agents in Production: 6 Lessons from the Field. And if you want to explore AI-powered automation for your business processes or add interaction capabilities to your product, take a look at our AI Interaction Services.

This expansion is not a pivot away from tools and IDEs, but the same AI expertise applied to a different problem. And the two reinforce each other - building interaction agents deepens our understanding of AI systems, which flows back into the developer tools we build, and vice versa.

Where is our core value?

AI is an amplifier. It scales whatever you bring to the table. A junior developer with AI tools gets faster at writing code. But a team of experienced engineers with deep architectural knowledge, years of domain expertise, and a thorough understanding of how AI coding actually works? That team does not just get faster. It operates at a fundamentally different level, not just in efficiency but in output quality, exactly the kind of difference that matters when your codebase needs to last a decade.

Every decision matters: what you ask the AI, how you structure a problem, how you evaluate the output, when you accept it and when you reject it, how you design systems that remain maintainable. All of this depends on expertise that AI does not replace. If anything, the gap between an expert team using AI and a less experienced team using AI is wider than it was without AI, because expert judgment compounds with every AI-assisted decision.

But there is another dimension that has become equally important: AI itself is still a very young field, and almost nobody has real production experience with it. We do. After years of building AI tools, deploying AI agents, and using AI coding daily across a diverse project portfolio, we bring a depth of practical AI experience that is still rare. And that experience matters more every day, because almost any piece of software we touch now integrates or at least interacts with AI in some way. Knowing how to build reliable, maintainable AI-powered systems is no longer a niche skill. It is becoming a core requirement, and our customers benefit from the fact that we have been building this expertise for years.

What has shifted is where our time goes. We spend less time on implementation mechanics and more time on architecture, code review, system design, and strategic technical decisions - work higher up the value chain. Clients are not paying us to type code. They are paying for the engineering judgment that makes the code express the best possible solution, keeps it maintainable, and aligns it with their strategic goals. AI just lets us apply that judgment at a much higher throughput.

A good example is Eclipse RCP modernization. Migrating large RCP applications has always been a project that stakeholders approached with a mix of necessity and dread: the codebases are large, the frameworks are complex, and the effort has traditionally been significant. With AI, we can now execute these migrations dramatically more efficiently. But - and this is the key point - only because our team has two decades of deep RCP expertise. The AI does not know how to migrate an RCP application without deeply experienced guidance and control. We do. AI lets us apply that knowledge at a speed and scale that was simply not possible before. For many stakeholders, this changes the calculus entirely: migrations that once felt too risky or too expensive are now realistic.

How did we learn all of this?

This is where it gets personal.

Yes, we have structural support for learning. Our strict 20% innovation rule gives every developer dedicated time to explore. We run internal workshops focused entirely on innovation. Our open source work, especially on Theia and related projects, gives us a space to put new ideas into practice immediately, not just theorize about them.

But what really drives it is simpler: our team is naturally curious. They always have been. For almost two decades, the people at EclipseSource have had this instinct to dive into new things early, figure them out, and share what they find. That didn’t start with AI - it’s just who they are.

When MCP emerged in late 2024, our team was implementing it even before you could find meaningful search results about it on the internet. We built our first coding agent before Claude Code existed and used plan-mode workflows before they were integrated in any off-the-shelf tools. We implemented AI agents for our customers’ tools when LLM tool calling was not even a thing yet and many people still considered AI a chatbot that couldn’t interact with external functions. Because our team saw something that might add real value and couldn’t resist putting it into practice.

We don’t take this for granted. A company can create the conditions for learning: the time, the formats, the freedom. But the curiosity itself is something people bring with them. We are proud of and grateful for this culture. It is the foundation everything else in this post is built on.

Our internal innovation retreats have had AI at their center since 2023.

How do we train our people?

Our developers have regular, dedicated time to improve their AI coding workflows - time that is not billed to customers but invested in staying sharp. In a field that evolves this quickly, standing still for even a few months means falling behind.

In early 2025, we realized that individual experimentation alone was not enough. We needed a shared methodology - a common language and workflow that the entire team could build on. That is when we started developing what would become a methodology we now call Systematic AI Coding (formerly “Dibe Coding”).

The methodology grew out of our own needs first. We systematized what worked, documented what didn’t, and created a structured approach to AI-assisted development that guides our team. It was never designed as a product; it was designed to make us better.

What happened next was organic. Clients we worked with started noticing our involvement with AI and asking how we apply it. We helped a few existing customers adopt similar practices. And when we realized the demand went beyond our client base, we turned the methodology into the training program that is now known as Systematic AI Coding.

That origin matters. The training is based on years of daily practice across a real portfolio of production projects, continuously refined by a team that also builds AI tools itself.

Systematic AI Coding: training and adoption support since September 2025.

Summary

AI is deeply embedded in how we work. But “embedded” does not mean “finished.” Applying AI is an ongoing practice that evolves with every new model, every new tool, and every project that challenges our assumptions. It keeps us busy, and it keeps us curious, which is exactly how we like it.

The good news is that everything we have learned along the way is something we can now put to work for you, across our full spectrum of AI services. Wherever you are on your AI journey, we have been through the hard parts and know what actually works.

Let’s talk.

💼 Follow us: EclipseSource on LinkedIn

🎥 Subscribe to our YouTube channel: EclipseSource on YouTube


Jonas, Maximilian & Philip

Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource, specializing in consulting and engineering innovative, customized tools and IDEs, with a strong …