Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource, specializing in consulting and engineering innovative, customized tools and IDEs, with a strong …
The AI Coding Spectrum: 6 Levels of Assistance Developers Should Know
June 26, 2025 | 7 min Read

As developers, we interact with an ever-growing range of tools that help us write, fix, and understand code. From rule-based linters that enforce formatting to autonomous AI agents capable of planning and implementing features on their own, the spectrum of assistance is expanding fast.
But when we talk about “AI coding,” what do we really mean? Are we referring to autocomplete suggestions? Code explanations via chat? Or a fully autonomous agent pushing a PR?
To make this conversation more precise, we propose a very simple, progressive framework: six levels of AI-enhanced coding, defined by what the assistant does, not just how it works under the hood.
Here’s a quick overview of the levels before we dive into each in more detail:
Level | Name | Behavior | Typical UX |
---|---|---|---|
Level 0 | Static Tooling | Rule-based helpers like linters or formatters — typically no AI | Tooltips, overlays, problem views |
Level 1 | Token-Level Completion | Predicts the next token or word based on local context | Inline, auto-completion |
Level 2 | Block-Level Completion | Completes entire lines or functions | Inline, auto-completion |
Level 3 | Intent-Based Chat Agent | Chat-driven assistant suggests changes | Chat UI + Diff |
Level 4 | Local Autonomous Agent | Receives a feature description, edits files, runs tests, and iterates | Chat UI, Diff (for final review), Planner Dashboard |
Level 5 | Fully Autonomous Dev Agent | Plans and completes tasks end-to-end | Agent Dashboard |
This model has proven very useful in practice. In our workshops with teams exploring AI-native coding practices, we’ve seen that people often have very different ideas of what “AI coding” actually means. As the field is still rapidly evolving, we’ve found it incredibly helpful to give things a name: establishing a shared vocabulary helps teams align faster, communicate more clearly, and, most importantly, share experiences with each other.
Level 0: Static Tooling
Behavior: Rule-based helpers like compilers or linters — typically no AI involved.
Typical UX: Tooltips, overlays, hover over, problem views.
These tools analyze your code and flag potential errors or violations of style rules. They are deterministic and don’t attempt to “guess” your intent. Based on the context, they can suggest fixing an error in a static, deterministic way (a.k.a. a “Quick Fix”).
Examples: Linters, Compilers, Language Servers
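To make the distinction concrete, here is a minimal sketch of what Level 0 tooling does: a hypothetical rule-based check (a trailing-whitespace rule, chosen only for illustration) together with its deterministic quick fix. No model is involved; the same input always yields the same diagnostics and the same fix.

```python
# Minimal sketch of a Level 0 rule-based check: deterministic, no AI.
# The trailing-whitespace rule is an illustrative stand-in for any lint rule.

def lint(source: str) -> list[tuple[int, str]]:
    """Return (line_number, message) pairs for every violation found."""
    problems = []
    for number, line in enumerate(source.splitlines(), start=1):
        if line != line.rstrip():
            problems.append((number, "trailing whitespace"))
    return problems

def quick_fix(source: str) -> str:
    """The deterministic 'Quick Fix': strip trailing whitespace everywhere."""
    return "\n".join(line.rstrip() for line in source.splitlines())

code = "x = 1   \ny = 2\n"
print(lint(code))  # [(1, 'trailing whitespace')]
print(quick_fix(code))
```

Real linters are vastly more sophisticated, but the core property is the same: the fix is computed by a fixed rule, not guessed from intent.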
Level 1: Token-Level Completion
Behavior: Predicts the next token or word based on local context.
Typical UX: Inline, auto-completion
This level covers traditional autocomplete features. The assistant doesn’t understand your overall intent but is good at finishing identifiers or keywords based on what you’ve typed. For this, it uses the context of the file you are editing, other files in your project, and underlying libraries. Several AI-enhanced Level 1 completion tools have existed over the last decades, long before the coding power of modern LLMs was revealed.
Examples: Traditional IDE autocomplete, Language Servers
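The essence of Level 1 can be sketched in a few lines: gather the identifiers already visible in the local context and rank those matching the typed prefix. This is a deliberately naive illustration; real completion engines also use symbol tables, type information, and (in AI-enhanced variants) learned ranking.

```python
# Illustrative sketch of Level 1 completion: suggest identifiers from the
# surrounding code that match the prefix the developer has typed so far.
import re

def complete(prefix: str, context: str) -> list[str]:
    """Return identifiers in the context that start with the given prefix."""
    identifiers = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", context))
    return sorted(w for w in identifiers if w.startswith(prefix) and w != prefix)

context = "user_name = load_user()\nuser_notes = []\nupdate(user_name)"
print(complete("user_", context))  # ['user_name', 'user_notes']
```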
Level 2: Block-Level Completion
Behavior: Completes entire lines or code blocks, such as functions or loops.
Typical UX: Inline, auto-completion
Here, the assistant starts to infer slightly broader context and may generate several lines of code that form a coherent block. It still operates reactively, responding to context such as comments, function headers, surrounding code, or, more generally, the context around your current cursor.
Examples: Any AI auto-completion tool
Level 3: Intent-Based Chat Agent (a.k.a. “Edit Mode”)
Behavior: You describe your goal or problem in natural language; the assistant responds with suggested code changes to be reviewed.
Typical UX: Chat UI + Diff preview.
This is the rise of chat-based assistants that can take in broader context, explain code, fix bugs, or implement new functions. It combines code generation with dialogue, typically in a chat window.
Examples: GitHub Copilot in Edit Mode, Theia Coder in Edit Mode, Continue.dev, Roo, Cline, Cursor, etc.
It is worth mentioning that Level 3 agents come in multiple UX variants. A typical variant allows starting a chat from a given context, e.g. from within the code editor. In the example screenshot below, we ask Coder to write a test for a specific function we have selected in the code editor. Other examples of such sub-variants are right-click actions such as “Fix with AI” on errors, which start a scoped chat and create suggestions, often displayed inline.
Level 3 agents are already amazingly powerful in practice. With good prompts and LLMs, you can code full features or entire applications from scratch. Due to the review step, they give the developer full control over any applied changes. However, because they depend on explicit user review, they typically cannot fully iterate on changes by running tests, building the system, or even launching it to test the UI.
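The defining Level 3 step is the review: the assistant proposes new file contents, the developer inspects a diff, and nothing is written until the change is accepted. The sketch below uses Python’s standard `difflib` to render that diff; the proposed change itself stands in for whatever a chat-driven assistant would suggest.

```python
# Sketch of the Level 3 review step: render the assistant's proposed change
# as a unified diff for the developer to accept or reject. The "proposed"
# content is a placeholder for a chat-driven suggestion.
import difflib

def render_diff(original: str, proposed: str, path: str) -> str:
    """Produce a unified diff between current and proposed file contents."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        proposed.splitlines(keepends=True),
        fromfile=f"a/{path}", tofile=f"b/{path}",
    ))

original = "def add(a, b):\n    return a - b\n"
proposed = "def add(a, b):\n    return a + b\n"
print(render_diff(original, proposed, "math_utils.py"))
# The change is only written to disk after the developer accepts the diff.
```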
Level 4: Local Autonomous Agent (a.k.a. “Agent Mode”)
Behavior: Given a feature description, it plans, edits multiple files, compiles, runs tests, starts the application and iterates within your local environment.
Typical UX: Multiple Chat UIs, Diff viewer, Planner UI.
Unlike Level 3, this agent doesn’t just suggest code: it executes multi-step workflows, adapts to failures (like failing builds or tests), can potentially start and test the application under development, and maintains memory across steps. It’s the beginning of real automation, but still under local and optional interactive control.
Examples: The same tools as for Level 3, but in agent mode, as well as CLI tools such as Claude Code
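The propose/apply/test/iterate cycle that distinguishes Level 4 can be sketched as a simple loop. The callables are injected because a real agent would back them with an LLM, a workspace, and a build tool; here they are hypothetical placeholders, and `run_pytest` shows one possible test runner.

```python
# Minimal sketch of a Level 4 loop: propose a change, apply it, run the
# tests, and feed failures back until the suite passes or a budget runs out.
import subprocess
from typing import Callable

def run_pytest() -> tuple[bool, str]:
    """One possible test runner: shell out to pytest and capture its output."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def agent_loop(
    task: str,
    propose_patch: Callable[[str, str], str],   # e.g. an LLM call
    apply_patch: Callable[[str], None],         # edit files in the workspace
    run_tests: Callable[[], tuple[bool, str]],  # e.g. run_pytest
    max_iterations: int = 5,
) -> bool:
    feedback = ""
    for _ in range(max_iterations):
        patch = propose_patch(task, feedback)
        apply_patch(patch)
        passed, output = run_tests()
        if passed:
            return True        # hand the final diff back for human review
        feedback = output      # iterate on the failure
    return False
```

The iteration bound and the final human review are what keep this “local and optional interactive control” rather than full autonomy.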
Level 5: Fully Autonomous Dev Agent
Behavior: Plans and completes development tasks end-to-end with minimal or no human intervention.
Typical UX: Agent dashboard or regular platforms such as GitHub or GitLab
At this stage, the assistant acts like a real developer: it reads your backlog or issue tracker, picks a task (or takes one assigned to it), makes a plan, writes code, tests it, and opens a pull request, all without you needing to touch the keyboard.
Examples: Devin (by Cognition), GitHub Agents
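The end-to-end flow a Level 5 agent automates can be summarized in one function. Everything here is a hypothetical stand-in: real systems integrate with an issue tracker and a code host, and `implement` would wrap something like the Level 4 loop above.

```python
# Illustrative sketch of the end-to-end Level 5 flow. All callables and the
# Issue type are hypothetical placeholders for tracker/code-host integrations.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Issue:
    id: int
    title: str

def autonomous_cycle(
    pick_next_issue: Callable[[], Issue],       # read the backlog
    implement: Callable[[Issue], None],         # plan, code, test, iterate
    open_pull_request: Callable[[str, str], str],
) -> str:
    issue = pick_next_issue()
    implement(issue)
    return open_pull_request(issue.title, f"Automated change for #{issue.id}")
```

The human, if involved at all, re-enters the picture only at pull-request review time.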
Conclusion
The six-level framework is intentionally simple—its purpose is to give teams a shared vocabulary, not to perfectly capture every possible tool or edge case. Naturally, many solutions fall somewhere in between levels or span multiple ones.
In our workshops and work with development teams, we’ve seen a clear pattern: Levels 0 to 2 are easy to adopt and integrate into daily workflows. Even Level 3 can be introduced with minimal disruption. But moving beyond that, to Levels 4 and 5, requires a fundamental shift in how we work as developers. As discussed in our article on AI adoption failures in enterprises, a structured approach with defined workflows and targeted training is often essential for success.
It’s also important to stress that these levels are not a hierarchy where higher always means better. Different tasks require different approaches. The most effective AI adopters are those who fluidly move between levels—for example, letting a Level 4 agent do the heavy lifting, switching to Level 3 for refinement, and applying final tweaks at Level 2, Level 0, or even manually. Mastery of the full spectrum gives developers both control and leverage.
Finally, we acknowledge that today’s tools still have UX gaps—especially between Levels 0–2, 3–4, and 5. Closing these gaps is a critical opportunity for tool builders aiming to create the next generation of truly seamless AI coding experiences.
Want to Bring AI Coding to Your Team?
We at EclipseSource help teams adopt AI-native software engineering practices through structured methodology, hands-on training, and tailored tooling—designed to fit your enterprise context.
Let’s discuss how we can support your transformation into a team that thrives with AI.