Domain-specific AI Extensions in VS Code: Worth Another Look

March 19, 2026 | 10 min read

In the early days of IDE AI integrations, the Visual Studio Code extension API offered very limited support for custom AI features. While the native VS Code chat interface and its integration with GitHub Copilot became increasingly sophisticated, third-party extension developers were largely left behind. If a team wanted to contribute domain-specific AI support to VS Code, they typically had to implement custom webviews from the ground up to handle LLM communication, user interaction, and tool calling. This limitation led to the rise of heavily customized extensions like Continue, and in more extreme cases, complete forks of the VS Code editor such as Cursor, Windsurf, and Google Antigravity.

Over the past year, the VS Code Extension API has grown significantly. It now provides contribution points across the entire AI integration stack. So it is worth re-evaluating the pros and cons of building on top of VS Code’s native AI APIs versus creating custom chat UIs.

But what do we mean by domain-specific AI support? Think of specialized IDE use cases, such as tools with domain-specific languages, modeling environments with custom editors and underlying data models, device configurators, hardware design tools, or industrial configurators. Domain-specific AI goes beyond general-purpose code assistance: it understands your domain’s concepts, workflows, and terminology, and directly interacts with your tool’s capabilities through APIs, MCPs, and Skills. It integrates domain data, such as hardware catalogues, compliance guides, or product databases, and orchestrates all of this in an agentic, autonomous, multi-step process to help domain users accomplish real tasks within their specialized environment.

In this article, we take another look at the current capabilities of the VS Code AI API, the architectural benefits, and the trade-offs you need to consider.

The Current State of the VS Code AI Stack

The VS Code API now allows developers to hook into the AI workflow at multiple levels. Depending on your product requirements, you can choose to silently interact with language models in the background, provide specialized tools to existing agents, or take complete control over a user-facing chat session.

1. The Language Model API

At the lowest level, VS Code gives extensions direct programmatic access to language models. This means you can incorporate AI capabilities into any standard extension feature, such as code actions, hover providers, or custom views, without relying on the chat interface.

You can request access to the models provided by the user’s Copilot subscription using the Language Model API. You specify the model family you need and VS Code handles the routing.

import * as vscode from 'vscode';

async function generateVariableNames(text: string, token: vscode.CancellationToken) {
    // Select a model family available via the user's Copilot subscription, e.g. 'gpt-4o'
    const [model] = await vscode.lm.selectChatModels({ family: '...' });

    // Bail out if no matching model is available
    if (!model) return '';

    const messages = [
        vscode.LanguageModelChatMessage.User('Suggest three better names for this variable.'),
        vscode.LanguageModelChatMessage.User(text)
    ];

    const response = await model.sendRequest(messages, {}, token);

    let result = '';
    for await (const fragment of response.text) {
        result += fragment;
    }
    return result;
}

This streamlined approach is highly effective for building editor-specific interactions that need AI processing in the background, such as generating inline code annotations or intelligently renaming symbols.
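The streamed response arrives as plain text, so extensions typically post-process it before surfacing it in the UI. As a small sketch (the list-parsing rules here are our own assumption, since the exact response format depends on the model and prompt), a helper that extracts the suggested names from a numbered or bulleted list:

```typescript
// Illustrative sketch: extract individual suggestions from a model
// response formatted as a numbered or bulleted list. The format is an
// assumption; real responses vary by model and prompt.
function parseSuggestions(raw: string): string[] {
    return raw
        .split('\n')
        .map(line => line.replace(/^\s*(?:\d+[.)]|[-*])\s*/, '').trim())
        .filter(line => line.length > 0);
}
```

In the generateVariableNames example above, you would run the accumulated result through such a parser before, for example, offering the names as rename candidates.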

2. Extending Agents with Tools

If you want your extension to participate in autonomous coding workflows, you can contribute Language Model Tools. In agent mode, the LLM orchestrates tasks and can automatically invoke your extension’s tools based on the user’s prompt.

You define the tool’s input parameters using a JSON schema in your extension manifest. You then implement the execution logic in your extension code. Because the tool runs in the extension host process, it has full access to the VS Code extension APIs. This allows you to read the active debugging context or analyze the workspace state.

Here is an example of registering and implementing a tool that counts open editor tabs:

import * as vscode from 'vscode';

class TabCountTool implements vscode.LanguageModelTool<any> {
    async invoke(
        options: vscode.LanguageModelToolInvocationOptions<any>,
        token: vscode.CancellationToken
    ) {
        // Runs in the extension host, so the full VS Code API is available
        const group = vscode.window.tabGroups.activeTabGroup;
        return new vscode.LanguageModelToolResult([
            new vscode.LanguageModelTextPart(`There are ${group.tabs.length} tabs open.`)
        ]);
    }
}

export function activate(context: vscode.ExtensionContext) {
    context.subscriptions.push(
        // The tool name must match the declaration in the extension manifest
        vscode.lm.registerTool('myext_tabCount', new TabCountTool())
    );
}

The language model evaluates the tool’s description and schema to decide when to call it, injecting the LanguageModelToolResult back into its context to continue reasoning.
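For completeness, the manifest side of this registration: the tool’s name, description, and input schema go under the languageModelTools contribution point in package.json. The wording of the descriptions below is illustrative; the name must match the one passed to vscode.lm.registerTool:

```json
"contributes": {
    "languageModelTools": [
        {
            "name": "myext_tabCount",
            "displayName": "Tab Count",
            "toolReferenceName": "tabCount",
            "canBeReferencedInPrompt": true,
            "modelDescription": "Counts the number of open editor tabs in the active tab group.",
            "inputSchema": { "type": "object", "properties": {} }
        }
    ]
}
```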

3. Owning the Conversation with Chat Participants

When you need to control the end-to-end interaction flow and response behavior, you can implement a Chat Participant. Users invoke your participant by typing an at-symbol followed by your participant name in the chat view, such as @<your-participant>.

A chat participant receives the raw user prompt and is responsible for handling the entire interaction. This is the correct choice when your extension needs to provide a domain-specific expert. You have complete control over the system instructions sent to the model and can define how the history is formatted.

Implementing a participant requires registering a ChatRequestHandler that streams content back to the UI:

import * as vscode from 'vscode';

const handler: vscode.ChatRequestHandler = async (request, context, stream, token) => {
    const [model] = await vscode.lm.selectChatModels({ family: 'gpt-4o' });

    // Construct prompt including custom system instructions
    const messages = [
        vscode.LanguageModelChatMessage.User('You are a senior database architect...'),
        vscode.LanguageModelChatMessage.User(request.prompt)
    ];

    const chatResponse = await model.sendRequest(messages, {}, token);

    // Stream markdown back to the user
    for await (const fragment of chatResponse.text) {
        stream.markdown(fragment);
    }

    // Render an interactive button in the chat response
    stream.button({ command: 'myext.applySchema', title: 'Apply Schema' });
};

export function activate(context: vscode.ExtensionContext) {
    // Register the participant and dispose of it when the extension deactivates
    context.subscriptions.push(
        vscode.chat.createChatParticipant('myext.dbExpert', handler)
    );
}

The ChatResponseStream allows you to push markdown, progress messages, and interactive buttons directly to the native chat window, creating a rich user experience.
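Since the participant also decides how conversation history is formatted, a common pattern is to flatten previous turns into the prompt. The sketch below uses simplified stand-in types; in a real handler you would map vscode’s context.history (ChatRequestTurn and ChatResponseTurn entries) to this shape first:

```typescript
// Simplified stand-in for vscode's chat history turns; a real handler
// would map context.history entries to this shape.
interface Turn { role: 'user' | 'assistant'; text: string; }

// Sketch: flatten the most recent turns into a plain transcript that can
// be prepended to the prompt. Keeping only the last few turns is a design
// choice to bound prompt size.
function formatHistory(turns: Turn[], limit = 6): string {
    return turns
        .slice(-limit)
        .map(t => `${t.role === 'user' ? 'User' : 'Assistant'}: ${t.text}`)
        .join('\n');
}
```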

4. Advanced Output Rendering

Historically, the chat output in VS Code was restricted to a fixed set of renderers. Extensions could return markdown, code blocks, buttons, file trees, and progress messages. This was a significant limitation for domain-specific AI extensions that required complex visualizations.

Recently, the API added the chatOutputRenderers contribution point. This powerful feature allows extensions to render specific JSON data generated by the language model in a custom webview directly within the chat interface. If your extension needs custom UI inside the chat, make sure you evaluate this option.

You declare the output renderer and the mime types it handles in your package.json:

"contributes": {
    "chatOutputRenderers": [
        {
            "id": "myext.chartRenderer",
            "mimeTypes": ["application/vnd.myext.chart-data"]
        }
    ]
}
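On the producing side, your tool or participant emits JSON matching the declared mime type, and VS Code hands it to your renderer’s webview. A minimal sketch of building such a payload (the chart fields themselves are our own invention for illustration):

```typescript
// Mime type matching the chatOutputRenderers contribution above;
// the shape of the chart data is illustrative.
const CHART_MIME = 'application/vnd.myext.chart-data';

interface ChartData {
    title: string;
    labels: string[];
    values: number[];
}

// Sketch: serialize chart data for the custom renderer, validating that
// labels and values line up before emitting.
function buildChartPayload(data: ChartData): string {
    if (data.labels.length !== data.values.length) {
        throw new Error('labels and values must have the same length');
    }
    return JSON.stringify(data);
}
```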

5. MCP and MCP Apps Integration

VS Code fully supports the Model Context Protocol (MCP). MCP tools provide a standardized way to integrate external services. If your tool needs to work across different environments outside of VS Code or runs as a remote service, building an MCP server is often the better architectural choice. You can programmatically register these servers right from your extension:

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    context.subscriptions.push(vscode.lm.registerMcpServerDefinitionProvider('myMcpProvider', {
        provideMcpServerDefinitions: async () => {
            return [new vscode.McpStdioServerDefinition({
                label: 'my-database-mcp-server',
                command: 'node',
                args: ['server.js'],
                cwd: vscode.Uri.file('/path/to/server')
            })];
        }
    }));
}
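Under the hood, the server.js registered above talks JSON-RPC as defined by the MCP specification. In practice you would generate these messages with an MCP SDK rather than by hand, but as a rough sketch of the wire format, a tools/list response advertising a single (illustrative) tool looks like this:

```typescript
// Sketch of an MCP tools/list JSON-RPC response. The tool name and schema
// are illustrative; real servers should build this via an MCP SDK.
function toolsListResponse(id: number) {
    return {
        jsonrpc: '2.0',
        id,
        result: {
            tools: [
                {
                    name: 'query_database',
                    description: 'Run a read-only SQL query against the project database.',
                    inputSchema: {
                        type: 'object',
                        properties: { sql: { type: 'string' } },
                        required: ['sql']
                    }
                }
            ]
        }
    };
}
```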

The introduction of MCP Apps also enables tools to return interactive UI components that render inline, supporting complex workflows like forms, visualizations, and drag-and-drop interfaces. This bridges the gap between the native chat UI and the highly customized webviews used by older extensions.

Strategic Benefits of the Native API

If you are evaluating the architecture of a new AI extension, building on the native VS Code API offers several compelling advantages over maintaining a custom webview:

  • Subscription Management: Users can reuse their existing Copilot subscriptions to power your extension’s AI features. This drastically simplifies your architecture because you do not have to provision API keys or manage subscriptions yourself.
  • User Interface Consistency: By leveraging the built-in API, you avoid the overhead of bringing your own custom chat UI in a webview. You can reuse the powerful chat UI of VS Code.
  • Discoverability: When you contribute tools or MCP servers to the agent mode, users benefit from your capabilities immediately. The agent can autonomously discover and invoke your tools based on the user’s intent without requiring the user to explicitly configure or call your extension.
  • Contextual Depth: Because your tools and participants run within the extension host, they can access the state of your extension and of any editors or views you contribute. With the VS Code extension API, you can also pull in general context, such as the active line of code, read the file system, or analyze the current debug stack trace to construct accurate prompts.

Trade-offs and Limitations

Despite the significant growth of the API, relying entirely on the native VS Code integration stack comes with drawbacks that may impact your specific use case.

  • Agent Mode Isolation: You can only seamlessly add capabilities to the agent mode. If a user operates within the standard ask or edit modes, your tools will not be invoked automatically. While you can register custom chat participants that work across modes, users must explicitly call them using the at-mention syntax, which is often less discoverable.
  • Lack of System Prompt Control in Agent Mode: As an extension author, you cannot overwrite or control the overarching system message used by Copilot in agent mode. You also cannot contribute custom agent modes, which would give extension developers at least some control over the agent’s system instructions: while users can create their own custom modes to amend these instructions, you cannot ship such a mode with your extension. If your application requires strict adherence to a specific persona or workflow, you must build a custom chat participant to bypass the default agent instructions.
  • Tool Competition: When you register a tool, you are competing with built-in Copilot tools and tools provided by other extensions. Since the model selects tools based on their descriptions and schemas, a vague description can mean your tool is never invoked.
  • Guest in Someone Else’s Product: As an extension, you are fundamentally a guest in VS Code. You cannot remove or replace existing concepts, menu entries, agents, modes, or default behaviors. You can only add to what is already there. In practice, this means users are exposed to the full VS Code and Copilot feature set alongside your extension’s capabilities, with no way for you to streamline or hide unrelated options. Depending on the complexity of your use case, this can leave users without a clear, guided path through your intended product experience.

These limitations are intentional design choices by the VS Code team. VS Code is an extensible code editor with its own default Copilot AI support following a specific user experience philosophy. For instance, the ask and edit modes are deliberately designed to avoid making unexpected changes to the workspace. Allowing third-party tools to execute silently in these modes could potentially break that promise. Users expect the default Copilot behavior to remain consistent unless they explicitly address a different participant.

Conclusions

The generic chat UI and the VS Code AI extension API have matured into a highly viable option for adding advanced domain-specific AI capabilities to the editor. For most development tools, the benefits of reusing the user’s Copilot subscription and integrating into the native UI far outweigh the limitations.

This is ultimately a strategic decision, and for many cases it is not a clear “either/or”. We have implemented both models, extending VS Code and building on a dedicated platform, for our customer projects. If you need an experienced sparring partner to discuss which path is right for your endeavour, get in contact with us; we are happy to support your project.

If your product vision requires taking more control over the default system messages, you are diverging from the intended extensibility concept of VS Code. In those scenarios, you will likely need to rely on your own chat UI, as Continue does, or fork VS Code (usually not a recommended option). In either case, you are still extending an existing product, VS Code, and remain strategically dependent on the direction that product takes. If you instead aim to build your own product with full freedom to steer it as you see fit, you should use a dedicated platform for custom tools and IDEs such as Eclipse Theia, which, together with Theia AI, offers a fully customizable AI agent and chat UI framework for building your own tool product with complete control.

Interested in building AI-powered tools? EclipseSource provides consulting and implementation services backed by our extensive experience in tool development. We also specialize in tailored AI assistance for web- and cloud-based tools and professional support for Eclipse Theia and VS Code.

Contact us to discuss your project.

Stay Updated with Our Latest Articles

Want to ensure you get notifications for all our new blog posts? Follow us on LinkedIn and turn on notifications:

  1. Go to the EclipseSource LinkedIn page and click "Follow"
  2. Click the bell icon in the top right corner of our page
  3. Select "All posts" instead of the default setting
Follow EclipseSource on LinkedIn

Jonas, Maximilian & Philip

Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource, specializing in consulting and engineering innovative, customized tools and IDEs, with a strong …