Eclipse Theia 1.63 Release: News and Noteworthy
July 10, 2025 | 14 min Read

We are happy to announce the Eclipse Theia 1.63 release! The release contains a total of 99 merged pull requests. In this article, we will highlight some selected improvements and provide an overview of the latest news around Theia.
For those new to Eclipse Theia, it is the next-generation platform for building IDEs and tools for the web or desktop, based on modern state-of-the-art web technologies. With Theia AI, part of the Theia platform, you can also build AI-powered tools and IDEs with ease. For more details about Eclipse Theia, please refer to this article and visit the Theia website.
Current Theia project stats on Open Hub.

Important Update for Users of the AI Features in the Theia IDE and for Theia AI Adopters:
Theia AI 1.63 introduces significant changes to prompt and function identifiers used by built-in agents. If you’re customizing or extending prompts or using LLM tools in your workflows, please read the section below carefully to ensure compatibility with the updated naming conventions and function definitions.
The Theia project also releases a product, the Theia IDE. The Eclipse Theia IDE is a modern, AI-powered, and open IDE for cloud and desktop environments, aimed at end users. The Theia IDE is based on the Theia platform and also includes advanced AI-powered features. For more details, see the Theia IDE website.
If you are looking for a simple way to check out the new release, please download and install the Theia IDE, which is based on Theia 1.63.
Eclipse Theia 1.63: Selected features and improvements
In the following, we will highlight some selected improvements in the new release. As usual, we cannot mention all 99 improvements; instead, we focus on the most notable changes as well as those visible to end users. The corresponding pull requests are linked under the respective headings where applicable.
Improvements for Task Context Management
Task Contexts, originally introduced in 1.62, have been extended and refined in Theia 1.63 to offer a more structured and reproducible approach to working with AI agents. Users can now automatically externalize prompts into editable, version-controlled files that serve as the foundation for AI-assisted development workflows. These improvements streamline collaboration and enhance the accuracy of generated plans and code.
The following video demonstrates the updated workflow in action:
For more details, see the article: 👉 Structure AI Coding with Task Context
Agent Mode for Theia Coder
Theia Coder now supports a new autonomous workflow called Agent Mode. While the existing Edit Mode already allows for structured, user-controlled interactions with AI - such as generating, reviewing, and committing code step by step - Agent Mode shifts the model into a more independent role.
When activated, Agent Mode grants Theia Coder full access to the workspace, allowing it to plan, implement, test, and refine code with minimal user input. It can write and modify files, compile and execute code, evaluate the results, and even resolve issues it encounters along the way. Despite its autonomy, all changes remain traceable through the IDE’s built-in version control and change tracking features.
Users can enable Agent Mode by selecting Theia Coder in the AI Configuration View and choosing the agent-mode prompt. For more complex tasks, the mode can be combined with advanced language models such as GPT-4.1, the latest Gemini models, or Sonnet 4. Notifications on completion help track the progress of longer operations.
Agent Mode works best when paired with well-defined prompts and with Task Contexts (see the previous section). It is especially useful for greenfield development, complex features, bug fixes, and other multi-step tasks. Under the hood, it relies on improved prompt design and access to new workspace tool functions, which are also available to users building custom agents with Theia AI.
The following video demonstrates the new agent mode in action:
For more details, see the following article: 👉 Theia Coder Agent Mode: From AI Assistant to Autonomous Developer
AI-driven E2E Testing Agent
AI-based end-to-end testing is now available directly in the Theia IDE through the new App Tester Agent, powered by Theia AI and the Model Context Protocol. This new agent enables developers to run fully automated tests on web applications simply by prompting the agent in natural language. The App Tester interacts with your app like a real user, using browser automation to click buttons, enter text, and evaluate results without requiring any test code or manual setup.
Once an application is launched - potentially via another Theia AI agent like the Coder Agent - you can instruct the App Tester to validate specific functionality. The agent will open a browser session through the integrated Playwright MCP Server, generate and execute test scenarios, compare expected versus actual outcomes, and return a structured report. These reports can then be used to trigger automated fixes using the Coder Agent, enabling a seamless prompt → test → fix loop.
The App Tester supports complex applications, including those requiring logins or containing domain-specific flows, by allowing developers to adjust the prompts or supply tailored usage instructions. This makes it a flexible and extensible solution for diverse testing needs.
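As a side note, the App Tester builds on the Playwright MCP server. If you want to set up a similar local MCP server yourself, an entry along the following lines can be added to your MCP server preferences. This is only a sketch: the Theia IDE already ships a suitable configuration for the App Tester, and the exact preference structure may differ in your version.

"playwright": {
  "command": "npx",
  "args": ["@playwright/mcp@latest"]
}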
A full demonstration of the App Tester is available in the video linked below:
For more details, see the following article: 👉 AI-driven E2E Testing in Theia IDE
Support for Remote MCP Servers
Theia 1.63 supports connecting to remote Model Context Protocol (MCP) servers, expanding MCP capabilities beyond local environments. This enhancement allows users to integrate external MCP servers via HTTP, utilizing either StreamableHttp or Server-Sent Events (SSE) protocols.
Configuration is straightforward, requiring only the server URL and optional authentication details. The system automatically attempts to connect using StreamableHttp, falling back to SSE if necessary. Authentication tokens and custom header names can be specified to ensure secure connections.
Remote MCP servers can be managed similarly to local ones, including support for auto-start functionality. This feature enables access to enterprise services, reduces local resource usage, simplifies setup by eliminating the need for local server installations, and allows centralized management of MCP servers.
To utilize this feature, update your preferences to include the remote MCP server configuration and use the “MCP: Start MCP Server” command to establish a connection. For instance, connecting to the Cloudflare demo server can be achieved with the following configuration:
"cloudflare": {
"serverUrl": "https://demo-day.mcp.cloudflare.com/sse"
}
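If the remote server requires authentication, the same entry can additionally carry a token and, if needed, a custom header name. The field names in the following sketch are assumptions and should be checked against the MCP preference documentation of your Theia version:

"secured-server": {
  "serverUrl": "https://example.com/mcp",
  "serverAuthToken": "<your-token>",
  "serverAuthTokenHeader": "Authorization"
}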
Once connected, AI assistants can leverage the tools provided by the remote MCP server to enhance their responses. The video below shows how we connect to an example remote server from Cloudflare and then use the Universal agent to call the available function, which returns some information about an event.
Support for installing MCP Servers as VS Code Extensions
Theia now supports installing MCP servers directly through VS Code extensions. This update removes the need for manual configuration steps and allows users to set up MCP servers as easily as installing any other extension.
Previously, integrating a Model Context Protocol (MCP) server with Theia required editing IDE settings and understanding internal configuration details. With the new extension-based installation, the process is streamlined, making it much more accessible for users who want to extend their IDE with AI-enabled tools. The following video explains the new capability in detail.
This change also encourages a more modular and unified ecosystem for AI-native development tools. Server publishers are now able to distribute their MCP servers via the same mechanisms as other extensions, promoting broader adoption and easier integration into agent-based workflows.
Support for Images
Theia AI 1.63 introduces support for image inputs across the entire AI system, including the LLM communication layer, agent framework, and default Chat UI. If the selected LLM is capable of handling images, users can now include visual inputs alongside text during conversations. This enhancement allows for more intuitive and efficient interactions, especially in cases where visual context significantly improves understanding.
In the example shown below, a screenshot of a broken webpage layout is shared with Theia Coder. Without requiring further description, the agent is able to interpret the visual issue and suggest an appropriate fix. This new capability opens up additional workflows where visual context is essential for accurate assistance.
Agent to Agent Delegation
Theia AI 1.63 introduces agent-to-agent delegation, enabling multi-agent workflows. This is realized via a tool function that allows agents to delegate prompts to other agents and retrieve the results within the same conversation flow. This delegation is visualized through a collapsible section that embeds the delegated chat, helping users maintain a clear overview without cluttering the main interaction.
In addition to displaying the delegated conversation, the feature also propagates any resulting change sets back to the main chat, ensuring a consistent and unified editing experience. The delegated chat supports streaming responses, providing a seamless and responsive interaction even when the task is complex or ongoing.
Agents can now explicitly delegate tasks to specialized peers such as Architect, Coder, or any custom Agent, depending on the nature of the prompt. Users of the AI-powered Theia IDE can also prompt the delegation directly.
The screenshot below shows an example use case of agent-to-agent delegation in action. In this demo, the AppTester agent has been configured to support direct delegation to Theia Coder. This is enabled by adding the following line to the AppTester prompt template:
**Delegate Fixing Issues**: In case there were any issues,
delegate to 'Coder' to fix the issue and rebuild the application
using this function: **~{delegateToAgent}**
At the end of a test run, the AppTester identifies issues related to incorrect multiplication behavior in the tested application. Instead of attempting a fix itself, the AppTester suggests that the user delegate this task to Theia Coder. Upon delegation, the conversation with Theia Coder appears in a collapsed section within the main chat, showing how it processes the issue and generates a suitable fix. This setup illustrates how agent specialization and delegation can streamline workflows by automatically directing tasks to the most appropriate AI agent. Of course, agents can also delegate to other agents without user confirmation depending on the prompt instructions.
Control over Tool Calls
Theia 1.63 introduces a user-configurable tool call confirmation system for AI agents. As AI agents increasingly rely on tools - especially when using MCP servers or remote MCP servers - this feature provides users with more control over how and when tools are used.
Each tool can now be individually configured with one of three modes:
- Disabled: The tool cannot be executed.
- Confirm: The user is prompted for approval each time the tool is called.
- Always Allow: The tool is executed immediately without confirmation.
These settings can be adjusted via the AI Configuration view or directly in the settings.json file. The default behavior is “Always Allow,” but users can define a global default and override it per tool.
Users can also decide to approve or deny a tool call once, for the current session, or persistently in their settings. A dedicated configuration UI simplifies managing these preferences per tool. The following screenshot shows how we set one function of the GitHub MCP server to “Confirm”. When this function is used via an agent, the user is asked to approve or deny the tool call.
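Conceptually, the persisted preferences map each tool to one of the three modes, with a global default as fallback. The preference key, tool name, and value literals in the following sketch are placeholders for illustration rather than the exact identifiers used by Theia:

"ai-features.toolConfirmation": {
  "*": "always_allow",
  "github_create_issue": "confirm"
}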
Theia AI: Improved Efficiency and Robustness
Theia 1.63 contains various improvements for efficiency and robustness across tool functions, AI communication, and code completion workflows.
The workspace search function has been refined to allow specifying file extensions, enabling more targeted searches within the workspace (#15704). Additionally, search results now display relative paths, providing clearer context for file locations (#15703). These improvements contribute to a more efficient context retrieval for agents accessing the workspace.
In the realm of AI communication, error messages are now filtered out from the messages sent to the language model, ensuring that only relevant information is processed (#15699). Furthermore, the system now allows for an increased number of retries before failing, which is especially useful when dealing with rate limits of the underlying LLMs (#15720).
Finally, the capabilities of the AI-driven Code Completion have also been enhanced, with ongoing work to improve the efficiency, accuracy and relevance of suggestions provided to developers (#15730, #15715).
Start Chats from Code Editor
Theia 1.63 introduces a new way to initiate AI chat sessions directly from the editor context. Users can start a session by right-clicking anywhere in a file - either at the cursor position or with a selection - and choosing the “Ask AI” option. Alternatively, the shortcut Ctrl+I can be used to trigger the same action. The context of the chat includes information about the current editor state, such as the selected range or the cursor location, which helps the AI provide more relevant responses.
The screenshot below shows an example where Theia Coder is used to generate a test case for a specific function in a file.
Context Variables for Open Editors and Editor Context
This release introduces two new variables that enhance context awareness in AI interactions within the editor.
The first variable, #openFilesRelative (with the shorthand #_ff), provides a comma-separated list of all currently open files in the workspace, relative to the workspace root. This allows users to reference multiple files at once in prompts, which is particularly useful for tasks involving broader workspace context. The list is updated dynamically as files are opened or closed, ensuring accuracy in real time.
The second variable, #editorContext, captures the current cursor position or selection in the editor. This enables users to easily include focused code snippets in their prompts. For instance, selecting a piece of code and referencing #editorContext in a chat message allows the AI to operate directly on the selected content. This variable mirrors the context behavior used in the “Start Chat from Code Editor” feature described earlier, streamlining workflows that depend on precise code context.
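As a purely illustrative prompt (the referenced function and helpers are hypothetical), both variables can be combined in a single chat message:

Write a unit test for the function selected in #editorContext.
Reuse the test helpers already present in the open files: #openFilesRelative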
Consolidate Prompts, Prompt IDs and Function Names
Theia AI 1.63 includes a consolidation of prompt and function identifiers used by built-in agents. All built-in prompt IDs now follow a unified naming convention in the format agentname-system-variant. This change ensures more predictable behavior across different agents and variants. Users who have customized or extended built-in prompts should update their custom prompt files to match the new naming pattern; otherwise, the built-in agents will revert to their default prompts.
In addition to prompt changes, function names used by LLM tools for file interactions have been updated. As part of the introduction of Theia Coder Agent mode, the two core functions for modifying file content are now explicitly separated: ~{writeFileContent} applies changes directly to the file, while ~{suggestFileContent} proposes changes through a change set mechanism. If your prompt templates rely on these functions, make sure to update them to the new identifiers. Examples of these changes can be found in the latest built-in Coder prompt templates.
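As a rough illustration of the new identifiers, a custom prompt template could instruct the agent as follows. The surrounding wording is invented; only the two function names come from this release:

To propose changes for user review via a change set, use **~{suggestFileContent}**.
To apply changes directly to the file, use **~{writeFileContent}**.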
Several smaller prompt adjustments have also been made to improve the overall behavior and reliability of the standard agents.
Improved Support for LLMs hosted via Ollama
Theia 1.63 introduces enhanced support for Ollama 0.9.0, bringing several key improvements to the integration of local or self-hosted LLMs into Theia AI.
The update adds support for streaming tool calling, allowing users to observe tool interactions in real time during generation. Models running via Ollama now follow a similar interaction pattern as those from providers like OpenAI and Anthropic.
Another notable addition is support for explicit reasoning output. The updated Ollama provider can now distinguish between reasoning steps and final output at the API level. These reasoning blocks are rendered as collapsible sections in the UI, making it easier to follow the thought process of the model without cluttering the final response.
Token usage statistics are now available for Ollama in the AI Configuration View under a dedicated tab. Although local setups may not be cost-sensitive, this feature can help users monitor performance and resource usage.
Finally, the release also introduces preliminary support for passing images to LLMs (via Ollama) as part of the chat context. This allows users to include visual information in conversations, assuming the selected model supports image input.
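For completeness, connecting Theia AI to a local Ollama instance typically only requires pointing the Ollama provider at the local endpoint and listing the models to use in the preferences. The preference keys and model names below are assumptions for illustration; the default Ollama endpoint is http://localhost:11434:

"ai-features.ollama.ollamaHost": "http://localhost:11434",
"ai-features.ollama.ollamaModels": ["llama3.1", "qwen3"]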
As always, the 1.63 release contains much more than described in this article. For example, compatibility with VS Code extensions has been upgraded to API version 1.101.1, and Electron has been upgraded to version 36.4.0.
For a complete overview of all Theia AI updates in this release, please refer to the collecting epic for 1.63. All these features and improvements (99 in total) were the result of one month of intensive development. Eclipse Theia follows a monthly release schedule, and we are looking forward to the next release due next month - stay tuned! To be notified about future releases, follow us on LinkedIn or follow Theia on Twitter, and subscribe to our mailing list.
If you are interested in building custom tools or IDEs based on Eclipse Theia, EclipseSource provides consulting and implementation services for Eclipse Theia, for AI-powered tools, as well as for web-based tools in general.
Furthermore, if you want to extend Theia, Theia AI or the Theia IDE with new features or strategically invest into the project, EclipseSource provides sponsored development for Theia, too. Finally, we provide consulting and support for hosting web-based tools in the cloud.
👉 Get in contact with us to discuss your use case!
Stay Updated with Our Latest Articles
Want to ensure you get notifications for all our new blog posts? Follow us on LinkedIn and turn on notifications:
- Go to the EclipseSource LinkedIn page and click "Follow"
- Click the bell icon in the top right corner of our page
- Select "All posts" instead of the default setting