Eclipse Theia 1.60 Release: News and Noteworthy

April 10, 2025 | 14 min Read

We are happy to announce the Eclipse Theia 1.60 release! The release contains a total of 100 merged pull requests. In this article, we will highlight some selected improvements and provide an overview of the latest news around Theia.

For those new to Eclipse Theia, it is the next-generation platform for building IDEs and tools for the web or desktop, based on modern state-of-the-art web technologies. With Theia AI, part of the Theia platform, you can also build AI-powered tools and IDEs with ease. For more details about Eclipse Theia, please refer to this article and visit the Theia website.

Current Theia project stats, more on Open Hub.

Good news for Mac users: We now provide a signed Arm native build, as the x86 build runs slowly and unstably on recent Arm-based Macs. Please note that the automatic update from the previous experimental build to the new version will not work. Please download and re-install your Theia IDE manually once; check our download page for the latest version!

The Theia project also releases a product, the Theia IDE. The Eclipse Theia IDE is a modern, AI-powered, and open IDE for cloud and desktop environments, aimed at end users. The Theia IDE is based on the Theia platform and also includes AI-powered features. For more details, see the Theia IDE website.

If you are looking for a simple way to check out the new release, please download and install the Theia IDE, which is based on Theia 1.60.

Eclipse Theia 1.60: Selected features and improvements

In the following, we will highlight some selected improvements in the new release. As usual, we cannot mention all 100 improvements; instead, we focus on the most notable changes as well as changes visible to end users. The corresponding pull requests are linked under the respective headings where applicable.

Migration from PhosphorJS to Lumino

Theia 1.60 migrates its underlying widget and layout framework from PhosphorJS to Lumino, a more actively maintained fork developed by the JupyterLab community. While PhosphorJS served as a reliable foundation for Theia over several years, its lack of ongoing maintenance has increasingly led to issues, such as the need to patch it manually to support features like secondary windows.

The migration to Lumino aims to improve long-term maintainability and align Theia with an active ecosystem. Although this change is largely internal, it represents a significant architectural shift. The migration enables future improvements while keeping the current behavior consistent, including continuing support for secondary windows using patches and workarounds that can now be incrementally removed through collaboration with the Lumino project.

The Theia widget abstraction provided through its public API remains unchanged. Therefore, most adopters should not need to adjust their code. However, if you have implemented customizations that directly depend on PhosphorJS internals, these will need to be updated to work with Lumino. We recommend that adopters carefully check UI elements related to the workbench for possible regressions, and test and report feedback early.

Tool function for Theia AI agents to retrieve diagnostics in files

A new tool function has been added to allow Theia AI agents to retrieve diagnostics for a specific file. When called, the function opens the file in an editor and collects all issues reported in the problem view. By opening the file, the function triggers feedback from tools that are typically designed to provide diagnostics to the user. This includes all diagnostics provided by VS Code extensions, such as language servers, linters, or spell checkers. These tools often only emit diagnostics for files that are actively open in an editor, making this step essential for comprehensive issue detection. The new function is integrated directly into the Theia Coder agent, enabling it to analyze a file’s state and offer automated fixes based on the gathered diagnostics. See the following screenshot for a demo. As you can see, the currently opened file has some issues reported in the problem view, which Theia Coder then fixes fully automatically.
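The flow described above can be sketched as follows. This is a minimal illustration with assumed interface and function names, not the actual Theia API:

```typescript
// Illustrative sketch only: 'Diagnostic', 'Workbench' and 'getDiagnosticsTool'
// are assumed names, not the real Theia AI tool function API.
interface Diagnostic {
    line: number;
    severity: 'error' | 'warning';
    message: string;
}

// Stand-in for the parts of the workbench the tool function relies on.
interface Workbench {
    // Opening the file triggers diagnostics from language servers, linters,
    // etc., which often only report on files that are open in an editor.
    openEditor(uri: string): Promise<void>;
    // What the problems view currently shows for the given file.
    getDiagnostics(uri: string): Diagnostic[];
}

async function getDiagnosticsTool(workbench: Workbench, uri: string): Promise<string> {
    // Essential step: open the file so diagnostics providers start reporting.
    await workbench.openEditor(uri);
    const diagnostics = workbench.getDiagnostics(uri);
    // Serialize the result so an agent like Theia Coder can reason about it
    // and propose fixes.
    return JSON.stringify(diagnostics, null, 2);
}
```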

Theia Coder retrieving and automatically fixing issues in files.

Support for Google AI / Gemini (experimental)

Theia 1.60 introduces native support for Google AI as an LLM provider. While it was previously possible to connect Theia AI to Google’s models via their OpenAI-compatible API, that approach depended on an alpha-stage implementation and proved to be unreliable in many cases. With this release, Theia now provides a dedicated integration for Google AI, offering a significantly more stable and seamless experience.

The new integration supports the entire Gemini and Gemma model families, including the lightweight Gemini Flash and the recently released Gemini Pro 2.5. The latter has shown strong performance in both general-purpose tasks and coding-related benchmarks. This makes it a compelling choice for users exploring alternatives to existing models.

Google AI integration with support for the Gemini family of models.

Another benefit of using Google AI is the ability to experiment with the models at no cost. Google offers a free tier with rate limits, and users who create a billing account may receive initial credits to get started. This makes it easy to try out the new integration without upfront costs and share feedback with the community.

👉 See the documentation on how to set up Google AI (and other LLM providers)

Chat-specific model request settings (experimental)

Theia AI 1.60 introduces an experimental feature that allows users to define custom request settings per individual chat session. While it was already possible to configure request settings for specific language model and provider combinations in a global or static way, this new feature adds flexibility by enabling on-the-fly adjustments within the context of a single conversation.

Users can now click a new icon in the top-right corner of a chat window to access this functionality. The settings must currently be entered manually as text. For example, users can adjust the temperature for a particular session to make the language model more or less creative. In a demonstration video, adjusting this parameter for the Theia Coder agent results in the generation of more imaginative code examples.
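As an illustration, the settings text for a session could look like the following. Which keys are accepted depends on the selected provider; `temperature` and `max_tokens` are common examples and are shown here as assumptions:

```json
{
  "temperature": 1.0,
  "max_tokens": 2048
}
```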

Demonstration of how adjusting the temperature parameter affects AI response creativity.

This feature also unlocks the ability to use provider-specific parameters, such as Claude’s new “thinking mode,” which will be discussed in the following section. While still in an experimental phase, this addition lays the foundation for more detailed and session-specific control of AI behavior. If you build a custom agent, you have these settings fully under control. Future updates are expected to improve the default user interface, especially for commonly used settings.

Thinking Mode for Claude

Theia 1.60 adds support for Claude’s “thinking mode” when using Sonnet-3.7. By setting a custom request parameter—either globally or for a specific chat session—you can instruct the model to “think more.” This is particularly useful for more difficult questions and shows its strengths when using agents like the Architect or Theia Coder on complex coding tasks.
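For illustration, Anthropic's API controls this via a `thinking` request parameter; a chat-specific setting along these lines (the exact wiring in Theia is an assumption here, so check the documentation for the supported format) could look like:

```json
{
  "thinking": {
    "type": "enabled",
    "budget_tokens": 8192
  }
}
```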

As shown in the following video, we first ask Sonnet-3.7 a fairly difficult question without thinking mode enabled. It responds quickly but with an incorrect answer. We then switch to a new chat session and enable thinking mode via a chat-specific setting. This time, the model takes noticeably longer to respond. To keep the video short, we switch to a previously completed session with the same setting, and it arrives at the correct solution.

Comparison between Claude's responses with and without thinking mode enabled.

As mentioned in the previous section, the UI for chat-specific settings is currently experimental. We aim to improve its usability in the future, including making options like enabling thinking mode more accessible. If you build a custom tool based on Theia AI, you might want to introduce your own way of exposing thinking mode to your users, or not expose it at all.

Prompt Fragments

The 1.60 Theia AI release introduces support for prompt fragments, enabling users to define reusable parts of prompts for recurring instructions given to an AI. These fragments can be referenced both in the chat interface (for one-time usage) and within the prompt templates of agents (to customize agents with reusable fragments). For example, users can define a prompt fragment that describes a specific task, provides workspace context, or states coding guidelines, and then reuse it across multiple AI requests without having to repeat the full text.

To support this functionality, Theia now includes a special variable “prompt” that takes the ID of a prompt fragment as an argument. In the following video, we demonstrate the usage of a prompt fragment to create a reusable workflow (documenting a file). We add a new directory to our workspace with a prompt template in it. We then make sure that the directory is configured as a location for prompt templates (also see next section). Now we can use the prompt fragment in the chat. We could also add it to the prompt template of an agent instead. Please note that for more complex workflows, Theia AI also makes it very easy to create custom agents from scratch.

Creating and using reusable prompt fragments for common workflows.

Note that prompt fragments can recursively reference other fragments, variables and tool functions, which is pretty useful for reusable additions to standard prompts, such as adding access to MCP servers (see below “MCP Config View and Improvements”). Overall, this feature simplifies the process of managing complex or repetitive prompt content and enhances the flexibility of AI-powered features in Theia.

Allow project specific prompt locations

In addition to the new prompt fragments (see previous section), with Theia 1.60 users can now specify workspace-relative directories, individual files, and relevant file extensions for prompt templates and fragments. Once configured, these templates are accessible via a shorthand format (e.g., #prompt:filename) in both the chat interface and agent prompt editors.

This feature supports two main use cases:

  1. Augmenting prompts with project-specific information: Developers can create a dedicated file—such as project-info.prompttemplate—to include domain knowledge, architectural decisions, or coding guidelines. When referenced via #prompt:project-info, this information can guide AI behavior and improve prompt relevance.
  2. Creating reusable project-specific prompts: Teams can maintain a collection of shortcut prompts for common actions like “generate a test according to specifics,” enabling consistent and efficient communication with AI agents across the project. See an example for this use case in the previous section. You can also override the prompts of the default agents with project specific versions.
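To make the second use case concrete, a team might keep a fragment like the following in a configured prompt directory. The file name and content are made up for illustration:

```text
Generate a unit test for the file under discussion. Follow these project conventions:
- use the existing test framework and file naming scheme of this workspace
- cover at least one edge case for empty or missing input
- keep tests independent of network access
```

Saved as, e.g., generate-test.prompttemplate in a configured prompt location, it could then be referenced in the chat or in an agent's prompt template via the shorthand #prompt:generate-test.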

In future releases, we may include preconfigured defaults such as #project-info.prompttemplate for specific agents like Coder or Architect.

Model Context Protocol (MCP) Config View and Improvements

Theia 1.60 now includes a new configuration view for MCP (Model Context Protocol) servers within the AI Configuration view. This new view provides users with a clearer overview of the configured MCP servers and their current states. Specifically, it lists all configured MCP server settings and displays each server’s status using well-defined states such as Running, Starting, Errored, or Not Running.
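For context, MCP servers are declared in Theia's settings. As a rough illustration (the exact preference key and schema may differ between versions, so please check the Theia AI documentation), a configuration for a Git MCP server could look like:

```json
{
  "ai-features.mcp.mcpServers": {
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git"]
    }
  }
}
```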

In addition to status information, the view shows all tools associated with each server. These tools can be easily copied for use in chat-based interfaces or prompt templates. When copying tools, you can choose to get a combined prompt fragment (resolving to a list of all available tools), a list of available tools (so you can review and restrict the used tools) or single tools (in case you want to only use a particular tool). Users can also start or stop individual MCP servers directly from the interface.

The following video shows the new view in action. We embed the tools from two example servers, the MCP Git server and the MCP search server into the chat. Later, we make the search tool part of the prompt of the universal agent, so we do not have to mention it in the chat anymore, but allow the agent to generally search if requested.

Using the new MCP configuration view to manage servers and integrate tools.

For a more detailed example on how to use MCP in Theia AI see:

👉 Let AI Commit (to) Your Work - With Theia AI, Git, and MCP

Under the hood, several improvements were made to support this feature. These include the introduction of a notification mechanism for server state changes, a shared interface for frontend services, and a more robust status handling system for MCP servers, so that frontend views can rely on status notifications instead of re-fetching tool data. A new endpoint has also been added to provide detailed server descriptions, including tool data and error messages. These improvements simplify the integration of MCP servers into your custom tools and IDEs based on Theia AI.

Customizable Welcome Messages for AI Chat View

Theia 1.60 introduces support for customizable messages in the chat view. This feature allows applications to provide their own interactive welcome message as well as disabled-AI message (shown if AI features are deactivated) through dependency injection. The default messages previously hardcoded in the chat view have been relocated to the ai-ide package and are now injected, making it possible to override them according to specific branding or user experience requirements.
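The override mechanism can be sketched as follows. The identifiers below are illustrative assumptions; the actual messages are bound via dependency injection in the ai-ide package:

```typescript
// Illustrative sketch: 'ChatViewMessages' and the class names are assumed,
// not the real symbols from the ai-ide package.
interface ChatViewMessages {
    welcomeMessage(): string;     // shown when the chat view has no active conversation
    aiDisabledMessage(): string;  // shown when AI features are deactivated
}

class DefaultChatViewMessages implements ChatViewMessages {
    welcomeMessage(): string {
        return 'Welcome to the AI chat. Ask a question to get started.';
    }
    aiDisabledMessage(): string {
        return 'AI features are currently disabled. Enable them in the preferences.';
    }
}

// A product built on Theia AI overrides the binding to apply its own branding:
class BrandedChatViewMessages extends DefaultChatViewMessages {
    override welcomeMessage(): string {
        return 'Welcome to MyTool. How can I help with your project?';
    }
}

// Stand-in for the DI container: the last binding wins, similar to a rebind.
const bindings = new Map<string, ChatViewMessages>();
bindings.set('ChatViewMessages', new DefaultChatViewMessages());
bindings.set('ChatViewMessages', new BrandedChatViewMessages());
```

In a real application, the override would be contributed through your Theia extension's injection module rather than a plain map, but the effect is the same: the chat view picks up your messages instead of the defaults.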

In the AI-powered Theia IDE, users will see either a general welcome message or a message indicating that AI features are disabled, depending on the current AI preference settings. The new welcome message (see screenshot below) is shown when the chat view is opened and no active conversation is available. However, as mentioned, you can now fully customize this for your own custom tool based on Theia AI.

Customizable welcome screen shown when opening the AI chat view.

Put all AI Prompts under MIT License

With the 1.60 release, all prompt templates in Theia AI and the AI-powered Theia IDE have been extracted into individual files and placed under the MIT license. The motivation for this change is to avoid licensing complications and give users and tool builders full freedom to customize prompts in their applications, without necessarily contributing back any adaptations. Since these templates are even user-editable at runtime through a UI in the Theia IDE, applying a permissive license like MIT ensures legal clarity and adaptability for downstream users. Of course, we are still happy if users contribute improvements to the existing prompts or ideas for new agents. However, if users or tool builders adapt prompts to domain- or project-specific needs, it might be neither feasible nor valuable to have the customized versions commonly available. We hope this encourages more innovation by making it easier for developers to adapt and share prompt templates without legal constraints.

Continued Enhancements for Theia, Theia AI and the AI-Powered IDE

In addition to the main features included in this release, we introduced several refinements across Theia, Theia AI, and the AI-powered Theia IDE. These improvements focus on usability, transparency, and better user control when working with AI capabilities.

To make interactions with the AI chat more efficient, a new shortcut syntax has been added for including the current file in the context of the conversation. Instead of using the full variable name #currentRelativeFilePath, users can now use #_f, making the input shorter and more convenient to type #15252.

We also updated the default language model to gpt-4.5-preview, which offers improved performance and capabilities over the previously used models. This model can now be selected directly in the configuration view without additional setup #15090. See also:

👉 Theia AI and the AI-powered Theia IDE support GPT-4.5-preview by default!

👉 Why Theia supports any LLM!

For better organization and usability of AI chat sessions, Theia now uses a language model to generate descriptive names for each session and tracks the last interaction date. This makes it easier to navigate between conversations and maintain an overview of recent activities. Users still have the option to manually assign names if preferred #15116.

Automatic chat session naming with timestamps for better organization.

Finally, we adjusted the default behavior of AI-based code completions. Instead of triggering suggestions automatically, completions are now invoked manually via Ctrl+Alt+Space. This change was made to reduce distraction and data transfer to the LLM unless explicitly requested by the user. The automatic mode remains configurable through preferences #15333.

For a complete overview of all Theia AI updates in this release, please refer to the collecting epic for 1.60.

As always, the 1.60 release contains much more than described in this article, e.g. support for the VS Code extension API has been upgraded to version 1.98.2. All these features and improvements (100 in total) were the result of one month of intensive development. Eclipse Theia follows a monthly release schedule. We are looking forward to the next release, due next month, so stay tuned! To be notified about future releases, follow us on LinkedIn, follow Theia on X, and subscribe to our mailing list.

If you are interested in building custom tools or IDEs based on Eclipse Theia, EclipseSource provides consulting and implementation services for Eclipse Theia, for AI-powered tools, as well as for web-based tools in general.

Furthermore, if you want to extend Theia, Theia AI or the Theia IDE with new features or strategically invest into the project, EclipseSource provides sponsored development for Theia, too. Finally, we provide consulting and support for hosting web-based tools in the cloud.

👉 Get in contact with us, to discuss your use case!

👉 Follow us on LinkedIn!

Jonas, Maximilian & Philip

Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource, specializing in consulting and engineering innovative, customized tools and IDEs, with a strong …