Theia AI Sneak Preview: Open and Transparent AI-driven Code Completion

September 18, 2024 | 5 min Read

Do you use AI code completion, and are you interested in what data is actually sent to and received from the underlying LLM? And do you sometimes feel you would like to add your own two cents to the prompt and influence how the LLM generates its results?

In this article we provide a sneak preview of Theia AI, a fully open AI framework for building AI capabilities into custom tools and IDEs. We will highlight two of Theia AI’s core capabilities: full control of the LLM communication and dynamic management of prompts. We use the code completion feature of the Theia IDE as an example to demonstrate these capabilities.

Theia AI is an open and flexible technology that enables developers and companies to build tailored AI-enhanced custom tools and IDEs. Theia AI significantly simplifies this task by taking care of base features such as LLM access, a customizable chat view, prompt templating and much more. This lets tool developers focus on engineering prompts for their use cases and on integrating them seamlessly into Theia’s editors and views, as well as into the tool provider’s custom editors and views. Theia AI is part of the Theia Platform and is ready to be adopted by tool builders who want full control over their AI solutions. Learn more about the vision of Theia AI.

The Theia IDE is a modern and open IDE built on the Theia platform. With version 1.54, the Theia IDE will integrate experimental AI support based on Theia AI to showcase AI-powered functionality in a highly customizable, transparent and open setting. Learn more about the Theia IDE.

AI-driven code completion is conquering the world of IDEs, as it is much more powerful than traditional approaches based on language services. In the screenshot below, we show a very simple example. Based on the method’s name (the “context”), the underlying AI is able to guess the line correctly. Simple as this example is, traditional code completion cannot do it.
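To give a purely illustrative idea of what this means (the function name and body below are made up for this article), consider a function whose name alone already describes its intent. An LLM-based completion can typically propose the body, whereas a language-service-based completion can only offer symbols that are already in scope:

```typescript
// Hypothetical illustration: the function name alone gives the model enough
// context to propose a plausible body.
function celsiusToFahrenheit(celsius: number): number {
    // An AI-driven completion will typically suggest something like:
    return celsius * 9 / 5 + 32;
}

// Traditional, language-service-based completion can only list identifiers
// and members that are already in scope; it cannot infer this line.
```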

However, as outlined in a previous article, most solutions keep it a secret how and what they communicate with the underlying LLM to enable this kind of code completion. In Theia AI, on which the example code completion feature is based, you have full control over this communication. This allows you to monitor exactly what is sent to and received from the underlying LLM, which is very helpful while iteratively refining prompts. Even better, you can optionally make this communication transparent to your end users, which is what we do for the code completion in the Theia IDE.
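The underlying idea can be sketched in a few lines: wrap the LLM call so that the exact request and response pass through a logging hook before and after the call. The interfaces and names below are illustrative only and do not reflect Theia AI’s actual API:

```typescript
// Minimal, hypothetical sketch: wrap an LLM client so that every request and
// response can be inspected, e.g. written to an output channel or log view.
interface CompletionRequest {
    prompt: string;
    model: string;
}

interface CompletionResponse {
    text: string;
}

type LlmClient = (request: CompletionRequest) => Promise<CompletionResponse>;

function withCommunicationLog(client: LlmClient, log: (message: string) => void): LlmClient {
    return async request => {
        log(`→ sent to LLM: ${JSON.stringify(request, undefined, 2)}`);
        const response = await client(request);
        log(`← received from LLM: ${JSON.stringify(response, undefined, 2)}`);
        return response;
    };
}
```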

As shown in the screenshot below, users of the Theia IDE can monitor exactly which data is sent and received when they trigger code completion.

As you can see, the code completion prompt is currently very simple in its experimental version. It could be optimized, but already provides pretty good results in many cases.

However, as fine-tuning prompts is usually an essential part of building good AI integrations, Theia AI, the underlying framework, provides a flexible and powerful prompt management system. It allows you to fine-tune prompts dynamically, even at runtime, which is incredibly useful for iterating on features such as code completion. In the Theia IDE, we go one step further and use Theia AI’s optional capability to make prompts accessible to end users.
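To make the idea of a runtime-editable prompt template more tangible, a code completion template could look roughly like the sketch below. The wording and the placeholder variables are illustrative and do not reproduce the exact template shipped with the Theia IDE:

```typescript
// Illustrative only: a code-completion prompt template with placeholders for
// the text before and after the cursor, which the tool fills in at request time.
const codeCompletionTemplate = `
You are a code completion assistant.
Complete the code at the given cursor position.
Return only the code to be inserted, without any explanation.

Code before the cursor:
\${prefix}

Code after the cursor:
\${suffix}
`;
```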

Therefore, the user can review how exactly the LLM is prompted to complete the code (see screenshot below). And because just “reviewing” is boring, the Theia IDE also allows developers to tweak the prompt to match their coding preferences, their project’s style guidelines, or any other context that may be relevant to their project. In the screenshot below, we demonstrate this by adding a simple additional instruction to our code completion prompt: “always extract Strings to constants”.

This change is immediately effective, as we can see in the following screenshot:
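To illustrate the kind of result such an instruction can produce (the actual completion depends on the model and the surrounding code, so this snippet is purely hypothetical), compare a completion with and without the added instruction:

```typescript
// Without the extra instruction, a completion might inline the string literal:
function greetInline(name: string): string {
    return 'Hello, ' + name + '! Welcome to Theia.';
}

// With the instruction "always extract Strings to constants", the model is
// nudged towards a completion along these lines instead:
const GREETING_PREFIX = 'Hello, ';
const GREETING_SUFFIX = '! Welcome to Theia.';

function greet(name: string): string {
    return GREETING_PREFIX + name + GREETING_SUFFIX;
}
```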

Open communication and prompt editing are capabilities that Theia AI supports out of the box. In the Theia IDE, they are therefore available not only for code completion, but also for every other AI feature, such as the terminal assistant and the global chat. In practice, it is amazing how simple tweaks can tremendously improve the results in a given context and even unlock entirely new workflows, once you open the door for users to change the prompt templates and review the traffic with the underlying LLM.

If you want to build your own tool or IDE based on Theia AI, it is up to you whether you also provide this level of transparency and flexibility to your users or whether you prefer to keep your tool’s prompts and LLM traffic hidden from the user. But especially during development of your tool, this capability enables testers and developers to tweak and optimize prompts without recompiling the tool, leading to fast turnaround times when streamlining your tool’s AI capabilities.

We will publish more showcases demonstrating the core principles of Theia AI within the next few days, so stay tuned and follow us on Twitter.

If you want to sponsor the project or use Theia AI to create your own AI solution, please get in contact with us. In particular, we are also looking for LLM providers who want to make their language models available via Theia AI.

EclipseSource is at the forefront of technological innovation, ready to guide and support your AI initiatives based on Theia AI or any other technology. Our comprehensive AI integration services provide the specialized know-how necessary to develop customized, AI-enhanced solutions that elevate your tools and IDEs. Explore how we can assist in integrating AI into your tools with our AI technology services. Reach out to begin your AI integration project with us.

Jonas, Maximilian & Philip

Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource. They work as consultants and software engineers for building web-based and desktop-based tools. …