Introducing Interactive AI Flows in Theia AI

February 13, 2025 | 5 min Read

In today’s AI-powered integrated development environments (IDEs) and tools, chat interactions are no longer just about answering questions. They have evolved into dynamic dialogues in which the AI can ask users for clarification, present options, and guide them through complex tasks, all within a single response. This mirrors real-world decision-making in development workflows, making the AI more intuitive and practical.

Imagine your IDE or specialized tool guiding you through choices, such as confirming configuration changes or selecting the best approach for resolving an issue in your engineering tool. Popular AI-native IDEs, such as the Cascade mode in Codeium’s Windsurf, demonstrate how these interactive flows enhance usability. Now this capability is available in Theia AI, and it is thus ready to be used in your custom tool or in a tailored IDE for your domain.

Example of an Interactive Workflow in Theia AI

To illustrate this approach, let’s look at Interactive AI Flows in action. Here’s a simple example: a storytelling agent where the AI asks the user how to continue a story and incrementally generates it based on their choices.

Notice how the AI engages the user with concise questions, lets them select an option with a simple click, and then seamlessly continues the task based on their decision. This streamlined interaction keeps workflows efficient and intuitive, particularly for complex tasks that require user input at specific stages before the overall task can be completed. By fostering this kind of interactive decision-making and user-driven guidance of the agent, AI becomes a powerful assistant for tackling intricate challenges within your specialized IDE or tool.

A Peek Under the Hood

Let’s explore the implementation of the storytelling agent shown above. First, we create a custom chat agent with the system message below, which explains the overall task as well as how to structure the interaction points with the user. Please refer to the Theia AI documentation for more details on how to create custom agents.

You are an agent demonstrating how to generate questions and continue the conversation based
on the user's answers.

First, answer the user's question or continue their story.
Then, generate an interesting question along with 2-3 possible answers, which will be presented
to the user as multiple-choice options.

Use the following exact format to define questions and answers:

<question>
{
    "question": "YOUR QUESTION HERE",
    "options": [
        { "text": "OPTION 1" },
        { "text": "OPTION 2" }
    ]
}
</question>

The user will select an answer, and you will continue the conversation accordingly.
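Before wiring this prompt into the agent, it is worth making the expected payload concrete in code. The following is a purely illustrative TypeScript sketch of the JSON structure the system message asks the model to emit, together with a defensive parser; none of these names are part of the Theia AI API, and the actual extraction in our agent happens in the content matcher shown next.

// Illustrative types for the JSON payload inside <question>...</question> tags;
// not part of the Theia AI API.
interface QuestionOption {
    text: string;
    value?: string;
}

interface Question {
    question: string;
    options: QuestionOption[];
}

// Defensive parser: returns undefined if the model deviated from the format,
// so an agent could fall back to rendering the raw text instead.
function parseQuestion(raw: string): Question | undefined {
    try {
        const parsed = JSON.parse(raw);
        if (typeof parsed.question !== 'string' || !Array.isArray(parsed.options)) {
            return undefined;
        }
        return parsed as Question;
    } catch {
        return undefined;
    }
}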

Next, we add a content matcher to our chat agent implementation to detect <question> elements in the model’s response and replace them with an interactive UI component that asks the user for input.

@postConstruct()
addContentMatchers(): void {
    this.contentMatchers.push({
        // Detect a <question>...</question> block in the streamed response
        start: /^<question>.*$/m,
        end: /^<\/question>$/m,
        contentFactory: (content: string, request: ChatRequestModelImpl) => {
            // Strip the surrounding tags and parse the JSON payload
            const question = content.replace(/^<question>\n|<\/question>$/g, '');
            const parsedQuestion = JSON.parse(question);
            // Replace the matched text with an interactive question content part
            return new QuestionResponseContentImpl(
                parsedQuestion.question,
                parsedQuestion.options,
                request,
                selectedOption => {
                    this.handleAnswer(selectedOption, request);
                }
            );
        }
    });
}
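The QuestionResponseContentImpl created here is part of the example code referenced at the end of this post. As a rough idea of what such a content part could look like, here is a simplified sketch matching the constructor call above; the real class additionally implements Theia AI’s chat response content interface so that a dedicated renderer can pick it up, and the fields shown here are illustrative.

// Simplified, illustrative sketch of a question content part; not the actual
// implementation shipped with the Theia API example.
export class QuestionResponseContentImpl {
    // Marker that a custom renderer can use to recognize this content part
    readonly kind = 'question';
    // Remembered once the user picks an option, so the agent can check for
    // unanswered questions in onResponseComplete (see below)
    selectedOption?: { text: string; value?: string };

    constructor(
        readonly question: string,
        readonly options: { text: string; value?: string }[],
        readonly request: ChatRequestModelImpl,
        readonly onAnswer: (option: { text: string; value?: string }) => void
    ) {}

    // Invoked by the UI when the user clicks an option
    select(option: { text: string; value?: string }): void {
        this.selectedOption = option;
        this.onAnswer(option);
    }
}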

In our agent implementation, whenever the language model completes its response, we check whether it includes an unanswered question for the user. If so, we keep the response open by setting its state to waitForInput.

protected override async onResponseComplete(request: ChatRequestModelImpl): Promise<void> {
    const unansweredQs = unansweredQuestions(request);
    // No open questions: complete the response as usual
    if (unansweredQs.length < 1) {
        return request.response.complete();
    }
    // Otherwise keep the response open and indicate that we are waiting for the user
    request.response.addProgressMessage({
        content: 'Waiting for input...',
        show: 'whileIncomplete'
    });
    request.response.waitForInput();
}
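The unansweredQuestions helper used here is not shown above. A plausible sketch, assuming the question content parts remember whether an option was already selected (as in the simplified content sketch earlier) and that the content parts are reachable on the response model, could look as follows; the exact accessor may differ between Theia AI versions.

// Sketch of the unansweredQuestions helper; imports of ChatRequestModelImpl and
// QuestionResponseContentImpl are omitted, matching the snippets above.
// Assumption: the response content parts are reachable via
// request.response.response.content; the exact accessor may differ.
function unansweredQuestions(request: ChatRequestModelImpl): QuestionResponseContentImpl[] {
    return request.response.response.content.filter(
        (content): content is QuestionResponseContentImpl =>
            content instanceof QuestionResponseContentImpl && content.selectedOption === undefined
    );
}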

In the QuestionResponseContentImpl UI content part created above, we passed a callback as an argument. Below is the code that is triggered when the user selects an option: as soon as the user selects an answer, we complete the progress message, stop waiting for input, and continue the response, incorporating the user’s choice. For instance, we may trigger another LLM request or perform an action in the tool, depending on the intended behavior of the agent.

protected async handleAnswer(
    selectedOption: { text: string; value?: string },
    request: ChatRequestModelImpl
): Promise<void> {
    // Mark the "Waiting for input..." progress message as completed
    const progressMessage = lastProgressMessage(request);
    if (progressMessage) {
        request.response.updateProgressMessage(
            { ...progressMessage, show: 'untilFirstContent', status: 'completed' }
        );
    }
    request.response.stopWaitingForInput();

    // Continue the agent's behavior, e.g. by triggering another LLM request
    // with the selected answer. model, languageModel, and tools are resolved
    // in the surrounding agent implementation (omitted here for brevity).
    const messages = await this.getMessages(model, true);
    messages.push({
        type: 'text',
        actor: 'user',
        query: `The user selected option: ${selectedOption.text}.`
    });
    const languageModelResponse = await this.callLlm(
        languageModel,
        messages,
        tools.length > 0 ? tools : undefined,
        request.response.cancellationToken
    );
    // ...
}

By combining custom LLM response content rendering with this structured interaction model, we enable complex, multi-step interactions, such as guided workflows, confirmations, and decision trees, all within a single AI-powered conversation. For more details on implementing custom chat agents with specialized user interfaces for LLM responses, setting up custom tool functions, and much more, refer to the Theia AI documentation. The full code of the chat agent introduced above is also available as a Theia API example.
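To give an impression of the rendering side, which is not shown above, the question content part could be displayed by a small React component that renders the options as buttons and forwards the selected option to the agent. This is only a sketch: in the actual example, such a component is hooked into the chat UI through Theia AI’s response part renderer mechanism described in the documentation, and the component and prop names below are our own.

import * as React from '@theia/core/shared/react';

interface QuestionPartProps {
    question: string;
    options: { text: string; value?: string }[];
    // Set once the user has answered, so the buttons can be disabled
    selectedOption?: { text: string; value?: string };
    // Typically wired to the content part's selection handling, which in turn
    // invokes the callback passed to handleAnswer
    onAnswer: (option: { text: string; value?: string }) => void;
}

// Renders the question and one button per option
export const QuestionPart: React.FC<QuestionPartProps> = ({ question, options, selectedOption, onAnswer }) => (
    <div className='question-part'>
        <div className='question'>{question}</div>
        {options.map(option => (
            <button
                key={option.text}
                disabled={selectedOption !== undefined}
                onClick={() => onAnswer(option)}
            >
                {option.text}
            </button>
        ))}
    </div>
);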

Beyond the Simple Example

This storytelling agent is just a basic example. In real-world applications, Interactive AI Flows can handle much more sophisticated interactions. The AI integration can suggest different approaches before implementing them, or let the user drive complex processes performed by the AI step by step. By combining the different techniques available in Theia AI, including function calling, custom user interfaces in LLM responses, multi-agent collaboration, and more, developers can create highly dynamic and collaborative AI tools tailored to specific domains. This approach brings Theia AI on par with AI-native IDEs like Codeium Windsurf and Cursor, but with the flexibility to meet the specific needs of your domain.

Learn More

To explore more about Theia AI, check out the Theia AI documentation.

Interested in building AI-powered tools? EclipseSource provides consulting and implementation services backed by our extensive experience in tool development. We also specialize in tailored AI assistance for web- and cloud-based tools and professional support for Eclipse Theia and VS Code.

Get in touch with us to discuss your specific use case!

Jonas, Maximilian & Philip

Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource, specializing in consulting and engineering innovative, customized tools and IDEs, with a strong …