Enhancing Modeling Tools with AI: A Leap Towards Smarter Diagrams with Eclipse GLSP

April 12, 2024 | 7 min read

The integration of Artificial Intelligence (AI) into IDEs, as exemplified by tools like GitHub Copilot, Codeium, Tabnine, ChatGPT, and more, has opened new horizons in software development. It is evident that AI has the same transformative potential for graphical languages and modeling tools. Imagine, for example, describing a business process in a few words and having an AI assistant create the corresponding BPMN model for you! At EclipseSource, we work on various integrations of AI into domain-specific tools for our customers, and this process poses unique challenges. In this blog post, we explore how LLMs and GPTs can enhance GLSP-based diagram editors, making them not just modeling tools but intelligent partners in the modeling process.

What is GLSP?

Eclipse GLSP is an open-source framework designed for crafting custom diagram editors using modern web technologies. It’s known for its rich feature set, which includes everything from node and edge manipulation to inline label editing and advanced functionalities like validation, copy/paste, and undo/redo. GLSP editors can be seamlessly integrated with various platforms such as Eclipse Theia, VS Code, the Eclipse desktop IDE, and plain web applications. The flexible architecture and its protocol-based editing model also make it a perfect peer for interacting with AI technologies from within a GLSP diagram editor (see screenshot below).

Why Enhance Diagram Editors with AI?

When we think about the activity of modeling, much of it is about transforming knowledge into structured data formats (e.g. a diagram conforming to a modeling language). The inputs of this transformation are often requirements, given in natural language, which are then augmented by the “modeler” with common and domain-specific knowledge. Modern LLMs, on the other hand, are generally very good at transforming knowledge into other formats and at augmenting user requests with the knowledge available to them. More concretely, integrating AI with diagram editors can elevate user efficiency and productivity along the following dimensions. We will show examples for some of these use cases later in this article.

  • Model Comprehension and Discovery: Enabling users to ask questions about a diagram; not just structurally but also about its underlying meaning, facilitating faster model navigation and understanding.
  • Model Completion: Allowing the AI to suggest changes to a diagram in natural language, such as adding nodes and edges or improving labels and the overall diagram quality based on the current diagram state.
  • Model Scaffolding: Empowering users to generate diagrams from scratch based on natural language descriptions of the aspects to be modeled.

Of course, this is just a preliminary selection of generic use cases, the “tip of the iceberg”. AI integrations could also be applied to more specific use cases, such as detecting errors or optimization potential in a model. In our experience, once the challenge of integrating a diagram editor with an underlying AI is tackled, new use cases are often discovered very quickly. This journey is worth taking: the right integration of AI can save hours of work for your users and also significantly improve the quality of model artifacts.

AI Integration Challenges in Modeling Tools

While AI has made significant inroads into IDEs for supporting coding, its integration into non-textual editors, such as graphical, domain-specific editors, is not yet common and poses unique challenges. Most LLMs are designed as general-purpose language predictors, inherently crafted to generate and interpret text tokens. Modeling tools, however, involve graphical diagrams, metamodels, their semantics, and language rules. This demands a nuanced approach to effectively harness AI’s capabilities. In a nutshell, we have to “teach” the underlying AI the diagram semantics and how to interact with the diagram.

The integration strategy varies based on the use case, necessitating considerations such as domain-specific knowledge, the complexity of the modeling language, and the range of editing operations. In our experience, successful strategies usually start with targeted prompt engineering to establish contextual awareness of the diagram state and its modeling language, as well as enhancing the LLM with the capability to interact with the editor. Once the general feasibility of a use case is proven, fine-tuning the LLM and integrating external data sources allow the AI to understand the language rules and idioms even better and enable incremental optimization.
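To make the prompt-engineering starting point more concrete, the following sketch shows how a system prompt might prime an LLM with a flow-chart language and a JSON exchange format for the current diagram state. All names here (the node types, the prompt wording, `build_system_prompt`) are illustrative assumptions for this post, not part of any GLSP API.

```python
import json

# Hypothetical node types of a minimal flow-chart language, with the
# semantics the LLM needs to "learn" from the prompt.
NODE_TYPES = {
    "task": "A single step in the process",
    "decision": "A branch point with one outgoing edge per possible answer",
    "terminal": "A start or end point of the flow",
}

def build_system_prompt(diagram_state: dict) -> str:
    """Combine the language semantics and the serialized diagram state
    into a system prompt that primes the LLM for diagram-related tasks."""
    type_docs = "\n".join(f"- {name}: {doc}" for name, doc in NODE_TYPES.items())
    return (
        "You are an assistant embedded in a flow-chart editor.\n"
        "The modeling language defines these node types:\n"
        f"{type_docs}\n"
        "Edges connect a source node id to a target node id.\n"
        "The current diagram state is given as JSON:\n"
        f"{json.dumps(diagram_state, indent=2)}"
    )

state = {
    "nodes": [{"id": "n1", "type": "terminal", "label": "Start"}],
    "edges": [],
}
prompt = build_system_prompt(state)
```

In a real integration, this prompt would be assembled by the tool integration component on every request, so the LLM always reasons over the latest diagram state.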

Prompt Engineering and Interactive Capabilities

As highlighted above, the starting point for equipping diagram editors with AI involves two cornerstones: prompt engineering and providing the LLM with interactive capabilities. Let’s dive into these two initial steps in more detail:

  • Prompt engineering: This technique tailors generative AI outputs to align with specific requirements, ensuring the AI is primed with knowledge about the diagram language, including node and edge types and their semantics, alongside a diagram state exchange format. Careful prompt engineering typically involves establishing a text protocol between the LLM and the tool integration component that sits between the actual user and the LLM. Such a protocol enables the tool (the diagram editor in this case) to coordinate with the LLM before presenting results to users or invoking actions in the tool. For instance, the tool can split complex tasks into smaller ones, specify the steps to complete a certain task, or give the LLM more “time to think” by applying inner-monologue techniques: the LLM first produces intermediate results that are hidden from the actual user and then generates its final output for the user based on those intermediate results.
  • Interactive LLM capabilities: This entails defining and integrating a diagram editor API that enables the LLM to interact with the editor, from querying diagram states to executing manipulations such as adding or modifying elements. While the interaction between the tool and the LLM over this technical API protocol should be invisible to users, the effect of invoked tool actions should be transparent to them: changed elements should be highlighted, and the user’s undo/redo functionality and viewport should be maintained, allowing users to seamlessly continue working while the AI operates. A successful tool integration positions the AI as a true co-pilot rather than a peripheral chatbot.
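The second cornerstone can be sketched as a small editor API exposed to the LLM via function/tool calling, plus a dispatcher that routes the LLM’s tool calls to the editor. The `DiagramEditor` class and the tool names below are illustrative assumptions; a real GLSP integration would translate such calls into GLSP actions so that highlighting, undo/redo, and the viewport behave as described above.

```python
# Hypothetical in-memory stand-in for a diagram editor backend.
class DiagramEditor:
    def __init__(self):
        self.nodes, self.edges = {}, []

    def add_node(self, node_id: str, node_type: str, label: str) -> dict:
        self.nodes[node_id] = {"type": node_type, "label": label}
        return {"added": node_id}

    def add_edge(self, source: str, target: str) -> dict:
        self.edges.append((source, target))
        return {"added": f"{source}->{target}"}

    def query_state(self) -> dict:
        return {"nodes": self.nodes, "edges": self.edges}

def dispatch(editor: DiagramEditor, tool_call: dict) -> dict:
    """Route a tool call emitted by the LLM to the matching editor operation."""
    handlers = {
        "add_node": editor.add_node,
        "add_edge": editor.add_edge,
        "query_state": editor.query_state,
    }
    return handlers[tool_call["name"]](**tool_call.get("arguments", {}))

editor = DiagramEditor()
# Simulated LLM output for a request like "add a brewing step after the start node".
for call in [
    {"name": "add_node",
     "arguments": {"node_id": "n2", "node_type": "task", "label": "Brew coffee"}},
    {"name": "add_edge", "arguments": {"source": "n1", "target": "n2"}},
]:
    dispatch(editor, call)
```

The return values of each handler would be fed back to the LLM as tool results, letting it verify the effect of its edits before answering the user.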

These foundational elements merely mark the beginning of a successful AI journey in modeling tools. Building on this groundwork, we can explore incremental optimizations, considering challenges such as the complexity of the underlying modeling language, the model size, and the availability of training data, among other factors. Strategies such as refining the LLM with embeddings or fine-tuning, or delegating certain functions to external components, can improve response times and reduce resource demands. Additionally, managing the context window is essential for handling large diagrams. These strategies illustrate just a few of the potential avenues for development from this starting point; which ones apply certainly depends on the unique requirements and characteristics of a specific use case. So be prepared for an incremental journey when starting your own custom AI integration project.
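As a concrete illustration of one context-window strategy, the sketch below prunes a large diagram to the elements within a few hops of the user’s current selection before serializing it for the LLM. The function name and the hop-based heuristic are illustrative assumptions; production integrations might instead summarize distant regions or retrieve them on demand.

```python
# Keep only the subgraph near a focus node, so large diagrams
# fit into the LLM's context window.
def prune_diagram(nodes: dict, edges: list, focus: str, hops: int = 1) -> dict:
    """Return the subgraph within `hops` edges of the focus node."""
    keep = {focus}
    frontier = {focus}
    for _ in range(hops):
        nxt = set()
        for src, tgt in edges:
            if src in frontier:
                nxt.add(tgt)
            if tgt in frontier:
                nxt.add(src)
        keep |= nxt
        frontier = nxt
    return {
        "nodes": {nid: n for nid, n in nodes.items() if nid in keep},
        "edges": [(s, t) for s, t in edges if s in keep and t in keep],
    }

nodes = {"a": {}, "b": {}, "c": {}, "d": {}}
edges = [("a", "b"), ("b", "c"), ("c", "d")]
pruned = prune_diagram(nodes, edges, focus="b", hops=1)
# "d" is two hops away from the focus "b", so it is dropped.
```

For questions about a selected element, such a pruned view is often sufficient, while whole-diagram tasks would combine it with summaries of the remaining regions.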

Showcasing the Potential

In the following, we present a series of demo videos derived from a production scenario. The diagram editor used supports flow charts, e.g. to model the behavior of a coffee machine. The demo videos demonstrate the capabilities for enhancing GLSP-based diagram editors through AI in several scenarios, including diagram comprehension, AI-guided editing and enhancement, and diagram scaffolding from natural language inputs. Please note that in the presented examples, the underlying AI only knows the semantics of the flow chart language and how to interact with the diagram editor. It is not fine-tuned on any specific domain (such as coffee machines).

Describing Diagrams and Their Meanings


Answering Diagram-related Queries


Enhancing Diagrams with Nodes and Discovering Nodes Based on a Semantic Description


Scaffolding New Diagrams From Natural Language Inputs


Embracing the Challenge for Great Rewards

Integrating AI with GLSP-based editors involves navigating a complex landscape of challenges, including scalability for large diagrams, performance and resource requirements, adherence to language rules and semantics, and incorporation of domain-specific knowledge. These challenges demand a thoughtful selection of strategies, optimizations, and techniques tailored to specific use cases. Despite the intricate nature of this endeavor, the amazing potential for transformative advancements for your users makes it a worthwhile pursuit with the right expertise and innovative solutions. Based on our experience with customer projects integrating AI, it is usually a good idea to start early and design an incremental development process to continuously improve the system based on findings and ideally user feedback.

EclipseSource is at the forefront of technological innovation, ready to guide and support your AI initiatives. Our comprehensive services in AI integration are designed to provide the specialized know-how necessary to develop customized, AI-enhanced solutions that elevate your modeling tools and IDEs to new levels of efficiency and innovation. Explore how we can assist in integrating AI into your tools with our AI technology services. Reach out to begin your AI integration project with us.

Jonas, Maximilian & Philip

Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource. They work as consultants and software engineers for building web-based and desktop-based tools. …