Jonas Helming, Maximilian Koegel and Philip Langer co-lead EclipseSource. They work as consultants and software engineers building web-based and desktop-based tools.
Strategies towards web/cloud-based tools and IDEs
February 25, 2019 | 19 min read

There is currently a big hype surrounding web- and cloud-based tooling. New projects and frameworks are popping up, and existing projects are gaining traction both inside and outside the Eclipse ecosystem, e.g. Eclipse Che, Eclipse Theia, Visual Studio Code, Atom, Eclipse Dirigible, and Eclipse Orion.
While the adoption of web-based tools in the real world is still relatively low, almost every tool project will eventually deal with the question of how to migrate to a web-based platform. We recently gave a talk at EclipseCon about how to develop a mid- and long-term strategy for this migration. As we received many comments and questions about this, we decided to dedicate an article to the topic:
If, when and how to migrate tools and IDEs towards a web- and cloud-based solution
Be careful, this topic is a typical “it depends” decision. Some aspects are pretty subjective, too. We will therefore share some generic patterns and experiences we found useful in projects. Additionally, we describe a process that you can apply in your custom project to arrive at a mid- and long-term strategy. But before we go into more detail, let us define:
What are web/cloud-based tools and IDEs?
When talking about web-based tools or IDEs, there are two stages you could consider:
- Stage one: run locally
- Stage two: run remotely
Stage one means that your tool, especially its UI, is based on web technology, e.g. HTML and CSS, but all components run locally and no remote server is required. For example, VS Code is based on web technology, but by using Electron it is still a desktop application, and all its components run locally, not on a remote server. Therefore, it still comes with some properties of a desktop tool, such as the necessity of installing it on your local machine. Eclipse Theia also supports this local “mode”. Usually, web-based tools are already split into a client and a server part, even though both are still deployed locally. Therefore, stage one is a good preparation for stage two.
Stage two means that your tool is based on web technology and runs many of its components remotely, hosted on a server. As in stage one, the UI is still based on web technology, but this time it connects to remotely hosted components to implement important functionality. This means that the UI usually runs in a browser, while a server hosts tools and workspaces. A workspace holds the data users create (e.g. the source code). Additionally, workspaces can host the specific tools required to develop, build, and run a project. Eclipse Che provides such a workspace server (among other features); Eclipse Theia allows you to build tools with a UI part running in the browser and a backend part hosted on a remote server. Stage two could also be referred to as “cloud-based” tooling. In this article, we focus on stage two, but most considerations are applicable to stage one, too.
“If” - Should you migrate to web-based tooling?
Engineers tend to think and talk a lot about the “how”, and not so much about the “if”. However, migrating a tool chain to the web is not a typical technical decision; it is deciding about a paradigm shift. In any tool project of reasonable size, this migration is a major endeavor. Therefore, before caring about the how or the when, you should be very clear about the motivation, meaning the advantages you gain from such a migration, as well as the risks. At a very high level, the two major advantages you typically gain from a web-based solution are:
- Modern UI technology: You can use HTML, CSS and TypeScript to implement your UI, and you additionally gain access to all the rapidly innovating frameworks in the web area.
- Simplified deployment/installation/access: The final vision of cloud-based tooling is to allow a developer to start coding by only accessing a specific URL.
In turn, two major shortcomings of migrating an existing solution to a web-based solution are shown below:
- Cost: Migrating to a web-based stack is not like switching to a new version of Eclipse. It typically requires reworking significant parts of your tool, especially in the UI. Additionally, not all tool frameworks from the desktop have a similar counterpart on the web yet.
- High volatility: Tools typically have a long maintenance lifecycle. As web frameworks evolve very quickly, it is difficult to choose the right ones for implementing a tool. This is true for base technologies (such as React, Vue.js, etc.), but also for tool-specific technologies.
Only by knowing the advantages can you evaluate them against the risks and costs. You will even need to do this on a per-use-case basis, as the result can be very different for specific parts of your tool chain. That means you will gain more advantages at less cost by migrating some parts compared to others.
As this evaluation is a complex topic of its own, we have already published a dedicated article about “web-based vs. desktop-based tools”.
In a nutshell, the conclusion on this topic is:
- Not all tools / use cases necessarily benefit from being available on the web
- There are sometimes other ways to obtain the typical advantages you expect from a web-based tool. As an example, if you want to minimize the setup effort for using a tool, you can potentially automate that on the desktop, too. Often enough, those alternatives are cheaper than migrating a full tool chain
- Web-technologies are not the “holy grail”, they pose their specific issues and risks like any technology does
- The desktop today still is a couple of years in the lead in terms of available features and framework support for building tools
- “Desktop Eclipse” is not deprecated
- Some use cases are very efficient when implemented in the web
- … and some provide a lot of benefit to tool users if they are available on the web. Finding those low-hanging fruits is probably a good start!
So let us assume you have identified some use cases in your tool chain and you want to migrate to the web. This leads us to the next question:
“When” should you migrate to web-based tooling?
Migrating an existing tool of any reasonable size will not happen overnight. Nor will it typically be possible to define a complete timeline right away. The first and most important step is to define a strategy for any type of migration. As this strategy will immediately influence your further development, we suggest that you do this right away, meaning NOW. Such a strategy defines which use cases are expected to be migrated to the web, when this is supposed to happen, and what the overall architecture should look like. Roughly deciding when to migrate which use cases is often essential for making the right architectural decisions SHORT-TERM. Only when you know that you will migrate can you take this fact into consideration for architectural decisions. And while the additional cost of doing so is often close to zero, the cost of making a wrong decision now that hampers a migration further in the future can be huge.
As an example, you might already develop new components on the desktop in a service-oriented way, so that they are reusable in a future web version. Please see the last section of this article for typical patterns that are influenced by the need to be compatible with a web-based version.
MID-TERM, you might want to start migrating specific use cases of your toolchain. Before you do that, you typically need to prepare the architecture of your tool. This means that you proactively refactor parts of your existing tool to support the web/remote case. Those changes should typically be done in a way that keeps the existing tool fully functional.
If this process goes well, you can LONG-TERM migrate all use cases of your toolchain, completely switch to a web-based tool, and deprecate the existing solution. Please note that this final step does not rely on technical aspects alone. Once you provide a web-based version of some parts of your tool, you will need to watch user feedback. Not every user might be happy with a new version of the tool. And as a web-based tool really is a paradigm shift, this is not only driven by technical criteria.
Now that we have defined what to do now, short-, mid- and long-term, that still leaves a big question mark behind the “when”, as those terms can mean anything on a concrete timeline. As stated before, we believe it is essential to define a strategy right now. All other steps depend on several criteria, some of which will delay further steps, while others will create pressure.
The question “when” to start any migration obviously depends on the result of evaluating the benefits for the use cases of your tool chain and your ability to invest effort to harvest them.
This is obviously driven by cost, and cost depends on the complexity of your toolchain. While your tool's use cases will typically not change too much, it might be a good time to rethink some of them: are they still valid and valuable, and do they even need to be migrated? Besides the feature set of your tool, the costs are also influenced by the available framework support for web-based tools. Implementing tools in the web/cloud is still a fairly new thing; not all frameworks you are used to from the desktop have an equivalent there. This is likely to change over time, as a lot of development is currently going on in this area. Therefore, you should watch the development of web-based tool components closely to find the perfect timing for implementing your use cases. At a certain point in time, this might even turn around, meaning some components you want to use will only be available on the web stack.
Besides these technical criteria, there are also external and organizational aspects to defining a good timeline. You might be driven by competitive pressure. Additionally, some existing desktop components might produce technical pressure, e.g. if their technologies get deprecated or their maintenance costs become too high. Finally, you also need to consider the experience of your current development team. It will take time for your team to gain experience with new technologies or to onboard new people. As you will likely need to maintain the existing solution for quite a while, your team needs to balance two skill sets. While this poses a challenge, it can also be a good thing, as many developers like to learn new things and prefer some variety in their daily work.
To sum up, scheduling this migration is not so different from planning the overall future of your project. Migrating “low hanging fruits” and creating minimum viable products (MVPs) will help you control the risk, remove pressure and gain experience. This leads directly into the last question discussed in this article:
“How” to migrate tools from desktop to the web/cloud?
We could easily devote several articles, if not a book, to this question, but let us focus on some key takeaways at a strategic level of abstraction. The first and most important thing to consider when you plan the migration of your tool chain to the web is to proceed iteratively.
That means, in a nutshell, that you migrate use case by use case, starting with the low hanging fruits (high value and low cost). Your first goal should be to create a minimum viable product of your web-based version and deploy those product increments right away. Make those increments available to your users and collect feedback.
Once you have accomplished this, you repeat the process in another iteration. While doing so, you ideally keep your existing desktop version fully functional. Further, both “versions” of a specific use case should share as much code as possible, so you do not double your maintenance costs. Please note that this usually means refactoring the existing desktop version, too, e.g. to offer services.
While this process may seem like a truism in software engineering, we cannot emphasise enough its importance for the migration from desktop-based to web-based tools or IDEs. So let us motivate the importance of an iterative process in a bit more detail.
First of all, you want to avoid rewriting everything from scratch. While it looks appealing to start on a green field, we usually tend to underestimate the effort that was necessary to implement an existing system, and therefore also the effort to reimplement it. Therefore, you should preserve the investments you have already made. Even more, if you start with a new implementation in parallel, you will need to maintain the existing tool until the new tool is ready to replace it. This will double your maintenance costs from day one. If you are not in the lucky position of having an “infinite budget”, there is a high risk of failing with this big-bang approach.
Besides the obvious cost/effort risk of a reimplementation from scratch, there are also organizational and user-related risks. While it usually sounds appealing to users at first glance to migrate an existing tool to run in the browser, there might be caveats in usability. Additionally, some use cases might even change conceptually in the cloud. In the end, you typically strive for user satisfaction, so watch user feedback on any use case you migrate to the web. An incremental process makes it possible to react to this feedback and adapt the requirements accordingly.
Finally, you will also need to onboard your development team and allow them to gather experience with the new technology stack. You will probably need to revise some design decisions once your team has learned more about web-based tools. In most scenarios, it is not a good idea to have completely separate teams for the web and desktop versions. While the required technology skills are different, you usually need experts for a specific use case. In addition, if you split completely, the “desktop team” will feel like the one being deprecated in the future. Again, all this is much more easily achieved in an iterative process than in a big bang.
Besides the process, the “how” is obviously also a lot about technical questions and decisions. A full collection of those would definitely go beyond the scope of this article. Nevertheless, we would like to give a concise collection to at least cover the question:
What are good technical patterns for migrating tools to the web?
Without any claim that this is a complete listing, we want to mention in the following a few technical patterns which we found useful when conducting a migration of a desktop tool to the web.
Backporting
You can apply this strategy short-term, especially for UI components that you need to develop from scratch in the current desktop version of your tool. Instead of using your regular UI toolkit (JavaFX, SWT, Qt, …) to create the UI of your new feature, you implement the UI using HTML and TypeScript. This view is then embedded into the existing desktop tool using a browser component. This approach allows you to reuse the new UI component in a future web version. Additionally, it allows you to use UI frameworks specific to the browser, e.g. the powerful visualization library “D3”.
This approach has two typical challenges: first, the look and feel/styling of the new component has to match the existing application; second, integration with the existing UI, e.g. for drag and drop or selection, is not trivial. Because of these challenges, this strategy works especially well for read-only views, which have a specific style by nature. Good examples are reports or views that show the results of something like a build or a test run.
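As a rough sketch of this pattern, consider a read-only test report view. All names and types below are invented for illustration; the point is that the view is a pure function from data to HTML, so the same code can back an embedded browser widget in the desktop tool today and a browser client later:

```typescript
// Hypothetical test-result shape; not taken from any real framework.
interface TestResult {
  name: string;
  passed: boolean;
}

// Pure view function: turns results into an HTML fragment.
// Because it has no dependency on SWT/JavaFX/Qt, it can be rendered
// in an embedded browser component now and reused in a web UI later.
function renderTestReport(results: TestResult[]): string {
  const rows = results
    .map(r => `<tr><td>${r.name}</td><td>${r.passed ? "passed" : "FAILED"}</td></tr>`)
    .join("");
  const failed = results.filter(r => !r.passed).length;
  return `<h2>Test Report (${results.length} tests, ${failed} failed)</h2>` +
         `<table>${rows}</table>`;
}
```

The resulting HTML string can then be handed to whatever browser component your desktop toolkit offers (e.g. an SWT `Browser` or a JavaFX `WebView`).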
When applying this pattern, make sure that any non-UI part of the feature is extracted to services, so it could be deployed on a server in a future web-based tool. This drives us directly into the next pattern for migrating to the web:
Extract services
This is not a new pattern, but it is especially useful for a migration to web-based tools. The basic idea is to extract the business logic into headless, independently deployable services. Independent essentially means that there are no dependencies on the UI and that the service can be accessed in a technology-agnostic way (e.g. as a REST service). This can be achieved by defining an API or even a protocol to interact with the service. The Language Server Protocol (LSP) is the prototypical example of this pattern. It extracts all the logic for calculating language assistance within an editor into an independent service. This service can be accessed from different editors (i.e. UIs) via a defined protocol. Applying a service-oriented approach typically also enables you to:
Reuse
We developers often prefer to reimplement things from scratch. This is definitely more fun, and often enough it leads to a cleaner result. In turn, we often underestimate the necessary effort. Typical tool projects were developed with “many person decades” of effort; those features and that business logic cannot be reimplemented overnight. Additionally, when migrating to the web, there will be enough work implementing the new UI parts anyway. If you reimplement everything, this will usually delay the migration, increase the risk, and often enough it will fail. Therefore, try to reuse everything that is possible and makes sense. This can also include developing some “adapter technologies” or refactoring existing code to be reusable. You can still reimplement things later on, but reusing existing code will typically lead to a feature-complete product on the web much earlier.
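To make the service-extraction and reuse ideas a bit more concrete, here is a minimal, hypothetical sketch of a headless completion service in the spirit of LSP. None of these names come from a real framework; the point is that the service has no UI dependency, so it can be called in-process by the desktop tool today and exposed over HTTP or JSON-RPC in a web version later:

```typescript
// Illustrative request/response shapes for a headless completion service.
interface CompletionRequest {
  text: string;   // current document content
  prefix: string; // word fragment before the cursor
}

// The protocol-like contract: any UI (desktop editor, browser client)
// talks to the service only through this interface.
interface CompletionService {
  complete(req: CompletionRequest): string[];
}

// Reuses existing "business logic" (here: a trivial word index)
// instead of reimplementing it per front end.
class WordIndexCompletionService implements CompletionService {
  complete(req: CompletionRequest): string[] {
    const words = new Set(req.text.split(/\W+/).filter(w => w.length > 0));
    return Array.from(words)
      .filter(w => w.startsWith(req.prefix) && w !== req.prefix)
      .sort();
  }
}
```

Because the contract is plain data in and plain data out, wrapping it in a REST endpoint or a JSON-RPC handler later is a thin, mechanical step.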
Declarative artifacts / Modeling
A generic strategy that often allows you to reuse existing implementations in a future web version is to implement them in a declarative way, or to model them. Those declarative descriptions or models are usually much more technology-independent than handwritten code. They are either interpreted by a technology-dependent component, or specific code is generated from them. If you migrate to another technology stack, you ideally only need to replace the interpreter or the generator, not rewrite all the project-specific declarative descriptions.
As an example, constraints on a data model could either be implemented directly, or they could be described declaratively in a constraint language. In the second case, you can very likely reuse them, even locally in the browser. As another example, if you use a declarative approach to develop your form-based UI (such as EMF Forms), you can have a smooth transition to the browser by interpreting the same UI descriptions with web technology (as done with JSON Forms).
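A minimal sketch of what such a declarative approach can look like (the constraint format and interpreter below are invented for illustration; real frameworks like EMF Forms and JSON Forms are far richer): the constraint descriptions are plain, serializable data, so the same descriptions could be shipped as JSON and interpreted on the desktop or in the browser, while only the small interpreter is tied to a technology stack:

```typescript
// Declarative constraint descriptions: pure data, technology-independent.
type Constraint =
  | { kind: "required"; field: string }
  | { kind: "maxLength"; field: string; limit: number };

// The technology-dependent part: a small interpreter that evaluates
// the declarative constraints against an object and reports violations.
function validate(obj: Record<string, unknown>, constraints: Constraint[]): string[] {
  const errors: string[] = [];
  for (const c of constraints) {
    const value = obj[c.field];
    if (c.kind === "required" && (value === undefined || value === "")) {
      errors.push(`${c.field} is required`);
    } else if (c.kind === "maxLength" && typeof value === "string" && value.length > c.limit) {
      errors.push(`${c.field} exceeds ${c.limit} characters`);
    }
  }
  return errors;
}
```

Migrating to another stack then means rewriting `validate` once, not re-encoding every project-specific constraint.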
Single-sourcing
If you look at all the patterns so far, the overarching scheme is always to use as much of the same code/artifacts as possible for the web version of your tool and for the desktop version (a.k.a. single-sourcing). One dimension of this is initial reuse to lower the initial effort. The other dimension is to continuously lower the maintenance costs by sharing a code base. This allows you to maintain both solutions at the same time for a certain period. As described before, this lowers your overall risk and allows the web-based version to grow incrementally, feature-wise.
Build standalone
When migrating any parts of your tool, especially the UI, you often want to integrate them into something existing, meaning an existing IDE or platform. This way, you do not need to implement all features from scratch. As an example, if your tool supports code generation for Java, you typically do not want to implement full tool support for Java, but rather reuse that and focus on the code generator only. This pattern contributed to the great success of the Eclipse platform, because it enabled the creation of tools with very little effort. Consequently, some similar approaches are emerging for the web, too.
However, when it comes to web-based tooling, the market is not as consolidated and stable as it used to be on the desktop. Even if we believe in a great future for Eclipse Che and Eclipse Theia, there are, and likely will be, other approaches and ecosystems such as VS Code. Therefore, you should avoid strictly binding your implementation to a specific platform. Of course, this is not possible for all features, but it typically is for two essential parts of a tool, which usually account for most of the value anyway:
- The UI, which you often have to redevelop for a web-based solution. It is usually straightforward to embed a custom UI into any existing platform, as all of them are essentially based on HTML, CSS and JavaScript.
- Services, which you ideally extract from an existing implementation. Those services should be as independent as possible anyway, so avoid binding them to a certain platform. If you need to access some platform services, you might want to encapsulate this access.
Besides lowering the risk of binding your solution to a specific platform or ecosystem, this “standalone” pattern has another advantage: you can also “deploy standalone”. That means you can provide a slim, standalone version of specific features that is not embedded into any existing platform. This allows you to provide some use cases in a very user-friendly way. As an example, you could deploy a “review” feature standalone, so that it can be used by non-technical users without the overhead of an IDE.
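The encapsulation of platform access mentioned above can be sketched as follows. The interface and class names are hypothetical; the idea is that the feature depends only on a narrow interface of its own, and each target platform (Theia, VS Code, or a standalone deployment) contributes a thin adapter:

```typescript
// Narrow, feature-owned abstraction over a platform capability.
interface MessageService {
  info(message: string): void;
}

// The feature depends only on the interface, never on a concrete platform API,
// so it can run inside any IDE or completely standalone.
class ReviewFeature {
  constructor(private messages: MessageService) {}

  approve(change: string): string {
    const result = `change '${change}' approved`;
    this.messages.info(result);
    return result;
  }
}

// Adapter for standalone deployment: just logs to the console,
// letting the feature run without any surrounding IDE or platform.
class ConsoleMessageService implements MessageService {
  info(message: string): void {
    console.log(message);
  }
}
```

A Theia- or VS-Code-hosted version would simply provide a different `MessageService` adapter backed by the platform's own notification mechanism, while the feature code stays untouched.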
Conclusion
In conclusion, developing a good strategy depends on the specific needs of your custom project. Not every pattern will fit, so treat the content of this article as an overview of what worked in other projects and as a catalogue of ideas to check against your project. In a nutshell, the most important best practices are:
- Get clear about the benefits and motivation first, on a per-use-case basis (see also this blog post)
- Prioritize depending on the relative value (i.e. value / cost) and pick low hanging fruits first
- Apply an iterative approach and ideally keep the desktop tool alive to lower the risk
- Reuse existing components, where it makes sense, and ideally single source them between web and desktop
- Prefer standalone components over binding your code strictly to any platform
- Spend the time to develop a mid-term and long-term strategy - then start small to gain your first experiences
After thinking about all those strategic decisions, a good first practical step is building “proof of concept” components (PoCs). These will allow you to evaluate the feasibility and the required effort in more detail. You will also learn and gain experience with the underlying technologies.
Last but not least, you do not have to walk down this road alone. Among other vendors, EclipseSource provides support for tool projects. With our focus on creating tools, we combine experts for web- and cloud-based tools with Eclipse, DSL, IDE and modeling experts. Please have a look at our service offerings for support around Eclipse Theia, web-based tools, or tools in general. Please get in contact with us in case you have any questions or want to learn more about how we can support you.
The following video shows a shorter version of this article, a talk given at EclipseCon 2018: