p2 Metadata Analyzer

Never judge a man until you have walked a mile in his shoes.  This advice can be applied to the art of writing software.  At a high level, software seems so easy, but once you start working on a project, truly understanding the requirements, and writing code under a variety of constraints, the true nature of the beast rears its ugly head.

Throughout the past 6 months I have had the good fortune of working with the p2 team.  I’m not sure if I’ve walked a “mile in their shoes”, but I have read enough bug reports, suggested enough crazy ideas, and been humbled through a number of code reviews that I am starting to understand the complexities of writing and delivering a provisioning system.

I have been working on “The Publisher”, the part of p2 that is responsible for putting things into the repositories.   One of the challenges with the publisher is that people are unlikely to be using the latest version.  A number of bug reports have been filed with the following steps:

  1. D/L the latest RC / IBuild
  2. Connect to site XYZ
  3. Trying to install ABC
  4. Boom — things failed

In many projects this would be a showstopper. In p2, it can be closed as a duplicate of a bug filed last year, or a regression, or a frustration, or a poor job of handling incorrect metadata.  Just because the user was running the latest build doesn’t mean the repository they are installing from was generated with a recent tool.  From the outside the answer is simple: update your tools! (Now walk a mile in the shoes of the release engineer.)

With Galileo fast approaching it has become evident that everyone has a different build story. Some teams are using the metadata generator in their builds.  Some teams are using an old version of the publisher, while others simply export from the UI and take whatever repository Eclipse gives them.  While it would be great if every team switched their builds to use the RC3 publisher (the build just finished), I know this isn’t going to happen.  Build scripts are too hard to get right and there are too many other problems left to fix. This is especially true when things “appear” to work.

To help determine if there are any problems with the metadata generated by the Galileo release train, we have written a Metadata Analyzer tool.  The tool scans Galileo (or any repository you give it), and looks for known problems.  In particular, it searches the repositories for problems that have been recently fixed in p2.  While there were a few problems in RC1, I’m happy to report that other than a few missing copyright notices, the Galileo repository is now stable.  If you are interested in the metadata tool, it is currently attached to the following bug report. Once we ship RC4 I’ll try to give this tool a proper home.

The tool is very simple and written using Declarative Services.  In fact, the crux of the tool can be summed up in 9 lines:

  try {
    for (Iterator iter = ius.iterator(); iter.hasNext();) {
      IInstallableUnit iu = (IInstallableUnit) iter.next();
      service.analyzeIU(iu);
    }
    service.postAnalysis(); // Yes, this should be done in a finally block.
  } catch (Exception e) {
    e.printStackTrace();
  }

Each analysis step is simply another service, and that service is called for each InstallableUnit. There is also a pre and post method for setup / teardown of the service.
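To make the contract concrete, here is a minimal sketch of what such a service might look like.  The names `analyzeIU` and `postAnalysis` come from the snippet above; `preAnalysis` and the interface name `IAnalysisService` are my assumptions, and the `IInstallableUnit` stub here is just enough to make the sketch compile (the real type lives in p2's metadata bundles):

```java
// Stub standing in for p2's IInstallableUnit; one method is enough here.
interface IInstallableUnit {
    String getId();
}

// Assumed shape of the analysis-service contract. analyzeIU and
// postAnalysis appear in the driver loop above; preAnalysis is a guess
// based on the "pre and post method for setup / teardown" description.
interface IAnalysisService {
    void preAnalysis();                  // setup, called once before the scan
    void analyzeIU(IInstallableUnit iu); // called once per InstallableUnit
    void postAnalysis();                 // teardown, called once after the scan
}

// A trivial step that just counts IUs, to show the call sequence.
class CountingService implements IAnalysisService {
    int count;
    public void preAnalysis() { count = 0; }
    public void analyzeIU(IInstallableUnit iu) { count++; }
    public void postAnalysis() {
        System.out.println("Analyzed " + count + " IUs");
    }
}
```

The driver calls `preAnalysis` once, then `analyzeIU` for each IU in the repository, then `postAnalysis` once, so each step can accumulate state across the whole scan.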

An example of a service is:

  public void analyzeIU(IInstallableUnit iu) {
    if (iu.getId().contains("feature.jar")) {
      ITouchpointData[] touchpointData = iu.getTouchpointData();
      if (touchpointData.length == 0) {
        System.out.println("[ERROR] No unzip touchpoint for: " + iu.getId());
      } else {
        boolean found = false;
        for (int i = 0; i < touchpointData.length; i++) {
          ITouchpointInstruction instruction = touchpointData[i].getInstruction("zipped");
          if (instruction != null && instruction.getBody().equals("true"))
            found = true;
        }
        if (!found)
          System.out.println("[ERROR] No unzip touchpoint for: " + iu.getId());
      }
    }
  }

This service checks that each feature will be “unzipped” when installed.

Because everything is just a service you can take the tool as it is, or add your own analysis steps. If anyone has ideas for other tests we can / should run, please comment on the bug report.
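As a purely hypothetical illustration of an extra step, here is a sketch of a service that flags IUs with no copyright text, in the spirit of the missing copyright notices mentioned above.  Everything here is assumed: the `IInstallableUnit` stub is mine (the real p2 type exposes copyright information through its own API), and no such service ships with the tool:

```java
import java.util.ArrayList;
import java.util.List;

// Stub standing in for p2's IInstallableUnit; the real interface exposes
// copyright data differently, so treat this as a sketch only.
interface IInstallableUnit {
    String getId();
    String getCopyright();
}

// Hypothetical analysis step: collect IUs with missing or empty copyright
// text during the scan, then report them all at the end.
class CopyrightCheckService {
    final List<String> missing = new ArrayList<String>();

    public void preAnalysis() {
        missing.clear();
    }

    public void analyzeIU(IInstallableUnit iu) {
        String copyright = iu.getCopyright();
        if (copyright == null || copyright.trim().isEmpty())
            missing.add(iu.getId());
    }

    public void postAnalysis() {
        for (String id : missing)
            System.out.println("[WARNING] No copyright for: " + id);
    }
}
```

Because the step only reports in `postAnalysis`, it could just as easily summarize ("12 IUs missing copyrights") instead of printing one line per IU.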