µServices and BPM engine architecture

4 July 2014

For almost 2 years now, I’ve been playing around with different ideas about BPM engine architectures, in my “free” time of course, which I haven’t had that often.

The initial idea that I started with was separating execution from implementation in the BPM engine. However, that idea isn’t self-explanatory, so let me explain more about what I mean.

If we look at the movement (migration) of an application from a single (internal) server to a cloud instance, this idea becomes evident: the application code is our implementation, while the execution of the application moves from a controlled environment to a remote environment (the cloud) over which we have no control.

A BPM engine, conceptually, can be split into the following ideas:

  1. The interpretation of a workflow representation and the subsequent construction of an internal representation
  2. The execution, node for node, of the workflow based on the internal representation
  3. The maintenance and persistence of the internal workflow representation

A BPM engine is in this sense very similar to an interpreted language compiler and processor, except for the added responsibility of being able to “restart” the process instance after having paused for human tasks, timers or other asynchronous actions.

The separation of execution from implementation then means that the code to execute a certain node, whether it’s a fork, join, timer or task node, is totally separated from and independent of the code used to parse, interpret and maintain the internal workflow representation. In other words, the engine shouldn’t have to know where it is in the process in order to move forward: it only needs to know what the current node is.
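
To make that boundary concrete, here is a minimal sketch of what it could look like. The interface names are hypothetical and invented purely for illustration; they are not taken from jBPM or any other engine.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: the executor only sees the current node and its data.
// It never touches the parsed process definition or the engine's internal bookkeeping.
interface NodeInstance {
    String type();               // e.g. "fork", "join", "timer", "task"
    Map<String, Object> data();  // whatever this node needs in order to run
}

interface NodeExecutor {
    // Executes exactly one node and reports which node(s) should be triggered next.
    List<String> execute(NodeInstance current);
}
```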

While this is to some degree self-evident, the idea also lends itself easily to distributed systems.


Recently, I came across an interesting talk by Fred George on Microservices at Baruco 2012. He describes the general idea behind microservices: in short, microservices are 100-line services that are meant to be disposable and (enormously) loosely coupled. Part of the idea behind microservices is that the procedural “god” class disappears, as does what we typically think of as the control flow of a program. Instead, the application becomes an Event Driven Cloud (did I just coin that term? ;D ). I specifically used the term cloud because the idea of a defined application disappears: microservices appear and disappear depending on demand and usage.

A BPM engine based on this architecture can then be used to provide an overview of, or otherwise a translation between, a very varied and populated landscape of microservices and the business knowledge possessed by actual people.

But what we don’t want is a “god” service: in other words, we don’t want a single instance or thread that dictates what happens. In some sense, we’re coming back to one of the core ideas behind BPM: the control flow should not be hidden in the code; it should be decoupled from the code and made visible and highly modifiable.

At this point, we come back to the separation of execution from implementation. What does this translate to in this landscape?

One example is the following. I’m using the word “event” very liberally here: in essence, an “event” is simply a message or packet of information submitted to the next service in line. Of course, I’m ignoring the issue of routing, but I will come back to that after the example (a small code sketch of this flow follows the list):

  1. A “process starter” service parses a workflow representation and produces an event for the first node.
  2. The first workflow node is executed by the appropriate service, which returns an event asking for the next node in the process.
  3. The “process state” service checks off the first node as completed, and submits an event for the following node.
  4. Steps 2 and 3 repeat until the process is completed.
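
As a rough sketch of the flow above: all the names below are made up, and the four steps are collapsed into a single loop purely to make the ordering explicit. In the architecture described, each call would instead be a message handed off to an independent service.

```java
public final class ProcessFlowSketch {

    // The "event": just a packet of information passed from one service to the next.
    static final class Event {
        final String processInstanceId;
        final String nodeId;
        Event(String processInstanceId, String nodeId) {
            this.processInstanceId = processInstanceId;
            this.nodeId = nodeId;
        }
    }

    interface ProcessStarterService {
        // Step 1: parse the workflow representation, emit an event for the first node.
        Event start(String workflowRepresentation);
    }

    interface NodeService {
        // Step 2: execute one node, return an event asking for the next node.
        Event execute(Event event);
    }

    interface ProcessStateService {
        // Step 3: check off the completed node, emit an event for the following node
        // (null once the process instance is finished).
        Event completeAndNext(Event event);
    }

    static void run(String workflow, ProcessStarterService starter,
                    NodeService nodes, ProcessStateService state) {
        Event event = starter.start(workflow);
        while (event != null) {                      // Step 4: repeat until completed.
            Event completed = nodes.execute(event);
            event = state.completeAndNext(completed);
        }
    }
}
```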

There are a couple of advantages that spring to mind here:

  • Extending the process engine is as simple as
    • introducing new services for new node types (a small sketch of this extension point follows the list)
    • introducing new versions of the “process starter” and “process state” services
  • Introducing new versions of the process engine becomes much easier
  • Modifying the process definition of an ongoing process instance becomes a possibility!
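
For the first point, here is a sketch of what such an extension point might look like: supporting a new node type is just one more handler (in the microservice picture, one more independently deployed service) registered under its type. The names and types below are again hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical: each node type maps to its own handler. Adding a node type means
// adding an entry; the parsing and state-keeping code is never touched.
final class NodeHandlerRegistry {
    private final Map<String, UnaryOperator<Map<String, Object>>> handlers = new HashMap<>();

    void register(String nodeType, UnaryOperator<Map<String, Object>> handler) {
        handlers.put(nodeType, handler);
    }

    Map<String, Object> execute(String nodeType, Map<String, Object> event) {
        return handlers.get(nodeType).apply(event);
    }
}
```

A new “timer” node type, for instance, would amount to a single register("timer", …) call here, or, in the distributed version, to deploying one more small service that listens for that node type.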

However, there are drawbacks and challenges:

  • As I mention above, we do run into the added overhead of routing the messages to the correct service.
  • Performance is a little slower here, but then again, we’re doing BPM: performance penalties for BPM are typically found in the execution of specific nodes, not in the BPM engine itself.
    • This is similar to databases, where the choice between a robust database and a cache solution depends on your needs.

In particular, reactive programming (Vert.x) seems to be a paradigm that would lend itself to this approach on a smaller scale (within the same server instance, so to speak), while still allowing it to scale.
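
As a very rough illustration of that direction, here is what one node-type handler might look like on the Vert.x event bus. The addresses and message fields are made up, and the code uses the Vert.x 3-style core API (consumer/send); treat it as a sketch of the shape, not a definitive implementation.

```java
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class TaskNodeService {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // One (hypothetical) address per node type: a new node type is just another
        // consumer, deployable and disposable independently of everything else.
        vertx.eventBus().<JsonObject>consumer("node.task", msg -> {
            JsonObject event = msg.body();
            // ... execute the task node using only the data carried in the event ...

            // Hand control back by notifying the "process state" service, which will
            // check off this node and emit the event for the following one.
            vertx.eventBus().send("process.state.completed",
                    new JsonObject()
                            .put("processInstanceId", event.getString("processInstanceId"))
                            .put("completedNodeId", event.getString("nodeId")));
        });
    }
}
```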


Categories: Cloud, jBPM, Other