In order to finish my work on a persistence layer for Designer, one of the things I need to understand is how Designer interacts with Guvnor. At the moment, Designer basically uses Guvnor for persistence: everything that you modify, create and save in Designer is saved in the Guvnor instance that Designer runs in.
However, there’s been more and more interest in being able to run Designer standalone: running Designer independently, without a ‘backing’ Guvnor instance. What I’m working on is inserting a persistence “layer” into the Designer architecture so that users can choose whether to use Guvnor for persistence, a standalone persistence layer (such as a database), or some combination of the two.
But in order to do that work, it’s important to understand exactly how Guvnor interacts with Designer: what does Guvnor tell Designer and what does Designer store in Guvnor, and how and when does the communication about this happen?
And now I can explain that interaction to you.
Let’s start at the very beginning: you’ve installed Guvnor and Designer on your application server instance and everything is running. You have this brilliant idea for a new BPMN2 process and so you click, click around to create a new BPMN2 asset and in doing so, open Designer in a new IFrame within Guvnor.
Now, when you open Designer, if I understand correctly, Designer takes a bunch of the standard, default “assets” that it will use later and goes ahead and stores them in Guvnor. Some of these are stored in the global area and others are stored in the particular package that your asset will belong to. But how does Designer actually know the package and asset name of what it’s editing?
Let me focus on a detail here: when you actually open Designer in Guvnor, you’ll be opening a specific URL that will look something like this:
This results in the UUID that you see after AssetEditorPlace: being sent to Designer (on the server side). Unfortunately, Guvnor doesn’t have a method for looking up an asset based on its UUID, so Designer needs to figure out the package and asset name of the asset it’s working with. Designer needs this information in order to interact with Guvnor. That means Designer does the following:
- Designer first requests the list of all packages available in Guvnor.
- After that, Designer then requests the list of all assets in each package.
- Then Designer searches each list of assets until it finds the UUID it’s been given.
- This way, it can figure out what the package name and asset name (title) is of the asset (process) it’s editing.
Naturally, I didn’t believe Tiho at first when he said this. My second reaction was to submit a Jira to fix this (GUVNOR-1951). I’ll make this better as soon as I finish this persistence stuff (or maybe as part of it.. hmm. Depends on how quickly Guvnor adds the needed feature.)
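To make the above concrete, here’s a rough sketch of that lookup in Java. The GuvnorClient interface and its methods are stand-ins I made up for this illustration, not the actual Guvnor API:

```java
import java.util.List;

// Hypothetical sketch of the lookup Designer has to do today. The
// GuvnorClient interface and its method names are illustrative only.
public class AssetLookup {

    // Minimal stand-in for an asset as Designer sees it: a UUID plus a title.
    record Asset(String uuid, String title) {}

    // Minimal stand-in for whatever client Designer uses to talk to Guvnor.
    interface GuvnorClient {
        List<String> listPackages();                 // step 1: all packages
        List<Asset> listAssets(String packageName);  // step 2: all assets per package
    }

    // Steps 1-3 from the text: scan every package's asset list until the UUID matches.
    static String[] resolve(GuvnorClient guvnor, String uuid) {
        for (String pkg : guvnor.listPackages()) {
            for (Asset asset : guvnor.listAssets(pkg)) {
                if (asset.uuid().equals(uuid)) {
                    return new String[] { pkg, asset.title() }; // package name + asset title
                }
            }
        }
        return null; // UUID unknown to this Guvnor instance
    }
}
```

You can see why this deserves a Jira: the cost of opening one asset grows with the total number of assets in the repository.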
In any case, at this point, you have your blank canvas in Designer and Designer knows where it can store its stuff in Guvnor.
You go ahead and create your brilliant BPMN2 process with the help of all of Designer’s awesomely helpful features.
And then you need to save the process: what next?
This here on the right is a screen print of a find in Tiho’s IntelliJ IDE, showing the Guvnor (GWT) Java class where the code for this call lives. The call shown is ORYX.EDITOR.getSerializedJSON(), in case you don’t have superhuman vision and can’t read the text in the picture.
Again, for those of you without superhuman vision, the screen print fragment above is the Guvnor GWT code that makes sure that the retrieved JSON is translated to BPMN2.
And that’s how it all works!
Of course, I’m no GWT expert, so I might have glossed over or incorrectly reported some details — please do let me know what you don’t understand or if I made any mistakes (that means you too, Tiho :) ).
Regardless, the above summarizes most of the interaction between Guvnor and Designer. The idea with the persistence layer I’m adding to Designer is that much of this logic in Designer will be centralized. Having the logic in a central place in the code in Designer will then allow us to expose a choice to the user (details pending) on where and how he or she wants to store the process and associated asset data.
Lastly, there are definitely opportunities to improve the performance of this logic. For example, using Infinispan as a server-side cache, even when Designer is used with Guvnor, is an idea that occurred to me, although I haven’t thought enough yet about whether or not it’s a good idea. We’ll see what else I run into as I get further into this..
I recently learned about the Event Sourcing (and CQRS) design patterns. Unfortunately, reading through Martin Fowler’s bliki definition just didn’t help. What helped the most were two things:
CQRS relies on Event Sourcing in order to work.
Event Sourcing requires the following:
- You have an event store in which you store events.
- You have Aggregate Roots, which are a way of structuring your data so that you can always link the fields, attributes, and other associated data back to the underlying “root” data structure. For example, the ‘pages’ in a ‘book’ would never change without the ‘book’ data structure knowing about it.
Event Sourcing means that every time you change any data — any state, in fact, which also happens to be data — you generate an event.
Matt’s contact manager example application does just that: in all of the methods which change any data, an event is 1. created, 2. filled with the new data, and 3. added to the event store. It’s a little more complicated than that, but only because you also have to take into account the transactional logic necessary when changing data or state.
What’s interesting about this is that you can then recreate state by “replaying” the events, from the beginning if you have to. Replaying the events means that all of the incremental changes made are reapplied, after which you get your current state back!
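A minimal sketch of that loop, with a trivial integer counter standing in for real domain state (this is my own toy example, not Matt’s contact manager):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal event-sourcing sketch: every state change is recorded as an
// event, and current state can be rebuilt at any time by replaying the
// event log from the start.
public class EventSourcedCounter {

    // The "event": here just a delta applied to an integer counter.
    record CounterChanged(int delta) {}

    private final List<CounterChanged> eventStore = new ArrayList<>();
    private int value = 0;

    // 1. create the event, 2. fill it with the new data, 3. append it to the store.
    public void add(int delta) {
        CounterChanged event = new CounterChanged(delta);
        eventStore.add(event);
        apply(event);
    }

    private void apply(CounterChanged event) { value += event.delta(); }

    public int value() { return value; }

    // Replay: discard current state and reapply every recorded event in order.
    public int replay() {
        value = 0;
        for (CounterChanged event : eventStore) {
            apply(event);
        }
        return value;
    }
}
```

The transactional logic I mentioned is deliberately left out here; in a real system the event append and the state change have to succeed or fail together.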
CQRS is a design pattern that makes distributed and concurrent systems easier to design and maintain. The diagram above illustrates the problem particularly well: The user decides what she wants, resulting in a change being acted out on the state. That change then is also translated into an event which updates the report, which the user can then query if she’s curious.
By separating your “write” data model from your “read” data model you can respond to queries much more quickly and you can also make sure to use costly database transactions for operations that need it, for example. You do then run the risk that the data you query is not 100% accurate with regards to when it was queried: your “read” data model is always updated after your “write” data model.
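Here’s a toy sketch of that separation, with every name invented for the example: commands go to a write model that only records events, and a projection feeds those events into a separate read model that answers queries. Until the projection runs, the read side lags behind, which is exactly the accuracy trade-off described above:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy CQRS sketch: the write side records changes as events; a projection
// applies those events to a separate read model, which queries hit directly.
public class CqrsSketch {

    record ItemRenamed(String itemId, String newName) {}

    // Write side: accepts commands and appends events (serves no queries).
    static class WriteModel {
        final List<ItemRenamed> events = new ArrayList<>();
        void rename(String itemId, String newName) {
            // In a real system this append would happen inside a transaction.
            events.add(new ItemRenamed(itemId, newName));
        }
    }

    // Read side: a denormalized map kept up to date by consuming events.
    // It is stale until the projection catches up with the event log.
    static class ReadModel {
        final Map<String, String> namesById = new HashMap<>();
        void project(ItemRenamed event) { namesById.put(event.itemId(), event.newName()); }
        String query(String itemId) { return namesById.get(itemId); }
    }
}
```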
Why am I writing about this? Well, CQRS certainly doesn’t really apply to my work, but Event Sourcing does. Being able to rewind (and re-forward) state is something that I think a BPM engine should do. I’m also curious if Event Sourcing could help when architecting any BPM engines that deal with ACM.
I’m waiting for Eclipse to start up again, so I’ll post a quick note about the JTA manager lookup mechanism that’s changed in Hibernate 4 and AS7.
Those of you who’ve read the How do I migrate my application from AS5 or AS6 to AS7 guide on the JBoss AS 7 documentation site have probably come across the following quote:
Configure the datasource for Hibernate or JPA
If your application uses JPA and currently bundles the Hibernate JARs, you may want to use the Hibernate that is included with AS7. You should remove the Hibernate jars from your application. You should also remove the “hibernate.transaction.manager_lookup_class” property from your persistence.xml as this is not needed.
Unfortunately, while you may not need the hibernate.transaction.manager_lookup_class property, you apparently do need a property.
If you don’t set any property (with regards to a JTA transaction manager), you’ll end up getting a stack trace that looks like this:
java.lang.NullPointerException
    at org.hibernate.engine.transaction.internal.jta.JtaStatusHelper.getStatus(JtaStatusHelper.java:73)
    at org.hibernate.engine.transaction.internal.jta.JtaStatusHelper.isActive(JtaStatusHelper.java:115)
    at org.hibernate.engine.transaction.internal.jta.CMTTransaction.join(CMTTransaction.java:149)
    at org.hibernate.ejb.AbstractEntityManagerImpl.joinTransaction(AbstractEntityManagerImpl.java:1208)
If you even go as far as googling the above stack trace, you’ll run into HHH-7109, which then mentions the hibernate.transaction.jta.platform property. Oh, except that it’s “a new, undocumented property”. Great..
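In other words, on AS7 you’ll want something like the following in your persistence.xml. The unit name and datasource here are just placeholders, and the platform class name is the Hibernate 4.0/4.1 one, so double-check it against the Hibernate version you’re actually using:

```xml
<persistence-unit name="my-unit" transaction-type="JTA">
  <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
  <properties>
    <!-- Replaces the removed hibernate.transaction.manager_lookup_class
         property. Class name as of Hibernate 4.0/4.1; it may move in
         later Hibernate versions. -->
    <property name="hibernate.transaction.jta.platform"
              value="org.hibernate.service.jta.platform.internal.JBossAppServerJtaPlatform"/>
  </properties>
</persistence-unit>
```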
There is, in fact, a whole chapter on transactions, for developers, here.
One of the problems with running jBPM on jBoss AS7 was that there was a fair amount of (re)configuration necessary to get everything working.
This was enough of a problem that Maciej and I worked together last week to create assembly files (used by the maven-assembly-plugin) that will create wars that run on EE6 servers, with an emphasis on AS7 of course.
The main changes are as follows:
- The persistence configurations for jBPM are JPA 1; in the EE6 wars, we’ve added JPA 2 based persistence configurations for both the human-task war and the jbpm-console (server) war.
- The Hibernate 3 dependencies have been eliminated from the EE6 wars; in fact, there are no Hibernate (or other ORM) dependencies in the war at all. That works well with AS7, but may prove a problem with other EE6 application servers (GlassFish, WebSphere). I think that once the Hibernate 4 framework goes through another couple of versions and is more stable, we can add those dependencies to the wars in order to make sure that they work on all possible application servers.
- Unfortunately, Hibernate 3 has a lot of transitive dependencies that were also being included in the wars, mostly dependencies introduced via the dom4j dependency. These dependencies have also been removed from the EE6 wars.
The new wars have the same name as the “normal” Java EE 5 wars, except that they have “EE6” as the maven classifier. That means that their names are something similar to the following:
I had this picture lying around (in a git branch) and I figured I might as well make it available to others. It might not be totally accurate anymore with regards to the methods, but it is an accurate depiction of the framework.
What you see here is how persistence is structured in human-task: to start with, we have a TaskService instance. When a human-task server or service instance gets a request (via a method call or message) to perform an operation, the TaskService instance creates a TaskServiceSession. The TaskServiceSession is our “request session”: it encapsulates the logic state and persistence state needed when handling the request. By logic state, I mean the variables and retrieved information needed to handle the task operation, and by persistence state, I mean the persistence context (entity manager) and transaction control state.
Actually, in order to separate the persistence logic from the task logic, the persistence logic has been put into the TaskPersistenceManager. That makes it easier to maintain (I hope?) and, at least for me, easier to read.
So far I’ve described the left hand side of the diagram:
The upper right hand side describes how the TaskServiceSession instance is created: via a Factory, of course. However, we wanted to provide Spring compatibility, so we have two implementations: one for “normal” situations, and one for using the human-task code with Spring. When we use Spring, we create a Spring bean using the TaskSessionSpringFactoryImpl and inject that into our TaskService bean. Otherwise, the default is to use the (normal) factory implementation.
Transactions are also a concern that can differ: on the bottom right hand side, you’ll see we have 3 implementations of the TaskTransactionManager class: one for JTA, one for Local transactions and one for Spring transactions.
If you’re wondering what Spring transactions are, well, Spring provides an abstraction around local or JTA transactions. For example, Spring ‘local’ transactions actually have a synchronization point, even though normal local transactions don’t support that functionality.
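The structure on the bottom right can be sketched roughly like this. The TaskTransactionManager name comes from the diagram; the method names, the status-tracking implementation, and the doTaskOperation helper are all mine, just to show how the task logic can stay ignorant of which transaction flavor is in play:

```java
// Hugely simplified sketch of the transaction abstraction: one interface,
// environment-specific implementations behind it. These are NOT the real
// jBPM human-task signatures, just an illustration of the design.
public class TransactionDemo {

    interface TaskTransactionManager {
        void begin();
        void commit();
        void rollback();
    }

    // Stand-in for the "local transactions" flavor: it just tracks its
    // state, the way a real one would drive entityManager.getTransaction().
    static class LocalTaskTransactionManager implements TaskTransactionManager {
        enum Status { NONE, ACTIVE, COMMITTED, ROLLED_BACK }
        public Status status = Status.NONE;

        public void begin() { status = Status.ACTIVE; }
        public void commit() { status = Status.COMMITTED; }
        public void rollback() { status = Status.ROLLED_BACK; }
    }

    // The session code only sees the interface, so swapping in a JTA or
    // Spring implementation doesn't touch the task logic at all.
    static String doTaskOperation(TaskTransactionManager txm) {
        txm.begin();
        try {
            String result = "operation done"; // persistence work would happen here
            txm.commit();
            return result;
        } catch (RuntimeException e) {
            txm.rollback();
            throw e;
        }
    }
}
```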
Well, that’s about everything, I guess!
When I was in college, I saw the movie “The Matrix” a couple times. I only watched it 3 times maybe, but I never watch movies multiple times (Okay, except for my favorite movie ever, but that’s another story).
In any case, I would get so psyched to code after watching that. When I had a project I needed to code for school that I just wasn’t enjoying that much — or when I just wasn’t motivated to work, I’d watch the Matrix and it would just psyche me up to build the worlds I was building in my code.
I think it was the sense of discovering and creating your own worlds that “The Matrix” conveys — the way it brought me in touch with that feeling in myself, and how I’ve always enjoyed that.
The (“leaked”) Valve handbook does that for me now — if I had theoretically read it, of course. But if I had read it, I would say that it is infused with that joy of creating. And I can’t help but be contaminated by that when I read it.
Hey you.. listen up!
I know, I know, I also felt the pain. Git: “Huuhh?? I have to commit and push? Uh.. And what’s the difference between git fetch and git pull?”
I know. But here’s something that will help: SVN and CVS are CVCSs: Centralized Version Control Systems. Git is much closer to a file system. And please, don’t use Git like you used SVN or CVS.
The picture on the left is from an interesting talk on language development from the creator of Clojure, Rich Hickey. You can find the talk here.
But it also (coincidentally) has a decent representation of how the Git ‘file’ system works. We can look at the two “trees” in this picture as branches in a Git repository.
However, what I really want you to take away from the minutes spent reading this is that you need to stop using Git as if it were SVN or CVS. You need to stop using merge unless you want to shout at everyone who looks at that merge commit later: “YES, I MERGED STUFF HERE!”
This is the leap that Git has achieved: Git has made commits into a form of documentation. Commits with Git are so transparent and easy to manage and follow that everyone sees the commits — and wants to look at them.
However, most people use merge because they run into the following:
My teammate/colleague just committed something and I can no longer push to the repository!
So what do we do? What we do not do is merge. Okay?
The first thing we do is copy the hash of the commit we’ve made. Now that the changes have been committed, that commit exists on the “Git filesystem”, regardless of whether or not it’s part of a branch.
$ git commit -m "Added tests for new Fromungulator logic"
[master 0ca7df5] Added tests for new Fromungulator logic
 4 files changed, 3 insertions(+), 1 deletions(-)
0ca7df5 is of course your commit hash — or, if we’re thinking in terms of the Git file system, it’s the reference to the “git file” that contains your commit. Of course, if your colleague went ahead and pushed a commit to the repository before you could, you’ll see this:
$ git push origin master
To firstname.lastname@example.org:mrietveld/frunubucation.git
 ! [rejected]        master -> master (non-fast-forward)
error: failed to push some refs to 'email@example.com:mrietveld/frunubucation.git'
To prevent you from losing history, non-fast-forward updates were rejected
Merge the remote changes (e.g. 'git pull') before pushing again.
See the 'Note about fast-forwards' section of 'git push --help' for details.
They lie.. Do not, I repeat, do not then go do this:
git pull origin master
(unless you’ve setup your branch with auto-rebasing. There’s a funny post about that here.)
There are lots of ways to avoid merge bubbles, but the fastest is probably to do the following:
git fetch origin
git rebase origin/master
Rebase! Man, I love git! Rebase is roughly equivalent to doing this:
git reset --hard HEAD^1
git merge --ff origin/master
git cherry-pick 0ca7df5
And once you’re done, you’ll want to push that commit to the origin repository:
git push origin master
Why do we not want to cause merge bubbles? Because:
- Merge bubbles make it hard to determine which commit caused which change in the code.
- Merge bubbles make it hard to retroactively fork off a branch from a different commit.
The first reason, determining which commit caused what, is really enough, though. That’s one of the main reasons you’re using a versioning system!