
Archive for the ‘jBPM’ Category

µServices and BPM engine architecture

4 July 2014

For almost 2 years now, I've been playing around with different ideas about BPM engine architectures — in my "free" time, of course, of which there wasn't much.

The initial idea that I started with was separating execution from implementation in the BPM engine. However, that idea isn’t self-explanatory, so let me explain more about what I mean.

If we look at the movement (migration) of an application from a single (internal) server to a cloud instance, this idea is evident: the application code is our implementation while the execution of the application is moving from a controlled environment to a remote environment (the cloud) that we have no control over.

A BPM engine, conceptually, can be split into the following ideas:

  1. The interpretation of a workflow representation and the subsequent construction of an internal representation
  2. The execution, node for node, of the workflow based on the internal representation
  3. The maintenance and persistence of the internal workflow representation

A BPM engine is in this sense very similar to an interpreted language compiler and processor, except for the added responsibility of being able to “restart” the process instance after having paused for human tasks, timers or other asynchronous actions.

The separation of execution from implementation then ends up meaning that the code to execute a certain node, whether it's a fork, join, timer or task node, is totally separate and independent of the code used to parse, interpret and maintain the internal workflow representation. In other words, the engine shouldn't have to know where it is in the process in order to move forward: it only needs to know what the current node is.

While this is to some degree self-evident, the idea also lends itself easily to distributed systems.
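
To make the idea a little more concrete, here is a minimal sketch — the names and interfaces are made up for illustration, this is not the actual jBPM API — of an engine loop that only knows the identity of the current node and delegates all behaviour to a handler registered for the node's type:

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: execution is delegated per node type, while the
// "engine" itself only tracks which node is current.
interface NodeHandler {
    // Executes the node and returns the id of the next node, or null if the
    // process instance should pause (human task, timer, other async action).
    String execute(String nodeId, Map<String, Object> processVariables);
}

class MinimalEngineSketch {
    private final Map<String, NodeHandler> handlersByNodeType = new HashMap<String, NodeHandler>();
    private final Map<String, String> nodeTypeByNodeId; // from the parsed workflow representation

    MinimalEngineSketch(Map<String, String> nodeTypeByNodeId) {
        this.nodeTypeByNodeId = nodeTypeByNodeId;
    }

    void register(String nodeType, NodeHandler handler) {
        handlersByNodeType.put(nodeType, handler);
    }

    // The engine can (re)start from any node: it doesn't need to know how it
    // got there, only what the current node is. (Assumes a handler has been
    // registered for every node type in the workflow.)
    void continueFrom(String currentNodeId, Map<String, Object> processVariables) {
        String nodeId = currentNodeId;
        while (nodeId != null) {
            NodeHandler handler = handlersByNodeType.get(nodeTypeByNodeId.get(nodeId));
            nodeId = handler.execute(nodeId, processVariables);
        }
    }
}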


Recently, I came across an interesting talk by Fred George on Microservices at Baruco 2012. He describes the general idea behind Microservices: in short, microservices are 100-line services that are meant to be disposable and (enormously) loosely coupled. Part of the idea behind microservices is that the procedural "god" class disappears, as well as what we typically think of when we think of the control flow of a program. Instead, the application becomes an Event Driven Cloud (did I just coin that term? ;D ). I specifically used the term cloud because the idea of a defined application disappears: microservices appear and disappear depending on demand and usage.

A BPM engine based on this architecture can then be used to help provide an overview of, or otherwise a translation between, a very varied and populated landscape of microservices and the business knowledge possessed by actual people.

But what we don't want is a god class or service: in other words, we don't want a single instance or thread that dictates what happens. In some sense, we're coming back to one of the core ideas behind BPM: the control flow should not be hidden in the code, it should be decoupled from the code and made visible and highly modifiable.

At this point, we come back to the separation of execution from implementation. What does this translate to in this landscape?

One example is the following. In this example, I’m using the word “event” very liberally: in essence, the “event” here is a message or packet of information to be submitted to the following service in the example. Of course, I’m ignoring the issue of routing here, but I will come back to that after the example:

  1. A "process starter" service parses a workflow representation and produces an event for the first node.
  2. The first workflow node is executed by the appropriate service, which returns an event asking for the next node in the process.
  3. The “process state” service checks off the first node as completed, and submits an event for the following node.
  4. Steps 2 and 3 repeat until the process is completed.
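
As a very rough sketch of those steps in code — everything here (class names, event types, the in-memory queue) is made up for illustration and isn't any existing jBPM API; in a real microservice landscape each branch would be a separate service and the queue would be a message broker:

import java.util.ArrayDeque;
import java.util.Deque;

// Toy, in-memory illustration of the event flow described above.
public class EventFlowSketch {

    static class ProcessEvent {
        final String type;               // EXECUTE_NODE or NODE_COMPLETED
        final String processInstanceId;
        final String nodeId;
        ProcessEvent(String type, String processInstanceId, String nodeId) {
            this.type = type;
            this.processInstanceId = processInstanceId;
            this.nodeId = nodeId;
        }
    }

    public static void main(String[] args) {
        Deque<ProcessEvent> bus = new ArrayDeque<ProcessEvent>();

        // 1. The "process starter" service parses the definition and emits an event for the first node.
        bus.add(new ProcessEvent("EXECUTE_NODE", "instance-1", "start"));

        while (!bus.isEmpty()) {
            ProcessEvent event = bus.poll();
            if ("EXECUTE_NODE".equals(event.type)) {
                // 2. A node-type-specific service executes the node and asks for the next one.
                System.out.println("executing node " + event.nodeId);
                bus.add(new ProcessEvent("NODE_COMPLETED", event.processInstanceId, event.nodeId));
            } else if ("NODE_COMPLETED".equals(event.type)) {
                // 3. The "process state" service checks the node off and emits the following node.
                String next = nextNode(event.nodeId);
                if (next != null) {
                    bus.add(new ProcessEvent("EXECUTE_NODE", event.processInstanceId, next));
                }
            }
        }
        // 4. The loop above repeats steps 2 and 3 until the process is completed.
    }

    // Stand-in for the internal workflow representation maintained by the "process state" service.
    static String nextNode(String nodeId) {
        if ("start".equals(nodeId)) return "task";
        if ("task".equals(nodeId)) return "end";
        return null; // process completed
    }
}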

There are a couple of advantages that spring to mind here:

  • Extending the process engine is as simple as
    • introducing new services for new node types
    • introducing new versions of the “process starter” and “process state” services
  • Introducing new versions of the process engine becomes much easier
  • Modifying the process definition of an ongoing process instance becomes a possibility!

However, there are drawbacks and challenges:

  • As I mention above, we do run into the added overhead of routing the messages to the correct service.
  • Performance is a little slower here, but then again, we’re doing BPM: performance penalties for BPM are typically found in the execution of specific nodes, not in the BPM engine itself.
    • This is similar to databases, where the choice between a robust database and a cache solution depends on your needs.

In particular, reactive programming (vert.x) seems to be a paradigm that would lend itself to this approach on a smaller scale (within the same server instance, so to speak), while still allowing the approach to scale out.

 

 

 

Categories: Cloud, jBPM, Other

New REST API for jBPM 6

24 October 2013

Most of the team has been working very hard for the last couple of months on the Drools/jBPM/OptaPlanner 6.0 release. One thing that's changed with this release is that the umbrella project has been given a new name: KIE. You can find more about that elsewhere.

I wanted to quickly introduce the new REST API — and ask for feedback, should you be so inclined. Some of the community has been kind enough to already submit JIRAs with suggestions, which is great!

I’ve been documenting the REST API here: https://github.com/droolsjbpm/droolsjbpm-integration/wiki/Rest-API

If you use the (new) jbpm-console war or the kie-wb war, the REST API is available via those wars.

Again, if you have any ideas or suggestions, please feel free to leave a comment or submit a JIRA.

Thanks!

Categories: jBPM

Persistence testing in jBPM

27 October 2012

In the last year or so, I’ve started 3 different persistence-related testing initiatives for jBPM. This is a quick summary of what’s already been created and what the plans are going forward.

Maven property injection cross-database testing

 
I've described the basic mechanisms of this testing infrastructure here:
jBPM 5 Database testing wiki page

The main jBPM project pom contains maven properties that are injected into resource files that are placed in (test) resource directories for which maven (property) filtering has been turned on.

In turn, when maven processes the test resources, the properties in the filtered files are replaced with the values placed in the pom. The resource files filtered are mostly a persistence.xml file and a datasource.properties file used to create data sources.
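
As a rough illustration of the mechanism — the property names follow the ${maven.jdbc.url} style mentioned below, but this is a simplified sketch rather than the actual project configuration; see the wiki page above for the real setup:

<!-- pom.xml (simplified): the values to inject -->
<properties>
  <maven.jdbc.url>jdbc:h2:mem:jbpm-test</maven.jdbc.url>
  <maven.jdbc.driver>org.h2.Driver</maven.jdbc.driver>
</properties>

<build>
  <testResources>
    <testResource>
      <directory>src/test/filtered-resources</directory>
      <filtering>true</filtering>
    </testResource>
  </testResources>
</build>

# src/test/filtered-resources/datasource.properties: the placeholders that get replaced
driverClassName=${maven.jdbc.driver}
url=${maven.jdbc.url}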

Unfortunately, there are still a couple of problems:

  • To start with, this is only really completely implemented in the drools-persistence-jpa and jbpm-persistence-jpa modules. It needs to be fully implemented or “turned on” in a number of other important modules, such as jbpm-bam, jbpm-bpmn, jbpm-human-task-core and jbpm-test.
  • Some developers have encountered problems in their environments due to the fact that the persistence.xml file (in src/test/filtered-resources) contains properties (like ${maven.jdbc.url}) instead of real values. I’m not sure what’s going on there, but I’d like to fix that problem if I can.
  • Lastly, this infrastructure doesn’t help us test with different ORM frameworks. The problem here is that it’s practically impossible to test using a specific ORM framework (for example Hibernate 3.3) while the other ORM frameworks (Hibernate 4.1, OpenJPA) are in the classpath. But if you want to test with a specific ORM framework, the first thing you need to do is to have it in the classpath. So how do I test against multiple (“specific”) ORM frameworks?
    • While maven profiles are an answer to this, maven profiles add lots of complexity to a setup. I don't want to make a maven profile for every different ORM framework that we need to test against; instead, I'd like to just make a maven profile that turns on the cross-database and cross-ORM framework testing.
  • In the coming months, it looks like we'll try to make sure that this framework is turned on and executed in all of the modules where it's applicable. While I expect that I'll eventually remove the maven filtering being used here, I think I'll probably try to keep the property-based control: being able to run the test suite on a different database simply by injecting (settings.xml) or otherwise changing (in the pom.xml) properties is valuable.

Backwards compatible BLOB testing

One new thing that I encountered when learning the jBPM 5 (and Drools) code is the use of BLOBs in the database to store session and process instance state (and work item info). One of the great advantages of storing process instance state in a BLOB is that it avoids the complicated nest of tables that a BPM engine would otherwise bring with it — see the jBPM 3 database schema for a good example. :/

Using a BLOB also has the advantage that changes can be made to the underlying data model without having an impact on the database schema that the (end) user will use while working with jBPM 5. This, more than any other reason, is really _the_ reason to use BLOBs. I still like to think about how much easier that makes the work of developers working on Drools and jBPM. (Considering the complexity of the Drools RETE network, it makes even more sense to use a BLOB.)

However, when the serialization (or "marshalling") code was originally developed, the choice was made to use a hand-crafted serialization algorithm instead of relying on pure Java serialization. This was done with good reason: Java serialization is slow, and the hand-crafted serialization algorithm was a lot faster.

At the beginning of this year, for reasons having to do with the evolution of both Drools and jBPM 5, Edson (Tirelli) replaced all of the serialization code for Drools and jBPM 5 with protobuf. That was definitely an improvement, mostly because protobuf makes forward and backwards compatibility way easier.

However, it made some of the "marshalling testing framework" work I had done up until that moment unusable: the marshalling testing framework relies on databases generated by previous versions in order to test for backwards and forwards compatibility. Switching to protobuf unfortunately broke all existing backwards compatibility, albeit with the benefit that backwards compatibility from then on would be ensured.

At the moment, what needs to be done is the following:

  • The databases for the different branches (5.2, 5.3, 5.4 and now 6.0) of jBPM need to be regenerated.
  • The code for the marshalling framework (as well as other persistence testing classes) needs to be cleaned up a little bit.

Multiple ORM framework (ShrinkWrap based) testing

I ran into a specific Hibernate 4 problem last month, and it was unfortunately something that could also have affected compatibility with Hibernate 3. This meant that I needed to run the test suite multiple times, with Hibernate 3 and then with Hibernate 4, to check certain issues.

As a result, I ended up building the framework in the org.jbpm.dependencies [github link] package. This framework creates a jar containing the tests to be run, with the specified database settings (persistence.xml) as well as the specific ORM framework that the tests should be run with. It then runs the tests in the jar.
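
A rough sketch of that idea — simplified, and not the actual org.jbpm.dependencies code — is to use ShrinkWrap to build an archive containing the test class plus the persistence.xml for the database/ORM combination under test:

import java.io.File;

import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.exporter.ZipExporter;
import org.jboss.shrinkwrap.api.spec.JavaArchive;

// Sketch only: packages a test class together with a specific persistence.xml.
public class OrmSpecificTestJarSketch {

    public static File buildTestJar(Class<?> testClass, String persistenceXmlResource) {
        JavaArchive jar = ShrinkWrap.create(JavaArchive.class, "persistence-tests.jar")
                .addClass(testClass)
                // e.g. a "hibernate3" or "hibernate4" variant of persistence.xml
                .addAsManifestResource(persistenceXmlResource, "persistence.xml");

        File jarFile = new File("target/persistence-tests.jar");
        jar.as(ZipExporter.class).exportTo(jarFile, true);
        return jarFile;
    }
}

The tests in the exported jar are then run with a classloader whose classpath contains that jar plus only the ORM framework being tested, which avoids the "all ORM frameworks on the classpath at once" problem described above.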

This is eventually the framework that I want to see used within all persistence-related jBPM modules. The code itself should probably be moved to the jbpm-persistence-jpa module.

Categories: jBPM, Persistence

How Guvnor and Designer talk to each other

4 October 2012

I just spent a good hour talking with Tihomir about Designer.

Specifically, he explained how the interaction between Designer and Guvnor works.

In order to finish my work on a persistence layer for Designer, one of the things I need to understand is how Designer interacts with Guvnor. At the moment, Designer basically uses Guvnor for persistence: everything that you modify, create and save in Designer is saved in the Guvnor instance that Designer runs in.

However, there's been more and more interest in being able to run Designer standalone: running Designer independently and without a 'backing' Guvnor instance. What I'm working on is inserting a persistence "layer" in the Designer architecture so that users can choose whether to use Guvnor for this persistence or whether to use a standalone persistence layer for this (such as a database) — or some combination of the two.

But in order to do that work, it’s important to understand exactly how Guvnor interacts with Designer: what does Guvnor tell Designer and what does Designer store in Guvnor, and how and when does the communication about this happen?

And now I can explain that interaction to you.

Let’s start at the very beginning: you’ve installed Guvnor and Designer on your application server instance and everything is running. You have this brilliant idea for a new BPMN2 process and so you click, click around to create a new BPMN2 asset and in doing so, open Designer in a new IFrame within Guvnor.

Now, when you open Designer, if I understand correctly, Designer takes a bunch of the standard, default “assets” that it will use later and goes ahead and stores them in Guvnor. Some of these are stored in the global area and others are stored in the particular package that your asset will belong to. But how does Designer actually know what the package and asset name is of what it’s editing?

Let me focus on a detail here: when you actually open Designer in Guvnor, you’ll be opening a specific URL, that will look something like this:

http://localhost:8080/drools-guvnor/org.drools.guvnor.Guvnor/Guvnor.jsp?#AssetEditorPlace:972fc35a-a3cc-430c-b460-71477931ca5b

This results in the UUID that you see after AssetEditorPlace: being sent to Designer (on the server side). Unfortunately, Guvnor doesn't have a method for looking up an asset based on its UUID, so Designer needs to figure out the package and asset name of the asset it's working with by itself. Designer needs this information in order to interact with Guvnor. That means Designer does the following:

List of assets for the defaultPackage package

  • Designer first requests the list of all packages available in Guvnor.
  • After that, Designer then requests the list of all assets in each package.
  • Then Designer keeps searching the lists of assets until it finds the UUID it's been given.
  • This way, it can figure out what the package name and asset name (title) is of the asset (process) it’s editing.
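
In pseudo-Java, the lookup boils down to the following — the helper methods and types here are hypothetical stand-ins for the actual Guvnor requests, not a real API:

import java.util.List;

// Hypothetical sketch of the lookup Designer has to do.
class AssetLookupSketch {

    static class AssetInfo {
        String uuid;
        String title;
    }

    String[] findPackageAndAssetName(String uuid) {
        for (String pkg : listPackages()) {                       // 1. all packages in Guvnor
            for (AssetInfo asset : listAssets(pkg)) {             // 2. all assets in each package
                if (uuid.equals(asset.uuid)) {                    // 3. search until the UUID matches
                    return new String[] { pkg, asset.title };     // 4. package name + asset title
                }
            }
        }
        return null;
    }

    // Stand-ins for the actual requests Designer makes to Guvnor.
    List<String> listPackages() { throw new UnsupportedOperationException("illustration only"); }
    List<AssetInfo> listAssets(String packageName) { throw new UnsupportedOperationException("illustration only"); }
}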

Naturally, I didn't believe Tiho at first when he said this. My second reaction was to submit a Jira to fix this (GUVNOR-1951). I'll make this better as soon as I finish this persistence stuff (or maybe as part of it.. hmm. Depends on how quickly Guvnor adds the needed feature.)

In any case, at this point, you have your blank canvas in Designer and Designer knows where it can store its stuff in Guvnor.

You go ahead and create your brilliant BPMN2 process with the help of all of Designer’s awesomely helpful features.

And then you need to save the process — what next?

Saving the process in Guvnor

Right, you click on a menu in Guvnor, and then it gets complicated. To start with, clicking on “Save Changes” or “Save and Close” in the Guvnor menu calls some client-side JavaScript in Guvnor that then calls some client-side JavaScript from Designer.

The ORYX.EDITOR.getSerializedJSON() call

This here on the right is a screen print of a search in Tiho's IntelliJ IDE, showing the Guvnor (GWT) Java class where the code for this call is. The call shown is ORYX.EDITOR.getSerializedJSON(), in case you don't have superhuman vision and can't read the text in the picture.

This calls JavaScript in Designer that retrieves the JSON model of the BPMN2 process in the canvas. (Designer actually stores the BPMN2 process information on the client side in a JSON data structure, which gets translated to BPMN2 XML on the server side.)

Once the Guvnor JavaScript (client-side) has gotten this JSON representation of the BPMN2 process back, it then sends a request to a Designer servlet that translates the JSON to BPMN2. Guvnor doesn’t really care about JSON — it certainly can’t read it and doesn’t know what to do with it, so it relies on Designer to translate this JSON to BPMN2 that it can store in its repository.

The Guvnor to Designer JSON to BPMN 2 "Translator" call

Again, for those of you without superhuman vision, the screen print fragment above is the Guvnor GWT code that makes sure that the retrieved JSON is translated to BPMN2.

Once the Guvnor client-side JavaScript (derived from Guvnor GWT code) has gotten the BPMN2 back, it then sends that XML back to the Guvnor server-side to be stored in the repository (under the correct package name and asset title).

And that’s how it all works!

Of course, I’m no GWT expert, so I might have glossed over or incorrectly reported some details — please do let me know what you don’t understand or if I made any mistakes (that means you too, Tiho :) ).

Regardless, the above summarizes most of the interaction between Guvnor and Designer. The idea with the persistence layer I'm adding to Designer is that much of this logic in Designer will be centralized. Having the logic in a central place in Designer will then allow us to expose a choice to the user (details pending) of where and how he or she wants to store the process and associated asset data.

Lastly, there are definitely opportunities to improve the performance of this logic. For example, using Infinispan as a server-side cache, even when Designer is used with Guvnor, is an idea that occurred to me, although I haven't thought enough yet about whether or not it's a good idea. We'll see what else I run into as I get further into this..

Categories: jBPM

EE6/AS7 jBPM wars

18 September 2012

One of the problems with running jBPM on JBoss AS7 was that there was a fair amount of (re)configuration necessary to get everything working.

This was enough of a problem that Maciej and I worked together last week to create assembly files (used by the maven-assembly-plugin) that will create wars that run on EE6 servers, with an emphasis on AS7 of course.

The main changes are as follows:

  • The persistence configurations for jBPM are JPA 1 — in the EE6 wars, we've added JPA 2 based persistence configurations for both the human-task war and the jbpm-console (server) war.
  • The Hibernate 3 dependencies have been eliminated from the EE6 wars — in fact, there are no Hibernate (or other ORM) dependencies in the wars. That works well with AS7, but may prove a problem with other EE 6 application servers (GlassFish, WebSphere). I think that once the Hibernate 4 framework goes through another couple of versions and is more stable, we can then add those dependencies into the wars in order to make sure that those wars work on all possible application servers.
  • Unfortunately, Hibernate 3 has a lot of transitive dependencies that were also being included in the wars — mostly dependencies that were introduced via the dom4j dependency. These dependencies were also removed from the EE6 wars.

The new wars have the same name as the “normal” Java EE 5 wars, except that they have “EE6” as the maven classifier. That means that their names are something similar to the following:

  • jbpm-human-task-5.4.0.Final-EE6.war
  • jbpm-gwt-console-server-5.4.0.Final-EE6.war
Categories: jBPM

Quick tour of human-task persistence infrastructure

17 July 2012

Human-Task Persistence Framework UML

I had this picture lying around (in a git branch) and I figured I might as well make it available to others. It might not be totally accurate anymore with regards to the methods, but it is an accurate depiction of the framework.

What you see here is how persistence is structured in human-task: to start with, we have a TaskService instance. When a human-task server or service instance gets a request (via a method call or message) to perform an operation, the TaskService instance creates a TaskServiceSession.

The TaskServiceSession is our "request session": it encapsulates the logic state and persistence state needed when handling the request. By logic state, I mean the variables and retrieved information needed to handle the task operation, and by persistence state, I mean the persistence context (entity manager) and transaction control state.

Actually, in order to separate the persistence logic from the task logic, the persistence logic has been put into the TaskPersistenceManager. That makes it easier to maintain (I hope?) and, at least for me, easier to read.

So far I’ve described the left hand side of the diagram: TaskService, TaskServiceSession and TaskPersistenceManager.

The upper right hand side describes how the TaskServiceSession instance is created: via a Factory, of course. However, we wanted to provide Spring compatibility, so we have two implementations: one for "normal" situations, and one for when using the human-task code with Spring. When we use Spring, we create a spring bean using the TaskSessionSpringFactoryImpl and inject that into our TaskService bean. Otherwise, the default is to use the (normal) TaskSessionFactoryImpl.

Transactions are also a concern that can differ: on the bottom right hand side, you’ll see we have 3 implementations of the TaskTransactionManager class: one for JTA, one for Local transactions and one for Spring transactions.

If you’re wondering what Spring transactions are, well, Spring provides an abstraction around local or JTA transactions. For example, Spring ‘local’ transactions actually have a synchronization point, even though normal local transactions don’t support that functionality.
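
A heavily simplified outline of that structure in code — the class names follow the UML above, but the fields and methods shown here are illustrative, not the actual jBPM human-task signatures:

// Simplified outline of the structure described above.
interface TaskSessionFactory {
    TaskServiceSession createTaskServiceSession();
}

class TaskSessionFactoryImpl implements TaskSessionFactory {        // the "normal" default
    public TaskServiceSession createTaskServiceSession() { return new TaskServiceSession(); }
}

class TaskSessionSpringFactoryImpl implements TaskSessionFactory {  // used when running with Spring
    public TaskServiceSession createTaskServiceSession() { return new TaskServiceSession(); }
}

interface TaskTransactionManager {                                   // JTA, local and Spring impls
    void begin();
    void commit();
    void rollback();
}

class TaskPersistenceManager {
    // wraps the persistence context (entity manager) so the task logic
    // never has to touch persistence directly
}

class TaskServiceSession {
    private TaskPersistenceManager persistenceManager;  // persistence state for this request
    private TaskTransactionManager transactionManager;  // transaction control for this request
}

class TaskService {
    private TaskSessionFactory sessionFactory;

    // One session is created per request (method call or message).
    TaskServiceSession createSession() {
        return sessionFactory.createTaskServiceSession();
    }
}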

Well, that’s about everything, I guess!

Categories: jBPM

JPA 2 with Drools and jBPM

3 April 2012

At the moment, I'm finishing off making Drools/jBPM ready to be able to use JPA 2 and Hibernate 4.x. This post is in some sense a draft for the official documentation — but, again, see the About page for why I write blog posts.

 


Using jBPM with JPA 2

 

If you’re reading this, it’ll be because you want to use Drools or jBPM with JPA 2 and maybe even with a different ORM framework. Awesome! That means you’re using jBPM and that you’re using jBPM the way you want to and are not being forced by the software to make choices you don’t want to. I heartily approve of both.

First off, if you're using JPA 2 with Drools and jBPM, you'll need to change all the instances of "1.0" and ..._1_0.xsd in your persistence.xml to "2.0" and ..._2_0.xsd. Don't forget those schemaLocation values that are off beyond the right edge of your screen.

Second off, you’ll have to modify your persistence.xml and change the mapping for the ProcessInstanceInfo class. In your persistence.xml, you’ll have to remove the following line:

  <mapping-file>META-INF/ProcessInstanceInfo.hbm.xml</mapping-file>

Once that’s done, you’ll have to add a JPA2 mapping for that class to your persistence unit. If you want to, you can just change the following line in your persistence.xml:

  <mapping-file>META-INF/JBPMorm.xml</mapping-file>

to

  <mapping-file>META-INF/JBPMorm-JPA2.xml</mapping-file>

The JBPMorm-JPA2.xml file is included with the jbpm-persistence-jpa jar: if you’re curious about the entity mapping, you can find it there.
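
Put together, the relevant parts of your persistence.xml end up looking roughly like this — the persistence unit name and the rest of the unit's contents will depend on your setup:

  <persistence version="2.0"
               xmlns="http://java.sun.com/xml/ns/persistence"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                                   http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
    <persistence-unit name="org.jbpm.persistence.jpa" transaction-type="JTA">
      <!-- ProcessInstanceInfo.hbm.xml has been removed; the JPA 2 mapping replaces it -->
      <mapping-file>META-INF/JBPMorm-JPA2.xml</mapping-file>
      <!-- classes, data source and properties omitted -->
    </persistence-unit>
  </persistence>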

Lastly, depending on your setup, you might have to change the TransactionManagerLookup class that you’re using. This line:

  <property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.BTMTransactionManagerLookup" />

needs to be changed to:

  <property name="hibernate.transaction.jta.platform" value="org.hibernate.service.jta.platform.internal.BitronixJtaPlatform" />

And that’s it!


Developer bits and ranting

 

The following is mostly explaining how I got to the solution that I did get to. :)

In any case, regarding enabling and testing the JPA 2 functionality in Drools/jBPM, to start with: there are XML metadata overrides. I've known about XML overrides for persistence annotation data for a while, and while I've almost used them to solve some problems, it never really happened. Part of this has to do with the fact that the configuration for jBPM is still evolving.

In the past year, the persistence.xml in jBPM has been extracted from the jars so that users can specify their own persistence units. This means it's also how users will be able to specify JPA 1.0 or 2.0 in their persistence units.

Another hurdle here has been the fact that Drools/jBPM is stuck with Hibernate 3.3.x and 3.4.x jars at the moment — and those versions do not support JPA 2. This means futzing around with dependencies and maven profiles. It’s not that big of a deal but also not that elegant — but I challenge anyone who says programming that gets things done should always be elegant. It’s a goal, not a requirement.

Also, the largest Hibernate 3.3/3.4 compatibility issue is by far the @CollectionOfElements annotation: ahead of its time, but it's unfortunate that Hibernate didn't elect to stay backwards compatible by translating @CollectionOfElements to @ElementCollection behind the scenes once Hibernate became JPA 2 compliant.
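
For reference, the difference is just the annotation on the field — here with a made-up entity, not the actual jBPM class:

import java.util.HashSet;
import java.util.Set;

import javax.persistence.ElementCollection;
import javax.persistence.Entity;
import javax.persistence.Id;

// Made-up example entity. With Hibernate 3.3/3.4 the same field would need the
// Hibernate-specific @org.hibernate.annotations.CollectionOfElements annotation
// instead of the standard JPA 2 @ElementCollection shown here.
@Entity
public class ExampleInfo {

    @Id
    private Long id;

    @ElementCollection
    private Set<String> eventTypes = new HashSet<String>();
}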

So, what I’ve done is the following:

  • I’ve created a hibernate-4 profile in the relevant projects that use persistence (drools-persistence-jpa, jbpm-persistence-jpa, and jbpm-bam, to start with).
  • This hibernate-4 profile replaces the hibernate 3.x dependencies with hibernate 4.x dependencies, and since some hibernate jars changed names between versions, it adds the correct 4.x jars to replace the 3.x jars.
    • This means that, for example, the hibernate-annotations 3.x jar will be there when it's not needed, but that's not that big a deal. (It looks like hibernate-annotations disappeared as soon as Hibernate started supporting JPA 2.0.)
    • Part of what I’m doing is also making sure that no hibernate jars are actually needed in the code — we want jBPM (and Drools) to be ORM framework independent and hard-baking hibernate dependencies in won’t help that.
  • This hibernate-4 profile also uses the maven-replacer-plugin to fix all kinds of things in the persistence.xml and orm.xml files.

Lastly, before I get to the meat of this post (using JPA 2 with Drools/jBPM), I have one more fact (or rant..). Why is the order of attribute types defined in entity-mappings? WHY?!? To give you an illustration, the following mapping is incorrect:

<entity class="forest.bee.hive">
  <element-collection name="honey" target-class="double" />
  <basic name="worker" />
  <version name="queen" />
  <basic name="pupa" />
</entity>

It's invalid because all <basic> elements must come before all <version> elements, which must come before all <element-collection> elements. WTF? I once had to mediate a ridiculously overheated discussion about XSD standards — and I do realize that it's a complicated if arcane topic — but still, wasn't it possible to decouple the order? Okay, maybe it was impractical in terms of the parsing modifications then necessary.

In any case, if you’re writing your own JPA 2 XML entity-mappings, stop, and do it using annotations. Otherwise, if you really must know, the order (as specified in the xsd) is:

  1. description
  2. id or embedded-id
  3. basic
  4. version
  5. many-to-one
  6. one-to-many
  7. one-to-one
  8. many-to-many
  9. element-collection
  10. embedded
  11. transient

What was also interesting were the precise semantics of the metadata-complete attribute present in the <entity> element. If you do a web search on "metadata-complete jpa", you'll quickly learn that metadata-complete=true means that the annotation information for the class is ignored in this case and that the container/persistence-context will only use the XML information.

However, I wanted to do as little work as possible and was curious as to what would happen if metadata-complete=false. And the web was silent.. until I found this little sentence in a Sun blog/technical post about JEE 6:

As is the case for web fragments, you use the <metadata-complete> element in the web.xml file to instruct the web container whether to look for annotations. If you set <metadata-complete> to false or do not specify the <metadata-complete> element in your file, then during deployment, the container must scan annotations as well as web fragments to build the effective metadata for the web application. However, if you set <metadata-complete> to true, the deployment descriptors provide all the configuration information for the web application. In this case, the web container does not search for annotations and web fragments.

This quote is from Introducing the JEE 6 Platform: Part 3, and the bold lettering has been added by me to emphasize my point: if metadata-complete=false, then the container will use both the annotation data and the XML data. That means that if you want to override an annotation with XML information, you must use metadata-complete=true to ensure that the container does not use the annotation metadata. This must be partially why we can get away with using the ProcessInstanceInfo.hbm.xml mapping file and leaving a @PreUpdate annotation in the class.
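
For illustration, an XML override that is meant to win out over the annotations would look something like this — the class and attribute names here are made up:

<entity class="org.example.SomeEntity" metadata-complete="true">
  <attributes>
    <!-- with metadata-complete="true", only these XML mappings are used and
         any mapping annotations on org.example.SomeEntity are ignored -->
    <basic name="name" />
  </attributes>
</entity>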

Lastly, I’d also like to add that I’ve had to do some minor hacking to get all of this done. The problem, in short, is the following:

  1. The main problem is that jBPM 5 contained a class that used the Hibernate 3.x exclusive @CollectionOfElements annotation.
  2. Leaving this in the code meant that we could not compile the code with Hibernate 4 (since 4 doesn’t contain that class).
  3. Which meant using a ProcessInstanceInfo.hbm.xml file — a Hibernate mapping file — to configure the ProcessInstanceInfo class instead. Because the mapping is now in the XML, the code will compile with Hibernate 4.
  4. However, this means we have to map all fields of ProcessInstanceInfo in the ProcessInstanceInfo.hbm.xml file: Hibernate will otherwise unfortunately complain about duplicate (hibernate) mappings if we use both (JPA or Hibernate) annotations and a Hibernate mapping file.
  5. Luckily, we can leave the @PreUpdate annotation in ProcessInstanceInfo, since Hibernate 3.x doesn’t have a simple annotation that is equivalent.
  6. But Hibernate 3.3/3.4 doesn't support byte array blob objects. This means writing a BlobUserType class that implements the Hibernate UserType interface in order to be able to do this: we don't want users to have to change their schemas. Most annoyingly (and in slight contradiction to the quote from Sun above), Hibernate will complain if we use JPA (field) annotations and an hbm.xml file on/for the same class.
  7. When users use JPA 2 (possibly with Hibernate 4), they then need to make sure not to use the ProcessInstanceInfo.hbm.xml and instead to include a JPA 2 entity mapping for the ProcessInstanceInfo class — which is currently commented out in the JBPMorm.xml file.

And that’s what has happened.

 

Categories: jBPM, Persistence