java - Data consistency in XA transactions





I have some experience with a somewhat different environment based on WebLogic JMS and Oracle 11g. In this answer I assume it works exactly the same way; I hope my answer helps you.

In our case there was a "distant" system that had to be notified about different events happening inside the local system. The other system also read from our database, so the use case seems almost identical to your problem, and the sequence of events was exactly the same as yours. On the test systems there was not a single failure. Everyone thought it would work, but some of us doubted whether it was the correct solution. As soon as the software hit production, some of the BPM processes ran unpredictably. So a simple answer to your question: yes, it is possible, and everyone should be aware of it.

Our solution (in my opinion) was not a well-planned one, but we recognised that the small time window between the two commits was breaking the system, so we added some "delay" to the queue (if I remember correctly it was around 1-2 minutes). That was enough for the other commit to finish and for the consumer to read consistent data. In my view it is not the best solution: it does not solve the synchronisation problem (what if an Oracle transaction takes longer than 1-2 minutes?).
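The delay workaround above can be sketched in plain Java with a scheduled hand-off between receiving a message and processing it (a minimal sketch only; the class name and the grace period are made up, and a real deployment would use the JMS provider's own delayed-delivery feature instead):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedHandOff {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Delays processing of a received message by a fixed grace period,
    // giving the other resource's commit time to finish. The delay value
    // is a guess, not a guarantee -- a long-running DB transaction can
    // still outlast it, which is exactly the weakness described above.
    public Future<?> handle(Runnable processMessage, long delayMillis) {
        return scheduler.schedule(processMessage, delayMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```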

Here is a great blog post that is worth reading, and its last solution seems the best to me. We implemented it in another system and it works much better. It is important to limit the retries (re-reads) to prevent "stuck" threads (with some error reporting). With this restriction I have not been able to find a better solution so far, so if anyone has a better option I am looking forward to hearing it. :)
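The bounded re-read idea can be sketched like this (a minimal sketch; the class and method names are mine, and the read itself is abstracted behind a `Supplier` so it could be any DB query):

```java
import java.util.Optional;
import java.util.function.Supplier;

public class BoundedReread {
    // Re-reads until the data is visible or the retry budget is exhausted,
    // then gives up so the consumer thread cannot get stuck forever.
    public static <T> Optional<T> readWithRetry(Supplier<Optional<T>> read,
                                                int maxAttempts,
                                                long delayMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            Optional<T> result = read.get();
            if (result.isPresent()) {
                return result;
            }
            if (attempt < maxAttempts) {
                try {
                    Thread.sleep(delayMillis); // back off before the next re-read
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        }
        // Caller should report the failure (logging/alerting) instead of blocking.
        return Optional.empty();
    }
}
```

On the final empty result the message would typically be rejected or moved to an error queue so an operator can look at it.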

Edit: typos.

Suppose we have a database (e.g. Oracle) and a JMS provider (e.g. HornetQ) participating in an XA transaction. A message is sent to a JMS queue and some data are persisted in the database in the same distributed transaction. After the transaction is committed, a message consumer will read the persisted data and process them in a separate transaction.

Regarding the first XA transaction, the following sequence of events may be executed by the transaction manager (e.g. JBoss)

  1. prepare (HornetQ)
  2. prepare (Oracle)
  3. commit (HornetQ)
  4. commit (Oracle)

What happens if the message consumer starts reading the data after the commit has completed in HornetQ but is still executing in Oracle? Will the message consumer read stale data?

The question can be generalized to any kind of multiple resources participating in XA transactions, i.e. is there a possibility for a small time window (when commit phases are executed) in which a reader from another concurrent transaction can get an inconsistent state (by reading committed data from one resource and stale data from another one)?

I would say that the only way for transactional resources to prevent this is to block all readers of affected data once the prepare phase is completed until the commit is issued. This way the example message consumer mentioned above would block until data is committed in the database.


I would insert a status field: after each step that succeeds, the status is updated, and the reader should check the status before performing an operation.
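A minimal sketch of that idea (the class, the in-memory "table", and the status values are all made up; a real system would check a status column with SQL and requeue the JMS message when the row is not ready):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StatusGuard {
    // Simulates a table keyed by message id with a status column.
    static final Map<String, String> STATUS = new ConcurrentHashMap<>();

    // The writer flips the status to COMMITTED only once its own DB commit
    // has finished, i.e. after the last step of the XA sequence.
    static void markCommitted(String messageId) {
        STATUS.put(messageId, "COMMITTED");
    }

    // The reader refuses to process rows that are not yet COMMITTED,
    // typically by requeueing the message for a later redelivery attempt.
    static boolean safeToProcess(String messageId) {
        return "COMMITTED".equals(STATUS.get(messageId));
    }
}
```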


Unfortunately, XA transactions don't support consistency. When mapped to the CAP theorem, XA solves Availability and Partition tolerance across multiple datastores. In doing so it must sacrifice Consistency. When using XA you have to embrace eventual consistency.

In any event, creating systems that are CP or AP is hard enough that, regardless of your datastore or transactional model, you will face this problem.


Yes. It is possible for an external system to receive and consume the messages you send before the DB actually commits, even if the transaction fails and rolls back afterwards.

For the last two years I've been doing maintenance and development of a distributed system using XA transactions with WebSphere MQ as JMS provider and Oracle 11g as backing DB.
One of its batch jobs would read offline messages from the DB, send them to JMS, and mark them as sent in the DB -- all as part of the same XA transaction. If sending any of the messages failed, or the DB update failed, the transaction would be rolled back.

Sometimes a message would be too large for JMS, causing the send() to fail and the whole transaction to roll back, leaving the DB unchanged.
However, an external consumer was still receiving and processing every single message sent before the rollback. I knew because they would send me an email for every message processed, and I was getting lots of emails about messages that weren't marked as sent in the DB (because the transaction had been rolled back).

If this external system were to somehow SELECT COUNT(*) the messages sent by my system, it would read 0 messages sent, despite having already consumed hundreds of them.

So, yes it is possible for an external system to read stale data even when using XA transactions.






java oracle jboss hornetq xa