
Recently I had a situation where I needed to fetch confirmations from an external system (a web service) at frequent intervals and add them to a database. Each time the service was invoked, all available confirmations were retrieved in a batch, and a confirmation of the successful retrieval was sent back to the service.

Of course, Quartz came to mind. Piece of cake, I thought, and implemented the scenario using Mule ESB. Everything seemed to work as planned until we started to notice duplicate confirmations in the database. After some investigation it turned out that occasionally thousands of confirmations were retrieved and inserted at once. Since inserting thousands of records took longer than the scheduled interval, a new request for confirmations was made before the previous retrieval had completed.

So how do we prevent this?

First of all, we have to make the Quartz connector's thread pool hold only one thread; otherwise a new thread could start the same job even though one is already running.

This is easily done:
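A connector definition along these lines should do it (a Mule 3.x sketch; the connector name is just an example):

<quartz:connector name="quartzConnector" validateConnections="true">
    <!-- Allow only a single receiver thread, so only one Quartz trigger
         at a time can be dispatched into the flow -->
    <receiver-threading-profile maxThreadsActive="1"/>
</quartz:connector>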

Is that enough? No. Because Mule is designed so that every message processor has its own thread pool, even if Quartz only serves one thread, that thread appears to be finished as soon as the next message processor in the flow takes over. Since that processor has its own thread pool, it can handle multiple jobs coming from Quartz in parallel.

So what we need to do is make sure that the whole flow uses the same thread all the way through. This is easily achieved using the processingStrategy attribute:
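A sketch of the flow, assuming the connector above; the flow name, job name and repeat interval are illustrative:

<flow name="fetchConfirmationsFlow" processingStrategy="synchronous">
    <quartz:inbound-endpoint jobName="fetchConfirmations"
                             repeatInterval="60000"
                             connector-ref="quartzConnector">
        <quartz:event-generator-job/>
    </quartz:inbound-endpoint>
    <!-- call the web service, insert the confirmations into the database,
         and send the acknowledgement back to the service -->
</flow>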

Setting processingStrategy to synchronous ensures that a single thread is used throughout the flow, so Quartz can't start a new job before the current one is finished.

No more duplicates…