Java – Distributed Concurrency Control

concurrency, java, load-balancing, scaling, terracotta

I've been working on this for a few days now, and I've found several solutions, but none of them is especially simple or lightweight. The problem is basically this: we have a cluster of 10 machines, each running the same software on a multithreaded ESB platform. I can deal with concurrency between threads on the same machine fairly easily, but what about concurrency on the same data across different machines?

Essentially, the software receives requests to feed a customer's data from one business to another via web services. However, the customer may or may not exist yet on the other system; if it does not, we create it via a web service method. So the operation is effectively a test-and-set, and I need some kind of cluster-wide semaphore to keep the other machines from racing on the same customer. I've had situations before where a remote customer was created twice for a single local customer, which isn't really desirable.
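To make the race concrete, here is a minimal sketch of the unguarded check-then-create; remoteCustomerExists and createRemoteCustomer are hypothetical stand-ins for the real web-service calls:

// A minimal sketch of the unguarded flow; remoteCustomerExists and
// createRemoteCustomer are hypothetical stand-ins for the real web-service calls.
public class UnguardedFeed {
    public void feed(String customerId) {
        // Machines A and B can both see "not there yet"...
        if (!remoteCustomerExists(customerId)) {
            // ...and both create the remote customer, which is the duplicate I'm seeing.
            createRemoteCustomer(customerId);
        }
    }

    private boolean remoteCustomerExists(String id) { return false; /* web-service call */ }
    private void createRemoteCustomer(String id)    { /* web-service call */ }
}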

Solutions I've toyed with conceptually are:

  1. Using our fault-tolerant shared file system to create per-customer "lock" files that each machine checks before acting.

  2. Using a special table in our database and locking the whole table in order to do a "test-and-set" for a lock record (a lighter row-level variant is sketched after this list).

  3. Using Terracotta, open-source server software that assists with scaling but uses a hub-and-spoke model.

  4. Using EHCache for synchronous replication of my in-memory "locks."
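For option 2, a row-level lock per customer is usually enough, so the whole table doesn't need to be locked. A minimal JDBC sketch, assuming a hypothetical CUSTOMER_LOCK table with one customer_id row per customer (table and column names are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DbCustomerLock {
    // Runs 'work' while holding a row lock on the customer's lock row;
    // committing (or rolling back) the transaction releases the lock.
    public static void withCustomerLock(Connection con, String customerId, Runnable work)
            throws SQLException {
        con.setAutoCommit(false);
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT customer_id FROM CUSTOMER_LOCK WHERE customer_id = ? FOR UPDATE")) {
            ps.setString(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next(); // blocks until no other machine holds this row lock
            }
            work.run();   // do the remote exists-check / create while holding the lock
            con.commit(); // releases the row lock
        } catch (SQLException | RuntimeException e) {
            con.rollback();
            throw e;
        }
    }
}

Note that this only serialises machines that go through the same database, and the row lock is held for the duration of the web-service calls, so long calls keep the transaction open.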

I can't imagine that I'm the only person who's ever had this kind of problem. How did you solve it? Did you cook something up in-house or do you have a favorite 3rd-party product?

Best Answer

You might want to consider using Hazelcast distributed locks. Super light and easy.

import com.hazelcast.core.Hazelcast;

// getLock("mymonitor") returns the same cluster-wide lock on every node.
java.util.concurrent.locks.Lock lock = Hazelcast.getLock("mymonitor");
lock.lock();
try {
    // do your stuff
} finally {
    lock.unlock();
}
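For the scenario in the question, a sketch might look like this, with one lock per customer (remoteCustomerExists and createRemoteCustomer are hypothetical stand-ins for the web-service calls):

// Lock per customer so only one machine in the cluster runs the
// exists-check / create for that customer at a time.
java.util.concurrent.locks.Lock customerLock = Hazelcast.getLock("customer-" + customerId);
customerLock.lock();
try {
    if (!remoteCustomerExists(customerId)) {
        createRemoteCustomer(customerId);
    }
} finally {
    customerLock.unlock();
}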

Hazelcast - Distributed Queue, Map, Set, List, Lock
