It is very desirable for microservices to embrace the ACID 2.0 paradigm (Associative, Commutative, Idempotent, Distributed), in which transactions can arrive out of order and the end result is still the same state you would get if they had been processed in the correct order.
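To make that property concrete, here is a minimal sketch (the names and structure are my own, not from any particular library) of a last-writer-wins merge for a Person's name. Because the merge keeps whichever value carries the later timestamp, it is commutative (arrival order doesn't matter) and idempotent (replaying an update is a no-op):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class NameUpdate:
    person_id: int
    name: str
    timestamp: int  # event time, not arrival time

def merge(current: Optional[NameUpdate], incoming: NameUpdate) -> NameUpdate:
    """Last-writer-wins: keep whichever update has the later event timestamp.
    Commutative (order doesn't matter) and idempotent (replays are no-ops)."""
    if current is None or incoming.timestamp > current.timestamp:
        return incoming
    return current

# Applying the same two updates in either arrival order gives the same state.
a = NameUpdate(1, "Alice", timestamp=5)
b = NameUpdate(1, "Alicia", timestamp=9)
assert merge(merge(None, a), b) == merge(merge(None, b), a) == b
```

The key design choice is that each update carries its own event time, so the store never has to trust arrival order.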
Imagine an update transaction comes in for a Person’s name. If that Person doesn’t exist yet, one option is to create a “dummy” record for that Person using the updated value you just received. Then, if a create transaction comes in later, you will need to figure out how to either munge that create into an update (without overwriting the ‘newer’ name value from the update), or somehow delete the record, play the create transaction, and then replay the update transaction. Another option is to put the update transaction into a retry loop until the create transaction arrives, and then play the update after the record is created. But then you are stuck with the Person not being in the database for a while, and with deciding how many times to retry before giving up.
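The “dummy” record approach can be sketched roughly like this (an in-memory illustration with invented names, not a real data-access layer): an update for a missing Person creates a placeholder, and a late-arriving create is folded in as an update, winning only if nothing newer has been applied.

```python
class PersonStore:
    """In-memory sketch of the 'dummy record' approach."""

    def __init__(self):
        self.people: dict[int, dict] = {}

    def _get_or_dummy(self, person_id: int) -> dict:
        # Create a placeholder record if the Person doesn't exist yet.
        return self.people.setdefault(
            person_id, {"name": None, "name_ts": -1, "created": False})

    def apply_update(self, person_id: int, name: str, ts: int) -> None:
        person = self._get_or_dummy(person_id)
        if ts > person["name_ts"]:  # last-writer-wins on the name field
            person["name"], person["name_ts"] = name, ts

    def apply_create(self, person_id: int, name: str, ts: int) -> None:
        # A late create becomes an update on the dummy record: it marks the
        # record as properly created, but only wins the name field if no
        # newer value has already been applied.
        person = self._get_or_dummy(person_id)
        person["created"] = True
        if ts > person["name_ts"]:
            person["name"], person["name_ts"] = name, ts

store = PersonStore()
store.apply_update(1, "Alicia", ts=10)  # update arrives first
store.apply_create(1, "Alice", ts=5)    # create arrives late
assert store.people[1]["name"] == "Alicia"  # newer name not overwritten
```

This avoids both the delete-and-replay dance and the retry loop, at the cost of tracking a per-field timestamp.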
Event sourcing models typically handle these scenarios more easily; it is really one of the big use cases for an event sourcing model. The transactions are stored as an insert-only log of what happened, and it is easy to add a new entry to that log even if its timestamp is older than entries already in it. However, there is still likely to be a more transactional store somewhere along the line, and since it is not typically feasible to rebuild your query model every time a new transaction is written to the log, you will still need to wrestle with this issue. In this scenario, maybe the business is fine with the name update being “lost” for a day or two, at which point the transaction log can be used to bring the relational store back into sync.
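The event-sourcing side of this can be sketched as an insert-only log plus a periodic rebuild (again, invented names and a toy in-memory log): events are appended in whatever order they arrive, and sorting by event time during replay reconstructs the state as if they had arrived in order.

```python
# (event_ts, kind, person_id, name) — appended in arrival order, any order.
log: list[tuple[int, str, int, str]] = []

def append(event_ts: int, kind: str, person_id: int, name: str) -> None:
    """Appending is always cheap, even for 'old' events."""
    log.append((event_ts, kind, person_id, name))

def rebuild() -> dict[int, str]:
    """Replay the log in event-time order to regenerate the read model.
    This is the periodic re-sync step, not something run per event."""
    people: dict[int, str] = {}
    for event_ts, kind, person_id, name in sorted(log):
        people[person_id] = name  # create and update both set the name
    return people

append(10, "update", 1, "Alicia")  # arrived first, happened later
append(5, "create", 1, "Alice")    # arrived late, happened earlier
assert rebuild() == {1: "Alicia"}
```

The rebuild here is the expensive part, which is why, as noted above, it tends to run on a schedule (say, nightly) rather than on every new log entry.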
Originally Posted on my Blogger site June of 2017