
Oracle transactions in the new world

If the new world of BigData, NoSQL and streaming has sparked your interest, you may have noticed one peculiarity: the lack of proper transactions in these systems (or of transactions at all!). Yes, durability is retained, but the other properties of ACID (Atomicity, Consistency, Isolation, Durability) leave a lot to be desired.

One might think that in this new world applications are built in such a way that they no longer need transactions, and in some cases this may be true. For example, if a tweet or a Facebook update gets lost, then who cares - we simply carry on. But of course there is more important data that still requires transaction support, and some NoSQL databases do offer limited support for it nowadays. However, this is still far from the full implementation everyone takes for granted in the Oracle database (e.g. you usually cannot modify arbitrary rows in arbitrary tables within a single transaction). Of course, the huge benefit is that these databases are much easier to scale, as they are not bogged down by the lock/synchronization mechanisms that ensure data consistency.

But recently there seems to be much more interest in marrying the two worlds together - by this I mean the old 'proper' RDBMS (Oracle) world and the new BigData/NoSQL/streaming ('Kids from the Valley') one. So the question follows, working from the old to the new: how do you feed data from a database built on an inherently transactional foundation into one that has no idea about transactions?

Mind you, such interoperability issues are not a new thing... anyone remember that old problem of sending messages from PL/SQL or triggers? In that case any message (or email) was sent the moment it was requested, but the encapsulating transaction could still be rolled back or retried. This led to messages that were never supposed to be sent, along with messages sent multiple times. The trick there was to use dbms_job in the workflow: that package (unlike the newer dbms_scheduler) merely queued the job, and the job coordinator saw it only after the insert into the job queue was committed - i.e. when the whole transaction committed.
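As a minimal sketch of the trick (the orders table and the send_alert_mail procedure are made up for illustration), the job is queued inside the same transaction as the business change, so a rollback discards the notification together with the data:

    declare
      l_job binary_integer;
    begin
      update orders set status = 'SHIPPED' where order_id = 42;

      -- dbms_job only queues the request; the job coordinator cannot see it
      -- until this transaction commits, so a rollback discards the job as well
      dbms_job.submit(
        job  => l_job,
        what => 'send_alert_mail(42);');

      commit;  -- the data change and the queued job become visible together
    end;
    /

(dbms_scheduler.create_job, by contrast, commits as soon as the job is created, which is exactly why it does not fit this trick.)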

There are two basic approaches to addressing this issue for a data feed between the systems:
1. You can revert to the 'old and proven' batch processing method (think ETL). Just select (e.g. using Sqoop) the data that arrived since the last load, and be sure to change your application to provide enough information so that such a query is possible at all (e.g. add last-update timestamp columns) - see the sketch after this list.

2. Logical replication or change data capture. There is an overwhelming trend (and demand) toward near-real-time, and people now want and expect data with a latency of just a few seconds. In this approach, changes are mined from the source database and sent to the target as they happen.
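To illustrate the batch option, here is a sketch of an incremental extract, assuming the application maintains a LAST_UPDATED column and the ETL process remembers the high-water mark of its previous run (table and column names are made up):

    select order_id, status, last_updated
      from orders
     where last_updated >  :previous_hwm   -- high-water mark stored by the previous run
       and last_updated <= :current_hwm;   -- captured at the start of this run

Sqoop packages the same idea in its incremental imports (--incremental lastmodified --check-column LAST_UPDATED --last-value ...), but it all silently relies on the application really setting that column on every change.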

So the second option sounds great - nice and easy - except that it's NOT...
The issue is that any change in the database happens as soon as the user/application initiates it, but until the transaction is committed you cannot be sure whether the change will persist, and thus whether anyone outside of the database should see it at all.

The only solution here is to wait for the commit, and you can be more or less clever about what you do until that commit happens. You can simply wait for the commit and only then start parsing the changes; or you can do some or all of the pre-processing up front and just flush the data out when you see the actual commit.
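For a concrete illustration of the wait-for-commit flavour, Oracle's own LogMiner exposes exactly this trade-off: with the COMMITTED_DATA_ONLY option it buffers each transaction internally and returns its rows only once the commit record is seen (rolled-back transactions never surface at all), at the price of memory/spill space and added latency for long transactions. A rough sketch (the archived log name is purely illustrative):

    begin
      dbms_logmnr.add_logfile('/u01/arch/o1_mf_1_208.arc');
      dbms_logmnr.start_logmnr(
        options => dbms_logmnr.dict_from_online_catalog
                 + dbms_logmnr.committed_data_only);
    end;
    /

    -- rows come back grouped per transaction (XID) and filtered to committed
    -- transactions only
    select xid, scn, operation, seg_owner, table_name, sql_redo
      from v$logmnr_contents
     where seg_owner = 'APP';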

For this pre-processing option, as is often the case, things are actually more complicated in real life: we don't just have to contend with simple commits/rollbacks at the end of the transaction, but also need to handle savepoints. These are used much more often than you would think; for example, every SQL statement implicitly issues a savepoint, so that it can roll itself back if it fails. The hurdle is that there is no information in the redo as to when a savepoint was established, nor which savepoint a rollback rolls back to.
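A small contrived example (with made-up tables) of why this hurts: the failing statement below is undone to its own implicit savepoint, and the explicit rollback undoes part of a still-open transaction - a mining process sees the changes and then the corresponding undo being applied, but no marker telling it where the savepoint itself was set:

    begin
      update accounts set balance = balance - 100 where acc_id = 1;

      savepoint before_bulk;                        -- not recorded in the redo stream
      insert into audit_log select * from staging;  -- imagine this one fails half-way through
    exception
      when others then
        -- partial rollback: the transaction stays open, only the work done
        -- after the savepoint is undone
        rollback to savepoint before_bulk;
    end;
    /
    commit;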

In the end, the commit/rollback mechanism can be handled well enough, but at a cost: with pre-processing, a queue of as-yet-uncommitted changes must be maintained somewhere (memory, disk); with the wait-for-commit approach, transactions are shipped only after they have ended, which adds lag - especially for long or large transactions.

A side note: replication in the ‘old RDBMS’ world can also introduce another layer of complexity. Such logical replication can actually push changes into the target even before they are committed - and ask the target to roll them back if necessary. But due to the issues discussed above, this is actually pretty tricky and many products don't even try (Streams, Oracle GoldenGate), although others support this (Dbvisit Replicate).
