Column

Data protection: Moving up the stack?


Delegates (including myself) took away at least one thing from a recent Disaster Recovery and Data Protection Summit in Tampa, Fla. -- nobody has a definition of continuous data protection (CDP) that makes a lot of sense. Sure, we all know intuitively that any ongoing process of data replication is CDP, but how you do it can vary greatly.

At the summit, I moderated a panel discussion with Revivio, Arsenal Digital, XOsoft, Neverfail and Kashya, which was recently acquired by EMC. (Softek was also supposed to be on the panel, but their speaker bailed due to other commitments.) In case some of the names are unfamiliar, Softek is in the data replication game: Point at the data you want to move and the target you want to move it to, and it goes. This can be set up to occur automatically if you want, creating a mirror of sorts.
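To make that point-and-replicate model concrete, here is a minimal sketch in Python. The paths, the interval and the file-level copying are my own illustrative assumptions; Softek's actual product moves data at a much lower level than files.

```python
# A minimal sketch of point-and-replicate mirroring, assuming file-level
# copying between two hypothetical paths. The loop just illustrates the
# "point at source and target, and it goes" model.
import shutil
import time
from pathlib import Path

SOURCE = Path("/data/production")   # hypothetical source volume
TARGET = Path("/mnt/replica")       # hypothetical mirror target
INTERVAL_SECONDS = 300              # re-sync every five minutes

def sync_once() -> None:
    """Copy any file that is new or changed since the last pass."""
    for src in SOURCE.rglob("*"):
        if not src.is_file():
            continue
        dst = TARGET / src.relative_to(SOURCE)
        if not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps

if __name__ == "__main__":
    while True:                     # "set it and forget it" mirroring
        sync_once()
        time.sleep(INTERVAL_SECONDS)
```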

Kashya replicates by splitting writes. Its appliance sits in a Fibre Channel (FC) fabric and splits the writes from the applications you designate so that they target different devices, even if one target is local and the other is remote. EMC needed the company because its virtualization play, Invista, doesn't have the copy-during-write functionality that its competitors sport. I suspect that was a design decision intended to protect EMC's cash cow, SRDF. But a lot of pundits, myself included, hammered Hopkinton, Mass. (EMC's hometown) on the obvious gap, and that might have contributed to the purchase of Kashya.
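For readers who want to see the mechanism, here is a minimal sketch of write splitting in Python. Everything here -- the class name, the in-memory "LUNs" -- is a hypothetical stand-in; the real appliance splits SCSI writes in the fabric, not file I/O in a host.

```python
# A minimal sketch of write splitting, assuming in-memory buffers as
# stand-ins for a local and a remote LUN.
import io

class WriteSplitter:
    """Duplicate every application write to two targets."""

    def __init__(self, local, remote):
        self.local = local
        self.remote = remote

    def write(self, offset: int, data: bytes) -> None:
        # The application issues one write; the splitter issues two, so
        # the local and remote devices stay byte-for-byte identical.
        for target in (self.local, self.remote):
            target.seek(offset)
            target.write(data)

# Usage: after a split write, both "LUNs" hold the same bytes.
local_lun, remote_lun = io.BytesIO(), io.BytesIO()
splitter = WriteSplitter(local_lun, remote_lun)
splitter.write(0, b"payroll record 42")
assert local_lun.getvalue() == remote_lun.getvalue()
```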

Revivio started by building cool software to replace EMC point-in-time (PIT) mirror splits, which are idiotic in the extreme. (Why use your most expensive disk to make PIT copies of the rest, especially when database errors are rarely discovered quickly enough to keep every split from being corrupted?) Revivio's Time Addressable Storage simply logs all byte changes to external SATA disk, leveraging cheap capacity and the error recovery routines that are now commonplace in most respectable database software packages. Its CDP play is to do this replication over distance to a remote set of disks.
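Here is a minimal sketch of the time-addressable idea, under my own assumptions about the interface: log every write with a timestamp and replay the log to rebuild any past state. Revivio journals byte changes to SATA disk; this toy version uses a Python list, but the principle -- never overwrite, so any point in time can be recovered -- is the same.

```python
# A minimal sketch of time-addressable storage, assuming a Python list
# as the journal. Class and method names are hypothetical.
import time

class TimeAddressableVolume:
    def __init__(self, size: int):
        self.size = size
        self.log = []  # append-only journal of (timestamp, offset, data)

    def write(self, offset: int, data: bytes) -> None:
        self.log.append((time.time(), offset, data))

    def read_as_of(self, when: float) -> bytes:
        """Replay the journal up to `when` to recover that point in time."""
        image = bytearray(self.size)
        for stamp, offset, data in self.log:
            if stamp <= when:
                image[offset:offset + len(data)] = data
        return bytes(image)

# Usage: roll back to just before a corrupting write.
vol = TimeAddressableVolume(32)
vol.write(0, b"good data")
time.sleep(0.01)                    # keep the demo's timestamps distinct
before_corruption = time.time()
time.sleep(0.01)
vol.write(0, b"BAD WRITE")
assert vol.read_as_of(before_corruption)[:9] == b"good data"
```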

XOsoft and Neverfail are in a slightly different game. Both seek to replicate not only data but also system images, so you can fail over to a more or less identical copy of a system at a remote site if your local server goes south on you. Neverfail is a bit different in that it lets you conceal the "shadow servers" and transfer IP addresses to the shadows if the primary fails; with XOsoft, you have to give the remote servers IP addresses of their own. Neverfail also replicates registries in Windows servers between primary and shadow. For its part, XOsoft provides some robust tools for creating failover scenarios that offer flexibility in recovering both Windows and non-Windows environments. However, neither one replicates patches, claiming that doing so would create the potential for disaster in both your primary and backup systems.
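The shadow-server takeover can be sketched as a heartbeat watcher running on the shadow that claims the primary's service IP when the primary stops answering. The hostname, port, subnet and Linux `ip` invocation below are illustrative assumptions on my part, not how either vendor actually implements it.

```python
# A minimal sketch of shadow-server failover with IP takeover, run on
# the shadow. Endpoint and interface names are hypothetical.
import socket
import subprocess
import time

PRIMARY = ("primary.example.com", 7001)  # hypothetical heartbeat endpoint
SERVICE_IP = "192.0.2.10"                # address clients use; moves on failover
MISSED_BEATS_ALLOWED = 3

def primary_alive() -> bool:
    try:
        with socket.create_connection(PRIMARY, timeout=2):
            return True
    except OSError:
        return False

missed = 0
while True:
    missed = 0 if primary_alive() else missed + 1
    if missed >= MISSED_BEATS_ALLOWED:
        # Claim the service IP on this shadow so clients follow it here.
        subprocess.run(
            ["ip", "addr", "add", f"{SERVICE_IP}/24", "dev", "eth0"],
            check=True,
        )
        break
    time.sleep(5)
```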

Replicating entire system images and application software is a bit further up the software stack than replicating data written to a storage frame. Arsenal Digital is quick to point out that all of these products require specialized software and the smarts to operate them. They suggest doing replication through a service provider, which is their gig, so you can capture the data that's really important -- not the stuff in the data center, but the stuff on laptops, home office PCs and out in the branches where the real work is being done. There's a lot of truth in those words.

In retrospect, I should have put DataCore Software and Zetera on the panel, too. Like Kashya, DataCore's SANMelody product provides the means to target two disks, local and remote, with every write. The difference is that DataCore's product doesn't require an FC fabric; it supports iSCSI as well.

If you prefer to build your storage infrastructure on Zetera's Storage over IP protocol (basically, UDP and IP), you can simply target writes to two or more storage nodes, identified by their IP addresses, using multicasting -- a standard function of IP, but one that TCP-based protocols like iSCSI don't support. I like this approach a lot and use it in our test labs.
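Here is a minimal sketch of what multicast write targeting looks like at the socket level. The group address, port and packet layout are assumptions for illustration; Zetera's actual wire format differs.

```python
# A minimal sketch of multicast write targeting over UDP, with a
# made-up group address, port and packet layout.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004  # hypothetical multicast group for a mirror set
TTL = 2                          # keep datagrams on the local network

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, TTL)

def multicast_write(offset: int, data: bytes) -> None:
    # One send reaches every storage node subscribed to the group --
    # the mirroring happens in the network, not in the host.
    packet = struct.pack("!Q", offset) + data
    sock.sendto(packet, (GROUP, PORT))

multicast_write(4096, b"block payload")
```

The appeal is architectural: the host issues a single write, and the network fans it out to however many nodes have joined the group.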

To my way of thinking, the best data replication approach (and all of them, even routine tape backups or snapshots, are "CDP" in the generic sense) aligns with the kind of data you are replicating. For high-performance transaction processing systems, Revivio-style CDP makes a lot of sense, enabling quick restores both locally and remotely. For retention data -- that is, everything but high-performance databases -- the product you choose should be based on three main criteria:

  1. How quickly you need access to data and what kinds of deltas, or differences, you are willing to tolerate. These are your recovery time objective and recovery point objective (a worked example follows this list) -- though if EMC has its way, you will need to put a trademark symbol after "recovery point," an expression the company claims to own.
  2. The visibility the product offers. Can you check, non-disruptively, to ensure that things are replicating as planned?
  3. How much jingle you have in your pocket. Cost is king, since no company I know of wants to spend a lot on disaster recovery.
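As promised above, here is a small worked example of criterion No. 1, with made-up numbers: compare a candidate product's replication interval and restore time against your stated objectives.

```python
# A worked example of criterion No. 1 with illustrative figures: check
# a candidate product's behavior against your stated RPO and RTO.
rpo_minutes = 15   # the most data loss (in time) the business tolerates
rto_minutes = 60   # the longest acceptable outage before service returns

replication_interval_minutes = 30  # how often the product ships deltas
measured_restore_minutes = 45      # observed time to bring the replica up

# Worst case, you lose everything written since the last replication pass.
worst_case_loss = replication_interval_minutes
print(f"RPO met: {worst_case_loss <= rpo_minutes}")           # False: 30 > 15
print(f"RTO met: {measured_restore_minutes <= rto_minutes}")  # True: 45 <= 60
```

A product that misses the RPO this badly is out of the running, no matter how well it scores on visibility or price.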

Come to think of it, I might add a fourth selection criterion: What does the product require in terms of ongoing management and oversight? Neverfail and XOsoft, and even Softek, Kashya, DataCore and Zetera, all require some knowledge and skills to set up and maintain. The underlying replication strategies will need to be reviewed and revised as the business and IT infrastructure change. At least with Arsenal Digital, you could potentially offload some of that heavy lifting to someone else -- and Arsenal's trustworthiness has been validated in every interview I have conducted with its personnel.

The alternative is to standardize infrastructure on something like Zetera-enabled Bell Micro Hammer arrays. They are reliable, efficient and more cost-effective than just about any storage out there, and by using them in a copy-on-write model, you can design resiliency into the architecture itself. We plan to showcase both that architecture and a DataCore virtualized iSCSI product at the next DR Summit, to be held in November in the Miami/Ft. Lauderdale area.

About the author: Jon William Toigo is a Managing Partner for Toigo Productions. Jon has over 20 years of experience in IT and storage.