In 2006, we saw the celebration of the 50th anniversary of the magnetic hard disk drive (HDD). And, in 2007, we have the 20th anniversary of the publication of the University of California, Berkeley RAID white paper, "A Case for Redundant Arrays of Inexpensive Disks (RAID)." Given the focus on virtualization (server, storage, network and I/O), continuous data protection (CDP), grids, deduplication, iSCSI, VTLs, NAS and object-based storage, it's no wonder RAID has become lost in the crowd. Or has it?
In general, RAID has matured from concept to prototype to early-adoption product to standard functionality. It has become transparently available in storage products ranging from the largest enterprise systems to consumer-oriented products. RAID remains relevant given applications' performance needs and their dependence on data availability. With the introduction of ultra-large 300 GB, 500 GB, 750 GB and terabyte-sized disk drives, coupled with the continued drop in price, mirroring or parity-protecting data stored on disk matters as much as ever.
RAID has fallen out of the limelight as a feature vendors spend much time talking about, which speaks to its level of maturity. In fact, some have even declared RAID a dead or "zombie" technology that is no longer relevant, even though its adoption continues to grow in enterprise, midrange, small and midsized business (SMB), and small office/home office (SOHO) storage systems, adapter cards, and RAID-on-chip (ROC) and RAID-on-motherboard (ROMB) deployments (Figure-1). In other words, with continued enhancements, RAID has left the hype cycle and reached the plateau of productivity, as evidenced by the industry association RAID Advisory Board (RAB) falling by the wayside several years ago.
Figure-1: Various types of storage and RAID controllers (chips, adapters, systems)
RAID should be thought of as a tool to enhance and deliver various levels of storage service, including performance and availability. On one hand, disk drives continue to become more reliable while offering larger capacities. On the other hand, more data is being stored per drive, which raises the exposure risk when a disk drive fails. In addition to the availability aspect of RAID and sibling data protection technologies, including local and remote mirroring or replication to other storage systems, RAID also provides performance enhancements to help address the ever-growing I/O gap between available disk capacity and disk performance.
Two of the most common knocks on RAID-6 and multidrive parity are the performance impact during writes or rebuilds and the cost of an extra disk drive for parity compared to RAID-5. RAID-6 does in fact use twice the number of drives for parity compared to RAID-5. However, it still uses fewer drives than a mirror set or a hybrid mirror-and-parity scheme like RAID-51.
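The drive-count trade-off above is easy to quantify. The sketch below compares the usable fraction of raw capacity for common layouts, assuming an illustrative eight-drive group; the group sizes and layout names are textbook examples, not figures from any particular product.

```python
# Illustrative protection overhead for common RAID layouts on an
# 8-drive group. Drive counts follow the usual textbook definitions;
# real arrays vary group sizes, which shifts these percentages.

def usable_fraction(total_drives: int, protection_drives: int) -> float:
    """Fraction of raw capacity left for data after protection overhead."""
    return (total_drives - protection_drives) / total_drives

layouts = {
    "RAID-5 (7+1)":  (8, 1),  # one drive's worth of parity
    "RAID-6 (6+2)":  (8, 2),  # two drives' worth of parity
    "RAID-10 (4+4)": (8, 4),  # mirroring: half the drives hold copies
    "RAID-51 (two mirrored 3+1 sets)": (8, 5),  # parity plus mirroring
}

for name, (total, prot) in layouts.items():
    print(f"{name}: {usable_fraction(total, prot):.1%} usable")
```

The point the percentages make: RAID-6 gives up one more drive than RAID-5, but still yields more usable capacity than mirroring or a hybrid mirror-plus-parity scheme.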
RAID-6 (dual parity) and other multidrive parity schemes, including Network Appliance Inc.'s (NetApp) RAID-DP, are targeted at reducing data availability exposure during the rebuild of large-capacity Fibre Channel, SAS and SATA disk drives. These schemes have come under pressure lately, similar to how RAID-5 was criticized 10 years ago. The issue with RAID-6, or any multidrive parity scheme, is the performance overhead incurred when calculating parity, writing data or rebuilding from a failed disk drive.
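The parity math behind that rebuild cost can be sketched for the single-parity case: parity is the XOR of the data blocks in a stripe, and a lost block is recovered by XORing the survivors. RAID-6's second parity block uses a more elaborate code (typically Reed-Solomon arithmetic over a Galois field), which is exactly why its writes and rebuilds cost more; that second parity is omitted from this minimal sketch.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A small stripe: 3 data blocks plus their XOR parity (RAID-5 style).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing the second drive and rebuilding its block from the
# surviving data blocks plus parity -- this read-everything-and-XOR step
# is the rebuild overhead the article refers to.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]  # the lost block is recovered
```

Note that rebuilding one block required reading every surviving block in the stripe, which is why rebuilds of large-capacity drives take so long and stress the remaining disks.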
Keep the parity and "protection overhead" story in perspective relative to the level of service, cost and protection your applications need. Regarding the performance impact, different RAID controllers (ROCs, adapters and external controllers) handle RAID-6, other RAID levels and application workloads differently. Some controllers have offload and performance acceleration for RAID-5 and RAID-6 rebuild or write operations, some are optimized for OLTP, while others are optimized for large sequential-stream applications.
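One way to put numbers on the write impact is the classic small-write penalty: a small random write to RAID-5 costs four back-end disk I/Os (read old data, read old parity, write new data, write new parity), and RAID-6 costs six because both parity blocks must be updated. The rough calculator below assumes this worst-case textbook model with no controller write-back cache, full-stripe optimization or parity offload; as the paragraph above notes, real controllers often do considerably better. The per-disk IOPS figure is an assumed example, not a measured value.

```python
# Worst-case small-write penalty: back-end disk I/Os per front-end
# random write, per the classic read-modify-write model. Controllers
# with write-back cache or parity offload engines reduce this.
WRITE_PENALTY = {"RAID-0": 1, "RAID-10": 2, "RAID-5": 4, "RAID-6": 6}

def frontend_write_iops(spindles: int, iops_per_disk: int, level: str) -> float:
    """Random-write IOPS a drive group can sustain in the worst case."""
    return spindles * iops_per_disk / WRITE_PENALTY[level]

# Example: 8 drives at an assumed 150 IOPS each.
for level in ("RAID-10", "RAID-5", "RAID-6"):
    print(f"{level}: {frontend_write_iops(8, 150, level):.0f} write IOPS")
```

The same arithmetic explains why controllers optimized for OLTP (small random writes) invest in caching and parity acceleration, while large sequential workloads can often write full stripes and sidestep the penalty entirely.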
Check out the entire RAID Handbook.
About the author: Greg Schulz is founder and senior analyst with the IT infrastructure analyst and consulting firm StorageIO Group. Greg is also the author and illustrator of Resilient Storage Networks (Elsevier) and has contributed material to Storage magazine and other TechTarget venues.
This was first published in May 2007