What you’ll learn in this tip: The computer industry has overcome a multitude of technological challenges in the past several years. Networking and memory speeds have increased, processors have gotten faster and 10 Gigabit Ethernet (10 GbE) has allowed for a massive increase in bandwidth. However, none of these advances will remain beneficial if I/O performance can’t keep pace.
There’s no question I/O is the next frontier the computer industry must conquer. We’ve met the compute challenge, reaffirming Moore’s Law over and over again as the industry doubles processing power every 12 to 18 months. Memory speeds have also kept pace with the CPU, so processors and RAM can feed each other at similar speeds. In the realm of networking, the technologies seem to enjoy a big kick every three to five years. With 10 GbE in the volume implementation stage and 40 Gigabit parts already available, we’re swimming in bandwidth.
But all those advances may be held up by one laggard: I/O. It’s been causing havoc with application performance and putting a dent in productivity for years.
To see why, we need to get down to basics. I/O is the transfer of data to or from a device that’s handled by the file system or operating system making an I/O call. For data storage, it’s a SCSI call that goes through the host bus adapter (HBA)/network interface card (NIC)/converged network adapter (CNA) and over the network to the SCSI device. The command is processed by the array’s storage controller, and data is extracted from the disks and placed on the network or data is written to the disks. An I/O problem generally means it takes too long to read from or write to the disks. This I/O gap has existed for at least three decades, and it gets wider every year with little being done about it.
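The width of that gap is easy to feel even from a script. The hypothetical micro-benchmark below, a Python sketch, compares a buffer access served from RAM against a small read that must traverse the system call and file-system path toward a disk. All names and sizes here are illustrative; absolute numbers vary wildly by machine, and a rigorous measurement would also need to defeat the operating system’s page cache.

```python
import os
import tempfile
import time

# Illustrative only: contrast a memory access with a read that goes
# through the I/O stack. Real storage latency (especially over a SAN)
# is far worse than a locally cached file read shown here.
BLOCK = 4096

# Create a scratch file to read back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * 1024))  # ~4 MB of data
    path = f.name

data = bytearray(os.urandom(BLOCK * 1024))

# Time a plain in-memory access.
t0 = time.perf_counter()
_ = bytes(data[:BLOCK])
mem_us = (time.perf_counter() - t0) * 1e6

# Time a read that takes the syscall + file-system path.
t0 = time.perf_counter()
with open(path, "rb") as f:
    _ = f.read(BLOCK)
io_us = (time.perf_counter() - t0) * 1e6

print(f"memory: {mem_us:.1f} us, I/O path: {io_us:.1f} us")
os.remove(path)
```

Even this soft comparison, with the file likely still in the OS cache, shows the I/O path costing more than a memory touch; against a spinning disk across a fabric the spread grows to several orders of magnitude.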
NAND flash memory chips may be the solution that finally closes the I/O gap. Flash memory promises to be the best solution to the I/O problem we’ve seen in three decades. With solid-state storage, we can feed the compute/memory complex, across the network and the buses, with data moving fast enough to keep the entire system working at peak efficiency.
The implications are profound. In many ways, VMware and other hypervisors are in a near-stall state: the easy applications are already running as virtual machines (VMs), but the I/O problem must first be resolved to get mission-critical applications into the server virtualization fold. No IT shop will move a latency-sensitive, I/O-bound application that’s been fine-tuned to run on a dedicated server to the VM environment until the I/O characteristics are equal or better on the VM side.
Demanding apps -- like high-performance computing (HPC), seismic processing, biotech, pharmaceutical, media and entertainment, and weather forecasting -- are hobbled or stopped dead in their tracks by the I/O issue. The payoff for solving the I/O dilemma is enormous.
While solid-state storage is seen as the solution, where and how it’s used to solve the I/O problem is still uncertain. The easiest way to get into the game is solid-state storage in a hard disk form factor that plugs into a traditional array. That delivers a quick boost, but it’s only a partial solution: the array’s controller, which was never designed for such fast devices, quickly becomes a bottleneck. Dell, EMC, Hewlett-Packard, Hitachi, IBM and practically all data storage vendors offer this alternative.
To deal with this “constriction” in the storage array controller, a number of vendors have developed controllers that are designed from the ground up to deal with solid-state drives (SSDs). These arrays can accelerate latency-sensitive apps by 4x to 10x. Vendors in this category include GridIron Systems, SolidFire, Violin Memory and others.
Flash can also be used to grab the I/O before it reaches a storage array and cut out all the travel up/down the buses to the HBA/NIC/CNA and across the network fabric. For this alternative, the flash memory is implemented on a PCIe card that’s slotted into the server. Latency-sensitive data is kept in the flash memory on the card and the relevant I/O is trapped by the card and handled locally. The driver in this case is crucial and is generally supplied by the card vendor. Performance is outstanding because no extraneous latency-adding devices get in the way. Think of this approach as memory being made to look like a disk. The biggest downside is that the I/O performance is only available to the server that holds the solid-state card. Vendors in this category include IO Turbine (which is in the process of being acquired by Fusion-io), Fusion-io, VeloBit and others.
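The trap-it-locally behavior described above can be modeled as a toy block cache. Everything in this sketch -- the `FlashCache` class, `read_block`, the dict standing in for the PCIe card’s flash -- is a hypothetical illustration, not any vendor’s actual driver API:

```python
# Hypothetical sketch of what a server-side flash-cache driver does:
# intercept a block read, serve it from local flash if present,
# otherwise fetch it over the fabric from the array and cache it.
class FlashCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}             # lba -> data; stands in for flash
        self.hits = self.misses = 0

    def read_block(self, lba, array_read):
        if lba in self.blocks:       # served locally: no HBA, no fabric
            self.hits += 1
            return self.blocks[lba]
        self.misses += 1
        data = array_read(lba)       # full round trip to the array
        if len(self.blocks) >= self.capacity:
            # Naive FIFO-style eviction to keep the sketch short.
            self.blocks.pop(next(iter(self.blocks)))
        self.blocks[lba] = data
        return data

# Simulated backing array and a repeating access pattern.
backend = {lba: f"block-{lba}".encode() for lba in range(8)}
cache = FlashCache(capacity_blocks=4)
for lba in [0, 1, 0, 1, 2, 0]:
    cache.read_block(lba, backend.__getitem__)
print(cache.hits, cache.misses)
```

Repeated blocks are answered from the card, which is exactly why this approach shines for latency-sensitive workloads, and why its benefit is confined to the one server holding the card.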
Still another approach is to use flash memory as a cache that front-ends a storage array, such as NetApp’s Flash Cache. Here the cache is designed to be smart enough to hold the portion of the data that’s frequently requested by applications. Instead of benefiting only the small number of SSDs the array itself could hold, flash now accelerates data across the entire array.
Variations on these approaches are coming fast and furious. For example, we’ve seen products from vendors like Alacritech and Avere Systems that use a combination of DRAM, flash and hard disk drives (HDDs) in a package that front-ends a number of NAS boxes to essentially breathe new life into old NAS systems. The variations are as many as there are creative entrepreneurs, and it remains to be seen which approaches will ultimately prevail.
It’s high time we solved the I/O problem. HDDs have been the mainstay in data storage shops for as long as anyone can remember. But fundamental changes are occurring now that will forever change how I/O is done. The performance and productivity gains you’ll see won’t just be incremental; they’ll blow your mind.
BIO: Arun Taneja is founder and president at Taneja Group, an analyst and consulting group focused on storage and storage-centric server technologies.
This was first published in November 2011