Column

Don't let storage platform complexity get the best of you

Right now, many storage vendors offer a central storage management console that manages their different storage arrays. However, the primary functions of these consoles are generally to facilitate the management of the vendor's storage platforms, such as letting users create volumes or logical unit numbers (LUNs), present those volumes to attached servers, and monitor and manage performance on the arrays. Yet these tools begin to break down as the number of storage arrays grows, since they are not designed to simplify storage networks -- only to manage the storage arrays attached to them.

That's where the problems -- and the risks -- of using multiple storage arrays start to emerge. As more servers attach to the storage network and use volumes on different storage arrays, it becomes increasingly difficult to track where each server's data resides.

This jigsaw puzzle of tracking where a server's data is placed is what puts the data at risk. The risk, however, lies not so much in the complexity itself as in the absence of any intelligence to automate data management -- intelligence that automatically tracks where the data resides and facilitates its transparent migration between storage arrays.

In this respect, any shop that shares multiple storage platforms among host systems without some sort of comprehensive storage virtualization technology in place treads on dangerous ground. Since few, if any, companies have implemented such technology, some may be tempted to purchase multiple storage platforms from the same vendor to mitigate the risk of managing the arrays and moving data between them.

While this sounds good in theory, the approach may be problematic, depending on which vendor you're dealing with. For instance, EMC Corp. includes its Enginuity code with its Symmetrix/DMX platforms and its Flare operating system on its midrange Clariion arrays; its DART operating system runs on its network attached storage (NAS) gateway, Celerra; and its content-addressed storage (CAS) product, Centera, and its virtual tape Clariion Disk Libraries use still different underlying code streams. The situation may become further complicated as EMC introduces its InVista virtualization product and as its recent purchase of NearTek Inc. casts doubt on the future of its virtual tape library (VTL) product.

So, my question to a vendor like EMC is this: How are users supposed to manage their data on these different storage platforms in a risk-averse manner? If users elect to use EMC's NAS product, Celerra, they must introduce one or more of EMC's other storage platforms, since Celerra acts as a front-end NAS gateway for arrays such as a Clariion. A tiered storage strategy may require yet another EMC platform, such as its DMX. Storing data in a compliant format may force users to bring in a fourth platform, Centera, while backing up to virtual tape requires EMC's VTL platform. One, three or even 10 of these devices may be manageable, but when users start storing data on tens of storage arrays with all of these different software features, it becomes questionable how well their data is protected, given the inherent complexity that number of arrays introduces.

To counter this complexity, some vendors are taking a different approach that doesn't require users to introduce a new storage platform each time they need a new software feature. With these so-called universal storage platforms, the same underlying storage operating system runs on all of the vendor's storage platforms. This allows administrators to use the same interface to manage each physical storage array, as well as to migrate data between arrays.

Universal storage platforms also lay the foundation for vendors to introduce other software features that organizations may need now or in the future, such as CAS, clustered storage, deduplication, NAS, VTL and virtualization of external arrays. Delivering all of these features on one common platform lets users start with a single storage array, then grow it to manage other physical storage arrays in their infrastructure and migrate data between them more easily. A number of companies already offer some of these functions in their products, including HDS, HP, NetApp and Pillar Data Systems.

As storage infrastructures continue to grow, so will the number of storage arrays. Organizations without a larger vision of how they will manage all of the data on those arrays will end up with a complex configuration that puts their data, and ultimately their business, at risk. Universal storage platforms provide a means to centralize and simplify the management and migration of data between different storage arrays. And while they introduce considerations and complexities of their own, they take organizations down a path of centralized, simplified storage management, rather than down no road at all.

About the author: Jerome M. Wendt is the founder and lead analyst of The Datacenter Infrastructure Group, an independent analyst and consulting firm that helps users evaluate the different storage technologies on the market and make the right storage decision for their organization.