
Thin provisioning, coined by 3PARdata, raises eyebrows

Alex Barrett, Trends Editor, Storage

Storage managers in the market for a new SAN array may be tempted by a new feature of some arrays -- thin provisioning (TP), pioneered by 3PARdata Inc. -- but not all users who have tried it are sold on the concept.

Thin provisioning, sometimes called oversubscription, is a way to quickly get capacity to an application. It's a feature of some virtualized storage arrays in which the storage controller presents a full-sized volume to the application but commits physical capacity only as it is actually required. When utilization of that storage approaches a predetermined threshold, the array automatically expands the volume without involving the storage administrator.
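
In rough terms, the mechanism works like the Python sketch below: the array commits physical capacity from a shared pool only as data is written, and grows the volume in fixed increments once a utilization threshold is crossed. The class name, thresholds and 32 GB growth step are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of the thin-provisioning idea described above.
# All names and numbers are illustrative, not any vendor's API.

class ThinVolume:
    """A logical volume that commits physical capacity only as data is written."""

    def __init__(self, pool_gb, initial_gb=32, growth_threshold=0.8, growth_step_gb=32):
        self.pool_gb = pool_gb              # free capacity left in the shared pool
        self.committed_gb = initial_gb      # physical capacity actually reserved
        self.used_gb = 0                    # capacity the host has written
        self.growth_threshold = growth_threshold
        self.growth_step_gb = growth_step_gb

    def write(self, gb):
        self.used_gb += gb
        # Auto-expand when utilization crosses the threshold -- no admin involved.
        while self.used_gb > self.committed_gb * self.growth_threshold:
            if self.pool_gb < self.growth_step_gb:
                raise RuntimeError("storage pool exhausted")
            self.pool_gb -= self.growth_step_gb
            self.committed_gb += self.growth_step_gb

vol = ThinVolume(pool_gb=1000)
vol.write(100)
print(vol.committed_gb, vol.pool_gb)   # 128 GB committed, 904 GB left in the pool
```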

Thin provisioning helps storage administrators break the old habit of overprovisioning storage -- a common practice used to protect against having to continually grow volumes and LUNs. Vendors with thin provisioning include 3Par, as well as Cloverleaf Communications Inc., DataCore Software Corp., LeftHand Networks Inc., Network Appliance Inc. and Pillar Data Systems Inc.

Some users love thin provisioning. Warren Habib, chief technology officer at Fotolog Inc., an online photo blog site in New York City, has seen his storage administration duties dwindle to next to nothing since his firm rearchitected its infrastructure and added a 3Par InServ array. Before putting in the InServ array, Fotolog's file serving was handled by 120 standalone file servers with internal storage, and "keeping up with storage growth was impossible," Habib said. Now, whenever Fotolog's InServ sees the file server volumes running out of space, it automatically grows them by 32 GB. "I didn't have to check on the system capacity for three months," he said.

But other users haven't had such idyllic experiences. Nick Poolos, systems/network specialist at The Ohio State University's Fisher College of Business, saw the NTFS file systems mounted on a Compellent Storage Center SAN grow abnormally large when using the array's Dynamic Capacity feature.

At issue, according to Bob Fine, Compellent's product marketing manager, is Microsoft's "undelete" feature, which just marks blocks to be released without actually erasing them. Rather than reusing released blocks, NTFS prefers new, unused blocks, which caused the thin-provisioned volume to swell to its maximum allocated size. Fine says doing a periodic disk defragmentation may be a workaround. He adds that most users will never witness this problem because most environments grow gradually and don't delete much data.
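
A toy model makes the effect easy to see: an allocator that always prefers never-used blocks drives the volume's physical high-water mark toward its full logical size, while one that reuses freed blocks stays small. This is a simplified sketch of the behavior Fine describes, not actual NTFS allocation logic.

```python
# Toy allocator comparison: repeatedly create and delete files on a thin volume.

def churn(prefer_fresh, files=1000, blocks_per_file=10):
    freed, next_fresh, high_water = [], 0, 0
    for _ in range(files):
        allocated = []
        for _ in range(blocks_per_file):
            if not prefer_fresh and freed:
                allocated.append(freed.pop())      # reuse a block freed by a delete
            else:
                allocated.append(next_fresh)       # touch a never-before-used block
                next_fresh += 1
        high_water = max(high_water, next_fresh)   # blocks the array must physically back
        freed.extend(allocated)                    # "delete" the file: blocks marked free
    return high_water

print("reuse freed blocks :", churn(prefer_fresh=False))  # high-water mark stays at 10
print("prefer fresh blocks:", churn(prefer_fresh=True))   # climbs to 10,000
```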

Arun Taneja, founder, president and consulting analyst at the Taneja Group, Hopkinton, Mass., warns of another possible thin-provisioning pitfall: Some file systems, for performance reasons, like to spread their metadata across all the space they have been assigned.

"In many cases, if you allocate 100 GB to the application, the application (or the file system associated with it) will mark all the entire 100 GB with metadata," Taneja writes. "If the application behaves in this fashion, realize that 100 GB is gone from the storage pool and is no longer available. This defeats the entire purpose of TP in the first place."

Those reasons likely contributed to thin provisioning's rank at the bottom of TheInfoPro's recent Wave 7 index of hot technologies, reports Rob Stevenson, managing director of the storage practice at TheInfoPro. Stevenson, a former storage administrator, learned from personal experience that "thin provisioning was application sensitive and introduced another pain into the infrastructure certification process." He found that "each file system, relational database and NAS appliance allocated storage differently, and it typically confused the thin-provisioning product."

Some storage vendors think thin provisioning is a downright bad idea. "If your goal with a SAN is ease of use and consolidation, you shouldn't be telling administrators to lie and then have to manage the lie," said Eric Schott, director of product management at IP SAN vendor EqualLogic Inc. If, from a host perspective, it looks like there's a lot of available space, a user might decide to use it. And if the array can't provision new storage fast enough, the application will crash.

Furthermore, Schott said, the underlying reason for thin provisioning -- that it's a pain to grow storage volumes -- isn't true anymore. Expanding a disk is a relatively simple process with most modern operating systems, he said. However, he concedes that thin provisioning could be useful if you're running some legacy operating systems.

Nevertheless, SAN vendors offering thin provisioning report that their users have embraced it. Compellent's Fine says that approximately 80% of Storage Center users have Dynamic Capacity in place. The remaining 20%, he says, aren't using it because "old habits die hard," not because there's anything wrong with it.