vSphere 5 storage: New storage management functionality, vSphere Storage Appliance

Podcast

Last year, VMware released the latest version of its virtualization platform, vSphere 5, which among other things offered storage functionality enhancements over the previous version of the suite, vSphere 4. vSphere 5 also includes new storage management features, such as Storage Distributed Resource Scheduler (DRS), Profile-Driven Storage and a new set of vStorage APIs. Additionally, the latest vSphere comes with the new vSphere Storage Appliance, a virtual storage appliance (VSA) that supports higher RAID levels and is targeted at small and medium-sized businesses (SMBs).

In this interview, VMware expert Mike Laverick digs into vSphere 5 storage features, VMware’s storage enhancements since vSphere 4 and how the changes ease storage management for the administrator. Find out what Storage DRS and Profile-Driven Storage are and how they work, the latest news around vStorage APIs, and the benefits of the new vSphere Storage Appliance.

What new storage management features does VMware’s vSphere 5 offer that were not available in vSphere 4, and how does it improve on the previous version?

Laverick: I think one of the key things to take away is the new platform that includes the new version of VMware’s file system, VMFS-5. The good news is that you can now do a seamless upgrade from one version of VMFS to another without having to power off the virtual machines (VMs). From a storage perspective, this smashes through the old limitations of VMFS: it now allows a single VMFS volume of up to 64 TB in size, and it allows the guest operating system (OS) to access a LUN directly from the array at up to 64 TB as well. [With Storage vMotion] the ability to move a VM from one data store to another has been massively improved. It no longer uses the Changed Block Tracking (CBT) mechanism; there is now a mirroring architecture [that] allows a kind of block-level update without driving as many IOPS to the array as before.
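
To make the Storage vMotion change concrete, here is a minimal pyVmomi (vSphere Python SDK) sketch that relocates a running VM’s disks to another data store. The vCenter address, credentials, VM name and data store name are placeholders for illustration, not details from the podcast.

```python
# Minimal pyVmomi sketch of a Storage vMotion: relocate a running VM's disks
# to another data store. The vCenter address, credentials, VM name and data
# store name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_obj(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # skip certificate checks; lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

vm = find_obj(content, vim.VirtualMachine, "web01")
target_ds = find_obj(content, vim.Datastore, "bigger-vmfs5-volume")

# A RelocateSpec that names only a datastore triggers a Storage vMotion;
# the VM keeps running while its disks are copied to the new volume.
task = vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target_ds))
print("Storage vMotion submitted:", task.info.key)

Disconnect(si)
```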

Finally, another interesting development is the inclusion of a Fibre Channel over Ethernet (FCoE) software adapter. You do need NICs that support FCoE, and it needs to be enabled in the BIOS; previously you had to purchase rather expensive CNAs to get FCoE to work, and now there is a software adapter that enables it. It sits alongside the iSCSI software adapter implementation, but differs in that you need special network cards, enabled in the BIOS, for it to work properly.
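
As a rough illustration of how the software FCoE adapter might be enabled programmatically, the hedged pyVmomi sketch below creates one on top of a named physical NIC. The vmnic name is a placeholder, the NIC must support FCoE and be enabled for it in the BIOS, and host is assumed to be a vim.HostSystem obtained from a connection like the one in the previous sketch.

```python
# Hedged pyVmomi sketch: enable the software FCoE adapter on a host. The
# vmnic name is a placeholder and must refer to an FCoE-capable NIC that is
# enabled for FCoE in the BIOS; `host` is a vim.HostSystem from a connection
# like the one in the previous sketch.
from pyVmomi import vim

def enable_software_fcoe(host, vmnic_name="vmnic2"):
    storage_sys = host.configManager.storageSystem
    spec = vim.host.FcoeConfig.FcoeSpecification(underlyingPnic=vmnic_name)
    storage_sys.DiscoverFcoeHbas(fcoeSpec=spec)
    # The new adapter then appears alongside any software iSCSI adapter
    # (a rescan may be needed before it shows up in storageDeviceInfo).
    for hba in storage_sys.storageDeviceInfo.hostBusAdapter:
        if isinstance(hba, vim.host.FibreChannelOverEthernetHba):
            print("FCoE HBA:", hba.device, "on NIC", hba.underlyingNic)
```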

Could you talk a bit more about Storage DRS and Profile-Driven Storage and how they work?

Laverick: It’s important to see that storage profiles and Storage DRS are two sides of the same coin; the two work together. [Storage] Profiles allow the VMware administrator to install a little bit of software from the storage partner -- be that EMC, Dell or NetApp -- and then categorize the storage by different attributes: RAID level, what sort of snapshotting support there is, and what replication is available. These profiles are intended to let the system administrator categorize the storage so that, when they go to create new virtual machines, they can apply filters -- for example, selecting a profile that shows only replicated volumes and hides the volumes that aren’t replicated.

That also feeds into the Storage DRS feature. In case your listeners aren’t familiar, it essentially does for storage what DRS already did for CPU and memory. When a virtual machine is created, Storage DRS looks at the data stores that are available and can place the virtual machine on the best-suited storage, and it can also move virtual machines from one data store to another to improve performance. With that said, storage teams will know that a lot of storage vendors now offer multi-tiering of storage -- between SSD, SAS and SATA -- so the general recommendation, if you’re using multi-tiering (or auto-tiering, as it’s sometimes called), is to use [Storage] DRS for the initial placement but let the array handle moving the so-called hot blocks from one layer of storage to another.
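
To give a feel for the Storage DRS side of this, the hedged pyVmomi sketch below walks the data store clusters in the inventory and prints whether SDRS is enabled along with the space figures it balances on. It assumes a content object retrieved from a connection like the earlier sketch, and it only inspects configuration; actual placement recommendations come from the StorageResourceManager API, which is not shown here.

```python
# Hedged pyVmomi sketch: inspect Storage DRS data store clusters and the
# space figures SDRS balances on. `content` is the RetrieveContent() result
# from a connection such as the one in the earlier Storage vMotion sketch.
from pyVmomi import vim

GB = 2**30

def report_datastore_clusters(content):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.StoragePod], True)
    for pod in view.view:
        pod_cfg = pod.podStorageDrsEntry.storageDrsConfig.podConfig
        s = pod.summary
        print("Cluster %s: SDRS enabled=%s, %d of %d GB free"
              % (s.name, pod_cfg.enabled, s.freeSpace // GB, s.capacity // GB))
        # The member data stores that SDRS places VMs on and balances across
        for ds in pod.childEntity:
            print("   %s: %d GB free" % (ds.name, ds.summary.freeSpace // GB))
    view.DestroyView()
```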

What’s new in vSphere 5 storage around vStorage APIs?

Laverick: The big news is that the hardware acceleration that used to be available just for block-based storage -- that’s Fibre Channel and iSCSI -- is now being extended to NFS, so the time it takes to copy a virtual machine -- say, when you provision a new virtual machine from a template -- is massively reduced. Also exposed to the vSphere layer is Thin Provisioning Stun. Basically, it’s a SCSI primitive that warns the system administrator, at the vSphere layer, about volumes that are thinly provisioned and potentially running out of space. In the past you’d just run out of space and then have all manner of problems to resolve. The other thing that’s related to that is space reclamation on thin-provisioned volumes. When data is deleted from a virtual disk -- space [that] used to be left unreclaimed by the storage array -- the new APIs allow that disk space to be recovered.
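
As an illustration of why the thin-provisioning primitives matter, the hedged pyVmomi sketch below flags data stores that have promised more space to thin-provisioned disks than physically exists. The over-commit check and the content object from an earlier connection are assumptions for the example.

```python
# Hedged pyVmomi sketch: flag over-committed thin-provisioned data stores.
# `content` comes from RetrieveContent() as in the earlier sketches; the
# over-commit check itself is an illustrative assumption, not a vSphere default.
from pyVmomi import vim

GB = 2**30

def find_overcommitted_datastores(content):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        # capacity - freeSpace = space in use; uncommitted = extra space
        # promised to thin-provisioned disks but not yet written
        provisioned = (s.capacity - s.freeSpace) + (s.uncommitted or 0)
        if provisioned > s.capacity:
            print("%s is over-committed: %d GB provisioned on %d GB, %d GB free"
                  % (s.name, provisioned // GB, s.capacity // GB, s.freeSpace // GB))
    view.DestroyView()
```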

And then, on a more minor level, there is a new kind of tag that allows you to identify storage that is SSD-based, basically allowing the system administrator, again in vSphere, to identify that storage quickly when they’re actually deploying a new virtual machine.
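
For the SSD tag, a short hedged pyVmomi sketch can list which LUNs a host reports as flash-backed via the ssd flag on its SCSI disks; host is assumed to be a vim.HostSystem looked up as in the earlier sketches.

```python
# Hedged pyVmomi sketch: list the LUNs a host flags as SSD-backed via the
# `ssd` property vSphere 5 exposes on SCSI disks. `host` is assumed to be a
# vim.HostSystem looked up as in the earlier sketches.
from pyVmomi import vim

def list_ssd_luns(host):
    for lun in host.config.storageDevice.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk) and getattr(lun, "ssd", False):
            print("SSD LUN:", lun.canonicalName, "-", lun.displayName)
```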

vSphere 5 comes with a new virtual storage appliance that is available separately. How does the vSphere Storage Appliance work and what are its advantages?

Laverick: Like any appliance from VMware, it’s a virtual appliance; it comes with a kind of management head, and then every ESX host gets a VSA, which allows you to share out the local storage that’s inside each ESX host and make it available over NFS. For the moment it’s being squarely pitched at the SMB market, largely because the volume of storage you can make available through it is limited by the amount of storage actually in the ESX hosts. Some people have looked at this and said that the overhead of running the VSA -- the RAID level and so on -- actually results in quite low disk utilization. But I understand there is a new iteration of the VSA that will add higher levels of RAID support, RAID 5 and RAID 6, which should deliver the same level of protection with a larger amount of addressable storage.

The other thing that I think is worth saying is that the [vSphere Storage Appliance] probably would have had a bigger impact in the early days of virtualization, when smaller shops were trying to get into virtualization but couldn’t because of the storage requirements. Nowadays there are a lot of entry-level storage arrays out there that offer very good value for money and very good performance. So it’s an open question at the moment whether the VSA will get the kind of traction VMware is hoping for. But it’s an interesting development, at least, that VMware is trying its best to answer the demands of the SMB market: cost-efficient storage [that meets] all the requirements -- things like vMotion, [high availability] and DRS.

This podcast was originally published on SearchVirtualStorage.com.


This was first published in April 2012