In a virtual server environment, the interaction between the hypervisor and the storage hardware that supports it is complicated. To simplify that interaction and make it more efficient, VMware developed the vStorage APIs for Array Integration (VAAI).
With VAAI, storage array vendors can integrate their storage hardware and applications directly with vSphere. VAAI enables certain storage tasks, such as cloning, to be offloaded to the storage array, which can complete them more efficiently than the host can. Rather than consume host resources to perform the work (as was required before VAAI), the host simply hands the task off to the storage array and monitors its progress. The storage array is purpose-built for storage tasks and can complete such requests much faster than the host can.
What the vStorage APIs for Array Integration do
There are currently three areas where VAAI enables vSphere to act more efficiently for certain storage-related operations:
- Copy offload. Operations that copy virtual disk files, such as VM cloning or deploying new VMs from templates, can be hardware-accelerated by array offloads rather than file-level copy operations at the ESX server. This technology is also leveraged for the Storage vMotion function, which moves a VM from one data store to another. VMware’s Full Copy operation can greatly speed up any copy-related operation, which makes deploying new VMs a much quicker process. This can be especially beneficial to any environment where VMs are provisioned on a frequent basis or when many VMs need to be created at one time.
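To make the benefit concrete, here is a minimal sketch (not a real SCSI implementation; the function names and block size are illustrative assumptions) contrasting the I/O a host issues for a software copy with what a Full Copy offload requires of it:

```python
# Illustrative only: compare host-side copy traffic with a VAAI Full Copy
# (XCOPY-style) offload, counting commands the host must issue.

def host_side_copy(num_blocks):
    """Without offload, the host reads each block from the source LUN into
    its own memory and writes it back out: two I/Os per block."""
    ios = 0
    for _ in range(num_blocks):
        ios += 1  # READ block from source
        ios += 1  # WRITE block to destination
    return ios

def offloaded_copy(num_blocks):
    """With Full Copy, the host sends one copy descriptor and the array
    moves the data internally, regardless of the disk's size."""
    return 1

blocks = 40 * 1024  # a 40 GB virtual disk in 1 MB blocks
print(host_side_copy(blocks))  # 81920 host I/Os
print(offloaded_copy(blocks))  # 1 command
```

The loop is deliberately naive; the point is that host-side work scales with disk size, while the offloaded path stays constant from the host's perspective.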
- Write same offload. Before any block of a virtual disk can initially be written to, it needs to be “zeroed” first. (A disk block with no data has a Null value; zeroing a disk block writes a zero to it to clear any data that may already exist on that disk block from deleted VMs.) Default “lazy zeroed” virtual disks (those zeroed on demand as each block is initially written to) do not zero each disk block until it is written to for the first time. This causes a slight performance penalty and can leave stale data exposed to the guest OS. “Eager-zeroed” virtual disks (those on which every disk block is zeroed at the time of creation) can be used instead, to eliminate the performance penalty that occurs on first write to a disk block and to erase any previous VM data that may have resided on those disk blocks. The formatting process when zeroing disk blocks sends gigabytes of zeros (hence the “write same” moniker) from the ESX/ESXi host to the array, which can be both a time-consuming and resource-intensive process. With VMware’s Block Zeroing operation, the array can handle the process of zeroing all of the disk blocks much more efficiently. Instead of having the host wait for the operation to complete, the array simply signals that the operation has completed right away and handles the process on its own without involving the host.
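The traffic savings of Block Zeroing can be sketched the same way. In this hedged example (function names and sizes are assumptions for illustration), the quantity compared is bytes of zeros crossing the fabric:

```python
# Illustrative only: fabric traffic to zero a virtual disk, with and
# without a WRITE SAME (Block Zeroing) offload.

def zero_traffic_no_offload(disk_bytes, block_size):
    """Without offload, the host streams a zero-filled buffer for every
    block, so the full size of the disk crosses the wire as zeros."""
    num_blocks = disk_bytes // block_size
    return num_blocks * block_size

def zero_traffic_write_same(disk_bytes, block_size):
    """With Block Zeroing, the host sends a single pattern block and the
    array repeats it across the extent on its own."""
    return block_size

GB = 1024 ** 3
print(zero_traffic_no_offload(10 * GB, 1024 * 1024))  # ~10 GB of zeros
print(zero_traffic_write_same(10 * GB, 1024 * 1024))  # one 1 MB pattern
```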
- Hardware-assisted locking. The VMFS file system allows for multiple hosts to access the same shared LUNs concurrently, which is necessary for features like vMotion to work. VMFS has a built-in safety mechanism to prevent a VM from being run on or modified by more than one host simultaneously. vSphere employs “SCSI reservations” as its traditional file locking mechanism, which locks an entire LUN using the RESERVE SCSI command whenever certain storage-related operations, such as incremental snapshot growth, occur. This helps to avoid corruption but can delay storage tasks from completing as hosts have to wait for the LUN to be unlocked with the RELEASE SCSI command before they can write to it. Atomic Test and Set (ATS) is a hardware-assisted locking method that offloads the locking mechanism to the storage array, which can lock at individual disk blocks instead of the entire LUN. This allows the rest of the LUN to continue to be accessed while the lock occurs, helping to avoid performance degradation. It also allows for more hosts to be deployed in a cluster with VMFS data stores and more VMs to be stored on a LUN.
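The key property of ATS is compare-and-swap semantics on a single lock record rather than a reservation on the whole LUN. The sketch below models that idea in plain Python (the class and its atomicity stand-in are assumptions, not VMFS internals): one host's lock on one record does not block another host from taking a different record on the same LUN.

```python
# Illustrative model of ATS-style locking: per-record compare-and-swap
# instead of reserving the entire LUN.
import threading

class Lun:
    def __init__(self, num_records):
        self.records = [None] * num_records  # per-resource lock records
        self._atomic = threading.Lock()      # stands in for array atomicity

    def atomic_test_and_set(self, index, expected, new):
        """Update one lock record only if it still holds the expected
        value; all other records remain freely accessible."""
        with self._atomic:
            if self.records[index] == expected:
                self.records[index] = new
                return True
            return False

lun = Lun(num_records=8)
print(lun.atomic_test_and_set(3, None, "host-A"))  # True: host A locks 3
print(lun.atomic_test_and_set(3, None, "host-B"))  # False: record 3 busy
print(lun.atomic_test_and_set(4, None, "host-B"))  # True: record 4 is free
```

With a whole-LUN SCSI reservation, the second and third calls would both have to wait; with per-record ATS only the contended record blocks.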
Vendor support for VAAI
Currently, the vStorage APIs for Array Integration provide benefits only for block-based storage arrays (Fibre Channel or iSCSI) and do not support NFS storage. Vendor support for VAAI has been varied, with some vendors, such as EMC, embracing it right away and other vendors taking longer to integrate it into all their storage array models. To find out which storage arrays support specific vStorage API features, you can check the VMware Compatibility Guide for storage/SANs.
Using the VMware Compatibility Guide for storage/SANs, you can search for your storage array to determine whether it supports VAAI and, if so, which of those APIs are supported.
The guide is searchable and shows information about each storage array such as which multipathing plug-ins are supported as well as which VAAI features are supported. If your storage array currently does not support VAAI, check with the vendor to see if it plans to add support for it. You may need to upgrade to a newer release of vSphere or a newer-model storage array that supports VAAI.
The vStorage APIs for Array Integration are enabled by default in vSphere 4.1 (but not supported in vSphere 4.0), and as long as a storage array supports them, they will be active. But you may want to disable VAAI functions if you are experiencing problems that may be caused by storage array incompatibilities, or for testing purposes so you can compare performance statistics with VAAI enabled and with it disabled. You can disable each function individually by using the following advanced host settings from the Configuration > Software > Advanced Settings menu in the vSphere Client:
- To disable copy offload, set DataMover.HardwareAcceleratedMove to 0.
- To disable write same offload, set DataMover.HardwareAcceleratedInit to 0.
- To disable hardware-assisted locking, set VMFS3.HardwareAssistedLocking to 0.
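If you prefer the command line, the same advanced options can be queried and set with esxcfg-advcfg from the ESX service console (or vicfg-advcfg via the vSphere CLI). This is a sketch based on the option paths named above; verify the paths on your own host before relying on it.

```shell
# Query current values (1 = enabled, 0 = disabled)
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -g /VMFS3/HardwareAssistedLocking

# Disable each VAAI primitive (set back to 1 to re-enable)
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedInit
esxcfg-advcfg -s 0 /VMFS3/HardwareAssistedLocking
```

No reboot is needed for these settings to take effect, which makes it easy to toggle them when comparing performance with VAAI on and off.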
The performance improvements that VAAI provides for specific storage operations are dramatic and make a compelling case for leveraging the APIs. VMware is continually improving the vStorage APIs with each release of vSphere; expect to see more API integration in the areas of NFS enhancements, snapshot offload and array management in future releases.
Eric Siebert is a VMware expert and author of two books on virtualization.
This article was originally published on SearchVirtualStorage.com.
This was first published in June 2011