iSCSI in VMware environments: Best practices


iSCSI and its sister IP protocol, NFS, have quickly become popular storage protocols for use in VMware environments. While there's ongoing debate about which protocol fits best with VMware, the decision comes down to what the implementer is most comfortable with and which protocol they can afford. In many cases, that choice is iSCSI.

For the purposes of this discussion, we'll assume iSCSI is the protocol chosen and that you have a basic understanding of how to implement and operate it in a VMware environment. Let's examine some best practices for leveraging iSCSI in VMware environments.

Best practice #1: Remember that iSCSI is a storage network

Even though iSCSI runs on IP, it's probably running data traffic for a business-critical application in the environment and should be treated that way. In a smaller data center, mixing the protocol with normal traffic is probably acceptable. But for a data center that has any real size at all, iSCSI should be on its own network.


An iSCSI network shouldn't be treated as just another workload on the local-area network (LAN); it deserves its own infrastructure, including separate physical ports in the host for iSCSI traffic and for general IP traffic. Virtual LANs (VLANs) can help divide traffic, but they can only do so much: if one protocol suddenly demands the majority of the available I/O bandwidth, the other protocols sharing the link suffer. An exception can be made for high-bandwidth 10 Gigabit Ethernet (10 GbE) connections, but even then look for cards with some sort of quality-of-service (QoS) capability built in.
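As an illustration, here's a minimal sketch of carving out a dedicated vSwitch, uplinks and VMkernel port for iSCSI on a classic ESX host from the service console. The vSwitch name, vmnic numbers, port group name and IP addressing below are hypothetical placeholders; adapt them to your environment, and note that the same steps can be performed in the vSphere Client.

  esxcfg-vswitch -a vSwitch1                  # create a vSwitch reserved for storage traffic
  esxcfg-vswitch -L vmnic2 vSwitch1           # attach a physical NIC dedicated to iSCSI
  esxcfg-vswitch -L vmnic3 vSwitch1           # add a second uplink for redundancy
  esxcfg-vswitch -A iSCSI vSwitch1            # add an "iSCSI" port group to that vSwitch
  esxcfg-vmknic -a -i 10.0.10.11 -n 255.255.255.0 iSCSI   # VMkernel port on the storage subnet

Keeping the iSCSI subnet physically separate this way also makes it easier to apply storage-specific settings, such as the jumbo frames discussed in best practice #3, without touching the general-purpose LAN.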

Current technology allows physical network interface cards (NICs) and host bus adapters (HBAs) to be divided into virtual cards with independent channels. Some of these channels are hardware-based circuits that physically segregate traffic, so there's no risk of an I/O storm affecting the performance of other virtual machines (VMs) or protocols that share the card. These cards also allow for maximum utilization of the 10 GbE bandwidth.

Best practice #2: Consider using only software initiators in iSCSI

There are two iSCSI initiator choices. The first is a software initiator, typically a device driver built into the operating system or hypervisor that performs the SCSI-to-IP translation. The initial concern with software initiators was that IP processing already places a load on the host CPU, and adding the SCSI-to-IP translation could be too much. However, as CPU processing power has increased, this concern has waned. Software initiators have also matured and are now proving to be quite efficient; in VMware, the iSCSI load places no more than a 5% overhead on the CPU.
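As a sketch of how this looks in practice on a classic ESX 3.5/4.x host, the software initiator can be enabled from the service console with the esxcfg-swiscsi command (on later ESXi releases the equivalent steps are done through esxcli or the vSphere Client). The commands below are illustrative only:

  esxcfg-swiscsi -e        # enable the software iSCSI initiator in the VMkernel
  esxcfg-swiscsi -q        # query whether the initiator is enabled
  esxcfg-swiscsi -s        # rescan the software iSCSI bus after targets are added

Discovery addresses, CHAP credentials and multipathing are then configured against the resulting vmhba adapter, typically through the vSphere Client.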

The other type of iSCSI initiator is a hardware-based iSCSI HBA. Hardware initiators allow users to boot the host directly from the card, eliminating the need for local storage, and often add capabilities such as encryption. They can also boost performance because the HBA offloads iSCSI processing from the host CPU.

Best practice #3: Consider using jumbo frames

Using jumbo frames means setting the Ethernet frame to a larger size than the default of 1,500 bytes, typically 9,000 bytes. Moving the same amount of data in fewer, larger frames (roughly one-sixth as many at a 9,000-byte frame size) reduces the number of frames iSCSI has to process and decreases the CPU work involved in the SCSI-to-IP translation. Jumbo frames also lessen the load on physical hosts, which is especially important when using software initiators.

In VMware ESX 3.5, jumbo frames were supported only at the virtual machine level, not at the VMkernel level where iSCSI traffic runs, which initially dampened their appeal. With VMware vSphere 4, jumbo frames are supported at both levels, so the maximum benefit can be achieved.

When implementing jumbo frames, there are several important configuration requirements to consider (a configuration sketch follows this list):

  • All hardware in the chain must support jumbo frames.

  • Jumbo frames must be enabled as the vSwitch and vNICs (virtual versions of the physical hardware created by the hypervisor) are created.

  • The capability can't be turned on after the fact. To move to jumbo frames later, those virtual components must be deleted and recreated.
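Assuming the dedicated vSwitch and VMkernel port from the earlier sketch, a minimal (and again hypothetical) jumbo frame configuration on a vSphere 4-era host might look like this from the service console; the names, addresses and MTU value are placeholders, and the physical switch and array ports must be set to a matching MTU:

  esxcfg-vswitch -m 9000 vSwitch1       # raise the vSwitch MTU to 9,000 bytes
  esxcfg-vmknic -d iSCSI                # delete the existing VMkernel port...
  esxcfg-vmknic -a -i 10.0.10.11 -n 255.255.255.0 -m 9000 iSCSI   # ...and recreate it with a 9,000-byte MTU
  esxcfg-vswitch -l                     # list vSwitches to confirm the new MTU

Because the MTU can't simply be flipped on an existing VMkernel port, planning for jumbo frames up front avoids the delete-and-recreate step.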

BIO: George Crump is the lead analyst at Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.

This was first published in April 2010
