Implementing iSCSI storage
Price and performance are often named as the chief benefits of iSCSI storage: iSCSI is typically cheaper to implement than Fibre Channel (FC) storage, and its performance has increased with the use of 10 Gbps Ethernet. On the downside, iSCSI deployments frequently bring additional CPU overhead. However, VMware has rewritten the iSCSI software initiator stack to use CPU cycles more efficiently, which has resulted in significant efficiency and throughput improvements compared with VMware Infrastructure 3 (VI3).
Best practices for using iSCSI storage in a VMware environment
Once iSCSI disks have been configured, they're ready to be used by virtual machines (VMs). The best practices listed here should help you get the maximum performance and reliability out of your iSCSI data stores in a VMware environment.
- The performance of iSCSI storage is highly dependent on network health and utilization. For best results, always isolate your iSCSI traffic onto its own dedicated network.
- You can configure only one software initiator on an ESX Server host. When configuring a vSwitch that will provide iSCSI connectivity, use multiple physical NICs to provide redundancy. Make sure you bind the VMkernel interfaces to the NICs in the vSwitch so multipathing is configured properly.
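As a sketch of that setup, the following service console commands build a dedicated iSCSI vSwitch with two uplink NICs and two VMkernel ports (vSphere 4.x syntax). The names vSwitch1, vmnic2/vmnic3, the port group names and the IP addresses are assumptions; substitute the values from your own environment.

```shell
# Create a dedicated vSwitch for iSCSI traffic
esxcfg-vswitch -a vSwitch1

# Attach two physical NICs as uplinks for redundancy
# (vmnic2 and vmnic3 are examples -- list yours with esxcfg-nics -l)
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# Add one port group per VMkernel interface
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vswitch -A iSCSI2 vSwitch1

# Create a VMkernel port on each port group (example addresses)
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 192.168.10.12 -n 255.255.255.0 iSCSI2
```

With the ports in place, each VMkernel interface can then be bound to the software iSCSI adapter with esxcli so the multipathing plug-in sees both paths.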
- Ensure the NICs used in your iSCSI vSwitch connect to separate network switches to eliminate single points of failure.
- vSphere supports jumbo frames with storage protocols, but they benefit only very specific workloads with very large I/O sizes. Your back-end storage must also be able to handle the increased throughput, which typically means a large number (15 or more) of spindles in your RAID group, or you'll see no benefit. If your I/O sizes are smaller or your storage is spindle-bound, jumbo frames will yield little or no performance gain, and in some cases they can actually decrease performance, so run benchmark tests before and after enabling them to see their effect. Every component in the end-to-end path must support and be configured for jumbo frames: physical NICs, network switches, vSwitches, VMkernel ports and iSCSI targets. If any one component isn't configured for jumbo frames, they won't work.
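On the ESX side, enabling jumbo frames means setting a 9,000-byte MTU on both the vSwitch and its VMkernel ports (vSphere 4.x syntax). The vSwitch name, port group name and IP address below are assumptions for illustration; a VMkernel port must be created with the larger MTU, so an existing standard-MTU port has to be removed and re-added.

```shell
# Set a 9000-byte MTU on the iSCSI vSwitch (vSwitch1 is an example name)
esxcfg-vswitch -m 9000 vSwitch1

# VMkernel ports inherit their MTU at creation time, so create (or re-create)
# them with -m 9000 (example port group and address)
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 iSCSI1

# Verify the MTU took effect
esxcfg-vswitch -l
```

Remember that this covers only the host side; the physical switch ports and the iSCSI target must be configured for jumbo frames as well.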
- Use the new Paravirtual SCSI (PVSCSI) adapter for your virtual machine disk controllers, as it offers better throughput and performance than the standard LSI Logic and BusLogic adapters in most cases. For very low I/O workloads, however, the LSI Logic adapter works best.
- To set up advanced multipathing for best performance, select Properties for the iSCSI storage volume and click Manage Paths. You can configure the path selection policy using native VMware multipathing or third-party multipathing plug-ins, if available. When using software initiators, create two VMkernel interfaces on a vSwitch; for each interface, set one physical NIC to Active and the other to Unused; then use the esxcli command to bind the first VMkernel port to the first NIC and the second VMkernel port to the second NIC. Round Robin will usually provide better performance than the Fixed or Most Recently Used (MRU) policies, but avoid Round Robin if you're running Microsoft Cluster Server on your virtual machines.
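The binding and path-policy steps above can be sketched with the vSphere 4.x esxcli syntax. The adapter name vmhba33, the VMkernel port names vmk1/vmk2 and the naa device ID are placeholders; look up your own with the list commands shown.

```shell
# Bind each VMkernel port to the software iSCSI adapter
# (vmhba33, vmk1 and vmk2 are examples -- find yours with
# esxcfg-scsidevs -a and esxcfg-vmknic -l)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify both VMkernel ports are bound
esxcli swiscsi nic list -d vmhba33

# Set the Round Robin path selection policy on an iSCSI device
# (replace the naa ID with one from: esxcli nmp device list)
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```

Rescan the adapter afterward so both paths show up, and confirm the active policy per device with esxcli nmp device list.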
BIO: Eric Siebert is an IT industry veteran with more than 25 years of experience who now focuses on server administration and virtualization. He's the author of VMware VI3 Implementation and Administration (Prentice Hall, 2009).
This article originally appeared in Storage magazine.
This article was previously published on SearchStorage.com.
This was first published in November 2010