Podcast

Using NFS to support a virtual server environment

Using NFS to support your virtual server environment provides advantages for IT managers in terms of cost and complexity. But there are disadvantages, such as the lack of support for multi-pathing and the fact that the vStorage APIs for Array Integration (VAAI) don’t support NFS.

In this podcast interview, VMware expert Eric Siebert discusses the best way to use NFS for virtual server environments. Discover the benefits and complications that can arise when implementing network-attached storage (NAS) in your virtual server platform, the outlook on VAAI's lack of support for NFS, how NFS performance compares with that of iSCSI and Fibre Channel, and how to set up an NFS device to get the best performance from it.

Listen to the podcast on NFS for virtual servers or read the transcript below.

What are the benefits of using NFS to support a virtual server platform?

Cost is a big one. Shared storage is almost a must with virtualization if you want to take advantage of some of the more advanced features, such as high availability [HA] and vMotion, that require it. The cost of implementing a typical Fibre Channel solution for shared storage is usually pretty high. A NAS solution, on the other hand, can greatly reduce the expense of implementing shared storage because NAS uses common NICs instead of expensive Fibre Channel adapters. [NAS also] uses traditional network components instead of expensive Fibre Channel switches and cables.

Reduced complexity is another [benefit]. Setting up a NAS solution is typically much easier than setting up a SAN, and specialized storage administrators usually aren't necessary; the technology is a lot simpler. Many server or virtualization admins can set up a NAS without any sort of special training. [Also, the] overall management of a NAS is typically easier compared to a more complicated SAN.
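As a rough illustration of how lightweight NAS setup can be, the sketch below mounts a single NFS export as a vSphere datastore from an ESXi shell. It is only a sketch: the NAS hostname, export path, and datastore label are placeholders, and the syntax assumes the esxcli storage nfs namespace found in newer ESXi releases (older hosts use esxcfg-nas instead).

```python
"""Minimal sketch: mount an NFS export as a vSphere datastore.

Assumes it runs in an ESXi shell where the `esxcli` command is available.
The NAS hostname, export path, and datastore label are placeholders.
"""
import subprocess

NAS_HOST = "nas01.example.com"    # placeholder NFS server
NAS_EXPORT = "/vol/vm_datastore"  # placeholder export path
DATASTORE = "nfs-vm01"            # label the datastore will get in vSphere

def mount_nfs_datastore(host: str, export: str, label: str) -> None:
    """Add an NFS export as a datastore on the local ESXi host."""
    subprocess.run(
        ["esxcli", "storage", "nfs", "add",
         "--host", host, "--share", export, "--volume-name", label],
        check=True,
    )

if __name__ == "__main__":
    mount_nfs_datastore(NAS_HOST, NAS_EXPORT, DATASTORE)
    # List NFS mounts to confirm the new datastore appears.
    subprocess.run(["esxcli", "storage", "nfs", "list"], check=True)
```

Compared with zoning a Fibre Channel fabric and masking LUNs, that is essentially the whole provisioning step, which is the simplicity Siebert is describing.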

What disadvantages or complications can you run into with using NAS for a virtual server platform? When would you say it’s a bad idea to use NFS?

NFS is a bit different because it's a file-level protocol, but that's not really a bad thing. Overall, it's a good and effective solution, but there are a few caveats that you should be aware of when using it. The first is that booting a diskless server directly from a storage device isn't supported with NFS. NFS also uses a software client built into the hypervisor instead of a hardware I/O adapter. Because of that, there's a bit of CPU overhead, as the hypervisor must use the software client to communicate with the NFS device. Normally this isn't too big of an issue, but on a host whose CPUs are heavily shared by VMs it can degrade performance and really slow down your storage. If you have a really busy storage workload that does a lot of transactional I/O, a Fibre Channel solution may be more attractive.

Some vendors don’t recommend using NFS storage for latency-sensitive transactional applications because of the latency that can occur with NFS. However, this depends on many factors, such as host resources and configuration, as well as the performance of the NFS device that you’re using. It really shouldn’t be a problem for a well-built and properly sized NFS system.

Finally, NFS doesn’t support multi-pathing from a host to an NFS server. With block storage, you can typically set up multiple paths from the host to the storage device for fault tolerance and load balancing. With NFS, only a single TCP session will be open to an NFS data store, which can limit its performance. This can be alleviated by using multiple smaller data stores instead of fewer, larger data stores, or by using 10 Gigabit Ethernet [GbE], where the available throughput from a single session is much greater. It doesn’t really affect high availability, which can still be achieved with NFS by using multiple NICs in the virtual switch.
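A minimal sketch of that workaround: mounting several smaller exports, here against different NAS interface addresses, so each datastore gets its own TCP session (and, with IP-hash NIC teaming, potentially a different uplink). The addresses, export paths, and labels are placeholders, and the esxcli syntax is the same assumption as in the earlier sketch.

```python
"""Sketch: spread NFS I/O across several smaller datastores.

Each NFS datastore uses a single TCP session, so mounting several exports
(ideally against different NAS addresses) gives the host more sessions to
work with. Run from an ESXi shell where `esxcli` is available.
"""
import subprocess

# Placeholder (NAS address, export path, datastore label) triples.
NFS_EXPORTS = [
    ("192.168.50.11", "/vol/vm_ds01", "nfs-ds01"),
    ("192.168.50.12", "/vol/vm_ds02", "nfs-ds02"),
    ("192.168.50.13", "/vol/vm_ds03", "nfs-ds03"),
]

for nas_ip, share, label in NFS_EXPORTS:
    subprocess.run(
        ["esxcli", "storage", "nfs", "add",
         "--host", nas_ip, "--share", share, "--volume-name", label],
        check=True,  # stop if any mount fails
    )
```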

I understand there’s a lack of support for NFS in VAAI. What kind of impact does this have on users who want to use NFS for their virtual server environment?

Well, the vStorage APIs for Array Integration are still a fairly new technology, and [are] continually evolving with each new release of vSphere. A lot of vendors are still late to the game in supporting them. Currently, VAAI only supports VMFS data stores; it doesn’t support NFS storage. While NFS isn’t supported yet, some NFS solutions, like those from NetApp, have similar offloading features that can provide some of the same benefits as the vStorage APIs. But support for NFS always seems to lag behind support for block-level storage in vSphere, so I think it’s just a matter of time before the vStorage APIs for Array Integration catch up and start supporting NFS as well.

How does the performance of NFS stack up to that of iSCSI and Fibre Channel?

It really depends on the architecture and the types of storage devices that you use for NFS. But overall, NFS performance is pretty close to iSCSI; they’re both similar in that they use software clients and network-based protocols. Fibre Channel, on the other hand, is really hard to beat. While NFS can come close to the performance level that Fibre Channel provides, Fibre Channel is really the king when it comes to performance, and it’s hard for the other protocols to match it.

That’s not to say NFS performs poorly. It provides good performance, and in most cases it should be able to handle most workloads. The important thing with NFS is to not let the CPU become a bottleneck. 10 GbE can also provide a big performance boost for NFS if you can afford it, bringing it up to the level of performance that you can get with Fibre Channel.

How do you set up an NFS device so you get the best performance from it?

As I mentioned, the first thing is having enough CPU resources available so that the CPU never becomes a bottleneck for NFS protocol processing. That’s fairly easy to achieve by simply making sure you don’t completely overload your host CPUs with too many VMs. Network architecture is a big one; NFS performance is highly dependent on network health and utilization. So you should isolate your NFS traffic onto a dedicated physical NIC that [isn’t] shared with any virtual machine traffic. You should also isolate the storage network so that it’s dedicated to the host and the NFS servers [and is not] shared with any other network traffic at all.
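One way that isolation can look on an ESXi host is sketched below: a vSwitch used only for storage, with its own physical uplink and a VMkernel port on a dedicated subnet. The vSwitch, port group, vmnic2, vmk2, and the addresses are all placeholders, and the command syntax assumes the esxcli network namespace of newer ESXi builds (older hosts use esxcfg-vswitch and esxcfg-vmknic).

```python
"""Sketch: dedicate a vSwitch, uplink, and VMkernel port to NFS traffic.

Keeps storage traffic off the VM networks by giving NFS its own physical
NIC and subnet. All names and addresses below are placeholders.
"""
import subprocess

def run(*args: str) -> None:
    """Run one esxcli command and fail loudly if it errors."""
    subprocess.run(list(args), check=True)

# 1. A vSwitch used only for storage, with its own physical uplink.
run("esxcli", "network", "vswitch", "standard", "add",
    "--vswitch-name", "vSwitch-NFS")
run("esxcli", "network", "vswitch", "standard", "uplink", "add",
    "--uplink-name", "vmnic2", "--vswitch-name", "vSwitch-NFS")

# 2. A port group and VMkernel interface for the NFS mounts.
run("esxcli", "network", "vswitch", "standard", "portgroup", "add",
    "--portgroup-name", "NFS", "--vswitch-name", "vSwitch-NFS")
run("esxcli", "network", "ip", "interface", "add",
    "--interface-name", "vmk2", "--portgroup-name", "NFS")

# 3. A static address on the isolated storage subnet.
run("esxcli", "network", "ip", "interface", "ipv4", "set",
    "--interface-name", "vmk2", "--ipv4", "10.0.50.10",
    "--netmask", "255.255.255.0", "--type", "static")
```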

Your NICs are basically your speed limit. A 1 Gbps NIC is adequate for most purposes, but to take NFS to the next level and get the best possible performance, 10 GbE is really the ticket.
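To put rough numbers on that speed limit, the short calculation below estimates what a single NFS session can move over 1 GbE versus 10 GbE. The roughly 10% haircut for TCP/IP and NFS protocol overhead is an assumption for illustration, not a measured figure.

```python
"""Rough arithmetic: usable throughput of one NFS session per link speed."""

def usable_mb_per_s(gigabits_per_s: float, protocol_overhead: float = 0.10) -> float:
    raw_mb_per_s = gigabits_per_s * 1000 / 8       # line rate in MB/s
    return raw_mb_per_s * (1 - protocol_overhead)  # minus assumed overhead

for link in (1, 10):
    print(f"{link:>2} GbE: ~{usable_mb_per_s(link):.0f} MB/s per NFS session")
# Prints roughly: 1 GbE ~112 MB/s, 10 GbE ~1125 MB/s per session.
```

So a single 1 GbE session tops out around the low hundreds of megabytes per second, which is why 10 GbE (or multiple datastores) matters for busy NFS workloads.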

Finally, the type of NFS storage device that you’re connecting to can make all the difference in the world. Just like any storage device, you have to size your NFS servers to meet the storage I/O demands of your virtual machines. Don’t use an old physical server running a Windows NFS server and expect it to meet the demands of busy VMs. In general, the more money you spend on an NFS solution, the better performance you’ll get out of it. There are many high-end NFS solutions out there, like those from NetApp, that will meet the demands of most workloads. So buy the solution that will meet your needs and make sure the NFS server doesn’t become a bottleneck. NFS is different from block storage devices, so you should architect and configure it accordingly to leverage its strengths.

This article was originally published on SearchVirtualStorage.com in May 2011.