PCI Express extender approach to I/O virtualization

Most vendors in the I/O virtualization space take the PCI Express extender approach. With this implementation, the server’s PCIe bus is extended by a cable to a gateway box that houses standard I/O adapters, which are shared among the servers attached to that gateway. This approach can offer users both cost and performance advantages.

In this podcast interview, George Crump, lead analyst for Storage Switzerland, discusses using PCIe bus extenders to implement I/O virtualization. Find out what PCIe bus extenders are and how they’re implemented, how this approach differs from software-based I/O virtualization over an InfiniBand or Ethernet card, the advantages and disadvantages of PCIe bus extenders for users, the steps and requirements for implementing them, and the vendors in the PCIe-based I/O virtualization space.

Listen to the podcast on the PCIe bus extenders approach to I/O virtualization or read the transcript.

SearchVirtualStorage.com: How does the PCIe bus extenders approach to I/O virtualization work?

Crump: The name is probably the most appropriate in the industry. It’s exactly what it is. You put a PCIe card of sorts into your existing server. Generally, it’s relatively small in size and relatively inexpensive. And then you run, essentially, a PCIe cable out to a gateway-type box, and then within that gateway you put multiple cards that you can then share across any server that’s attached to that gateway.
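
One practical consequence of this design is that, to each attached server’s operating system, an adapter shared through the gateway looks like an ordinary local PCIe device. As a minimal sketch (assuming a Linux host; the sysfs paths are standard Linux interfaces, and nothing here is specific to any vendor’s extender), enumerating the PCI bus is enough to see shared cards alongside built-in ones:

    # Minimal sketch (assumes a Linux host): adapters presented through a
    # PCIe extender/gateway appear as ordinary local PCIe devices, so plain
    # sysfs enumeration lists them alongside built-in cards.
    from pathlib import Path

    PCI_SYSFS = Path("/sys/bus/pci/devices")

    def read_attr(dev: Path, name: str) -> str:
        """Read one sysfs attribute, returning '?' if it is missing."""
        try:
            return (dev / name).read_text().strip()
        except OSError:
            return "?"

    for dev in sorted(PCI_SYSFS.iterdir()):
        vendor = read_attr(dev, "vendor")     # e.g. 0x8086
        device = read_attr(dev, "device")
        pci_class = read_attr(dev, "class")   # 0x020000 is an Ethernet controller
        print(f"{dev.name}  vendor={vendor} device={device} class={pci_class}")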

SearchVirtualStorage.com: How does that approach differ from I/O virtualization involving software virtualization?

Crump: Architecturally, there are a lot of similarities. There’s almost always a gateway box with I/O virtualization. Typically with software, what the vendors are doing is that they’re using software on top of, maybe, an InfiniBand card or a 10 Gigabit card to provide that connectivity. There are a couple of vendors doing it over Ethernet now, and there are a couple of vendors doing it over InfiniBand.

SearchVirtualStorage.com: What are the benefits and drawbacks to PCIe-based I/O virtualization devices for users?

Crump: No. 1, it’s probably going to provide the best cost/performance of any of the solutions. The problem you’re going to have with a 10 Gigabit Ethernet-type solution is that you’re going to have to pay for the 10 GbE card, of course, and 10 Gbps may not be enough. With PCIe you’re getting the full performance of the PCIe bus, plus the extension cards are generally about 50% less expensive than the others. So it definitely should be a cost advantage and a performance advantage.
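
For a rough sense of the bandwidth gap Crump is describing, here is a back-of-envelope comparison. It assumes a PCIe 2.0 x8 extender card (the generation current when this interview was recorded) and counts only the 8b/10b line-coding overhead, so real-world numbers would be somewhat lower:

    # Back-of-envelope comparison; assumes PCIe 2.0 (5 GT/s per lane with
    # 8b/10b encoding) and an x8 slot, and ignores further protocol overhead.
    GT_PER_SEC_PER_LANE = 5.0        # PCIe 2.0 raw signalling rate per lane
    ENCODING_EFFICIENCY = 8 / 10     # 8b/10b line coding
    LANES = 8                        # a typical x8 extender card

    pcie_gbps = GT_PER_SEC_PER_LANE * ENCODING_EFFICIENCY * LANES
    print(f"PCIe 2.0 x8 usable bandwidth: {pcie_gbps:.0f} Gbps")   # ~32 Gbps
    print("10 Gigabit Ethernet link:     10 Gbps")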

From a drawbacks perspective, probably the big problem is that PCIe was never designed for networking. InfiniBand was, and of course Ethernet was. So not only are you trying to network across something that wasn’t really designed for networking, but even if all that works (and in our experience it does), you’re introducing yet another network into your environment, and I don’t know if the data center really wants to do that.

SearchVirtualStorage.com: What are the key steps to implementing PCIe bus extenders?

Crump: Well, you’re almost always going to have server downtime because you need to put the card in. In theory you should be able to do that without a reboot, but I think most people are going to want to plan some maintenance. Obviously, you’re going to have to rack and stack the gateway and then decide which cards to put in it. Probably the key step, to me, is the selection of the cards themselves: Which cards are you going to share in that gateway? Generally, they’re going to be high-performance Ethernet cards and things like that.

SearchVirtualStorage.com: What are the requirements that IT shops need to address before implementing PCIe bus extenders?

Crump: I think the big one is what kind of cards you want to share. Typically, these are going to be very high-bandwidth cards. So you can take a 10 Gbps Ethernet or 16 Gbps Fibre Channel connection and share it among six or seven servers, and things like that. That’s probably the No. 1 thing: to look at what you want to share.
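
As a rough illustration of that kind of sharing (the link speed and server count are just the examples Crump mentions, and the split assumes every server is pushing traffic at once):

    # Rough illustration: one 16 Gbps Fibre Channel link shared by seven
    # servers, assuming an even split when all of them are busy at once.
    link_gbps = 16.0
    servers = 7
    print(f"Worst-case per-server share: {link_gbps / servers:.1f} Gbps")  # ~2.3 Gbps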

The second thing is to look at the capabilities of the cards that are going in these gateways. Some have the ability to present themselves as multiple cards built right into the hardware, and that’ll be helpful. Otherwise, you’re going to need to count on your operating system or hypervisor to provide that support—and today many don’t.
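
The hardware capability Crump mentions, a single card presenting itself as multiple devices, is what the PCI-SIG standardized as SR-IOV. As a minimal sketch of checking for and enabling it (assuming a Linux host, an SR-IOV-capable adapter and a hypothetical PCI address; root privileges are required):

    # Minimal sketch (assumes a Linux host with an SR-IOV-capable adapter;
    # the PCI address is a placeholder, and root privileges are required).
    from pathlib import Path

    DEVICE = Path("/sys/bus/pci/devices/0000:03:00.0")  # hypothetical address

    total_vfs = int((DEVICE / "sriov_totalvfs").read_text())
    print(f"Card supports up to {total_vfs} virtual functions")

    # Enable four virtual functions; each appears to the OS or hypervisor as
    # its own PCIe device that can be assigned to a separate guest.
    # (If VFs are already enabled, write 0 first before requesting a new count.)
    requested = min(4, total_vfs)
    (DEVICE / "sriov_numvfs").write_text(str(requested))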

SearchVirtualStorage.com: What vendors are in the space of PCIe-based I/O virtualization? How successful have these implementations been?

Crump: The three that I’m most familiar with are Xsigo, VirtenSys and Aprius. Aprius, as I understand it, has not made it and is selling off its assets. Xsigo now falls mostly into the software category; they’re trying to do this over Ethernet, and they were originally an InfiniBand solution. The pure PCIe player that I’m most familiar with is VirtenSys, and they seem to be doing pretty well. The customers that I’ve been exposed to (and there are obviously a limited number) seem to be pretty happy with what they have.

This story was previously published on SearchVirtualStorage.com.


This was first published in July 2011